Confuddlement: Download Confluence Spaces as Markdown, Summarise with Ollama

Confuddlement on GitHub

I was tired of manually downloading Confluence pages and converting them to Markdown, so I wrote a small command-line tool to simplify the process. Confuddlement is a Go-based tool that uses the Confluence REST API to fetch page content and convert it to Markdown files. It can fetch pages from multiple spaces, skip pages that have already been fetched, and summarise the content of fetched pages using the Ollama API.

```shell
$ go run ./main.go
Confuddlement 0.3.0
Spaces: [COOLTEAM, MANAGEMENT]
Fetching content from space COOLTEAM
COOLTEAM (Totally Cool Team Homepage)
Retrospectives
Decision log
Development
Onboarding
Saved page COOLTEAM - Feature List to ./confluence_dump/COOLTEAM - Feature List.md
Skipping page 7. Support, less than 300 characters
MANAGEMENT (Department of Overhead and Bureaucracy)
Painful Change Management
Illogical Diagrams
Saved page Painful Change Management to ./confluence_dump/Painful Change Management.md
Saved page Illogical Diagrams to ./confluence_dump/Illogical Diagrams.md
Done!

$ go run ./main.go summarise
Select a file to summarise:
0: + COOLTEAM - Feature List
1: + Painful Change Management
2: + Illogical Diagrams
Enter the number of the file to summarise: 1
Summarising Painful Change Management...
"Change management in the enterprise is painful and slow. It involves many forms and approvals."

$ go run main.go -q 'who is the CEO?' -s 'management' -r 2
Querying the LLM with the prompt 'who is the CEO?'...
"The CEO of the company is Peewee Herman."
```

Usage

Running the Program

Copy .env.template to .env and update the environment variables. Run the program using go run main.go, or build it with go build and run the resulting executable. The program will fetch Confluence pages and save them as Markdown files in the specified directory.

Querying the documents with AI

You can summarise the content of a fetched page using the Ollama API by running the program with the summarise argument:

```shell
go run main.go summarise
```

To perform a custom query, you can use the query arguments:

- -q: The query to provide to the LLM.
- -s: The search term to match documents against.
- -r: The number of lines before and after the search term to include in the context sent to the LLM.

```shell
$ go run main.go -q 'who is the CEO?' -s 'management' -r 2
Querying the LLM with the prompt 'who is the CEO?'...
"The CEO of the company is Peewee Herman."
```

...
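Under the hood, the summarise step boils down to a single call to Ollama's /api/generate endpoint with a dumped Markdown file as context. Here's a minimal Go sketch of that flow, assuming a local Ollama server on its default port; the model name and file path are illustrative only, and this is not Confuddlement's actual code:

```go
// summarise.go - minimal sketch of summarising a dumped Markdown file via
// Ollama's /api/generate endpoint. Assumes Ollama is listening on its default
// port (11434); the model name and file path are illustrative assumptions.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// Read a previously fetched page from the dump directory (hypothetical path).
	doc, err := os.ReadFile("./confluence_dump/Painful Change Management.md")
	if err != nil {
		log.Fatal(err)
	}

	body, err := json.Marshal(generateRequest{
		Model:  "llama3", // whichever model you have pulled locally
		Prompt: "Summarise the following document in two sentences:\n\n" + string(doc),
		Stream: false, // return the full completion as a single JSON object
	})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Response)
}
```

Setting stream to false keeps the example simple by returning the whole completion at once; a real tool would likely stream tokens as they arrive.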

May 23, 2024 · 3 min · 605 words · Sam McLeod

SuperPrompter - Supercharge your text prompts for AI/LLM image generation

SuperPrompter is a Python-based application that utilises the SuperPrompt-v1 model to generate optimised text prompts for AI/LLM image generation (for use with Stable Diffusion etc.) from user prompts. See Brian Fitzgerald's blog for a detailed explanation of the SuperPrompt-v1 model and its capabilities and limitations.

Features

- Utilises the SuperPrompt-v1 model for text generation.
- A basic (aka ugly) graphical user interface built with tkinter.
- Customisable generation parameters (max new tokens, repetition penalty, temperature, top p, top k, seed).
- Optional logging of input parameters and generated outputs.
- Bundling options to include or exclude pre-downloaded model files.
- Unloads the models when the application is idle to free up memory.

Prebuilt Binaries

Check the releases page to see if there are any prebuilt binaries available for your platform.

...

March 22, 2024 · 2 min · 422 words · Sam McLeod

Llamalink - Ollama to LM Studio LLM Model Linker

Two of my most commonly used LLM tools are Ollama and LM Studio. Unfortunately, they store their models in different locations and under different filenames, and manually copying or linking the files was a pain, so I created Llamalink, a simple command-line tool to automate the process. Ollama is a cross-platform model server that lets you run LLMs and manage their models in a similar way to Docker containers and images, while LM Studio is a macOS app that provides a user-friendly interface for running LLMs. ...
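The core of the idea is just a filesystem symlink: point an entry in LM Studio's models directory at a GGUF blob Ollama already has on disk, so each model is stored only once. A minimal Go sketch of that idea, assuming typical default directories; Llamalink itself resolves real model names and blob hashes from Ollama's manifests, which this sketch does not:

```go
// linker.go - minimal sketch of linking an Ollama model blob into LM Studio's
// model directory. The directories below are common defaults (assumptions);
// the blob hash and target filename are purely illustrative.
package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}

	// Ollama stores model weights as content-addressed blobs (hypothetical hash).
	src := filepath.Join(home, ".ollama", "models", "blobs", "sha256-abc123")

	// LM Studio expects GGUF files under publisher/model subdirectories.
	dstDir := filepath.Join(home, ".cache", "lm-studio", "models", "ollama", "llama3")
	if err := os.MkdirAll(dstDir, 0o755); err != nil {
		log.Fatal(err)
	}

	dst := filepath.Join(dstDir, "llama3.gguf")
	if err := os.Symlink(src, dst); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", dst, src)
}
```

A symlink rather than a copy means the multi-gigabyte weights exist once on disk and both tools see the same file.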

March 21, 2024 · 3 min · 427 words · Sam McLeod

Open source, locally hosted AI powered Siri replacement

Offline AI / LLM Assistant

More info on this soon, but the basic idea was to use Willow, Home Assistant, and local LLM models to create a locally hosted, offline, AI-powered Siri replacement, and to interface it with ESP32 S3 Box 3 devices. ...

November 20, 2023 · 1 min · 192 words · Sam McLeod

Introduction to AI and Large Language Models (LLMs)

This is a high-level intro to LLMs that I'm writing for a few friends who are new to the concept. It is far from complete, definitely contains some errors, and is a living work in progress. Large language models, or LLMs, are a type of artificial intelligence that can generate text based on a given prompt. They work by learning patterns in large amounts of text data and using those patterns to generate new text. LLMs can be used for a variety of tasks, such as powering chatbots, answering questions, and creating art. ...

January 26, 2023 · 13 min · 2741 words · Sam McLeod

LLM FAQ

“Should I run a larger parameter model, or a higher quality smaller model of the same family?”

TLDR; Larger parameter model (lower quantisation quality) > smaller parameter model (higher quantisation quality).

E.g. Qwen2.5 32B Q3_K_M > Qwen2.5 14B Q8_0

Caveats:

- Don't go lower than Q3_K_M, or IQ2_M, especially if the model is under ~30B parameters.
- This is in the context of two models of the same family and version (e.g. Qwen2.5 Coder).

Longer answer: check out the Code Chaos and Copilots slide deck, and the rough sizing sketch below.

...
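To see why the two options end up close in memory footprint, a useful rule of thumb is bytes ≈ parameters × bits-per-weight ÷ 8. A quick sketch using approximate llama.cpp bits-per-weight figures (these are assumptions that vary slightly between models and quant implementations, and KV cache and runtime overhead are ignored):

```go
// quantsize.go - back-of-envelope weight sizes for the FAQ comparison.
// Bits-per-weight values are approximate llama.cpp figures, not exact;
// KV cache and runtime overhead are not included.
package main

import "fmt"

// gigabytes converts a parameter count and bits-per-weight into an
// approximate weight size: bits -> bytes -> GB.
func gigabytes(params, bitsPerWeight float64) float64 {
	return params * bitsPerWeight / 8 / 1e9
}

func main() {
	fmt.Printf("Qwen2.5 32B @ Q3_K_M (~3.9 bpw): %.1f GB\n", gigabytes(32e9, 3.9))
	fmt.Printf("Qwen2.5 14B @ Q8_0   (~8.5 bpw): %.1f GB\n", gigabytes(14e9, 8.5))
	// Roughly 15.6 GB vs 14.9 GB: a similar footprint, but the 32B model's
	// extra parameters usually buy more quality than the 14B model's
	// gentler quantisation.
}
```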

5 min · Sam McLeod

LLM vRAM Estimator

0 min · 0 words · Sam McLeod