
LLM Wiki – Build a Self-Updating Knowledge Base from Your Documents

By @githubprojects

Project Description


LLM Wiki: Turn Your Dumb Documents Into a Smart, Self-Updating Knowledge Base

You know that feeling when you’ve got a pile of PDFs, markdown files, or meeting notes, and you just know the answer is in there somewhere—but you can’t find it? Or worse, you find it, but it’s outdated because nobody updated the doc from last quarter?

That’s the exact problem LLM Wiki solves. It takes your static documents and turns them into a live, searchable knowledge base that updates itself. No manual tagging, no database schema, no sysadmin nightmares. Just point it at your files and let the LLM do the heavy lifting.

What It Does

At its core, LLM Wiki is a CLI tool that watches a folder of documents (PDFs, plain text, markdown, you name it) and builds a searchable index using an LLM (like GPT or local models). When you ask a question, it retrieves the most relevant chunks from your docs, feeds them to the LLM, and spits out a contextual answer.

But here’s the twist: it doesn’t just build a one-time index. It watches for changes—new files, edits, deletions—and updates the knowledge base automatically. Think of it as a private, always-fresh Wikipedia for your team or project.
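If you're curious what that looks like in code, here's a simplified sketch of the chunk-and-retrieve step in Python. To be clear, this is not LLM Wiki's actual implementation: real pipelines typically score chunks with embeddings, and the word-overlap scoring below is just a readable stand-in.

# Simplified sketch of retrieve-then-answer (illustrative, not LLM Wiki's code)
from pathlib import Path

def chunk(text, size=500, overlap=100):
    # Split text into overlapping character windows so answers aren't cut mid-thought
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def score(question, passage):
    # Crude relevance: count question words that also appear in the passage
    return len(set(question.lower().split()) & set(passage.lower().split()))

def top_chunks(question, docs_dir, k=3):
    chunks = []
    for path in Path(docs_dir).rglob("*.md"):
        chunks.extend(chunk(path.read_text(encoding="utf-8")))
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:k]

# The winning chunks get pasted into the LLM prompt as context
question = "How do we rotate the API keys?"
context = "\n\n".join(top_chunks(question, "./docs"))
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"

Swap the scoring function for an embedding model and you have the skeleton of most RAG tools.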

Why It’s Cool

Most “RAG” (Retrieval-Augmented Generation) tools require you to manually re-index every time you add a document. LLM Wiki says “nah, I got this.” The auto-update feature is genuinely useful because:

  • No cron jobs or scripts. It uses filesystem watchers (like inotify on Linux) to detect changes in real time; see the sketch after this list for the pattern.
  • Zero configuration to start. Point it at a folder. Done. It even ships sensible defaults for chunk size and overlap.
  • Local-first option. Don’t want to send your docs to OpenAI? It works with local models through Ollama or llama.cpp. Privacy fiends, rejoice.
  • Plug-and-play with your existing workflow. Drop a markdown file into the watched folder, and within seconds it’s searchable.
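The auto-update piece is less magic than it sounds. Here's an illustrative version of the watcher pattern using Python's watchdog library (pip install watchdog); LLM Wiki's internals may differ, and the re-index step is reduced to a print for brevity:

# Watch a docs folder and react to changes (illustrative, not LLM Wiki's code)
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class ReindexHandler(FileSystemEventHandler):
    def on_any_event(self, event):
        # Ideally you re-index only the file that changed, not the whole corpus
        if not event.is_directory:
            print(f"{event.event_type}: {event.src_path} -> re-indexing")

observer = Observer()
observer.schedule(ReindexHandler(), "/path/to/your/docs", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()

On Linux, watchdog uses inotify under the hood; on macOS and Windows it falls back to the native equivalents, which is why this approach works cross-platform without cron jobs.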

Use cases? Perfect for:

  • Product teams who keep their specs in a shared drive
  • Open-source maintainers documenting their own repos
  • Anyone who has a “knowledge base” that’s really just a dump of random notes

How to Try It

The repo is straightforward and Python-based, with minimal dependencies.

# Clone it
git clone https://github.com/nashsu/llm_wiki
cd llm_wiki

# Install (preferably in a virtual env)
pip install -r requirements.txt

# Point it at a folder of docs and run
python llm_wiki.py /path/to/your/docs

You can choose your LLM backend via environment variables or a simple config file. The README has examples for OpenAI, Anthropic, and local models.
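If you haven't configured a tool this way before, env-var-driven backend selection typically looks like the sketch below. The variable names here are hypothetical; check the README for the real ones.

# Hypothetical backend selection; variable names are illustrative only
import os

backend = os.environ.get("LLM_WIKI_BACKEND", "openai")   # e.g. "openai", "ollama"
model = os.environ.get("LLM_WIKI_MODEL", "gpt-4o-mini")  # backend-specific model name
api_key = os.environ.get("OPENAI_API_KEY")               # only needed for hosted APIs

print(f"Using {backend} backend with model {model}")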

There’s also a live demo link in the repo (check the README) if you just want to play with it before installing.

Final Thoughts

LLM Wiki doesn’t try to be Notion or Confluence. It’s a lean utility that solves one specific pain: keeping your knowledge base fresh without extra human effort. If you’ve ever spent an afternoon updating a wiki that nobody reads, you’ll appreciate how this just works.

The real magic is the auto-update. Most devs I know have a “docs folder” that’s a mess. This tool makes that mess useful. Try it on a small project first—throw in a few markdown files and ask it a question. You’ll probably think “why didn’t I do this sooner.”


Share this with a dev who needs a better way to manage their docs. Follow @githubprojects for more tools like this.
