Finally, an AI That Knows Where You Put That File
Ever spent 20 minutes searching for a config file you just had open? Or tried to remember where you stashed that experimental script three months ago? We’ve all been there, context-switching and grepping our way through directories, breaking our flow. What if your shell could just… understand what you’re looking for and tell you where it is?
Enter `llama-fs`. It's an experimental tool that uses a local LLM to understand your natural-language descriptions and instantly find the files you're thinking of. It's like `Ctrl+P` for your entire filesystem, but powered by semantic understanding instead of just fuzzy filename matching.
What It Does
In simple terms, `llama-fs` (Llama File System) is a command-line tool that indexes your files. Once indexed, you can ask it questions in plain English (or other languages). Instead of relying on exact file names or directory structures, it uses a locally running Large Language Model to understand the content and purpose of your files and surface what you need.
You ask it something like "find my docker compose file for the backend api" or "where is that python script that cleans up log files," and it returns the most relevant file paths.
Why It's Cool
The clever part isn't just the AI; it's the implementation. `llama-fs` is built on a RAG (Retrieval-Augmented Generation) pipeline, which is a fancy way of saying it retrieves a short list of likely files first and only then asks the model to reason about them. It doesn't send your data to the cloud; everything runs locally on your machine, keeping your code private.
First, it generates semantic embeddings for your files and stores them in a local vector database. When you ask a question, it finds the files most semantically similar to your query and then uses the LLM to reason about which one is the best match. This two-step process is what makes it so accurate. It’s not just a dumb keyword search; it genuinely understands the context.
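To make that two-step process concrete, here's a minimal sketch of a retrieve-then-reason loop against Ollama's local REST API. To be clear, this is not llama-fs's actual code; the file filter, snippet length, prompt, and in-memory index are illustrative assumptions, and it only shows the shape of the pipeline described above.

```python
# Sketch of a retrieve-then-reason loop against a local Ollama server.
# NOT llama-fs's real implementation: the file filter, snippet length,
# models, and prompt below are placeholder assumptions for illustration.
from pathlib import Path

import numpy as np
import requests

OLLAMA = "http://localhost:11434"

def embed(text: str) -> np.ndarray:
    """Ask the local embedding model for a semantic vector."""
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    return np.array(r.json()["embedding"])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1. Index: embed a snippet of each file and keep the vectors in memory.
files = [p for p in Path(".").rglob("*")
         if p.is_file() and p.suffix in {".py", ".yml", ".yaml", ".md"}]
index = {p: embed(p.read_text(errors="ignore")[:2000]) for p in files}

# 2. Retrieve: rank files by cosine similarity to the query embedding.
query = "where is that python script that cleans up log files"
q = embed(query)
ranked = sorted(files, key=lambda p: cosine(index[p], q), reverse=True)

# 3. Reason: hand only the top candidates to the LLM and let it pick one.
prompt = (f"Question: {query}\nCandidate files:\n"
          + "\n".join(str(p) for p in ranked[:5])
          + "\nReply with the single most relevant path.")
resp = requests.post(f"{OLLAMA}/api/generate",
                     json={"model": "llama3.1", "prompt": prompt, "stream": False})
print(resp.json()["response"])
```

The design win is that the LLM only ever sees a handful of already-relevant candidates, which is what keeps a fully local model fast enough and focused enough to be useful.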
The potential use cases are huge for developers:
- Onboarding onto a new codebase without needing to pester colleagues.
- Finding legacy scripts or assets that haven't been touched in years.
- Quickly locating specific configuration files across multiple projects.
How to Try It
Ready to stop `cd`-ing around blindly? Here's how to get started.
Prerequisites: You'll need Python installed, along with `ollama` to run the local LLM.
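If you want to sanity-check those prerequisites before going any further, both tools report their versions from the shell (the exact numbers will vary on your machine):

```bash
# Quick check that the prerequisites are installed and on your PATH.
python --version
ollama --version
```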
- Clone the repo:
  git clone https://github.com/iyaja/llama-fs.git
  cd llama-fs
- Install the dependencies:
  pip install -r requirements.txt
- Pull an embedding model and an LLM via Ollama (the project recommends these two):
  ollama pull nomic-embed-text
  ollama pull llama3.1
- Index a directory, pointing it at the project you want to search:
  python main.py index /path/to/your/codebase
- Start asking questions! Run the query command:
  python main.py query "Where is the main application entry point?"
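Once that's working, a small quality-of-life tweak is to wrap the query command in a shell alias so you can ask from anywhere. This assumes the `main.py` interface shown above and a hypothetical clone location of `~/tools/llama-fs`; adjust both for your setup.

```bash
# Hypothetical convenience alias; point it at wherever you cloned llama-fs.
alias findfile='python ~/tools/llama-fs/main.py query'

# Then, from any directory:
findfile "where is the docker compose file for the backend api"
```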
Head over to the llama-fs GitHub repository for more detailed instructions and to check out the code.
Final Thoughts
`llama-fs` is a brilliant example of applying modern AI to solve a genuine, everyday developer pain point. It's still experimental, so it might not be perfect for every massive codebase yet, but the concept is incredibly powerful. This feels like a glimpse into the future of developer tooling, where our tools understand intent, not just syntax. It's absolutely worth a few minutes of setup to try it out on your local projects. You might just find it saves you more time than you think.
—
Follow @githubprojects for more cool projects.