Vane: Your Own Open Source, Self-Hostable Perplexity Alternative
Turn any local or cloud LLM into a conversational search engine.
Intro
You know that feeling when you use Perplexity AI and think, "I wish I could run this on my own hardware, without paying per query, and with full control over my data"? Yeah, me too. That's exactly the problem Vane solves.
Vane is an open source, AI-powered search engine that you can self-host. It’s built to work with local LLMs (via Ollama) or cloud models (OpenAI, Anthropic, Groq, etc.), and it gives you that conversational search experience—ask a question, get a synthesized answer with citations—but entirely under your control. No API key limits (well, depending on your setup), no data leaving your network unless you want it to.
What It Does
Vane takes a natural language query, searches the web (or your configured sources), and then uses an LLM to summarize the results into a coherent answer with source links. Think of it as a DIY Perplexity: you type in a question, it goes out, fetches relevant information, and comes back with a readable, cited response.
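That flow can be sketched roughly like this. To be clear, this is a minimal mock-up, not Vane's actual code: `search()` and `summarize()` are hypothetical stand-ins for the real search-provider and LLM calls, stubbed out so the pipeline shape is runnable on its own.

```typescript
interface SearchResult {
  title: string;
  url: string;
  snippet: string;
}

// Stand-in for the web search step (DuckDuckGo or another provider in the real app).
function search(query: string, maxResults: number): SearchResult[] {
  return [
    { title: "Example page", url: "https://example.com", snippet: `About ${query}` },
  ].slice(0, maxResults);
}

// Stand-in for the LLM call: in the real app this would hit Ollama or a cloud API,
// passing the fetched snippets as context and asking for a cited summary.
function summarize(query: string, sources: SearchResult[]): string {
  const cited = sources.map((s, i) => `[${i + 1}] ${s.url}`).join("\n");
  return `Answer to "${query}" based on ${sources.length} source(s):\n${cited}`;
}

function answer(query: string, depth = 3): string {
  const sources = search(query, depth); // fetch context from the web
  return summarize(query, sources);     // synthesize an answer with citations
}

console.log(answer("what is retrieval-augmented generation?"));
```

The interesting part is how little glue there is: search, stuff the results into a prompt, cite what came back. Everything else is configuration.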
Under the hood, it pairs a web search provider (DuckDuckGo, or a configurable alternative — check the repo for the current options) with an LLM of your choice. The whole thing runs in a Docker container or directly with Node.js, so spinning it up on a VPS, a Raspberry Pi, or your local machine is straightforward.
Why It’s Cool
Beyond the obvious "self-hosted and private" angle, Vane has a few clever bits:
- Model flexibility: You can swap between local models (like Llama 3, Mistral, or whatever you run through Ollama) and cloud APIs (GPT-4, Claude, Groq) with a simple config change. This is huge for devs who want to compare performance or avoid vendor lock-in.
- Customizable search depth: You control how many sources to fetch and how much context the LLM gets. Less for speed, more for thoroughness.
- Minimalist, clean UI: It doesn't try to be a full product. It’s a single-page app (I think?) that gets out of your way. You type, it answers.
- Open source, MIT licensed: No hidden subscriptions, no telemetry. Fork it, modify it, embed it.
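To make the model-flexibility and search-depth points concrete, a provider swap might look something like this in the environment file. The variable names below are invented for illustration — the real keys are whatever the repo's `.env.example` defines:

```ini
# Illustrative names only; check .env.example for the actual keys.

# Local model via Ollama:
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3

# Or a cloud model: comment out the Ollama lines and set a key instead.
# LLM_PROVIDER=openai
# OPENAI_API_KEY=sk-...

# How many sources to fetch per query (speed vs. thoroughness):
# SEARCH_MAX_RESULTS=5
```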
A use case that stands out: If you're building an internal knowledge base or research bot, Vane gives you a ready-made frontend and pipeline. You could swap the web search for a local database or vector store with some tweaks, and suddenly you have a private research assistant for your team.
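The swap works because retrieval sits behind a single seam. Here's one way that seam could look — the interface and class names are hypothetical, not Vane's actual API, but they show why replacing web search with a local corpus or vector store is a contained change:

```typescript
interface Doc {
  url: string;
  text: string;
}

// Anything that can turn a query into documents.
interface SearchProvider {
  search(query: string, limit: number): Promise<Doc[]>;
}

// Web-backed provider (conceptually what Vane ships with). Stubbed here;
// the real version would call DuckDuckGo or the configured engine.
class WebSearch implements SearchProvider {
  async search(query: string, limit: number): Promise<Doc[]> {
    return [{ url: "https://example.com", text: `web hit for ${query}` }].slice(0, limit);
  }
}

// Drop-in replacement backed by an in-memory corpus — a stand-in for a
// vector store or internal database.
class LocalCorpus implements SearchProvider {
  constructor(private docs: Doc[]) {}
  async search(query: string, limit: number): Promise<Doc[]> {
    const q = query.toLowerCase();
    return this.docs.filter((d) => d.text.toLowerCase().includes(q)).slice(0, limit);
  }
}

// The rest of the pipeline only ever sees SearchProvider, so either backend works.
async function retrieve(provider: SearchProvider, query: string): Promise<Doc[]> {
  return provider.search(query, 3);
}
```

Point `retrieve` at a `LocalCorpus` built from your team's docs and the same answer pipeline becomes a private research assistant.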
How to Try It
The GitHub repo has everything you need. Here's the fast track:
1. Clone the repo:

   ```bash
   git clone https://github.com/ItzCrazyKns/Vane
   cd Vane
   ```

2. Set up the environment: copy `.env.example` to `.env` and add your LLM config (Ollama URL or API key for cloud models).

3. Run with Docker (recommended):

   ```bash
   docker compose up
   ```

   Or, if you prefer bare metal, install dependencies and start the server with `npm`.

4. Open `http://localhost:3000` (or wherever it runs) and start asking questions.
Check the README for precise instructions—it's kept up to date with any changes. There's also a demo link (usually deployed on the author's server) if you want a quick peek before installing.
Final Thoughts
Vane isn't trying to beat Perplexity at scale. It's a tool for developers, tinkerers, and anyone who values ownership over convenience. The ability to point it at your own models and search sources makes it a fantastic playground for experimenting with RAG (retrieval-augmented generation) patterns on real web data.
If you've been eyeing conversational search but didn't want to pay per query or trust someone else with your history, this is worth a weekend spin. Fork it, improve it, and maybe contribute a search provider or a UI tweak. That's the open source way.
Found this useful? Follow @githubprojects for more open source gems.
Repository: https://github.com/ItzCrazyKns/Vane