Free LLM APIs: A Community Maintained List of Inference Endpoints
If you've ever tried to build something with a large language model, you know the pain. You either pay per token (which adds up fast) or run it locally (which needs a GPU you probably don't have lying around). But there's a third option: free inference APIs.
The problem? They're scattered across Discord servers, Twitter threads, and random blog posts. Finding one that works is a treasure hunt. This GitHub repo fixes that.
What It Does
free-llm-api-resources is exactly what it sounds like: a curated, community-maintained list of LLM inference providers that offer free API access. No signup required for many of them. No credit card. Just endpoints you can hit with standard OpenAI-compatible clients.
The repo checks each API endpoint periodically and updates a badge in the README showing whether it's currently working. If a provider goes down, the badge turns red. If it comes back, it flips green. No stale links, no wasted debugging.
Why It's Cool
The obvious value is the list itself, but the clever part is the automation. A GitHub Action pings each API every 24 hours: an endpoint that returns a 200 status gets a green checkmark, and anything else flips the badge to reflect the failure. That means you're never looking at a dead link from six months ago.
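The health check described above boils down to sending a minimal chat request and checking the status code. Here's a rough sketch of that idea; the function name, placeholder URL, and model name are illustrative, not taken from the repo's actual Action:

```python
# Hypothetical sketch of the kind of health check the Action might run.
# The base URL and model name below are placeholders, not real providers.
import json
import urllib.request


def check_endpoint(base_url, model, api_key=None, timeout=10):
    """Return True if the chat completions endpoint answers with HTTP 200."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,  # keep the probe cheap
    }).encode()
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    req = urllib.request.Request(
        f"{base_url}/chat/completions", data=payload, headers=headers
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        # Any network error, timeout, or non-2xx status counts as "down".
        return False
```

A scheduled workflow would loop this over every listed provider and rewrite the badges accordingly.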
The list also includes:
- The actual API endpoint URL
- The model name it serves
- Rate limits (if known)
- Whether it requires an API key or can be called without one
Some of these are from big names like Groq, Hugging Face, and Together. Others are smaller providers you've never heard of. All of them work (or at least, they did last time the action ran).
Another nice touch: many of these endpoints speak the OpenAI API format, so you can keep your existing openai.chat.completions.create calls and just point the client at a different base URL and model name. No custom clients needed.
How to Try It
Here's the quickest way to test one:
- Open the repo: github.com/cheahjs/free-llm-api-resources
- Look for a green badge next to an API provider
- Copy the endpoint URL and model name
- Use curl or the OpenAI Python library
For example, with a provider that supports the OpenAI format (add an `Authorization: Bearer` header if that provider requires a key):

```shell
curl https://api.provider.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "model-name",
    "messages": [{"role": "user", "content": "hello"}]
  }'
```
Or in Python:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.provider.com/v1",
    api_key="sk-...",  # if needed
)
response = client.chat.completions.create(
    model="model-name",
    messages=[{"role": "user", "content": "hello"}],
)
```
No Docker, no GPU, nothing to install for the curl route (the Python route needs only the openai package). Just HTTP requests.
Final Thoughts
This repo is a good example of turning a simple idea into something useful. A list of free APIs is helpful. A list that auto-validates itself is genuinely valuable. If you're prototyping an AI feature, running a hackathon project, or just want to play around with different models without spending money, bookmark this.
One honest caveat: free APIs come with tradeoffs. Rate limits are low. Reliability varies. Some providers disappear after a few months. But for development, testing, and small experiments, this list saves a lot of searching.
If you find a new provider, or spot an endpoint that's down, the repo accepts contributions. That's the best part of community-maintained lists like this.
from @githubprojects