Schedule and execute tasks across a cluster with zero effort




Hatchet: Zero-Effort Task Scheduling Across Your Cluster

Ever had a bunch of background jobs that need to run across multiple machines, but setting up a scheduler feels like a project in itself? You've probably looked at Celery, Airflow, or just plain cron and thought, "Why is this so much boilerplate?" Hatchet aims to make it feel like something you barely have to think about.

It's a distributed task queue and scheduler that handles execution across a cluster with minimal setup. Think of it as a smart, resilient cron that scales horizontally without you having to become a Kubernetes expert.

What It Does

Hatchet lets you define and run scheduled or event-driven tasks across a group of machines. You write a function, tell it when to run (cron expression, delay, or event), and Hatchet handles the rest: dispatching to workers, retrying on failure, and managing state. The whole thing is built around a simple Go server and a client SDK.
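To make the "tell it when to run" part concrete, here is a rough sketch of how a cron expression maps onto wall-clock time. This is an illustrative matcher written for this post, not Hatchet's actual scheduler code, and it only handles the `*`, `*/n`, and literal-number field forms (real cron also supports ranges and lists; it also numbers weekdays from Sunday, while this sketch uses Python's Monday-based `weekday()`):

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    # "*" matches anything; "*/n" matches multiples of n; otherwise a literal.
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return int(field) == value

def cron_matches(expr: str, ts: datetime) -> bool:
    # Standard five-field cron: minute, hour, day-of-month, month, day-of-week.
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, ts.minute)
            and field_matches(hour, ts.hour)
            and field_matches(dom, ts.day)
            and field_matches(month, ts.month)
            and field_matches(dow, ts.weekday()))
```

So `"* * * * *"` matches every minute, `"*/5 * * * *"` matches every fifth minute, and `"0 12 * * *"` matches once a day at noon. A scheduler like Hatchet's evaluates something equivalent to this on each tick and dispatches the tasks whose expressions match.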

Under the hood, it uses a Postgres-backed queue with a scheduler that distributes work. You don't need a separate message broker like Redis or RabbitMQ – just a database. That alone makes it lighter than a lot of alternatives.

Why It’s Cool

  • Dead simple setup. Install the server, connect a database, and you're up. The client libraries (Go, Python, TypeScript) are minimal and don't require a PhD in distributed systems.
  • Built-in retries and failure handling. If a task fails, Hatchet can retry it automatically, or you can set custom failure handlers. No extra code needed.
  • Real-time status. You can query task status, logs, and history without setting up a separate monitoring stack.
  • No middleware lock-in. It's not tied to a specific cloud or message broker. Run it on a single server or spread it across VMs – your choice.
  • Workflows, not just tasks. You can chain tasks together using a simple DAG (directed acyclic graph) syntax, making it useful for data pipelines or multi-step jobs.
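To give a feel for the DAG idea in that last bullet: chained steps resolve in dependency order, which Python's standard library can demonstrate directly. The step names and the dict-of-dependencies shape below are made up for this sketch and are not Hatchet's workflow API:

```python
from graphlib import TopologicalSorter

# A four-step pipeline: each step lists the steps it depends on.
pipeline = {
    "extract": set(),            # no dependencies
    "transform": {"extract"},    # runs after extract
    "load": {"transform"},       # runs after transform
    "notify": {"load"},          # runs last
}

def run_order(dag):
    # A valid execution order: every step appears after its dependencies.
    return list(TopologicalSorter(dag).static_order())
```

For this linear chain, `run_order(pipeline)` yields `["extract", "transform", "load", "notify"]`; with branching dependencies, independent steps could also be dispatched to workers in parallel, which is exactly what makes DAG-style workflows a good fit for data pipelines.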

How to Try It

  1. Clone the repo

    git clone https://github.com/hatchet-dev/hatchet.git
    cd hatchet
    
  2. Run the server (requires Docker and Go)

    docker compose up -d
    go run cmd/hatchet-server/main.go
    
  3. Install the SDK (Python example)

    pip install hatchet-sdk
    
  4. Define a task

    from hatchet_sdk import Hatchet
    
    hatchet = Hatchet()
    
    @hatchet.task()
    def my_task():
        print("Hello from the cluster!")
    
    # Schedule it every minute
    my_task.schedule("* * * * *")
    
  5. Run a worker

    python worker.py
    

That's it. Your task will start running across any connected workers.

Final Thoughts

Hatchet is one of those tools that solves a real pain point (scheduling distributed work) without adding a ton of complexity. It's not trying to replace everything – for massive scale you'd still look at something like Temporal or Apache Airflow – but for most teams, this is more than enough. I could see it being great for ETL jobs, periodic health checks, or even automating deployment triggers.

If you've been putting off setting up a proper task scheduler because it felt like too much headache, this is worth a shot. It's refreshingly pragmatic.


Follow us at @githubprojects for more cool open-source finds.

Project ID: ccc34301-f7d6-4667-a4ae-dcf9ce2dabb5
Last updated: April 23, 2026 at 11:39 AM