A self-hostable, self-improving agent runtime. Chat from your terminal, connect to Telegram / Discord / Slack / LINE / WhatsApp, or call over HTTP; all from a single 10 MB binary.
Capabilities
Saves your preferences and corrections to persistent memory. Gets smarter every session, so you never have to repeat yourself.
Detects Thai, Chinese, Japanese, Arabic, Korean and more automatically. No configuration needed.
Save multi-step workflows as Markdown skills. Hot-reloaded on every call: edit a file and the next message picks it up.
Swap between Anthropic, OpenRouter, Bedrock, Ollama, vLLM, or any OpenAI-compatible endpoint with one env var.
HTTP gateway with blocking, streaming, and WebSocket endpoints. Prometheus metrics included. Deploy anywhere Docker runs.
Every piece is an independent crate. Add a tool, platform, or transport in under 100 lines without touching anything else.
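Since skills are plain Markdown files that hot-reload on every call, adding one is just writing a file. The directory name and file contents below are illustrative assumptions, not the documented skill format; check the skills docs for the exact layout your build expects.

```shell
# Hypothetical sketch: the skills directory and file layout are assumptions.
mkdir -p skills
cat > skills/release.md <<'EOF'
# release
Steps to cut a release:
1. Run the test suite.
2. Bump the version in Cargo.toml.
3. Tag and push.
EOF
# No restart needed: the next message picks up the new or edited skill.
```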
Get started
Platform adapters
Every adapter runs in the same process. Set the tokens and start the server; that's it.
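In practice, wiring up an adapter is a couple of environment variables. The variable names and the serve command below are illustrative assumptions; consult each adapter's docs for the exact names your build reads.

```shell
# Hypothetical sketch: variable names are assumptions, values are dummies.
export TELEGRAM_BOT_TOKEN="123456:example"
export DISCORD_BOT_TOKEN="example-token"
export SLACK_BOT_TOKEN="xoxb-example"
# All configured adapters come up together in one process
# (binary name is illustrative):
# ./agent serve
```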
Security
Every layer is independently hardened, so a compromise at one level can't cascade to the next.
Commands run inside isolated containers with --cap-drop ALL, pid limits, and ephemeral /tmp.
Recursive root deletion, mkfs, fork bombs, block-device writes, and system shutdown are unconditionally refused.
Retrieved memories are tagged <untrusted_memory> so injected jailbreaks can't masquerade as trusted instructions.
API keys are scrubbed from tool output before reaching the model. Output is capped at 50 KB to prevent context flooding.
Writes to ~/.ssh, ~/.aws, shell dotfiles, and system files are always blocked.
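For intuition, the container hardening described above maps onto standard Docker flags. This is an illustrative equivalent, not the runtime's actual invocation; the PID limit and tmpfs size are assumed values.

```shell
# Illustrative only: a `docker run` with hardening comparable to what the
# runtime applies to command execution (its real invocation may differ).
docker run --rm \
  --cap-drop ALL \
  --pids-limit 128 \
  --tmpfs /tmp:rw,size=64m \
  --network none \
  alpine:3 sh -c 'echo sandboxed'
```

`--cap-drop ALL` removes every Linux capability, `--pids-limit` bounds process creation (which is what defuses fork bombs), and the tmpfs mount makes `/tmp` ephemeral: it is discarded when the container exits.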
LLM Providers
Switch providers with a single environment variable. No code changes required.
ANTHROPIC_API_KEY: Anthropic Messages API, direct
OPENROUTER_API_KEY: 200+ models by default
AWS_ACCESS_KEY_ID: Bedrock, SigV4 auth, Converse API
OLLAMA_BASE_URL: local, no key required
VLLM_BASE_URL: OpenAI-compatible server
GARUDUST_BASE_URL: generic transport
One binary. No cloud lock-in. Runs on your laptop or your server.