Home Page | Documentation | Discord
Private. Open source. One-click install. Developer-friendly API.
Install in one line:

```shell
curl -fsSL https://raw.githubusercontent.com/AtomicBot-ai/atomic-agent/main/scripts/install.sh | sh
```

A local operator agent for llama.cpp — same product class as OpenClaw Operator and Hermes Agent, but shipped as a standalone SEA binary and tuned to squeeze the most out of small local models. Your data, traces, browser profile, memory, and model traffic stay on your machine by default.
- 🧊 Built for local inference — stable cache-hot prompt prefix, externalized state in SQLite, GBNF grammar-constrained tool calls, parallel tool batches
- 🌐 Operates a real desktop — system browser via compact ARIA snapshots, shell, filesystem, documents (PDF/DOCX/XLSX/etc.), git, clipboard, HTTP, notifications
- 🧰 Skills, memory, tasks — local Markdown playbooks loaded on demand, FTS5 note recall, durable cron and webhook-triggered work
- 🔌 Multiple surfaces — TUI, CLI, OpenAI-compatible HTTP server, and a Tauri sidecar speaking newline-delimited JSON for desktop integrations
- 🛡️ Approvals & traces — dangerous actions gated by policy, append-only NDJSON traces with prompt-drift replay
- 💸 No SaaS meter — once you have local hardware and model files, no per-prompt, per-token, or per-seat fees
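Because traces are append-only NDJSON (one JSON object per line), they are easy to replay or post-process with ordinary tooling. A minimal TypeScript sketch, where the `ts`, `type`, and `tool` field names are illustrative assumptions, not the actual trace schema:

```typescript
// Parse an append-only NDJSON trace: one JSON object per line.
// Field names below are illustrative, not the real trace schema.
interface TraceEvent {
  ts: string;
  type: string;
  tool?: string;
}

function parseTrace(ndjson: string): TraceEvent[] {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as TraceEvent);
}

const trace = [
  '{"ts":"2025-01-01T00:00:00Z","type":"tool_call","tool":"shell"}',
  '{"ts":"2025-01-01T00:00:01Z","type":"tool_result","tool":"shell"}',
].join("\n");

const events = parseTrace(trace);
const toolCalls = events.filter((e) => e.type === "tool_call");
```

Appending a line per event and never rewriting earlier lines is what makes the log safe for concurrent readers and for replay.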
The easiest way to run OpenClaw. A native desktop app that turns the open-source AI agent framework (330k+ stars) into a personal AI assistant — no terminal, no config files, no Docker. Download, open, pick your AI provider, and start working.
Atomic Bot connects the best AI models to your everyday tools and actually takes action: drafts emails, schedules meetings, summarizes docs, automates browser tasks, and runs 13,000+ skills from ClawHub.
- 🔒 Private — runs locally, your data stays on your device
- 🆓 Free — use your own LLM API keys and pay nothing
- 🤖 Multi-model — Claude, GPT, Gemini, and more — switch on the fly
- 💬 Multi-messenger — one AI across Telegram, Slack, Discord, WhatsApp
- 🎙️ Voice — built-in Whisper transcription, local or cloud
- 🧠 Memory — persistent context across sessions and tasks
- 🔄 Auto-updates — always on the latest OpenClaw release
The agent that grows with you. A native AI assistant — not a browser tab, not a CLI wrapper. An autonomous agent with hands, eyes, memory, and a real workspace. Powered by the Hermes Agent core by Nous Research and tuned for self-improvement, time-travel file history, and fully offline operation.
- 👁️ Computer Use that actually lands the click — every screenshot is paired with native OCR (Apple Vision / Windows.Media.Ocr) so the agent gets pixel-accurate coordinates instead of guessing
- 🕰️ Time-travel file history — every file the agent touches is silently snapshotted before and after; diff or restore any version with one click, no git required
- 🧠 Self-improving skills & memory — the agent writes its own procedures after complex tasks and decides what's worth remembering across sessions
- ☁️ Local or cloud, one click — bundled inference engine downloads a model that fits your hardware, or connect 20+ cloud providers (OpenRouter, Anthropic, OpenAI, Gemini, DeepSeek, and more)
- 💬 One agent, 16+ messengers — Telegram, Discord, Slack, WhatsApp, Signal, iMessage, Email, Matrix, Teams, and others — all sharing the same memory
- 🛠️ 40+ tools, MCP-native — file ops, web search, code execution, subagents, cron scheduling, browser automation, Skills Hub from agentskills.io
- 🛡️ Approval gates — native modals confirm dangerous shell commands and writes before they run
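The time-travel idea reduces to copying a file aside before and after each agent edit. A minimal sketch, assuming nothing about Atomic Hermes's real on-disk layout (the snapshot directory and naming scheme here are made up):

```typescript
import { copyFileSync, mkdirSync, mkdtempSync, readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Illustrative only: copy the file into a snapshot directory before the
// agent mutates it, so any prior version can be diffed or restored.
function snapshot(filePath: string, snapshotDir: string): string {
  mkdirSync(snapshotDir, { recursive: true });
  const dest = join(snapshotDir, `${Date.now()}-${Math.random().toString(36).slice(2)}`);
  copyFileSync(filePath, dest);
  return dest;
}

// Demo in a temp dir: write v1, snapshot, overwrite with v2, read back v1.
const dir = mkdtempSync(join(tmpdir(), "snap-"));
const file = join(dir, "notes.txt");
writeFileSync(file, "v1");
const saved = snapshot(file, join(dir, ".snapshots"));
writeFileSync(file, "v2"); // the agent's edit
const restored = readFileSync(saved, "utf8"); // time-travel read of v1
```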
Open-source ChatGPT alternative. Run local LLMs or connect cloud models — with full control and privacy. Download and run models from HuggingFace, or connect to OpenAI, Anthropic, Mistral, Groq, and others. Available on macOS, Windows, and iOS.
- 🧠 Local AI Models — download and run LLMs (Llama, Gemma, Qwen, and more) from HuggingFace
- ☁️ Cloud Integration — connect to OpenAI, Anthropic, Mistral, Groq, MiniMax, and others
- 🤖 Custom Assistants — create specialized AI assistants for your tasks
- 🔌 OpenAI-Compatible API — local server at `localhost:1337` for other applications
- 🔗 Model Context Protocol — MCP integration for agentic capabilities
- 📱 Cross-platform — desktop apps for macOS and Windows, plus a native iOS app
- 🔒 Privacy First — everything runs locally when you want it to
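Since the local server speaks the standard OpenAI chat-completions shape, any OpenAI-compatible client works by pointing its base URL at `localhost:1337`. A sketch of the request payload (the model name below is a placeholder, not a bundled model):

```typescript
// Build a standard OpenAI-style chat-completions request targeting the
// local server. The model name is a placeholder for whatever you loaded.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    url: "http://localhost:1337/v1/chat/completions",
    body: { model, messages, stream: false },
  };
}

const req = buildChatRequest("llama-3.2-3b-instruct", [
  { role: "user", content: "Summarize this file in one sentence." },
]);
// To send it:
// fetch(req.url, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(req.body),
// });
```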
Framework-agnostic desktop automation for AI agents. Give your agent eyes and hands — screenshot the screen, click, type, scroll, drag, read text via OCR, and more. Works with any tool-calling LLM framework, MCP server, or custom pipeline.
The library that powers computer use in Atomic Hermes — also published standalone for anyone to use:
- 📦 `@atomicbotai/computer-use` — the core TypeScript library: OCR, actions, overlay, session lock
- 🔌 `@atomicbotai/computer-use-mcp` — MCP server; drop it into Claude Desktop, Cursor, Windsurf, or any MCP client
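MCP clients such as Claude Desktop register servers in a JSON config file. A typical entry might look like this — note that the `npx` invocation is an assumption; check the package README for the exact command:

```json
{
  "mcpServers": {
    "computer-use": {
      "command": "npx",
      "args": ["-y", "@atomicbotai/computer-use-mcp"]
    }
  }
}
```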
Key features:
- 🔍 Zero-dependency OCR — native engines (Apple Vision on macOS, Windows.Media.Ocr on Windows) extract text with pixel-precise coordinates and confidence scores. No Tesseract, no cloud, no API keys
- 📸 Screenshot + anchors — turns a blurry downscaled screenshot into a structured map of the UI: `"Send" at (1450, 890)` instead of guessing where the button is
- 🖱️ Full action set — click / double / triple, type, press, scroll, drag, hold key, clipboard, open/switch app, list displays
- 🟢 Native overlay — Swift on macOS, PowerShell on Windows — visual indicator whenever the agent is driving the mouse and keyboard
- 🔒 Session lock — file-based lock prevents two agents from fighting over the desktop
- 🛡️ Guardrails — prevents misclicks in dock/launcher and submit zones
- 🐛 Debug artifacts — save screenshots, OCR results, and tool outputs per action for inspection
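The anchor idea above can be sketched in a few lines: OCR returns words with pixel bounding boxes, and the agent clicks the center of a matched word rather than guessing coordinates from a downscaled screenshot. This is illustrative only; the real `@atomicbotai/computer-use` API is not shown here:

```typescript
// Illustrative sketch, not the library's actual types: an OCR word with a
// pixel bounding box and a confidence score, and a helper that returns the
// center of the first confident match for a label.
interface OcrWord {
  text: string;
  x: number; // left edge, pixels
  y: number; // top edge, pixels
  width: number;
  height: number;
  confidence: number; // 0..1
}

function clickTarget(words: OcrWord[], label: string): { x: number; y: number } | null {
  const hit = words.find((w) => w.text === label && w.confidence > 0.5);
  if (!hit) return null;
  return { x: hit.x + hit.width / 2, y: hit.y + hit.height / 2 };
}

const words: OcrWord[] = [
  { text: "Cancel", x: 1300, y: 880, width: 80, height: 20, confidence: 0.97 },
  { text: "Send",   x: 1410, y: 880, width: 80, height: 20, confidence: 0.98 },
];
const target = clickTarget(words, "Send");
```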
The official ClawHub API is limited — many endpoints are missing, and getting the full dataset means querying the Convex database directly.
ClawHub Layer API solves this by providing a complete, standalone REST API that aggregates everything into clean, ready-to-use endpoints:
- 🗂️ Full data coverage — skills, metadata, and everything the official API doesn't expose
- 🔄 Auto-syncing — periodically pulls and caches the latest data from Convex, always up to date
- 🛠️ Standard REST — clean endpoints, no Convex knowledge required
- ⚡ Fast & self-contained — no direct dependency on upstream infrastructure at query time
- 🔗 Integration-ready — designed to plug into your apps, bots, and workflows
Building on top of the OpenClaw / ClawHub ecosystem? This API is the simplest way to get the data you need — without going through Convex yourself.
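Integration amounts to plain HTTP against the layer's endpoints. A tiny client sketch where the base URL, route, and query parameters are all hypothetical placeholders, shown only to illustrate the "plain REST, no Convex client" style:

```typescript
// Hypothetical endpoint shapes: the actual ClawHub Layer API routes and
// parameters may differ. Replace BASE with the real base URL.
const BASE = "https://api.example.com";

function skillSearchUrl(query: string, limit = 20): string {
  const params = new URLSearchParams({ q: query, limit: String(limit) });
  return `${BASE}/skills?${params}`;
}

const url = skillSearchUrl("calendar", 5);
// fetch(url).then((r) => r.json()) would return plain JSON,
// with no Convex client or schema knowledge required.
```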
We're building in the open and would love your help!
© 2026 Atomic · Built with ❤️ · atomicbot.ai



