For local development, install the package in editable mode:
python3 -m pip install -e .
Then run:
teaagent --help
Use a JSON config file for common optional defaults:
teaagent --config .teaagent/config.json model smoke gpt
If --config is omitted and .teaagent/config.json exists, it is loaded automatically.
Supported keys include root, model, provider, and permission_mode. Positional arguments such as agent run <provider> remain explicit.
Profiles can override top-level defaults:
{
"model": "gpt-4o-mini",
"profiles": {
"ci": {"model": "gpt-4o-mini", "permission_mode": "read-only"}
}
}
teaagent --profile ci model smoke gpt
Print shell completion snippets:
teaagent completion bash
teaagent completion zsh
teaagent completion fish
Inspect and prune audit logs:
teaagent audit list --limit 20
teaagent audit show <run_id>
teaagent audit prune --days 30 --keep 20
teaagent audit prune --all
audit prune requires an explicit deletion selector: --days, --keep, or --all.
You can also run without installing the console script:
python3 -m teaagent.cli --help
Run all environment checks:
teaagent doctor all --provider gpt
Check the GraphQLite runtime:
teaagent doctor graphqlite
Run a smoke query:
teaagent graphqlite smoke
Run a Cypher query:
teaagent graphqlite query "MATCH (n:SmokeTest) RETURN n.name"
Use a persistent SQLite file:
teaagent graphqlite smoke --database ./graph.db
teaagent graphqlite query "MATCH (n) RETURN n" --database ./graph.db
Start the interactive terminal UI:
teaagent tui
Or without installing the console script:
python3 -m teaagent.cli tui
Inside the TUI:
help
doctor
clarify Improve the CLI
provider gpt
model gpt-4o-mini
route-model on
route review this patch
root /path/to/repo
destructive off
progress on
permission prompt
approve write-file-1
approvals
ask Inspect this repo and summarize the test suite
ask --clarify Update docs/cli.md to document clarify and verify tests pass
memory add Prefer read-only mode for audit tasks
memory search audit tasks
smoke
query MATCH (n:SmokeTest) RETURN n.name
use ./graph.db
exit
Start with a persistent database:
teaagent tui --database ./graph.db
Start with model and workspace defaults:
teaagent tui --provider claude --model claude-3-5-sonnet-latest --root /path/to/repo
Allow destructive tools inside ask commands:
teaagent tui --allow-destructive
Use a permission mode for ask commands:
teaagent tui --permission-mode read-only
teaagent tui --permission-mode workspace-write
teaagent tui --permission-mode prompt
teaagent tui --permission-mode allow
teaagent tui --permission-mode danger-full-access
List supported providers:
teaagent model providers
Check provider configuration:
teaagent doctor model claude
teaagent doctor model gpt
teaagent doctor model gemini
teaagent doctor model openrouter
teaagent doctor model opencodezen-go
Run a smoke prompt:
teaagent model smoke gpt --prompt "Reply with exactly: ok"
teaagent model smoke claude --prompt "Reply with exactly: ok"
teaagent model smoke gemini --prompt "Reply with exactly: ok"
Preview deterministic task routing for a provider:
teaagent model route "review this patch for regressions" --provider gpt
teaagent model route "update docs/cli.md" --provider claude
Routing classifies tasks into review, test, code, docs, search, or general, then chooses a provider-specific model. Explicit --model still wins.
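Deterministic routing of this kind can be sketched as a first-match keyword classifier. The category names come from the docs above; the keyword table and matching order are assumptions for illustration only:

```python
# Hypothetical keyword router; first matching category wins, else "general".
KEYWORDS = {
    "review": ("review", "regression"),
    "test": ("test", "pytest"),
    "docs": ("docs", "readme", ".md"),
    "search": ("search", "find"),
    "code": ("implement", "fix", "refactor"),
}

def classify(task: str) -> str:
    lowered = task.lower()
    for category, words in KEYWORDS.items():
        if any(w in lowered for w in words):
            return category
    return "general"
```

A provider-specific model table keyed by category would then complete the route, with an explicit --model overriding the result.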
Environment variables:
export ANTHROPIC_API_KEY=...
export OPENAI_API_KEY=...
export GEMINI_API_KEY=...
export OPENROUTER_API_KEY=...
export OPENCODEZEN_API_KEY=...
Optional base URL overrides:
export ANTHROPIC_BASE_URL=https://api.anthropic.com/v1
export OPENAI_BASE_URL=https://api.openai.com/v1
export GEMINI_BASE_URL=https://generativelanguage.googleapis.com/v1beta
export OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
export OPENCODEZEN_BASE_URL=https://opencode.ai/zen/go/v1
List the repo-operation tool metadata that will be exposed to the agent runner:
teaagent workspace tools
Use another workspace root:
teaagent workspace tools --root /path/to/repo
Registered tools:
- workspace_read_file: read UTF-8 files under the root.
- workspace_read_file_hashed: read UTF-8 files with LINE#HASH|content anchors.
- workspace_write_file: write UTF-8 files under the root; destructive.
- workspace_apply_patch: replace one exact text span; destructive.
- workspace_edit_at_hash: edit one line only if its hash anchor still matches; destructive.
- workspace_list_files: list files by glob.
- workspace_search_text: regex search text files.
- workspace_git_status: run git status --short.
- workspace_run_shell_inspect: run inspect-safe shell commands without destructive permission.
- workspace_run_shell_mutate: run arbitrary shell commands; destructive.
- workspace_run_shell: compatibility alias for workspace_run_shell_mutate; destructive.
All path-based tools reject paths that escape the configured workspace root.
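A root-confinement check like this is commonly implemented by resolving the candidate path and verifying it stays under the resolved root. A minimal sketch (not TeaAgent's actual check; real implementations also need to consider symlinks created between check and use):

```python
from pathlib import Path

def resolve_under_root(root: str, candidate: str) -> Path:
    """Resolve `candidate` relative to `root`, rejecting escapes like ../ tricks."""
    root_path = Path(root).resolve()
    target = (root_path / candidate).resolve()
    # resolve() normalizes ".." and follows symlinks before the containment test.
    if root_path != target and root_path not in target.parents:
        raise PermissionError(f"path escapes workspace root: {candidate}")
    return target
```

For example, `resolve_under_root("/tmp", "../etc/passwd")` raises PermissionError, while paths under the root resolve normally.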
Score a task for ambiguity before invoking a model:
teaagent clarify "Improve this project"
The result includes an ambiguity score, missing fields, and at most one next question.
Use the same gate before an autonomous run:
teaagent agent run gpt "Improve this project" --clarify
If key details are missing, TeaAgent returns status: needs_clarification and does not call the model. If the task is concrete enough, TeaAgent injects a structured task specification into the agent prompt.
Inside TUI:
clarify Improve this project
ask --clarify Update docs/cli.md to document clarify and verify tests pass
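The clarify gate described above can be sketched as a checklist over required task fields. The checks and field names here are toy assumptions; only the output shape (score, missing fields, at most one question) follows the docs:

```python
# Toy ambiguity gate -- the real scoring is TeaAgent-internal.
CHECKS = {
    "target": lambda t: any(c in t for c in ("/", ".md", ".py")),
    "action": lambda t: any(v in t.lower() for v in ("update", "fix", "add", "review", "document")),
}

def clarify(task: str) -> dict:
    missing = [field for field, ok in CHECKS.items() if not ok(task)]
    result = {"ambiguity_score": len(missing) / len(CHECKS), "missing_fields": missing}
    if missing:
        # At most one next question, for the first missing field.
        result["next_question"] = f"What is the {missing[0]} for this task?"
    return result
```

"Improve this project" scores 1.0 (no target, no recognized action), while "Update docs/cli.md" passes both checks and would proceed to the model.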
Store reusable workspace observations under .teaagent/memory.jsonl:
teaagent memory add "Prefer read-only mode for audit tasks" --tag policy
teaagent memory list
teaagent memory search "audit tasks"
teaagent memory show <memory_id>
Use another workspace root:
teaagent memory add "GraphQLite requires pysqlite3 on macOS" --tag graphqlite --root /path/to/repo
Agent runs search the catalog with the task text and inject matching memories into the model prompt.
Inside TUI:
memory add Prefer read-only mode for audit tasks
memory list
memory search audit tasks
memory show <memory_id>
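Searching a JSONL memory catalog with task text can be as simple as token overlap. A sketch under the assumption that each line carries a "text" field; the real matching may be smarter:

```python
import json

def search_memories(jsonl_text: str, query: str) -> list[dict]:
    """Naive token-overlap search over memory.jsonl-style content (illustrative)."""
    terms = set(query.lower().split())
    hits = []
    for line in jsonl_text.splitlines():
        if line.strip():
            entry = json.loads(line)
            # Keep any entry sharing at least one token with the query.
            if terms & set(entry["text"].lower().split()):
                hits.append(entry)
    return hits
```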
Run an agent task as a detached background worker that survives the parent shell:
teaagent ultrawork start gpt "Long-running task" --root /path/to/repo --heartbeat 5 --label nightly
The store under .teaagent/ultrawork/ keeps a JSON record per worker plus a per-worker log file. Inspect or stop workers:
teaagent ultrawork list --root /path/to/repo
teaagent ultrawork show <worker_id> --root /path/to/repo
teaagent ultrawork stop <worker_id> --root /path/to/repo
list reports alive based on a PID liveness check; stop sends SIGTERM (then SIGKILL if it does not exit within the timeout).
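The classic PID liveness check mentioned above sends signal 0, which probes a process without delivering anything. A minimal sketch of that technique (not TeaAgent source):

```python
import os

def is_alive(pid: int) -> bool:
    """Probe a PID with signal 0: no signal is delivered, only existence is checked."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists but belongs to another user
    return True
```

Note this can race with PID reuse, which is why the worker store keeps its own JSON record per worker as well.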
Emit a periodic heartbeat audit event while a run is in progress so observers can confirm liveness:
teaagent agent run gpt "Long-running task" --heartbeat 5
Inspect liveness for a persisted run id:
teaagent agent status <run_id> --root /path/to/repo
The status payload reports status (running / completed / failed:*) and the most recent heartbeat tick and timestamp.
Inside TUI:
heartbeat 5
ask Long-running task
status <run_id>
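An observer consuming the status payload needs a staleness policy of its own, since TeaAgent only reports the newest tick and timestamp. A sketch with an assumed grace factor of two heartbeat intervals:

```python
def heartbeat_is_stale(last_ts: float, interval: float, now: float, grace: float = 2.0) -> bool:
    """Treat a run as stale once the latest heartbeat is older than `grace` intervals.

    The grace factor is an observer-side assumption, not a TeaAgent setting.
    """
    return (now - last_ts) > grace * interval
```

With --heartbeat 5, a heartbeat seen 4 seconds ago is fine, while one 20 seconds old would be flagged.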
Serve the workspace tool pack to other MCP clients over stdio JSON-RPC:
teaagent mcp serve --root /path/to/repo
Or over Streamable HTTP transport on loopback:
teaagent mcp serve --http --root /path/to/repo --port 7330
Streamable HTTP details:
- POST /mcp: send one JSON-RPC request or a batch. Server responds with application/json.
- GET /mcp: open a text/event-stream keep-alive (no server-initiated notifications yet).
- DELETE /mcp: terminate the session.
- initialize returns a fresh Mcp-Session-Id response header. Every later request must echo it.
- Default bind is 127.0.0.1 only. Use --host 0.0.0.0 deliberately and pair it with auth.
- --auth-token TOKEN requires Authorization: Bearer TOKEN on every request.
- --allowed-origin URL may be repeated to whitelist browser Origin headers. Default: allow all.
Supported methods: initialize, tools/list, tools/call. Each tool is exposed with its inputSchema and read-only / destructive / idempotent annotations. Tool errors are returned as result.isError = true rather than JSON-RPC errors so the client can recover.
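Putting the transport details together, a client request can be built as plain JSON-RPC plus the two headers. This sketch only constructs the message; the session id and token values are placeholders you would obtain from initialize and your own --auth-token setting:

```python
import json

def build_tools_call(session_id: str, token: str, name: str, arguments: dict, req_id: int = 1):
    """Assemble headers and body for a Streamable HTTP tools/call POST to /mcp."""
    headers = {
        "Content-Type": "application/json",
        "Mcp-Session-Id": session_id,        # echoed from the initialize response
        "Authorization": f"Bearer {token}",  # required when --auth-token is set
    }
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })
    return headers, body
```

Any HTTP client can then POST the body to http://127.0.0.1:7330/mcp with those headers; a tool failure comes back inside result with isError = true rather than as a JSON-RPC error.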
Expose a subagent tool so the model can delegate one focused sub-task to a fresh agent run that shares the same workspace tools, ApprovalPolicy, RunBudget, and permission mode:
teaagent agent run gpt "Plan and execute the cleanup" --subagent --max-subagent-depth 1
Each sub-run is persisted under .teaagent/runs/*.jsonl with its own run_id so it can be inspected or resumed.
Inside TUI:
subagent on
ask Plan and execute the cleanup
Summarize clarification, routing, matching memories, permission state, and tool count without calling a model:
teaagent agent preflight gpt "review this patch for regressions in the test suite" --route-model
Exit code is 0 when the task is concrete enough and 2 when it still needs clarification. Pair with --permission-mode workspace-write or --memory-limit 10 as needed.
Inside TUI:
route-model on
preflight review this patch for regressions in the test suite
Run one model-driven task with the workspace tool pack:
teaagent agent run gpt "Inspect this repo and summarize the test suite"
Use another provider:
teaagent agent run claude "List the Python files"
teaagent agent run gemini "Search for GraphQLite usage"
teaagent agent run openrouter "Explain pyproject.toml"
teaagent agent run opencodezen-go "Inspect workspace tools"
Use a specific workspace root:
teaagent agent run gpt "Read AGENTS.md" --root /path/to/repo
Enable task-based model routing for one run:
teaagent agent run gpt "review this patch for regressions" --route-model
Inside TUI, use route-model on to apply routing to later ask commands. Use route <task> to preview the selected category and model.
By default, destructive tools are blocked. To allow file writes, patching, or shell commands:
teaagent agent run gpt "Create a TODO.md summary" --allow-destructive
Approve one exact destructive tool call id while staying in prompt mode:
teaagent agent run gpt "Create a TODO.md summary" --approve-call-id write-todo-1
The model decision must use the approved call_id for that exact destructive tool call. Other destructive calls remain blocked.
For interactive HITL approval during a CLI run, use:
teaagent agent run gpt "Create a TODO.md summary" --hitl-approval
Without --hitl-approval, an unapproved destructive tool in prompt mode returns pending_approval with the required call_id. Re-run with --approve-call-id <call_id> or use agent resume with the same approval token.
Prefer explicit permission modes for regular use:
teaagent agent run gpt "Inspect this repo" --permission-mode read-only
teaagent agent run gpt "Update one markdown file" --permission-mode workspace-write
teaagent agent run gpt "Run tests and patch failures" --permission-mode prompt
teaagent agent run gpt "Run approved automation" --permission-mode allow
Permission modes:
- read-only: blocks every destructive tool.
- workspace-write: allows file write/patch/hash-edit tools, blocks shell mutation.
- prompt: destructive tools pause for HITL approval or require an approval token.
- allow: allows destructive tools for the session.
- danger-full-access: allows destructive tools; reserve for trusted automation.
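The mode table above can be sketched as a small gating function. The tool names and mode names come from the docs; the function itself is an illustration, not TeaAgent source:

```python
# File write/patch/hash-edit tools allowed under workspace-write (per the docs).
WRITE_TOOLS = {"workspace_write_file", "workspace_apply_patch", "workspace_edit_at_hash"}

def gate(mode: str, tool: str, destructive: bool, approved: bool = False) -> str:
    """Return "allow", "block", or "pending_approval" for one proposed tool call."""
    if not destructive:
        return "allow"
    if mode == "read-only":
        return "block"
    if mode == "workspace-write":
        return "allow" if tool in WRITE_TOOLS else "block"
    if mode == "prompt":
        return "allow" if approved else "pending_approval"
    return "allow"  # allow / danger-full-access
```

For example, shell mutation stays blocked under workspace-write even though file patching is allowed, and prompt mode parks unapproved calls as pending_approval.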
The model must return JSON decisions internally:
{"type":"tool","tool_name":"workspace_read_file","arguments":{"path":"AGENTS.md"},"call_id":"read-agents"}
{"type":"final","content":"Done"}
Agent runs are persisted under .teaagent/runs/*.jsonl in the selected workspace root.
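A consumer of those decision lines can validate the two shapes shown above with a small parser. This is a sketch of the documented format, not TeaAgent's actual parser:

```python
import json

def parse_decision(raw: str) -> dict:
    """Parse one decision line, enforcing the tool/final shapes documented above."""
    decision = json.loads(raw)
    if decision["type"] == "tool":
        # Tool decisions must carry the fields the approval flow keys off.
        for key in ("tool_name", "arguments", "call_id"):
            if key not in decision:
                raise ValueError(f"tool decision missing {key}")
    elif decision["type"] != "final":
        raise ValueError(f"unknown decision type: {decision['type']}")
    return decision
```

The call_id field is what --approve-call-id and the approval token flow match against.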
List recent runs:
teaagent agent runs --root /path/to/repo
Show one run record:
teaagent agent show <run_id> --root /path/to/repo
Resume the original task from a persisted run id with optional new approval tokens or settings:
teaagent agent resume gpt <run_id> --root /path/to/repo --approve-call-id write-1
By default, resume replays already-completed tool_call_completed observations into the new run's context so the model does not have to redo prior tool calls. If the original run paused with pending_approval, the pending call_id is auto-added to the approval list and reported back as auto_approved_call_id in the response payload.
Pass --fresh-restart to skip replay and re-run the original task from scratch.
Inside TUI:
ask write TODO.md # prompts y/N when a destructive call is proposed
approve write-todo-1
approvals
unapprove write-todo-1
runs
show <run_id>
resume <run_id>