Deploy an autonomous engineering team on your infrastructure—ship production code while slashing cloud & staffing costs
- 13 specialized AI agents covering backend, frontend, QA, security, and DevOps—working 24/7
- Deploy on your own infrastructure: bare-metal servers, on-prem, or any cloud—no vendor lock-in
- Cut cloud bills with bare-metal deployment and reduce engineering headcount for routine tasks
| Traditional Approach | With CTO |
|---|---|
| $150k-250k/yr per engineer × 5-10 | ~$500-2k/mo model usage (or self-host for near-zero) |
| $5k-50k/mo managed cloud services | 60-80% savings on bare-metal |
| 24/7 on-call rotation costs | Automated self-healing |
| Weeks to onboard new team members | Instant agent deployment |
Local Model Support: Run Ollama, vLLM, or other local inference—bring your own GPUs and pay only for electricity.
- Your API keys — Anthropic, OpenAI, Google, etc. stored securely in your infrastructure
- Your cloud credentials — AWS, GCP, Azure keys never leave your cluster
- Secret management with OpenBao — Open-source HashiCorp Vault fork for enterprise-grade secrets
- Zero vendor lock-in — Switch providers anytime, no data hostage situations
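As a rough illustration of what "keys never leave your cluster" means in practice, provider credentials can live as ordinary Kubernetes Secrets (or be synced into OpenBao). The secret name and keys below are hypothetical placeholders, not the names the installer creates:

```yaml
# Hypothetical example only - the setup script creates its own secret names
apiVersion: v1
kind: Secret
metadata:
  name: agent-api-keys              # placeholder name
  namespace: cto
type: Opaque
stringData:
  ANTHROPIC_API_KEY: "sk-ant-..."   # placeholder value
  OPENAI_API_KEY: "sk-..."          # placeholder value
```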
| Feature | Technology | What It Does |
|---|---|---|
| Cloudflare Tunnels | cloudflared | Expose services publicly without opening firewall ports — no public IPs needed, automatic TLS, global edge CDN |
| Kilo VPN | WireGuard | Secure mesh VPN for remote cluster access — connect from anywhere with encrypted tunnels |
| OpenBao | Vault fork | Centralized secrets management with dynamic credentials and audit logging |
Cloudflare Tunnels is a game-changer: your entire platform can run on air-gapped infrastructure while still being accessible from anywhere. No ingress controllers, no load balancers, no exposed ports—just secure outbound tunnels through Cloudflare's network.
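For context, a standard cloudflared configuration looks roughly like the sketch below; the hostname and service address are placeholders, and the platform may generate this for you:

```yaml
# Minimal cloudflared config sketch - hostname and service are placeholders
tunnel: <tunnel-uuid>
credentials-file: /etc/cloudflared/creds/credentials.json
ingress:
  - hostname: app.example.com
    service: http://my-service.my-namespace.svc.cluster.local:80
  - service: http_status:404   # catch-all rule required by cloudflared
```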
Replace expensive managed cloud services with open-source Kubernetes operators:
| Operator | Replaces | Savings |
|---|---|---|
| CloudNative-PG | AWS RDS, Cloud SQL, Azure PostgreSQL | ~70-80% |
| Strimzi Kafka | AWS MSK, Confluent Cloud | ~60-70% |
| MinIO | AWS S3, GCS, Azure Blob | ~80-90% |
| Redis Operator | ElastiCache, Memorystore | ~70-80% |
| OpenSearch | AWS OpenSearch, Elastic Cloud | ~60-70% |
| ClickHouse | BigQuery, Redshift, Snowflake | ~70-80% |
| QuestDB | TimescaleDB Cloud, InfluxDB Cloud | ~70-80% |
Bolt automatically deploys, monitors, and maintains these operators—giving you managed-service reliability at self-hosted prices.
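As an example of what an operator-managed service looks like, a CloudNative-PG PostgreSQL cluster is declared with a few lines of YAML; the name, namespace, and sizes below are illustrative:

```yaml
# Illustrative CloudNative-PG cluster - names and sizes are placeholders
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db
  namespace: databases
spec:
  instances: 3          # one primary plus two replicas with automated failover
  storage:
    size: 20Gi
```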
Public launch: January 1st, 2025 🚀
The platform is in beta and being refined based on production usage.
Current Status:
✅ Core platform architecture implemented
✅ MCP server with dynamic tool registration
✅ Kubernetes controllers with self-healing
✅ GitHub Apps + Linear integration
✅ Bare-metal deployment (Latitude, Hetzner, OVH, Vultr, Scaleway, Cherry, DigitalOcean)
✅ Cloudflare Tunnels for public access without exposed interfaces
✅ Infrastructure operators (PostgreSQL, Kafka, Redis, MinIO, OpenSearch, ClickHouse, QuestDB)
🔄 Documentation and onboarding improvements
Thirteen specialized agents with distinct personalities working together 24/7—your full-stack engineering department in a box
🐕 Personality: Articulate & organized. Morgan orchestrates project lifecycles—syncing GitHub Issues with Linear roadmaps, decomposing PRDs into sprint-ready tasks, and keeping stakeholders aligned throughout.
🦀 Stack: Rust, Tokio, Axum. Rex builds high-performance APIs, real-time services, and systems-level infrastructure. When microseconds matter, Rex delivers.

🐻 Stack: Go, gRPC, PostgreSQL. Grizz builds backend services, REST/gRPC APIs, CLI tools, and Kubernetes operators. From simple CRUD to distributed systems—battle-tested reliability is his signature.

✨ Stack: Node.js, TypeScript, Fastify. Nova builds REST/GraphQL APIs, serverless functions, and third-party integrations. Speed-to-market is her specialty.
🎨 Stack: React, Next.js, shadcn/ui. Blaze creates stunning web applications with modern component libraries. From dashboards to marketing sites, she delivers polished experiences.

📱 Stack: Expo, React Native, NativeWind. Tap builds native-quality iOS and Android apps from a single TypeScript codebase. App Store ready, always.

⚡ Stack: Electron, Tauri, React. Spark crafts cross-platform desktop applications with native integrations, system tray support, and offline-first architectures.
🔍 Personality: Meticulous & wise. Cleo refactors for maintainability, enforces patterns, and ensures enterprise-grade code quality across every PR.

🛡️ Personality: Vigilant & protective. Cipher runs security audits, dependency scans, and ensures OWASP compliance across all workflows.

🕵️ Personality: Curious & thorough. Tess creates comprehensive test suites—unit, integration, and e2e—ensuring reliability before every merge.
🧵 Personality: Meticulous & tireless. Stitch provides automated code review on every pull request—catches bugs, suggests improvements, and ensures consistency across your entire codebase.

🔗 Personality: Systematic & reliable. Atlas manages PR merges, rebases stale branches, and ensures clean integration with trunk-based development.

⚡ Personality: Fast & action-oriented. Bolt handles GitOps deployments, monitors rollouts, and ensures production health with automated rollbacks.
Watch the magic happen when they work together:
- 📚 Phase 1: Intake
- ⚡ Phase 2: Implementation
- 🛡️ Phase 3: Quality
- 🔗 Phase 4: Integration
- 🚀 Phase 5: Deployment
💡 Project Flexibility:
- **🦀 Backend Projects**: Rex (Rust) • Grizz (Go) • Nova (Node.js)
- **🎨 Frontend Projects**: Blaze (Web/shadcn) • Tap (Mobile/Expo) • Spark (Desktop/Electron)
- **🚀 Full-Stack Projects**: Mix backend + frontend agents seamlessly
- **🛡️ Quality Always**: Cleo reviews • Tess tests • Cipher secures • Stitch code-reviews
Fast • Elegant • Tested • Documented • Secure
It's like having a senior development team that never sleeps, never argues, and always delivers! 🎭
The Cognitive Task Orchestrator provides a complete AI engineering platform:
Morgan processes PRDs, generates tasks, and syncs with your project management tools.
- Parses PRD and generates structured task breakdown
- Linear Integration: Two-way sync with Linear roadmaps and sprints
- GitHub Projects: Auto-creates issues and project boards
- Enriches context via Firecrawl (auto-scrapes referenced URLs)
- Creates comprehensive documentation (task.md, prompt.md, acceptance-criteria.md)
- XML Prompts: Structured prompts optimized for AI agent consumption
- Agent routing: automatically assigns frontend/backend/mobile tasks
- Works with any supported model (Claude, GPT, Gemini, local models)
The entire team orchestrates complex multi-agent workflows with event-driven coordination.
- Phase 1 - Intake: Morgan documents requirements and architecture
- Phase 2 - Implementation: Backend (Rex/Grizz/Nova) or Frontend (Blaze/Tap/Spark)
- Phase 3 - Quality: Cleo reviews, Tess tests, Cipher secures
- Phase 4 - Integration: Stitch code-reviews, Atlas merges and rebases
- Phase 5 - Deployment: Bolt deploys and distributes
- Event-Driven Coordination: Automatic handoffs between phases
- GitHub Integration: Each phase submits detailed PRs
- Auto-Resume: Continues from where you left off (task_id optional)
Control and monitor your AI development workflows:
- `jobs()` - List all running workflows with status
- `stop_job()` - Stop any running workflow gracefully
- `docs_ingest()` - Intelligently analyze and ingest documentation from GitHub repos
- `register_tool()` - Dynamically register new MCP tools at runtime
The platform includes comprehensive self-healing capabilities:
- Platform Self-Healing: Monitors CTO's own health—detects stuck workflows, pod failures, step timeouts, and auto-remediates
- Application Self-Healing: Extends healing to your deployed apps—CI failures, silent errors, stale progress alerts
- Alert Types: Comment order issues, silent failures, approval loops, post-Tess CI failures, pod failures, step timeouts, stuck CodeRuns
- Automated Remediation: Spawns healing agents to diagnose and fix issues automatically
All operations run as Kubernetes jobs with enhanced reliability through TTL-safe reconciliation, preventing infinite loops and ensuring proper resource cleanup.
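For intuition, TTL-based cleanup on a plain Kubernetes Job looks like the sketch below; the actual Jobs are created by the CodeRun/DocsRun controllers, so the name and image here are purely illustrative:

```yaml
# Illustrative Job showing TTL-based cleanup - not the controller's actual output
apiVersion: batch/v1
kind: Job
metadata:
  name: coderun-task-42          # placeholder name
  namespace: cto
spec:
  ttlSecondsAfterFinished: 300   # Kubernetes deletes the Job five minutes after it finishes
  backoffLimit: 2                # bounded retries prevent infinite loops
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: agent
          image: example.org/agent-runner:latest   # placeholder image
```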
- Access to any AI coding assistant (Claude Code, Cursor, Factory, Codex, OpenCode, etc.)
- GitHub repository for your project
This is an integrated platform with crystal-clear data flow:
CTO works with your favorite AI coding assistant:
| CLI | Description | Status |
|---|---|---|
| Claude Code | Anthropic's official CLI | ✅ Full support |
| Cursor | AI-first code editor | ✅ Full support |
| Codex | OpenAI's coding assistant | ✅ Full support |
| Factory | Code Factory CLI | ✅ Full support |
| Gemini | Google's AI assistant | ✅ Full support |
| OpenCode | Open-source alternative | ✅ Full support |
| Dexter | Lightweight AI CLI | ✅ Full support |
Dynamic MCP tool registration with 60+ pre-configured tools:
| Category | Tools |
|---|---|
| Kubernetes | Pod logs, exec, resource CRUD, events, metrics, Helm operations |
| ArgoCD | Application sync, logs, events, GitOps management |
| GitHub | PRs, issues, code scanning, secret scanning, repository management |
| Context7 | Library documentation lookup and code examples |
| OpenMemory | Persistent memory across agent sessions |
Frontend Stack: shadcn/ui components, Tailwind CSS, React patterns built-in
Component Architecture:
- MCP Server (`cto-mcp`): Handles MCP protocol calls from any CLI with dynamic tool registration
- Controller Service: Kubernetes REST API that manages CodeRun/DocsRun CRDs via Argo Workflows
- Healer Service: Self-healing daemon monitoring platform and application health
- Argo Workflows: Orchestrates agent deployment through workflow templates
- Kubernetes Controllers: Separate controllers for CodeRun and DocsRun resources with TTL-safe reconciliation
- Agent Workspaces: Isolated persistent volumes for each service with session continuity
- GitHub Apps + Linear: Secure authentication and project management integration
- Cloudflare Tunnels: Expose services publicly without opening firewall ports
Access your services from anywhere without exposing your infrastructure:
- Zero External Interface: No public IPs or open firewall ports required
- Automatic TLS: End-to-end encryption via Cloudflare
- Global Edge: Low-latency access from anywhere in the world
- Secure by Default: Traffic routes through Cloudflare's global network
Data Flow:
- Any CLI calls MCP tools (`intake()`, `play()`, etc.) via the MCP protocol
- MCP server loads configuration from `cto-config.json` and applies defaults
- MCP server submits workflow to Argo with all required parameters
- Argo Workflows creates CodeRun/DocsRun custom resources
- Dedicated Kubernetes controllers reconcile CRDs with idempotent job management
- Controllers deploy configured CLI agents as Jobs with workspace isolation
- Agents authenticate via GitHub Apps and complete work
- Agents submit GitHub PRs with automatic cleanup
- Healer monitors for issues and auto-remediates failures
CTO runs anywhere you have Kubernetes—from bare-metal servers to managed cloud:
| Deployment Type | Providers | Best For |
|---|---|---|
| Bare-Metal | Latitude, Hetzner, OVH, Vultr, Scaleway, Cherry, DigitalOcean | Maximum cost savings, data sovereignty |
| On-Premises | Any server with Talos Linux | Air-gapped environments, full control |
| Cloud | AWS, Azure, GCP | Existing cloud infrastructure |
Save 60-80% vs cloud by running on dedicated servers:
```bash
# Bootstrap a Talos cluster on bare-metal (Latitude example)
cto-metal init --provider latitude --region MIA --plan c3-large-x86 --nodes 3

# Or use your own hardware
cto-metal init --provider onprem --config ./my-servers.yaml

# Deploy CTO platform
helm repo add 5dlabs https://5dlabs.github.io/cto
helm install cto 5dlabs/cto --namespace cto --create-namespace
```

Supported Bare-Metal Providers:
- Latitude.sh - Global bare-metal cloud
- Hetzner - European dedicated servers
- OVH - European cloud & bare-metal
- Vultr - Global bare-metal & cloud
- Scaleway - European cloud provider
- Cherry Servers - European bare-metal
- DigitalOcean - Droplets & bare-metal
```bash
# Add the 5dlabs Helm repository
helm repo add 5dlabs https://5dlabs.github.io/cto
helm repo update

# Install Custom Resource Definitions (CRDs) first
kubectl apply -f https://raw.githubusercontent.com/5dlabs/cto/main/infra/charts/cto/crds/platform-crds.yaml

# Install the cto chart
helm install cto 5dlabs/cto --namespace cto --create-namespace

# Setup agent secrets (interactive)
wget https://raw.githubusercontent.com/5dlabs/cto/main/infra/scripts/setup-agent-secrets.sh
chmod +x setup-agent-secrets.sh
./setup-agent-secrets.sh --help
```

Requirements:
- Kubernetes 1.19+
- Helm 3.2.0+
- GitHub Personal Access Token (or GitHub App)
- API key for your preferred model provider (Anthropic, OpenAI, Google, or local)
What you get:
- Complete CTO platform deployed to Kubernetes
- Self-healing infrastructure monitoring
- REST API for task management
- Separate Kubernetes controllers for CodeRun/DocsRun resources with TTL-safe reconciliation
- Agent workspace management and isolation with persistent volumes
- Automatic resource cleanup and job lifecycle management
- MCP tools with dynamic registration
- Cloudflare Tunnels for secure public access
Kilo is an open-source WireGuard-based VPN that provides secure access to cluster services. It's deployed automatically via ArgoCD.
Client Setup:
1. Install WireGuard and kgctl:

```bash
# macOS
brew install wireguard-tools
go install github.com/squat/kilo/cmd/kgctl@latest

# Linux
sudo apt install wireguard-tools
go install github.com/squat/kilo/cmd/kgctl@latest
```

2. Generate your WireGuard keys and create a Peer resource (see `docs/vpn/kilo-client-setup.md`)

3. Connect to access cluster services:

```bash
sudo wg-quick up ~/.wireguard/kilo.conf
```

This enables direct access to:
- ClusterIPs (e.g., `curl http://10.x.x.x:port`)
- Service DNS (e.g., `curl http://service.namespace.svc.cluster.local`)
See docs/vpn/kilo-client-setup.md for full setup instructions.
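A Peer resource follows Kilo's documented format; a minimal sketch (with placeholder name, IP, and key) looks like this:

```yaml
# Minimal Kilo Peer sketch - name, IP, and key are placeholders
apiVersion: kilo.squat.ai/v1alpha1
kind: Peer
metadata:
  name: my-laptop
spec:
  allowedIPs:
    - 10.5.0.1/32                          # VPN IP assigned to this client
  publicKey: <your WireGuard public key>
  persistentKeepalive: 10
```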
For CLI integration (Cursor, Claude Code, etc.), install the MCP server:
```bash
# One-liner installer (Linux/macOS)
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/5dlabs/cto/releases/download/v0.2.0/tools-installer.sh | sh

# Verify installation
cto-mcp --help # MCP server for any CLI
```

What you get:
- `cto-mcp` - MCP server that integrates with any CLI
- Multi-platform support (Linux x64/ARM64, macOS Intel/Apple Silicon)
- Automatic installation to system PATH
Create a `cto-config.json` file in your project root to configure agents, models, tool access, and workflow defaults:

```json
{
"version": "1.0",
"defaults": {
"docs": {
"model": "claude-opus-4-1-20250805",
"githubApp": "5DLabs-Morgan",
"includeCodebase": false,
"sourceBranch": "main"
},
"play": {
"model": "claude-sonnet-4-20250514",
"cli": "claude",
"implementationAgent": "5DLabs-Rex",
"qualityAgent": "5DLabs-Cleo",
"testingAgent": "5DLabs-Tess",
"repository": "your-org/your-repo",
"service": "your-service",
"docsRepository": "your-org/your-docs-repo",
"docsProjectDirectory": "docs"
},
"intake": {
"githubApp": "5DLabs-Morgan",
"primary": { "model": "opus", "cli": "claude" },
"research": { "model": "gpt-4o", "cli": "codex" },
"fallback": { "model": "gemini-pro", "cli": "gemini" }
}
},
"agents": {
"morgan": {
"githubApp": "5DLabs-Morgan",
"cli": "claude",
"model": "claude-sonnet-4-20250514",
"tools": {
"remote": [
"memory_create_entities",
"memory_add_observations",
"brave_search_brave_web_search"
],
"localServers": {
"filesystem": {
"enabled": true,
"tools": ["read_file", "write_file", "list_directory", "search_files", "directory_tree"]
},
"git": {
"enabled": true,
"tools": ["git_status", "git_diff", "git_log", "git_show"]
}
}
}
},
"rex": {
"githubApp": "5DLabs-Rex",
"cli": "codex",
"model": "gpt-5-codex",
"tools": {
"remote": [
"memory_create_entities",
"memory_add_observations"
],
"localServers": {
"filesystem": {
"enabled": true,
"tools": ["read_file", "write_file", "list_directory", "search_files", "directory_tree"]
},
"git": {
"enabled": true,
"tools": ["git_status", "git_diff", "git_log", "git_show"]
}
}
}
},
"cleo": {
"githubApp": "5DLabs-Cleo",
"cli": "claude",
"model": "claude-sonnet-4-20250514",
"tools": {
"remote": ["memory_create_entities", "memory_add_observations"],
"localServers": {
"filesystem": {"enabled": true, "tools": ["read_file", "write_file", "list_directory", "search_files", "directory_tree"]},
"git": {"enabled": true, "tools": ["git_status", "git_diff", "git_log", "git_show"]}
}
}
},
"tess": {
"githubApp": "5DLabs-Tess",
"cli": "claude",
"model": "claude-sonnet-4-20250514",
"tools": {
"remote": ["memory_create_entities", "memory_add_observations"],
"localServers": {
"filesystem": {"enabled": true, "tools": ["read_file", "write_file", "list_directory", "search_files", "directory_tree"]},
"git": {"enabled": true, "tools": ["git_status", "git_diff"]}
}
}
}
}
}
```

Agent Configuration Fields:
- `githubApp`: GitHub App name for authentication
- `cli`: Which CLI to use (`claude`, `cursor`, `codex`, `opencode`, `factory`)
- `model`: Model identifier for the CLI
- `tools` (optional): Fine-grained tool access control
  - `remote`: Array of remote tool names from Tools
  - `localServers`: Local MCP server configurations
    - Each server specifies `enabled` and which `tools` the agent can access
Benefits:
- CLI Flexibility: Different agents can use different CLIs
- Model Selection: Each agent can use its optimal model
- Tool Profiles: Customize tool access per agent
- Security: Restrict agent capabilities as needed
After creating your configuration file, configure your CLI to use the MCP server.
For Cursor, create a `.cursor/mcp.json` file in your project directory:

```json
{
"mcpServers": {
"cto-mcp": {
"command": "cto-mcp",
"args": [],
"env": {}
}
}
}
```

For Claude Code, add to your MCP configuration (typically in `~/.config/claude/mcp.json`):

```json
{
"mcpServers": {
"cto-mcp": {
"command": "cto-mcp",
"args": []
}
}
}
```

Usage:
- Create the `cto-config.json` file in your project root with your specific settings
- Configure your CLI's MCP integration as shown above
- Restart your CLI to load the MCP server
- All MCP tools will be available with your configured defaults
Benefits of Configuration-Driven Approach:
- Simplified MCP Calls: Most parameters have sensible defaults from your config
- Dynamic Agent Lists: Tool descriptions show available agents from your config
- Consistent Settings: All team members use the same model/agent assignments
- Easy Customization: Change defaults without modifying MCP server setup
The platform supports multiple AI coding assistants with the same unified architecture. Choose the CLI that best fits your workflow:
Official Anthropic CLI • Popular AI editor • Multi-model support • Open-source CLI • Autonomous AI CLI
How It Works:
- Each agent in `cto-config.json` specifies its `cli` and `model`
- Controllers automatically use the correct CLI for each agent
- All CLIs follow the same template structure
- Seamless switching between CLIs per-agent
Example Multi-CLI Configuration:
```json
{
"agents": {
"morgan": {
"githubApp": "5DLabs-Morgan",
"cli": "claude",
"model": "claude-opus-4-20250514",
"tools": {
"remote": ["brave_search_brave_web_search"]
}
},
"rex": {
"githubApp": "5DLabs-Rex",
"cli": "factory",
"model": "gpt-5-factory-high",
"tools": {
"remote": ["memory_create_entities"]
}
},
"blaze": {
"githubApp": "5DLabs-Blaze",
"cli": "opencode",
"model": "claude-sonnet-4-20250514",
"tools": {
"remote": ["brave_search_brave_web_search"]
}
},
"cleo": {
"githubApp": "5DLabs-Cleo",
"cli": "cursor",
"model": "claude-sonnet-4-20250514",
"tools": {
"localServers": {
"filesystem": {"enabled": true, "tools": ["read_file", "write_file"]}
}
}
},
"tess": {
"githubApp": "5DLabs-Tess",
"cli": "codex",
"model": "gpt-4o",
"tools": {
"remote": ["memory_add_observations"]
}
}
}
}
```

Each agent is independently configured with its own CLI, model, and tool access.
The platform includes built-in MCP tools, but you can add ANY external MCP servers or custom tools you need:
- `addTool()` — Dynamically add any MCP server by GitHub URL — agents instantly gain access to new capabilities
- `intake()` — Project onboarding — initializes new projects with proper structure and configuration
- `docs()` — Documentation generation — Morgan analyzes projects and creates comprehensive docs
- `play()` — Full orchestration — coordinates the entire team through build/test/deploy phases
Process PRDs, generate tasks, and create comprehensive documentation in one operation.
```javascript
// Minimal call - handles everything
intake({
project_name: "my-awesome-app"
});
// Customize with options
intake({
project_name: "my-awesome-app",
enrich_context: true, // Auto-scrape URLs via Firecrawl
include_codebase: false, // Include existing code context
model: "your-preferred-model" // Any supported model
});
```

What unified intake does:
✅ Parses PRD and generates structured task breakdown
✅ Enriches context by scraping URLs found in PRD (via Firecrawl)
✅ Creates comprehensive documentation (task.md, prompt.md, acceptance-criteria.md)
✅ XML Prompts: Generates task.xml with structured prompts optimized for AI agents
✅ Adds agent routing hints for frontend/backend task assignment
✅ Submits single PR with complete project structure
✅ Works with any supported model provider
Executes complex multi-agent workflows with event-driven coordination.
```javascript
// Minimal call - auto-resumes from where you left off
play();
// Or specify a task
play({
task_id: 1 // optional - auto-detects if omitted
});
// Customize agent assignments
play({
implementation_agent: "rex",
quality_agent: "cleo",
repository: "myorg/my-project"
});
```

What the team does:
✅ Phase 1 - Intake: Morgan documents requirements and architecture
✅ Phase 2 - Implementation: Rex/Blaze builds the feature
✅ Phase 3 - Quality: Cleo reviews, Tess tests, Cipher secures
✅ Phase 4 - Integration: Stitch code-reviews, Atlas merges and rebases
✅ Phase 5 - Deployment: Bolt deploys and distributes
✅ Event-Driven: Automatic phase transitions
✅ Auto-Resume: Continues from where you left off
List all running Argo workflows with simplified status info.
```javascript
// List all workflows
jobs();
// Filter by type
jobs({
include: ["play", "intake"]
});
// Specify namespace
jobs({
namespace: "cto"
});
```

Returns: List of active workflows with type, name, phase, and status
Stop any running Argo workflow gracefully.
```javascript
// Stop a specific workflow
stop_job({
job_type: "play",
name: "play-workflow-abc123"
});
// Stop with explicit namespace
stop_job({
job_type: "intake",
name: "intake-workflow-xyz789",
namespace: "cto"
});
```

Workflow types: `intake`, `play`, `workflow`
Intelligently analyze GitHub repos and ingest documentation.
```javascript
// Ingest repository documentation
docs_ingest({
repository_url: "https://github.com/cilium/cilium",
doc_type: "cilium"
});
```
---
## **📋 Complete MCP Tool Parameters**
### `docs` Tool Parameters
**Required:**
- `working_directory` - Working directory for the project (e.g., `"projects/simple-api"`)
**Optional (with config defaults):**
- `agent` - Agent name to use (defaults to `defaults.docs.githubApp` mapping)
- `model` - Model to use for the docs agent (defaults to `defaults.docs.model`)
- `source_branch` - Source branch to work from (defaults to `defaults.docs.sourceBranch`)
- `include_codebase` - Include existing codebase as context (defaults to `defaults.docs.includeCodebase`)
### `play` Tool Parameters
**All parameters are optional** — the platform auto-resumes from where you left off:
- `task_id` - Task ID to implement (auto-detected if omitted)
**Optional (with config defaults):**
- `repository` - Target repository URL (e.g., `"5dlabs/cto"`) (defaults to `defaults.play.repository`)
- `service` - Service identifier for persistent workspace (defaults to `defaults.play.service`)
- `docs_repository` - Documentation repository URL (defaults to `defaults.play.docsRepository`)
- `docs_project_directory` - Project directory within docs repository (defaults to `defaults.play.docsProjectDirectory`)
- `implementation_agent` - Agent for implementation work (defaults to `defaults.play.implementationAgent`)
- `quality_agent` - Agent for quality assurance (defaults to `defaults.play.qualityAgent`)
- `testing_agent` - Agent for testing and validation (defaults to `defaults.play.testingAgent`)
- `model` - Model to use for play-phase agents (defaults to `defaults.play.model`)
---
## **🎨 Template Customization**
The platform uses a template system to customize agent behavior, settings, and prompts. Templates are Handlebars (`.hbs`) files rendered with task-specific data at runtime. Multi-CLI support lives alongside these templates so Claude, Codex, and future CLIs follow the same structure.
**Model Defaults**: Models are configured through `cto-config.json` defaults (and can be overridden via MCP parameters). We ship presets for Claude (`claude-sonnet-4-20250514`), Codex (`gpt-5-codex`), and Factory (`gpt-5-factory-high`), but any supported model for a CLI can be supplied via configuration.
### Template Architecture
All templates now live under `infra/charts/controller/agent-templates/` with CLI-specific subdirectories:
**Docs Tasks (Multi-CLI Support)**
- **Prompts**: Rendered from `docs/{cli}/prompt.md.hbs` into the ConfigMap
- **Settings**: `docs/{cli}/settings.json.hbs` controls model, permissions, tools
- **Container Script**: `docs/{cli}/container.sh.hbs` handles Git workflow and CLI execution
**Code Tasks (multi-CLI)**
- **Claude**: `code/claude/**`
- Settings: `code/claude/settings.json.hbs`
- Container: `code/claude/container.sh.hbs`
- **Codex**: `code/codex/**`
- Agents memory: `code/codex/agents.md.hbs`
- Config: `code/codex/config.toml.hbs`
- Container scripts: `code/codex/container*.sh.hbs`
- **Factory**: `code/factory/**`
- Agents memory: `code/factory/agents*.md.hbs`
- Config: `code/factory/factory-cli-config.json.hbs`
- Container scripts: `code/factory/container*.sh.hbs`
- **Shared assets**: `code/mcp.json.hbs`, `code/coding-guidelines.md.hbs`, and `code/github-guidelines.md.hbs`
**Play Workflows**: Multi-agent orchestration with event-driven coordination
- **Workflow Template**: `play-workflow-template.yaml` defines the multi-phase workflow
- **Phase Coordination**: Each phase triggers the next phase automatically
- **Agent Handoffs**: Seamless transitions between implementation → QA → testing phases
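For orientation, an Argo WorkflowTemplate that chains phases via DAG dependencies looks roughly like the skeleton below; this is a simplified sketch, not the shipped `play-workflow-template.yaml`:

```yaml
# Simplified sketch of phase chaining - not the actual play-workflow-template.yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: play-phases-sketch            # placeholder name
spec:
  entrypoint: play
  templates:
    - name: play
      dag:
        tasks:
          - name: implementation
            template: run-agent
            arguments:
              parameters: [{ name: agent, value: rex }]
          - name: quality
            dependencies: [implementation]   # runs only after implementation succeeds
            template: run-agent
            arguments:
              parameters: [{ name: agent, value: cleo }]
          - name: testing
            dependencies: [quality]
            template: run-agent
            arguments:
              parameters: [{ name: agent, value: tess }]
    - name: run-agent
      inputs:
        parameters:
          - name: agent
      container:
        image: example.org/agent-runner:latest   # placeholder image
        args: ["{{inputs.parameters.agent}}"]
```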
### How to Customize
#### 1. Changing Agent Settings
Edit the settings template files for your chosen CLI:
```bash
# For docs agents (Claude Code example)
vim infra/charts/controller/agent-templates/docs/claude/settings.json.hbs
# For code agents (Claude Code example)
vim infra/charts/controller/agent-templates/code/claude/settings.json.hbs
# For code agents (Codex example)
vim infra/charts/controller/agent-templates/code/codex/config.toml.hbs
# For code agents (Factory example)
vim infra/charts/controller/agent-templates/code/factory/factory-cli-config.json.hbs
```

Settings control:
- Model selection (CLI-specific model identifiers)
- Tool permissions and access
- MCP tool configuration
- CLI-specific settings (permissions, hooks, etc.)
Refer to your CLI's documentation for complete configuration options:
- Claude Code Settings
- Factory CLI Documentation
- Other CLIs: Refer to their respective documentation
For docs tasks (affects all documentation generation):
```bash
# Edit the docs prompt template for your CLI
vim infra/charts/controller/agent-templates/docs/{cli}/prompt.md.hbs
# Examples:
vim infra/charts/controller/agent-templates/docs/claude/prompt.md.hbs
vim infra/charts/controller/agent-templates/docs/cursor/prompt.md.hbs
```

For code tasks (affects specific task implementation):

```bash
# Edit task-specific files in your docs repository
vim {docs_project_directory}/tasks/task-{id}/prompt.md
vim {docs_project_directory}/tasks/task-{id}/task.md
vim {docs_project_directory}/tasks/task-{id}/acceptance-criteria.md
```

For play workflows (affects multi-agent orchestration):

```bash
# Edit the play workflow template
vim infra/charts/controller/templates/workflowtemplates/play-workflow-template.yaml
```

The play workflow template controls:
- Phase sequencing and dependencies
- Agent assignments for each phase
- Event triggers between phases
- Parameter passing between phases
Hooks are shell scripts that run during agent execution. Add new hook files beneath the CLI you are extending:
```bash
# Create new hook script (docs/Claude Code example)
vim infra/charts/controller/agent-templates/docs/claude/hooks/my-custom-hook.sh.hbs
# Create new hook script (code/Codex example)
vim infra/charts/controller/agent-templates/code/codex/hooks/my-custom-hook.sh.hbs
# Create new hook script (code/Factory example)
vim infra/charts/controller/agent-templates/code/factory/hooks/my-custom-hook.sh.hbs
```

Hook files are automatically discovered and rendered. Ensure the hook name matches any references in your settings templates.
Refer to your CLI's documentation for hook configuration:
- Claude Code Hooks Guide
- Other CLIs: Refer to their respective documentation for hook/script support
After editing any template files, redeploy the controller chart:

```bash
# Deploy template changes
helm upgrade cto infra/charts/controller -n cto
# Verify ConfigMap was updated (fullname = <release>-controller)
kubectl get configmap cto-controller-agent-templates -n cto -o yaml
```

Important: Template changes only affect new agent jobs. Running jobs continue with their original templates.
Common variables available in templates:
- `{{task_id}}` - Task ID for code tasks
- `{{service_name}}` - Target service name
- `{{github_user}}` - GitHub username
- `{{repository_url}}` - Target repository URL
- `{{working_directory}}` - Working directory path
- `{{model}}` - Model name
- `{{docs_repository_url}}` - Documentation repository URL
- Configure `cto-config.json` first to set up your agents, models, tool profiles, and repository defaults
- Use `intake()` for new projects to parse the PRD, generate tasks, and create documentation in one operation
- Choose the right tool for the job:
  - Use `intake()` for new project setup from PRDs (handles docs automatically)
  - Use `play()` for full-cycle development (implementation → QA → testing)
  - Use `jobs()`/`stop_job()` for workflow management
- Mix and match CLIs - assign the best CLI to each agent based on task requirements
- Customize tool access - use the `tools` configuration to control agent capabilities
- Use minimal MCP calls - let configuration defaults handle most parameters
- Review GitHub PRs promptly - agents provide detailed logs and explanations
- Update config file when adding new agents, tools, or changing project structure
```bash
# Build from source
git clone https://github.com/5dlabs/cto.git
cd cto/controller
# Build MCP server
cargo build --release --bin cto-mcp
# Verify the build
./target/release/cto-mcp --help # MCP server
# Install to your system (optional)
cp target/release/cto-mcp /usr/local/bin/
```

- Check GitHub PRs for detailed agent logs and explanations
- Verify `cto-config.json` configuration and GitHub Apps authentication setup
- Ensure Argo Workflows are properly deployed and accessible
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0). This means:
- You can use, modify, and distribute this software freely
- You can use it for commercial purposes
- ⚠️ If you deploy a modified version on a network server, you must provide source code access to users
- ⚠️ Any derivative works must also be licensed under AGPL-3.0
The AGPL license is specifically designed for server-side software to ensure that improvements to the codebase remain open source, even when deployed as a service. This protects the open source nature of the project while allowing commercial use.
Source Code Access: Since this platform operates as a network service, users interacting with it have the right to access the source code under AGPL-3.0. The complete source code is available at this repository, ensuring full compliance with AGPL-3.0's network clause.
For more details, see the LICENSE file.
- 5D Labs - Building the future of AI-powered software development.
⭐ Star (Support project) • 🍴 Fork (Build with us) • 💬 Discord (Join community) • 🐦 X (Get updates)

📺 YouTube (Watch tutorials) • 📖 Docs (Learn more) • 🐛 Issues (Report bugs) • 💡 Discuss (Share ideas)
Built with ❤️ and 🤖 by the 5D Labs Team
The platform runs on Kubernetes and automatically manages multi-CLI agent deployments, workspace isolation, and GitHub integration. All you need to do is call the MCP tools and review the resulting PRs.