5D Labs Logo

Cognitive Task Orchestrator

Your AI Engineering Team in a Box 🚀


💎 Self-Hosted AI Development Platform • Bare-Metal Ready • MCP Native 💎

Deploy an autonomous engineering team on your infrastructure—ship production code while slashing cloud & staffing costs


💰 Why CTO?

🏗️ Full Engineering Team

13 specialized AI agents covering backend, frontend, QA, security, and DevOps—working 24/7

🔧 Self-Hosted & Bare-Metal

Deploy on your own infrastructure: bare-metal servers, on-prem, or any cloud—no vendor lock-in

💸 Massive Cost Savings

Cut cloud bills with bare-metal deployment + reduce engineering headcount for routine tasks

💵 Cost Comparison

| Traditional Approach | With CTO |
| --- | --- |
| $150k-250k/yr per engineer × 5-10 | ~$500-2k/mo model usage (or self-host for near-zero) |
| $5k-50k/mo managed cloud services | 60-80% savings on bare-metal |
| 24/7 on-call rotation costs | Automated self-healing |
| Weeks to onboard new team members | Instant agent deployment |

Local Model Support: Run Ollama, vLLM, or other local inference—bring your own GPUs and pay only for electricity.
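
Local inference sketch (illustrative): the commands below assume Ollama's default OpenAI-compatible endpoint and use an example model name; how the endpoint is wired into agent configuration depends on your cto-config.json provider settings.

# Pull and serve a local model with Ollama (vLLM exposes a similar OpenAI-compatible API)
ollama pull qwen2.5-coder
ollama serve                             # default endpoint: http://localhost:11434

# Verify the endpoint is reachable before pointing an agent at it
curl http://localhost:11434/v1/models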

🔐 Bring Your Own Keys (BYOK)

  • Your API keys — Anthropic, OpenAI, Google, etc. stored securely in your infrastructure
  • Your cloud credentials — AWS, GCP, Azure keys never leave your cluster
  • Secret management with OpenBao — Open-source HashiCorp Vault fork for enterprise-grade secrets
  • Zero vendor lock-in — Switch providers anytime, no data hostage situations

🌐 Zero-Trust Networking

| Feature | Technology | What It Does |
| --- | --- | --- |
| Cloudflare Tunnels | cloudflared | Expose services publicly without opening firewall ports — no public IPs needed, automatic TLS, global edge CDN |
| Kilo VPN | WireGuard | Secure mesh VPN for remote cluster access — connect from anywhere with encrypted tunnels |
| OpenBao | Vault fork | Centralized secrets management with dynamic credentials and audit logging |

Cloudflare Tunnels is a game-changer: your entire platform can run on air-gapped infrastructure while still being accessible from anywhere. No ingress controllers, no load balancers, no exposed ports—just secure outbound tunnels through Cloudflare's network.
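
As a rough sketch of the mechanism (hostnames and service addresses below are placeholders; a CTO deployment typically provisions the tunnel for you):

# Create a named tunnel, publish a hostname for it, and proxy an in-cluster service (illustrative values)
cloudflared tunnel create cto
cloudflared tunnel route dns cto cto.example.com
cloudflared tunnel run --url http://cto-controller.cto.svc.cluster.local:80 cto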

🏭 Infrastructure Operators (Managed by Bolt)

Replace expensive managed cloud services with open-source Kubernetes operators:

| Operator | Replaces | Savings |
| --- | --- | --- |
| CloudNative-PG | AWS RDS, Cloud SQL, Azure PostgreSQL | ~70-80% |
| Strimzi Kafka | AWS MSK, Confluent Cloud | ~60-70% |
| MinIO | AWS S3, GCS, Azure Blob | ~80-90% |
| Redis Operator | ElastiCache, Memorystore | ~70-80% |
| OpenSearch | AWS OpenSearch, Elastic Cloud | ~60-70% |
| ClickHouse | BigQuery, Redshift, Snowflake | ~70-80% |
| QuestDB | TimescaleDB Cloud, InfluxDB Cloud | ~70-80% |

Bolt automatically deploys, monitors, and maintains these operators—giving you managed-service reliability at self-hosted prices.


🚧 Development Status

Public launch: January 1st, 2025 🚀

The platform is in beta and being refined based on production usage.

Current Status: ✅ Core platform architecture implemented
✅ MCP server with dynamic tool registration
✅ Kubernetes controllers with self-healing
✅ GitHub Apps + Linear integration
✅ Bare-metal deployment (Latitude, Hetzner, OVH, Vultr, Scaleway, Cherry, DigitalOcean)
✅ Cloudflare Tunnels for public access without exposed interfaces
✅ Infrastructure operators (PostgreSQL, Kafka, Redis, MinIO, OpenSearch, ClickHouse, QuestDB)
🔄 Documentation and onboarding improvements


Meet Your AI Engineering Team

Thirteen specialized agents with distinct personalities working together 24/7—your full-stack engineering department in a box

🎯 Project Management & Architecture

Morgan

The Technical Program Manager

Morgan Avatar

🐕 Personality: Articulate & organized
📋 Superpower: Turns chaos into actionable roadmaps
💬 Motto: "A plan without tasks is just a wish."

Morgan orchestrates project lifecycles—syncing GitHub Issues with Linear roadmaps, decomposing PRDs into sprint-ready tasks, and keeping stakeholders aligned through intake() MCP calls.

🦀 Backend Engineering Squad

Rex

The Rust Architect

Rex Avatar

🦀 Stack: Rust, Tokio, Axum
Superpower: Zero-cost abstractions at scale
💬 Motto: "If it compiles, it ships."

Rex builds high-performance APIs, real-time services, and systems-level infrastructure. When microseconds matter, Rex delivers.

Grizz

The Go Specialist

Grizz Avatar

🐻 Stack: Go, gRPC, PostgreSQL
🛠️ Superpower: Ships bulletproof services under pressure
💬 Motto: "Simple scales."

Grizz builds backend services, REST/gRPC APIs, CLI tools, and Kubernetes operators. From simple CRUD to distributed systems—battle-tested reliability is his signature.

Nova

The Node.js Engineer

Nova Avatar

Stack: Node.js, TypeScript, Fastify
🌌 Superpower: Rapid API development & integrations
💬 Motto: "Move fast, type safe."

Nova builds REST/GraphQL APIs, serverless functions, and third-party integrations. Speed-to-market is her specialty.

🎨 Frontend Engineering Squad

Blaze

The Web App Developer

Blaze Avatar

🎨 Stack: React, Next.js, shadcn/ui
Superpower: Pixel-perfect responsive interfaces
💬 Motto: "Great UX is invisible."

Blaze creates stunning web applications with modern component libraries. From dashboards to marketing sites, she delivers polished experiences.

Tap

The Mobile Developer

Tap Avatar

📱 Stack: Expo, React Native, NativeWind
🎯 Superpower: Cross-platform mobile excellence
💬 Motto: "One codebase, every pocket."

Tap builds native-quality iOS and Android apps from a single TypeScript codebase. App Store ready, always.

Spark

The Desktop Developer

Spark Avatar

Stack: Electron, Tauri, React
🖥️ Superpower: Native desktop apps that feel right
💬 Motto: "Desktop isn't dead—it's evolved."

Spark crafts cross-platform desktop applications with native integrations, system tray support, and offline-first architectures.

🛡️ Quality & Security Squad

Cleo

The Quality Guardian

Cleo Avatar

🔍 Personality: Meticulous & wise
Superpower: Spots code smells instantly
💬 Motto: "Excellence isn't negotiable."

Cleo refactors for maintainability, enforces patterns, and ensures enterprise-grade code quality across every PR.

Cipher

The Security Sentinel

Cipher Avatar

🛡️ Personality: Vigilant & protective
🔒 Superpower: Finds vulnerabilities before attackers
💬 Motto: "Trust nothing, verify everything."

Cipher runs security audits, dependency scans, and ensures OWASP compliance across all workflows.

Tess

The Testing Genius

Tess Avatar

🕵️ Personality: Curious & thorough
🎪 Superpower: Finds edge cases others miss
💬 Motto: "If it can break, I'll find it first!"

Tess creates comprehensive test suites—unit, integration, and e2e—ensuring reliability before every merge.

🚀 Operations Squad

Stitch

The Automated Code Reviewer

Stitch Avatar

🧵 Personality: Meticulous & tireless
🔎 Superpower: Reviews every PR with surgical precision
💬 Motto: "No loose threads."

Stitch provides automated code review on every pull request—catches bugs, suggests improvements, and ensures consistency across your entire codebase.

Atlas

The Integration Master

Atlas Avatar

🔗 Personality: Systematic & reliable
🌉 Superpower: Resolves merge conflicts automatically
💬 Motto: "Every branch finds its way home."

Atlas manages PR merges, rebases stale branches, and ensures clean integration with trunk-based development.

Bolt

The Deployment Specialist

Bolt Avatar

Personality: Fast & action-oriented
🚀 Superpower: Zero-downtime deployments
💬 Motto: "Ship it fast, ship it right!"

Bolt handles GitOps deployments, monitors rollouts, and ensures production health with automated rollbacks.


🌟 The Magic: How Your AI Team Collaborates

Watch the magic happen when they work together:

📚 Phase 1
Morgan documents
requirements & architecture

via intake()

⚡ Phase 2
Rex & Blaze build
backend + frontend

via play()

🛡️ Phase 3
Cleo, Tess, Cipher
quality, testing, security

via play()

🔗 Phase 4
Stitch & Atlas
review, merge & integrate

via play()

🚀 Phase 5
Bolt deploys
and distributes

via play()

💡 Project Flexibility:

**🦀 Backend Projects**
Rex (Rust) • Grizz (Go) • Nova (Node.js)
**🎨 Frontend Projects**
Blaze (Web/shadcn) • Tap (Mobile/Expo) • Spark (Desktop/Electron)
**🚀 Full-Stack Projects**
Mix backend + frontend agents seamlessly
**🛡️ Quality Always**
Cleo reviews • Tess tests • Cipher secures • Stitch code-reviews

🎯 Result: Production-Ready Code

Fast • Elegant • Tested • Documented • Secure

It's like having a senior development team that never sleeps, never argues, and always delivers! 🎭


⚡ What CTO Does

The Cognitive Task Orchestrator provides a complete AI engineering platform:

🚀 Unified Project Intake (intake())

Morgan processes PRDs, generates tasks, and syncs with your project management tools.

  • Parses PRD and generates structured task breakdown
  • Linear Integration: Two-way sync with Linear roadmaps and sprints
  • GitHub Projects: Auto-creates issues and project boards
  • Enriches context via Firecrawl (auto-scrapes referenced URLs)
  • Creates comprehensive documentation (task.md, prompt.md, acceptance-criteria.md)
  • XML Prompts: Structured prompts optimized for AI agent consumption
  • Agent routing: automatically assigns frontend/backend/mobile tasks
  • Works with any supported model (Claude, GPT, Gemini, local models)

🎮 Multi-Agent Play Workflows (play())

The entire team orchestrates complex multi-agent workflows with event-driven coordination.

  • Phase 1 - Intake: Morgan documents requirements and architecture
  • Phase 2 - Implementation: Backend (Rex/Grizz/Nova) or Frontend (Blaze/Tap/Spark)
  • Phase 3 - Quality: Cleo reviews, Tess tests, Cipher secures
  • Phase 4 - Integration: Stitch code-reviews, Atlas merges and rebases
  • Phase 5 - Deployment: Bolt deploys and distributes
  • Event-Driven Coordination: Automatic handoffs between phases
  • GitHub Integration: Each phase submits detailed PRs
  • Auto-Resume: Continues from where you left off (task_id optional)

🔧 Workflow Management

Control and monitor your AI development workflows:

  • jobs() - List all running workflows with status
  • stop_job() - Stop any running workflow gracefully
  • docs_ingest() - Intelligently analyze and ingest documentation from GitHub repos
  • register_tool() - Dynamically register new MCP tools at runtime

🔄 Self-Healing Infrastructure

The platform includes comprehensive self-healing capabilities:

  • Platform Self-Healing: Monitors CTO's own health—detects stuck workflows, pod failures, step timeouts, and auto-remediates
  • Application Self-Healing: Extends healing to your deployed apps—CI failures, silent errors, stale progress alerts
  • Alert Types: Comment order issues, silent failures, approval loops, post-Tess CI failures, pod failures, step timeouts, stuck CodeRuns
  • Automated Remediation: Spawns healing agents to diagnose and fix issues automatically

All operations run as Kubernetes jobs with enhanced reliability through TTL-safe reconciliation, preventing infinite loops and ensuring proper resource cleanup.
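
For day-to-day debugging, the underlying custom resources can be inspected directly with kubectl. A minimal sketch, assuming the default cto namespace and that the CodeRun/DocsRun CRDs use the conventional lowercase plural names:

# List agent runs and their current phase
kubectl get coderuns,docsruns -n cto

# Inspect events and reconciliation status for a stuck run (replace <name>)
kubectl describe coderun <name> -n cto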


🚀 Getting Started

Prerequisites

  • Access to any AI coding assistant (Claude Code, Cursor, Factory, Codex, OpenCode, etc.)
  • GitHub repository for your project

🏗️ Platform Architecture

This is an integrated platform with crystal-clear data flow:

🖥️ Supported AI CLIs

CTO works with your favorite AI coding assistant:

| CLI | Description | Status |
| --- | --- | --- |
| Claude Code | Anthropic's official CLI | ✅ Full support |
| Cursor | AI-first code editor | ✅ Full support |
| Codex | OpenAI's coding assistant | ✅ Full support |
| Factory | Code Factory CLI | ✅ Full support |
| Gemini | Google's AI assistant | ✅ Full support |
| OpenCode | Open-source alternative | ✅ Full support |
| Dexter | Lightweight AI CLI | ✅ Full support |

🔧 Integrated Tools Library

Dynamic MCP tool registration with 60+ pre-configured tools:

| Category | Tools |
| --- | --- |
| Kubernetes | Pod logs, exec, resource CRUD, events, metrics, Helm operations |
| ArgoCD | Application sync, logs, events, GitOps management |
| GitHub | PRs, issues, code scanning, secret scanning, repository management |
| Context7 | Library documentation lookup and code examples |
| OpenMemory | Persistent memory across agent sessions |

Frontend Stack: shadcn/ui components, Tailwind CSS, React patterns built-in

Component Architecture:

  • MCP Server (cto-mcp): Handles MCP protocol calls from any CLI with dynamic tool registration
  • Controller Service: Kubernetes REST API that manages CodeRun/DocsRun CRDs via Argo Workflows
  • Healer Service: Self-healing daemon monitoring platform and application health
  • Argo Workflows: Orchestrates agent deployment through workflow templates
  • Kubernetes Controllers: Separate controllers for CodeRun and DocsRun resources with TTL-safe reconciliation
  • Agent Workspaces: Isolated persistent volumes for each service with session continuity
  • GitHub Apps + Linear: Secure authentication and project management integration
  • Cloudflare Tunnels: Expose services publicly without opening firewall ports

🌐 Cloudflare Tunnels

Access your services from anywhere without exposing your infrastructure:

  • Zero External Interface: No public IPs or open firewall ports required
  • Automatic TLS: End-to-end encryption via Cloudflare
  • Global Edge: Low-latency access from anywhere in the world
  • Secure by Default: Traffic routes through outbound-only tunnels to Cloudflare's network

Data Flow:

  1. Any CLI calls MCP tools (intake(), play(), etc.) via MCP protocol
  2. MCP server loads configuration from cto-config.json and applies defaults
  3. MCP server submits workflow to Argo with all required parameters
  4. Argo Workflows creates CodeRun/DocsRun custom resources
  5. Dedicated Kubernetes controllers reconcile CRDs with idempotent job management
  6. Controllers deploy configured CLI agents as Jobs with workspace isolation
  7. Agents authenticate via GitHub Apps and complete work
  8. Agents submit GitHub PRs with automatic cleanup
  9. Healer monitors for issues and auto-remediates failures

📦 Installation

🔧 Deployment Options

CTO runs anywhere you have Kubernetes—from bare-metal servers to managed cloud:

| Deployment Type | Providers | Best For |
| --- | --- | --- |
| Bare-Metal | Latitude, Hetzner, OVH, Vultr, Scaleway, Cherry, DigitalOcean | Maximum cost savings, data sovereignty |
| On-Premises | Any server with Talos Linux | Air-gapped environments, full control |
| Cloud | AWS, Azure, GCP | Existing cloud infrastructure |

Deploy on Bare-Metal (Recommended)

Save 60-80% vs cloud by running on dedicated servers:

# Bootstrap a Talos cluster on bare-metal (Latitude example)
cto-metal init --provider latitude --region MIA --plan c3-large-x86 --nodes 3

# Or use your own hardware
cto-metal init --provider onprem --config ./my-servers.yaml

# Deploy CTO platform
helm repo add 5dlabs https://5dlabs.github.io/cto
helm install cto 5dlabs/cto --namespace cto --create-namespace

Supported Bare-Metal Providers:

  • Latitude.sh - Global bare-metal cloud
  • Hetzner - European dedicated servers
  • OVH - European cloud & bare-metal
  • Vultr - Global bare-metal & cloud
  • Scaleway - European cloud provider
  • Cherry Servers - European bare-metal
  • DigitalOcean - Droplets & bare-metal

Deploy on Existing Kubernetes

# Add the 5dlabs Helm repository
helm repo add 5dlabs https://5dlabs.github.io/cto
helm repo update

# Install Custom Resource Definitions (CRDs) first
kubectl apply -f https://raw.githubusercontent.com/5dlabs/cto/main/infra/charts/cto/crds/platform-crds.yaml

# Install the CTO platform
helm install cto 5dlabs/cto --namespace cto --create-namespace

# Setup agent secrets (interactive)
wget https://raw.githubusercontent.com/5dlabs/cto/main/infra/scripts/setup-agent-secrets.sh
chmod +x setup-agent-secrets.sh
./setup-agent-secrets.sh --help

Requirements:

  • Kubernetes 1.19+
  • Helm 3.2.0+
  • GitHub Personal Access Token (or GitHub App)
  • API key for your preferred model provider (Anthropic, OpenAI, Google, or local)

What you get:

  • Complete CTO platform deployed to Kubernetes
  • Self-healing infrastructure monitoring
  • REST API for task management
  • Separate Kubernetes controllers for CodeRun/DocsRun resources with TTL-safe reconciliation
  • Agent workspace management and isolation with persistent volumes
  • Automatic resource cleanup and job lifecycle management
  • MCP tools with dynamic registration
  • Cloudflare Tunnels for secure public access

Remote Cluster Access with Kilo VPN

Kilo is an open-source WireGuard-based VPN that provides secure access to cluster services. It's deployed automatically via ArgoCD.

Client Setup:

  1. Install WireGuard and kgctl:
# macOS
brew install wireguard-tools
go install github.com/squat/kilo/cmd/kgctl@latest

# Linux
sudo apt install wireguard-tools
go install github.com/squat/kilo/cmd/kgctl@latest
  2. Generate your WireGuard keys and create a Peer resource (see docs/vpn/kilo-client-setup.md)

  3. Connect to access cluster services:

sudo wg-quick up ~/.wireguard/kilo.conf

This enables direct access to:

  • ClusterIPs (e.g., curl http://10.x.x.x:port)
  • Service DNS (e.g., curl http://service.namespace.svc.cluster.local)

See docs/vpn/kilo-client-setup.md for full setup instructions.

Install MCP Server

For CLI integration (Cursor, Claude Code, etc.), install the MCP server:

# One-liner installer (Linux/macOS)
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/5dlabs/cto/releases/download/v0.2.0/tools-installer.sh | sh

# Verify installation
cto-mcp --help   # MCP server for any CLI

What you get:

  • cto-mcp - MCP server that integrates with any CLI
  • Multi-platform support (Linux x64/ARM64, macOS Intel/Apple Silicon)
  • Automatic installation to system PATH

⚙️ Configuration

Configure Project Settings

Create a cto-config.json file in your project root to configure agents, models, tool access, and workflow defaults:

{
  "version": "1.0",
  "defaults": {
    "docs": {
      "model": "claude-opus-4-1-20250805",
      "githubApp": "5DLabs-Morgan",
      "includeCodebase": false,
      "sourceBranch": "main"
    },
    "play": {
      "model": "claude-sonnet-4-20250514",
      "cli": "claude",
      "implementationAgent": "5DLabs-Rex",
      "qualityAgent": "5DLabs-Cleo",
      "testingAgent": "5DLabs-Tess",
      "repository": "your-org/your-repo",
      "service": "your-service",
      "docsRepository": "your-org/your-docs-repo",
      "docsProjectDirectory": "docs"
    },
    "intake": {
      "githubApp": "5DLabs-Morgan",
      "primary": { "model": "opus", "cli": "claude" },
      "research": { "model": "gpt-4o", "cli": "codex" },
      "fallback": { "model": "gemini-pro", "cli": "gemini" }
    }
  },
  "agents": {
    "morgan": {
      "githubApp": "5DLabs-Morgan",
      "cli": "claude",
      "model": "claude-sonnet-4-20250514",
      "tools": {
        "remote": [
          "memory_create_entities",
          "memory_add_observations",
          "brave_search_brave_web_search"
        ],
        "localServers": {
          "filesystem": {
            "enabled": true,
            "tools": ["read_file", "write_file", "list_directory", "search_files", "directory_tree"]
          },
          "git": {
            "enabled": true,
            "tools": ["git_status", "git_diff", "git_log", "git_show"]
          }
        }
      }
    },
    "rex": {
      "githubApp": "5DLabs-Rex",
      "cli": "codex",
      "model": "gpt-5-codex",
      "tools": {
        "remote": [
          "memory_create_entities",
          "memory_add_observations"
        ],
        "localServers": {
          "filesystem": {
            "enabled": true,
            "tools": ["read_file", "write_file", "list_directory", "search_files", "directory_tree"]
          },
          "git": {
            "enabled": true,
            "tools": ["git_status", "git_diff", "git_log", "git_show"]
          }
        }
      }
    },
    "cleo": {
      "githubApp": "5DLabs-Cleo",
      "cli": "claude",
      "model": "claude-sonnet-4-20250514",
      "tools": {
        "remote": ["memory_create_entities", "memory_add_observations"],
        "localServers": {
          "filesystem": {"enabled": true, "tools": ["read_file", "write_file", "list_directory", "search_files", "directory_tree"]},
          "git": {"enabled": true, "tools": ["git_status", "git_diff", "git_log", "git_show"]}
        }
      }
    },
    "tess": {
      "githubApp": "5DLabs-Tess",
      "cli": "claude",
      "model": "claude-sonnet-4-20250514",
      "tools": {
        "remote": ["memory_create_entities", "memory_add_observations"],
        "localServers": {
          "filesystem": {"enabled": true, "tools": ["read_file", "write_file", "list_directory", "search_files", "directory_tree"]},
          "git": {"enabled": true, "tools": ["git_status", "git_diff"]}
        }
      }
    }
  }
}

Agent Configuration Fields:

  • githubApp: GitHub App name for authentication
  • cli: Which CLI to use (claude, cursor, codex, opencode, factory)
  • model: Model identifier for the CLI
  • tools (optional): Fine-grained tool access control
    • remote: Array of remote tool names from Tools
    • localServers: Local MCP server configurations
      • Each server specifies enabled and which tools the agent can access

Benefits:

  • CLI Flexibility: Different agents can use different CLIs
  • Model Selection: Each agent can use its optimal model
  • Tool Profiles: Customize tool access per agent
  • Security: Restrict agent capabilities as needed

Configure MCP Integration

After creating your configuration file, configure your CLI to use the MCP server.

For Cursor, create a .cursor/mcp.json file in your project directory:

{
  "mcpServers": {
    "cto-mcp": {
      "command": "cto-mcp",
      "args": [],
      "env": {}
    }
  }
}

For Claude Code, add to your MCP configuration (typically in ~/.config/claude/mcp.json):

{
  "mcpServers": {
    "cto-mcp": {
      "command": "cto-mcp",
      "args": []
    }
  }
}

Usage:

  1. Create the cto-config.json file in your project root with your specific settings
  2. Configure your CLI's MCP integration as shown above
  3. Restart your CLI to load the MCP server
  4. All MCP tools will be available with your configured defaults

Benefits of Configuration-Driven Approach:

  • Simplified MCP Calls: Most parameters have sensible defaults from your config
  • Dynamic Agent Lists: Tool descriptions show available agents from your config
  • Consistent Settings: All team members use the same model/agent assignments
  • Easy Customization: Change defaults without modifying MCP server setup

🎨 Multi-CLI Support

The platform supports multiple AI coding assistants with the same unified architecture. Choose the CLI that best fits your workflow:

Claude Code

Official Anthropic CLI

  • Native Integration
  • Best for Claude models
  • Enterprise-ready

Cursor

Popular AI editor

  • VS Code-based
  • Rich IDE features
  • Excellent UX

Codex

Multi-model support

  • Provider Agnostic
  • Flexible configuration
  • OpenAI, Anthropic, more

OpenCode

Open-source CLI

  • Community Driven
  • Extensible architecture
  • Full transparency

Factory

Autonomous AI CLI

  • Auto-Run Mode
  • Unattended execution
  • CI/CD optimized

How It Works:

  • Each agent in cto-config.json specifies its cli and model
  • Controllers automatically use the correct CLI for each agent
  • All CLIs follow the same template structure
  • Seamless switching between CLIs per-agent

Example Multi-CLI Configuration:

{
  "agents": {
    "morgan": {
      "githubApp": "5DLabs-Morgan",
      "cli": "claude",
      "model": "claude-opus-4-20250514",
      "tools": {
        "remote": ["brave_search_brave_web_search"]
      }
    },
    "rex": {
      "githubApp": "5DLabs-Rex",
      "cli": "factory",
      "model": "gpt-5-factory-high",
      "tools": {
        "remote": ["memory_create_entities"]
      }
    },
    "blaze": {
      "githubApp": "5DLabs-Blaze",
      "cli": "opencode",
      "model": "claude-sonnet-4-20250514",
      "tools": {
        "remote": ["brave_search_brave_web_search"]
      }
    },
    "cleo": {
      "githubApp": "5DLabs-Cleo",
      "cli": "cursor",
      "model": "claude-sonnet-4-20250514",
      "tools": {
        "localServers": {
          "filesystem": {"enabled": true, "tools": ["read_file", "write_file"]}
        }
      }
    },
    "tess": {
      "githubApp": "5DLabs-Tess",
      "cli": "codex",
      "model": "gpt-4o",
      "tools": {
        "remote": ["memory_add_observations"]
      }
    }
  }
}

Each agent is independently configured with its own CLI, model, and tool access.


🔧 MCP Tools (Model Context Protocol)

The platform includes built-in MCP tools, but you can add ANY external MCP servers or custom tools you need:

  • addTool() — Dynamically add any MCP server by GitHub URL — agents instantly gain access to new capabilities (see the sketch after this list)
  • intake() — Project onboarding — initializes new projects with proper structure and configuration
  • docs() — Documentation generation — Morgan analyzes projects and creates comprehensive docs
  • play() — Full orchestration — coordinates the entire team through build/test/deploy phases
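
For example, a hypothetical addTool() call might look like the sketch below; the parameter name is an assumption, not the confirmed schema, so check the tool's description in your MCP client first.

// Hypothetical sketch: parameter names are illustrative only
addTool({
  url: "https://github.com/modelcontextprotocol/servers"
});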

Detailed Tool Reference

1. intake() - Unified Project Intake ⭐ NEW

Process PRDs, generate tasks, and create comprehensive documentation in one operation.

// Minimal call - handles everything
intake({
  project_name: "my-awesome-app"
});

// Customize with options
intake({
  project_name: "my-awesome-app",
  enrich_context: true,        // Auto-scrape URLs via Firecrawl
  include_codebase: false,     // Include existing code context
  model: "your-preferred-model" // Any supported model
});

What unified intake does:
✅ Parses PRD and generates structured task breakdown
✅ Enriches context by scraping URLs found in PRD (via Firecrawl)
✅ Creates comprehensive documentation (task.md, prompt.md, acceptance-criteria.md)
✅ Generates task.xml with structured XML prompts optimized for AI agents
✅ Adds agent routing hints for frontend/backend task assignment
✅ Submits single PR with complete project structure
✅ Works with any supported model provider

2. play() - Multi-Agent Orchestration

Executes complex multi-agent workflows with event-driven coordination.

// Minimal call - auto-resumes from where you left off
play();

// Or specify a task
play({
  task_id: 1  // optional - auto-detects if omitted
});

// Customize agent assignments
play({
  implementation_agent: "rex",
  quality_agent: "cleo",
  repository: "myorg/my-project"
});

What the team does:
Phase 1 - Intake: Morgan documents requirements and architecture
Phase 2 - Implementation: Rex/Blaze builds the feature
Phase 3 - Quality: Cleo reviews, Tess tests, Cipher secures
Phase 4 - Integration: Stitch code-reviews, Atlas merges and rebases
Phase 5 - Deployment: Bolt deploys and distributes
Event-Driven: Automatic phase transitions
Auto-Resume: Continues from where you left off

3. jobs() - Workflow Status

List all running Argo workflows with simplified status info.

// List all workflows
jobs();

// Filter by type
jobs({
  include: ["play", "intake"]
});

// Specify namespace
jobs({
  namespace: "cto"
});

Returns: List of active workflows with type, name, phase, and status

4. stop_job() - Workflow Control

Stop any running Argo workflow gracefully.

// Stop a specific workflow
stop_job({
  job_type: "play",
  name: "play-workflow-abc123"
});

// Stop with explicit namespace
stop_job({
  job_type: "intake",
  name: "intake-workflow-xyz789",
  namespace: "cto"
});

Workflow types: intake, play, workflow

5. docs_ingest() - Documentation Analysis

Intelligently analyze GitHub repos and ingest documentation.

// Ingest repository documentation
docs_ingest({
  repository_url: "https://github.com/cilium/cilium",
  doc_type: "cilium"
});

---

## **📋 Complete MCP Tool Parameters**

### `docs` Tool Parameters

**Required:**
- `working_directory` - Working directory for the project (e.g., `"projects/simple-api"`)

**Optional (with config defaults):**
- `agent` - Agent name to use (defaults to `defaults.docs.githubApp` mapping)
- `model` - Model to use for the docs agent (defaults to `defaults.docs.model`)
- `source_branch` - Source branch to work from (defaults to `defaults.docs.sourceBranch`)
- `include_codebase` - Include existing codebase as context (defaults to `defaults.docs.includeCodebase`)
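
A minimal docs() call using only the documented parameters (values are placeholders; everything else falls back to defaults.docs in cto-config.json):

// Minimal docs() call
docs({
  working_directory: "projects/simple-api",
  include_codebase: true
});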

### `play` Tool Parameters

**All parameters are optional**; the platform auto-resumes from where you left off:

- `task_id` - Task ID to implement (auto-detected if omitted)

**Optional (with config defaults):**
- `repository` - Target repository URL (e.g., `"5dlabs/cto"`) (defaults to `defaults.play.repository`)
- `service` - Service identifier for persistent workspace (defaults to `defaults.play.service`)
- `docs_repository` - Documentation repository URL (defaults to `defaults.play.docsRepository`)
- `docs_project_directory` - Project directory within docs repository (defaults to `defaults.play.docsProjectDirectory`)
- `implementation_agent` - Agent for implementation work (defaults to `defaults.play.implementationAgent`)
- `quality_agent` - Agent for quality assurance (defaults to `defaults.play.qualityAgent`)
- `testing_agent` - Agent for testing and validation (defaults to `defaults.play.testingAgent`)
- `model` - Model to use for play-phase agents (defaults to `defaults.play.model`)

---

## **🎨 Template Customization**

The platform uses a template system to customize agent behavior, settings, and prompts. Templates are Handlebars (`.hbs`) files rendered with task-specific data at runtime. Multi-CLI support lives alongside these templates so Claude, Codex, and future CLIs follow the same structure.

**Model Defaults**: Models are configured through `cto-config.json` defaults (and can be overridden via MCP parameters). We ship presets for Claude (`claude-sonnet-4-20250514`), Codex (`gpt-5-codex`), and Factory (`gpt-5-factory-high`), but any supported model for a CLI can be supplied via configuration.
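
For example, the configured default can be overridden for a single run through the documented model parameter:

// Uses cto-config.json defaults except for the explicit model override
play({
  task_id: 3,
  model: "claude-sonnet-4-20250514"
});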

### Template Architecture

All templates now live under `infra/charts/controller/agent-templates/` with CLI-specific subdirectories:

**Docs Tasks (Multi-CLI Support)**

- **Prompts**: Rendered from `docs/{cli}/prompt.md.hbs` into the ConfigMap
- **Settings**: `docs/{cli}/settings.json.hbs` controls model, permissions, tools
- **Container Script**: `docs/{cli}/container.sh.hbs` handles Git workflow and CLI execution

**Code Tasks (multi-CLI)**

- **Claude**: `code/claude/**`
  - Settings: `code/claude/settings.json.hbs`
  - Container: `code/claude/container.sh.hbs`
- **Codex**: `code/codex/**`
  - Agents memory: `code/codex/agents.md.hbs`
  - Config: `code/codex/config.toml.hbs`
  - Container scripts: `code/codex/container*.sh.hbs`
- **Factory**: `code/factory/**`
  - Agents memory: `code/factory/agents*.md.hbs`
  - Config: `code/factory/factory-cli-config.json.hbs`
  - Container scripts: `code/factory/container*.sh.hbs`
- **Shared assets**: `code/mcp.json.hbs`, `code/coding-guidelines.md.hbs`, and `code/github-guidelines.md.hbs`

**Play Workflows**: Multi-agent orchestration with event-driven coordination

- **Workflow Template**: `play-workflow-template.yaml` defines the multi-phase workflow
- **Phase Coordination**: Each phase triggers the next phase automatically
- **Agent Handoffs**: Seamless transitions between implementation → QA → testing phases

### How to Customize

#### 1. Changing Agent Settings

Edit the settings template files for your chosen CLI:

# For docs agents (Claude Code example)
vim infra/charts/controller/agent-templates/docs/claude/settings.json.hbs

# For code agents (Claude Code example)
vim infra/charts/controller/agent-templates/code/claude/settings.json.hbs

# For code agents (Codex example)
vim infra/charts/controller/agent-templates/code/codex/config.toml.hbs

# For code agents (Factory example)
vim infra/charts/controller/agent-templates/code/factory/factory-cli-config.json.hbs

Settings control:

  • Model selection (CLI-specific model identifiers)
  • Tool permissions and access
  • MCP tool configuration
  • CLI-specific settings (permissions, hooks, etc.)

Refer to your CLI's documentation for complete configuration options.

2. Updating Prompts

For docs tasks (affects all documentation generation):

# Edit the docs prompt template for your CLI
vim infra/charts/controller/agent-templates/docs/{cli}/prompt.md.hbs

# Examples:
vim infra/charts/controller/agent-templates/docs/claude/prompt.md.hbs
vim infra/charts/controller/agent-templates/docs/cursor/prompt.md.hbs

For code tasks (affects specific task implementation):

# Edit task-specific files in your docs repository
vim {docs_project_directory}/tasks/task-{id}/prompt.md
vim {docs_project_directory}/tasks/task-{id}/task.md
vim {docs_project_directory}/tasks/task-{id}/acceptance-criteria.md

3. Customizing Play Workflows

For play workflows (affects multi-agent orchestration):

# Edit the play workflow template
vim infra/charts/controller/templates/workflowtemplates/play-workflow-template.yaml

The play workflow template controls:

  • Phase sequencing and dependencies
  • Agent assignments for each phase
  • Event triggers between phases
  • Parameter passing between phases
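
To inspect what is actually deployed, query the rendered template with kubectl or the Argo Workflows CLI (the resource name is assumed to match the file name above):

# List workflow templates and show the play template (name assumed from play-workflow-template.yaml)
kubectl get workflowtemplates -n cto
argo template get play-workflow-template -n cto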

4. Adding Custom Hooks

Hooks are shell scripts that run during agent execution. Add new hook files beneath the CLI you are extending:

# Create new hook script (docs/Claude Code example)
vim infra/charts/controller/agent-templates/docs/claude/hooks/my-custom-hook.sh.hbs

# Create new hook script (code/Codex example)
vim infra/charts/controller/agent-templates/code/codex/hooks/my-custom-hook.sh.hbs

# Create new hook script (code/Factory example)
vim infra/charts/controller/agent-templates/code/factory/hooks/my-custom-hook.sh.hbs

Hook files are automatically discovered and rendered. Ensure the hook name matches any references in your settings templates.
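
A minimal hook sketch, assuming the hook only needs to log progress and that the referenced template variables are available when the script is rendered (see Template Variables below):

#!/usr/bin/env bash
# my-custom-hook.sh.hbs (illustrative only; when it runs depends on your settings template)
set -euo pipefail

# {{task_id}} and {{working_directory}} are rendered by the controller before the job starts
echo "$(date -u +%FT%TZ) hook fired for task {{task_id}} in {{working_directory}}" >> "$HOME/hook.log"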

Refer to your CLI's documentation for hook configuration.

5. Deploying Template Changes

After editing any template files, redeploy the controller chart:

# Deploy template changes
helm upgrade cto infra/charts/controller -n cto

# Verify ConfigMap was updated (fullname = <release>-controller)
kubectl get configmap cto-controller-agent-templates -n cto -o yaml

Important: Template changes only affect new agent jobs. Running jobs continue with their original templates.

Template Variables

Common variables available in templates:

  • {{task_id}} - Task ID for code tasks
  • {{service_name}} - Target service name
  • {{github_user}} - GitHub username
  • {{repository_url}} - Target repository URL
  • {{working_directory}} - Working directory path
  • {{model}} - Model identifier for the agent's configured CLI
  • {{docs_repository_url}} - Documentation repository URL
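
As an illustrative excerpt (not an actual template from the chart), a prompt template might reference these variables like so:

You are working on task {{task_id}} for the {{service_name}} service.
Repository: {{repository_url}}
Working directory: {{working_directory}}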

💡 Best Practices

  1. Configure cto-config.json first to set up your agents, models, tool profiles, and repository defaults
  2. Use intake() for new projects to parse PRD, generate tasks, and create documentation in one operation
  3. Choose the right tool for the job:
    • Use intake() for new project setup from PRDs (handles docs automatically)
    • Use play() for full-cycle development (implementation → QA → testing)
    • Use jobs() / stop_job() for workflow management
  4. Mix and match CLIs - assign the best CLI to each agent based on task requirements
  5. Customize tool access - use the tools configuration to control agent capabilities
  6. Use minimal MCP calls - let configuration defaults handle most parameters
  7. Review GitHub PRs promptly - agents provide detailed logs and explanations
  8. Update config file when adding new agents, tools, or changing project structure

🛠️ Building from Source (Development)

# Build from source
git clone https://github.com/5dlabs/cto.git
cd cto/controller

# Build MCP server
cargo build --release --bin cto-mcp

# Verify the build
./target/release/cto-mcp --help   # MCP server

# Install to your system (optional)
cp target/release/cto-mcp /usr/local/bin/

🆘 Support

  • Check GitHub PRs for detailed agent logs and explanations
  • Verify cto-config.json configuration and GitHub Apps authentication setup
  • Ensure Argo Workflows are properly deployed and accessible

📄 License

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0). This means:

  • You can use, modify, and distribute this software freely
  • You can use it for commercial purposes
  • ⚠️ If you deploy a modified version on a network server, you must provide source code access to users
  • ⚠️ Any derivative works must also be licensed under AGPL-3.0

The AGPL license is specifically designed for server-side software to ensure that improvements to the codebase remain open source, even when deployed as a service. This protects the open source nature of the project while allowing commercial use.

Source Code Access: Since this platform operates as a network service, users interacting with it have the right to access the source code under AGPL-3.0. The complete source code is available at this repository, ensuring full compliance with AGPL-3.0's network clause.

For more details, see the LICENSE file.


🔗 Related Projects

  • 5D Labs - Building the future of AI-powered software development.

🌟 Join the AI Development Revolution

⭐ Star - Support the project
🍴 Fork - Build with us
💬 Discord - Join the community
🐦 X - Get updates
📺 YouTube - Watch tutorials
📖 Docs - Learn more
🐛 Issues - Report bugs
💡 Discuss - Share ideas

Built with ❤️ and 🤖 by the 5D Labs Team


The platform runs on Kubernetes and automatically manages multi-CLI agent deployments, workspace isolation, and GitHub integration. All you need to do is call the MCP tools and review the resulting PRs.
