Conversation

@ammar-agent (Collaborator):

Add official OpenRouter provider integration for access to 300+ models through a single API. Fixes errors that occurred with the previous baseURL override approach.

Changes

  • Install @openrouter/ai-sdk-provider package
  • Add OpenRouter to aiService.ts createModel method (sketched below)
  • Add OpenRouterProviderOptions to provider types
  • Add OPENROUTER_API_KEY environment variable support
  • Update docs/models.md with OpenRouter setup guide
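
A minimal sketch of that createModel wiring, assuming the surrounding function shape (the SDK calls are from @openrouter/ai-sdk-provider; the config and helper names here are illustrative, not code from this PR):

```ts
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

// Hypothetical excerpt of the aiService.ts integration.
function createOpenRouterModel(
  config: { apiKey?: string },
  modelId: string // e.g. "anthropic/claude-3.5-sonnet"
) {
  const openrouter = createOpenRouter({
    // Fall back to the new environment variable when no key is configured.
    apiKey: config.apiKey ?? process.env.OPENROUTER_API_KEY,
  });
  return openrouter(modelId);
}
```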

Benefits

OpenRouter provides:

  • Universal model access - Anthropic, OpenAI, Google, Cerebras, DeepSeek, and 300+ others
  • Pay-as-you-go pricing - No monthly fees, transparent per-token costs
  • High availability - Automatic failover across providers
  • Immediate access - New models available as soon as they're released

Usage

```jsonc
// ~/.cmux/providers.jsonc
{
  "openrouter": {
    "apiKey": "sk-or-v1-..."
  }
}
```

Model format (parsing sketch below):

  • openrouter:anthropic/claude-3.5-sonnet
  • openrouter:google/gemini-2.0-flash-thinking-exp
  • openrouter:z-ai/glm-4.6
  • openrouter:deepseek/deepseek-chat
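
For illustration, a hypothetical sketch of how such a string splits into provider and model id (this function is not from the PR):

```ts
// Split on the first ":" only; the model id itself may contain a further
// ":" suffix (e.g. "anthropic/claude-3.7-sonnet:thinking").
function parseModelString(spec: string): { provider: string; modelId: string } {
  const idx = spec.indexOf(":");
  if (idx === -1) throw new Error(`Invalid model string: ${spec}`);
  return { provider: spec.slice(0, idx), modelId: spec.slice(idx + 1) };
}

parseModelString("openrouter:z-ai/glm-4.6");
// => { provider: "openrouter", modelId: "z-ai/glm-4.6" }
```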

Testing

  • ✅ Type checking passes
  • ✅ Linting passes
  • ✅ Formatting passes
  • ✅ Package imports successfully
  • ✅ Model instantiation works

Why Not BaseURL Override?

The baseURL override approach has several issues:

  1. Missing OpenRouter-specific error handling
  2. Request format differences for features like prompt caching
  3. Response parsing issues with advanced features
  4. No automatic failover headers

The official provider handles all edge cases automatically.

_Generated with `cmux`_


@chatgpt-codex-connector (bot) left a comment:

💡 Codex Review

cmux/src/services/ipcMain.ts, lines 1121 to 1125 in 0598fc9:

```ts
ipcMain.handle(IPC_CHANNELS.PROVIDERS_LIST, () => {
  try {
    // Return all supported providers, not just configured ones
    // This matches the providers defined in the registry
    return ["anthropic", "openai"];
```

P1: Expose OpenRouter in providers list

The new OpenRouter integration never shows up in the providers UI because PROVIDERS_LIST still returns only "anthropic" and "openai". The renderer calls this handler to know which providers can be configured (src/browser/api.ts and src/preload.ts), so users will not see or be able to configure OpenRouter credentials even though the backend now supports it. Please include "openrouter" (and any other supported providers) in this list.
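
A minimal sketch of the suggested fix, extending the handler from the excerpt above (the catch body is not shown in the excerpt, so it is assumed here):

```ts
ipcMain.handle(IPC_CHANNELS.PROVIDERS_LIST, () => {
  try {
    // Return all supported providers, not just configured ones,
    // now including OpenRouter.
    return ["anthropic", "openai", "openrouter"];
  } catch (err) {
    // Assumed fallback; the original error handling is truncated above.
    console.error("PROVIDERS_LIST failed", err);
    return [];
  }
});
```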


Enable transparent pass-through of OpenRouter provider routing options
from providers.jsonc to control which infrastructure providers serve
requests (Cerebras, Fireworks, Together, etc.).

Changes:
- Update OpenRouterProviderOptions documentation
- Add OpenRouter case to buildProviderOptions
- Document provider routing in docs/models.md with examples
- Add GLM-4.6 example (z-ai/glm-4.6, not cerebras/glm-4.6)

Usage:
```jsonc
{
  "openrouter": {
    "apiKey": "sk-or-v1-...",
    "provider": {
      "order": ["Cerebras", "Fireworks"],
      "allow_fallbacks": true
    }
  }
}
```

The ProviderConfig already supports arbitrary properties via
`[key: string]: unknown`, so OpenRouter options pass through
transparently to the SDK's extraBody parameter.
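
A sketch of the shape in question; only the index signature is quoted from the code, the named fields are assumptions:

```ts
interface ProviderConfig {
  apiKey?: string;
  baseURL?: string;
  // Arbitrary extra options (e.g. the "provider" routing block) flow
  // through untouched to the SDK's extraBody parameter.
  [key: string]: unknown;
}
```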

_Generated with `cmux`_

Map provider routing options from providers.jsonc to OpenRouter's
extraBody parameter. The SDK expects standard options (apiKey, baseURL,
headers, fetch) at the top level and everything else in extraBody.

Before: Spread entire providerConfig (provider routing ignored)
After: Extract standard fields, pass rest via extraBody

This enables provider routing to actually work:
```jsonc
{
  "openrouter": {
    "apiKey": "sk-or-v1-...",
    "provider": {
      "require": "Cerebras"
    }
  }
}
```
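
A minimal sketch of that extraction, using the standard option names the SDK expects (the config value is an example):

```ts
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

// Example config as it would arrive from providers.jsonc.
const providerConfig: Record<string, unknown> = {
  apiKey: "sk-or-v1-...",
  provider: { order: ["Cerebras"], allow_fallbacks: false },
};

// Standard options stay at the top level; everything else (here the
// "provider" routing block) is forwarded via extraBody.
const { apiKey, baseURL, headers, fetch: fetchImpl, ...extraBody } =
  providerConfig as {
    apiKey?: string;
    baseURL?: string;
    headers?: Record<string, string>;
    fetch?: typeof globalThis.fetch;
    [key: string]: unknown;
  };

const openrouter = createOpenRouter({
  apiKey,
  baseURL,
  headers,
  fetch: fetchImpl,
  extraBody,
});
```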

_Generated with `cmux`_

The documentation incorrectly showed:
  "provider": { "require": "Cerebras" }

OpenRouter's API doesn't have a 'require' field. The correct format is:
  "provider": { "order": ["Cerebras"], "allow_fallbacks": false }

Changes:
- Fixed example to use correct 'order' + 'allow_fallbacks' fields
- Added comprehensive list of all provider routing options (example below)
- Added link to official OpenRouter provider routing docs
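
For illustration, a config exercising a few of the documented routing options (values are examples only):

```jsonc
{
  "openrouter": {
    "apiKey": "sk-or-v1-...",
    "provider": {
      "order": ["Cerebras", "Fireworks"], // try these providers first, in order
      "allow_fallbacks": false,           // fail rather than route elsewhere
      "require_parameters": true,         // only providers supporting every request parameter
      "data_collection": "deny"           // skip providers that retain prompts
    }
  }
}
```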

_Generated with `cmux`_

Enable thinking levels for OpenRouter reasoning models (Claude Sonnet
Thinking, etc.) by passing reasoning.effort through providerOptions.

OpenRouter supports two reasoning control methods:
1. reasoning.effort: 'low'|'medium'|'high' (maps to our thinking levels)
2. reasoning.max_tokens: number (token budget)

We use effort-based control which maps cleanly to our existing thinking
level UI (off/low/medium/high).
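
A sketch of that mapping and the per-request options; identifier names follow the commit message, the exact shapes are assumptions:

```ts
type ThinkingLevel = "off" | "low" | "medium" | "high";

// Thinking levels above "off" map one-to-one onto OpenRouter effort values.
const OPENROUTER_REASONING_EFFORT: Record<
  Exclude<ThinkingLevel, "off">,
  "low" | "medium" | "high"
> = { low: "low", medium: "medium", high: "high" };

// Built per request when the thinking slider is above "off".
function buildOpenRouterReasoning(level: ThinkingLevel) {
  if (level === "off") return undefined;
  return {
    reasoning: {
      effort: OPENROUTER_REASONING_EFFORT[level],
      exclude: false, // keep reasoning traces in the response for the UI
    },
  };
}
```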

Changes:
- Added OPENROUTER_REASONING_EFFORT mapping in thinking.ts
- Updated buildProviderOptions to pass reasoning config when thinking > off
- Added OpenRouterReasoningOptions type for type safety
- Set exclude: false to show reasoning traces in UI

This complements factory-level provider routing (configured in
providers.jsonc) with per-request reasoning control (based on thinking
slider).

_Generated with `cmux`_

Added section explaining how to use the thinking slider with OpenRouter
reasoning models. The thinking level controls reasoning.effort (low,
medium, high) which works with Claude Sonnet Thinking and other
reasoning-capable models via OpenRouter.

_Generated with `cmux`_

Ran scripts/update_models.ts to pull latest model data from LiteLLM.

Added Z.AI GLM-4.6 to models-extra.ts with OpenRouter pricing (entry sketched below):
- 200K context window (202,752 tokens)
- $0.40/M input, $1.75/M output
- Supports tool use, reasoning, and structured outputs
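
A hypothetical rendering of that entry, with key names borrowed from LiteLLM's models.json conventions (the actual models-extra.ts shape may differ):

```ts
export const modelsExtra = {
  "openrouter/z-ai/glm-4.6": {
    max_input_tokens: 202_752,               // ~200K context window
    input_cost_per_token: 0.4 / 1_000_000,   // $0.40 per million input tokens
    output_cost_per_token: 1.75 / 1_000_000, // $1.75 per million output tokens
    supports_function_calling: true,
    supports_reasoning: true,
  },
} as const;
```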

This fixes model stat lookups for:
- openrouter:z-ai/glm-4.6
- openrouter:anthropic/claude-3.7-sonnet:thinking (already in models.json)

Changes:
- Updated src/utils/tokens/models.json (3,379 additions from LiteLLM)
- Added openrouter/z-ai/glm-4.6 to models-extra.ts

_Generated with `cmux`_