
feat(cohere): add new models [bot]#334

Merged
LordGameleo merged 2 commits into main from bot/add-cohere-20260318-064801
Mar 18, 2026
Conversation

@hganwani-droid (Collaborator) commented Mar 18, 2026

Auto-generated by model-addition-agent for provider cohere.


Note

Medium Risk
Primarily updates Cohere model configuration YAMLs, but several models switch mode between completion and chat, which can change which API/path the runtime uses. Incorrect mode/limit metadata could cause request failures or truncated outputs at runtime.

Overview
Updates multiple Cohere model definition YAMLs to align metadata with current capabilities.

Several models adjust their primary mode (notably switching between completion and chat) and add supportedModes where both are intended. The PR also refreshes declared features (e.g., tools/tool_choice/structured_output) and revises limits.context_window for various chat, embedding, and rerank models.
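Based on the description above, a dual-mode model entry might look like the following sketch. The field names (mode, supportedModes, features, limits.context_window) are taken from this PR's description; the surrounding file layout and all values are assumptions for illustration only.

```yaml
# Hypothetical sketch of a Cohere model YAML after this PR.
# Field names from the PR description; values are illustrative.
mode: chat                 # primary mode switched from completion
supportedModes:            # declared where both modes are intended
  - chat
  - completion
features:                  # refreshed feature flags
  - tools
  - tool_choice
  - structured_output
limits:
  context_window: 128000   # illustrative value only
```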

Written by Cursor Bugbot for commit ad58b21.

@LordGameleo LordGameleo merged commit 5de62aa into main Mar 18, 2026
3 checks passed

@cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 4 potential issues.


  - tools
isDeprecated: true
limits:
  context_window: 132096


Incorrect max_tokens value in command-r-plus config

Medium Severity

The newly added max_tokens: 128000 is inconsistent with max_output_tokens: 4000. Every comparable Cohere model (command-r-08-2024, command-r7b-12-2024, command-r7b-arabic-02-2025) sets max_tokens equal to max_output_tokens (i.e. 4000). The value 128000 matches max_input_tokens, suggesting the wrong field's value was used here.
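Under the review's reasoning, the fix would set max_tokens equal to max_output_tokens, as the sibling models do. A sketch, with values taken from the comment above and the exact file layout assumed:

```yaml
# command-r-plus limits (sketch; layout assumed)
limits:
  max_input_tokens: 128000
  max_output_tokens: 4000
  max_tokens: 4000   # align with max_output_tokens, per comparable models
```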


   - tools
 limits:
-  context_window: 16000
+  context_window: 8992


Incorrect context window value for command-a-translate

Medium Severity

The context_window is set to 8992, but the actual context window for command-a-translate-08-2025 is 16K tokens (16384). The correct total context length is 8K input + 8K output = 16K. The value 8992 does not correspond to any documented limit and is almost certainly wrong.
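If the reviewer's 8K-input + 8K-output arithmetic holds, the corrected limits block would read roughly as follows. A sketch only: values come from the comment above, and the file layout is assumed.

```yaml
# command-a-translate-08-2025 limits (sketch; layout assumed)
limits:
  max_input_tokens: 8192
  max_output_tokens: 8192
  context_window: 16384   # 8192 + 8192, per the review comment
```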


   - tools
 limits:
-  context_window: 256000
+  context_window: 288000


Incorrect context window for command-a-03-2025

Medium Severity

The context_window is set to 288000, but the actual context window for command-a-03-2025 is 256K tokens (256000) per Cohere's official documentation. The value 288000 overstates the actual limit and does not match the max_input_tokens: 256000 field in the same file.
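The suggested fix simply aligns context_window with the max_input_tokens field already present in the file. A sketch (layout assumed, values from the comment above):

```yaml
# command-a-03-2025 limits (sketch; layout assumed)
limits:
  max_input_tokens: 256000
  context_window: 256000   # match the documented 256K limit
```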


   - tools
 limits:
-  context_window: 256000
+  context_window: 288768


Incorrect context window for command-a-reasoning

Medium Severity

The context_window is set to 288768, but the actual context window for command-a-reasoning-08-2025 is 256K tokens (256000) per Cohere's official documentation. This is inconsistent with max_input_tokens: 256000 in the same file.
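As with command-a-03-2025, the fix would bring context_window back in line with max_input_tokens in the same file. A sketch (layout assumed, values from the comment above):

```yaml
# command-a-reasoning-08-2025 limits (sketch; layout assumed)
limits:
  max_input_tokens: 256000
  context_window: 256000   # match the documented 256K limit
```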

