feat: add MiniMax as a new LLM provider #11367

Open
octo-patch wants to merge 1 commit into continuedev:main from octo-patch:add-minimax-provider

Conversation

@octo-patch octo-patch commented Mar 12, 2026

Summary

  • Add MiniMax as a new LLM provider with OpenAI-compatible API support
  • Register MiniMax-M2.5 (204K context) and MiniMax-M2.5-highspeed models
  • Include GUI model selection, provider configuration, and documentation

Changes

New Files

  • core/llm/llms/MiniMax.ts — Provider class extending OpenAI with:
    • Temperature clamping to (0, 1] range (MiniMax rejects 0)
    • response_format removal (unsupported)
    • Default API base: https://api.minimax.io/v1/
  • packages/llm-info/src/providers/minimax.ts — Model metadata
  • docs/customize/model-providers/more/minimax.mdx — Provider documentation
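The two request fixes above are small; a minimal sketch of the helpers such a provider class might use (function and constant names here are illustrative, not the PR's actual code):

```typescript
// Illustrative helpers for the MiniMax quirks described above (assumed
// names; the real logic lives in core/llm/llms/MiniMax.ts).

const MIN_TEMPERATURE = Number.EPSILON; // any value > 0 works; MiniMax rejects 0

// Clamp temperature into (0, 1]; leave it unset if the caller did not set it.
export function clampTemperature(t: number | undefined): number | undefined {
  if (t === undefined) return undefined;
  return Math.min(Math.max(t, MIN_TEMPERATURE), 1);
}

// MiniMax rejects response_format, so drop it from the request body.
export function stripResponseFormat<T extends { response_format?: unknown }>(
  body: T,
): Omit<T, "response_format"> {
  const { response_format: _dropped, ...rest } = body;
  return rest;
}
```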

Modified Files

  • core/llm/llms/index.ts — Register in LLMClasses
  • packages/openai-adapters/src/index.ts — Register OpenAI-compatible adapter
  • packages/config-types/src/index.ts — Add "minimax" to provider enum
  • packages/llm-info/src/index.ts — Add to allModelProviders
  • gui/src/pages/AddNewModel/configs/models.ts — Model entries
  • gui/src/pages/AddNewModel/configs/providers.ts — Provider config with API key setup
  • docs/customize/model-providers/overview.mdx — Add to hosted services table

Models

| Model | Context | Max Output | Description |
| --- | --- | --- | --- |
| MiniMax-M2.5 | 204,800 | 192,000 | Peak performance, complex reasoning |
| MiniMax-M2.5-highspeed | 204,800 | 192,000 | Same performance, lower latency |
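For reference, the llm-info entries for these models might look roughly like this (the interface and field names are assumptions; the actual shape is defined in packages/llm-info/src/providers/minimax.ts):

```typescript
// Hypothetical shape of the model metadata entries listed above.
interface ModelInfo {
  model: string;
  contextLength: number;
  maxCompletionTokens: number;
}

export const minimaxModels: ModelInfo[] = [
  { model: "MiniMax-M2.5", contextLength: 204800, maxCompletionTokens: 192000 },
  { model: "MiniMax-M2.5-highspeed", contextLength: 204800, maxCompletionTokens: 192000 },
];
```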

Test Plan

  • Verified MiniMax API connectivity and response format
  • Verified temperature constraint handling
  • Code follows existing provider patterns (Groq, DeepSeek, Mistral)

Summary by cubic

Add MiniMax as an OpenAI-compatible LLM provider with GUI support and docs. Includes MiniMax-M2.5 and MiniMax-M2.5-highspeed models (204K context) and normalizes unsupported params.

  • New Features

    • New minimax provider using https://api.minimax.io/v1/ via the OpenAI-compatible adapter.
    • Registers provider and models in LLMClasses, config types, and llm-info.
    • GUI: provider setup with API key and model entries for MiniMax-M2.5 and MiniMax-M2.5-highspeed.
    • Request handling: clamps temperature to (0, 1] and removes unsupported response_format.
    • Docs: provider guide and overview entry.
  • Migration

    • Configure provider: minimax with your API key, or set MINIMAX_API_KEY.
    • For China, set apiBase to https://api.minimaxi.com/v1/.

Written for commit a15305e. Summary will update on new commits.
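The migration notes above boil down to a provider default plus an optional apiBase override; a hedged sketch of that resolution logic (not the repo's actual implementation):

```typescript
// Illustrative apiBase resolution: an explicit override (e.g. the China
// endpoint mentioned above) wins over the provider default. Names here
// are hypothetical.
interface ProviderConfig {
  provider: string;
  apiBase?: string;
}

const DEFAULT_API_BASES: Record<string, string> = {
  minimax: "https://api.minimax.io/v1/",
};

export function resolveApiBase(config: ProviderConfig): string | undefined {
  return config.apiBase ?? DEFAULT_API_BASES[config.provider];
}
```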

Add MiniMax (https://platform.minimax.io) as a new LLM provider with
OpenAI-compatible API support.

Changes:
- Add MiniMax LLM provider class extending OpenAI with temperature
  clamping (must be in (0, 1]) and response_format removal
- Register provider in LLMClasses, openai-adapters, and config-types
- Add model info for MiniMax-M2.5 and MiniMax-M2.5-highspeed
  (204K context, 192K max output)
- Add GUI model selection entries and provider configuration
- Add provider documentation page
@octo-patch octo-patch requested a review from a team as a code owner March 12, 2026 23:18
@octo-patch octo-patch requested review from Patrick-Erichsen and removed request for a team March 12, 2026 23:18
@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Mar 12, 2026
@github-actions (Contributor) commented:
Thank you for your submission, we really appreciate it. Like many open-source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You can sign the CLA by posting a pull request comment in the format below.


I have read the CLA Document and I hereby sign the CLA


PR Bot does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You can retrigger this bot by commenting "recheck" in this pull request. Posted by the CLA Assistant Lite bot.

@cubic-dev-ai (Contributor) left a comment:
1 issue found across 10 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="packages/openai-adapters/src/index.ts">

<violation number="1" location="packages/openai-adapters/src/index.ts:145">
P1: MiniMax is wired to generic `OpenAIApi`, which skips the repo’s MiniMax-specific request fixes (temperature clamping and `response_format` removal), creating a real incompatibility path in adapter-based runtime flows.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

case "groq":
return openAICompatible("https://api.groq.com/openai/v1/", config);
case "minimax":
return openAICompatible("https://api.minimax.io/v1/", config);
@cubic-dev-ai (Contributor) commented Mar 12, 2026:
P1: MiniMax is wired to generic OpenAIApi, which skips the repo’s MiniMax-specific request fixes (temperature clamping and response_format removal), creating a real incompatibility path in adapter-based runtime flows.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/openai-adapters/src/index.ts, line 145:

<comment>MiniMax is wired to generic `OpenAIApi`, which skips the repo’s MiniMax-specific request fixes (temperature clamping and `response_format` removal), creating a real incompatibility path in adapter-based runtime flows.</comment>

<file context>
@@ -141,6 +141,8 @@ export function constructLlmApi(config: LLMConfig): BaseLlmApi | undefined {
     case "groq":
       return openAICompatible("https://api.groq.com/openai/v1/", config);
+    case "minimax":
+      return openAICompatible("https://api.minimax.io/v1/", config);
     case "sambanova":
       return openAICompatible("https://api.sambanova.ai/v1/", config);
</file context>
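One direction a fix for the flagged issue could take (purely a sketch; the adapter API names here are invented, not the repo's) is to wrap the generic OpenAI-compatible path so MiniMax request bodies are normalized before dispatch:

```typescript
// Hypothetical wrapper addressing the review comment above: strip
// response_format and clamp temperature into (0, 1] before delegating
// to a generic OpenAI-compatible chat function.
type ChatBody = { temperature?: number; response_format?: unknown; [k: string]: unknown };
type ChatFn = (body: ChatBody) => Promise<unknown>;

export function withMiniMaxNormalization(inner: ChatFn): ChatFn {
  return (body) => {
    const { response_format: _dropped, ...rest } = body; // MiniMax rejects response_format
    if (typeof rest.temperature === "number") {
      // MiniMax rejects temperature = 0, so clamp into (0, 1]
      rest.temperature = Math.min(Math.max(rest.temperature, Number.EPSILON), 1);
    }
    return inner(rest);
  };
}
```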


Labels

size:L This PR changes 100-499 lines, ignoring generated files.

Projects

Status: Todo


1 participant