
Add MiniMax as a first-class LLM provider#690

Open
octo-patch wants to merge 2 commits into google:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a first-class LLM provider in Langfun, following the existing patterns used by DeepSeek and Groq providers.

Changes

  • langfun/core/llms/minimax.py — New provider module:

    • MiniMaxModelInfo extending lf.ModelInfo with provider metadata and links
    • MiniMax base class extending OpenAIChatCompletionAPI with MiniMax-specific configuration
    • MiniMaxM27 and MiniMaxM27Highspeed convenience classes for MiniMax-M2.7 and MiniMax-M2.7-highspeed models
    • Temperature clamping (MiniMax API requires temperature ∈ (0.0, 1.0], so 0.0 is clamped to 0.01)
    • API key management via api_key parameter or MINIMAX_API_KEY environment variable
    • Model registration with lf.LanguageModel.register() for LanguageModel.get() support
  • langfun/core/llms/minimax_test.py — 10 unit tests covering:

    • Model directory listing
    • API key validation and environment variable fallback
    • Model ID and resource ID verification
    • Model info metadata
    • Request formatting
    • Temperature clamping behavior
    • LanguageModel.get() registration
  • langfun/core/llms/__init__.py — Added MiniMax exports

  • README.md — Added MiniMax to the list of supported LLMs
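The temperature-clamping rule listed in the changes above (MiniMax accepts temperature ∈ (0.0, 1.0], with 0.0 clamped to 0.01) can be sketched as a standalone helper. This is an illustrative function, not the actual code in `minimax.py`:

```python
def clamp_temperature(temperature: float) -> float:
    # MiniMax's API rejects a sampling temperature of 0.0, so the
    # provider raises it to 0.01 before sending the request.
    # (Illustrative sketch of the rule described in the PR, not the
    # actual langfun implementation.)
    return 0.01 if temperature <= 0.0 else temperature
```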

Supported Models

| Model | Context Window | Description |
| --- | --- | --- |
| MiniMax-M2.7 | 1M tokens | Latest flagship model |
| MiniMax-M2.7-highspeed | 1M tokens | Optimized for speed |
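The `lf.LanguageModel.register()` / `LanguageModel.get()` flow that maps these model IDs to classes can be sketched with a minimal toy registry. The registry and class below are illustrative stand-ins, not langfun's actual implementation:

```python
# Toy sketch of the register()/get() pattern the PR relies on:
# a model ID string is registered against a class, and get()
# instantiates the class registered under that ID.
_REGISTRY: dict[str, type] = {}

def register(model_id: str, cls: type) -> None:
    _REGISTRY[model_id] = cls

def get(model_id: str):
    # Look up the registered class and instantiate it.
    return _REGISTRY[model_id]()

class MiniMaxM27:  # stand-in for the real convenience class
    model_id = 'MiniMax-M2.7'

register('MiniMax-M2.7', MiniMaxM27)
lm = get('MiniMax-M2.7')
```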

Usage

import langfun as lf

# Using convenience class
lm = lf.llms.MiniMaxM27(api_key='YOUR_KEY')
# Or via environment variable MINIMAX_API_KEY
lm = lf.llms.MiniMaxM27()

r = lm('Who are you?')
print(r)

# Using LanguageModel.get()
lm = lf.LanguageModel.get('MiniMax-M2.7')

Test Results

All 10 unit tests pass. Existing provider tests (DeepSeek, Groq) continue to pass without regressions.


4 files changed, 288 additions, 1 deletion

Add support for MiniMax models (M2.7 and M2.7-highspeed) via their
OpenAI-compatible Chat Completion API. MiniMax is a leading AI company
offering high-performance language models with context windows of up to 1M tokens.

Changes:
- Add langfun/core/llms/minimax.py with MiniMax provider class extending
  OpenAIChatCompletionAPI, including model info, temperature clamping
  (MiniMax requires temperature > 0), and API key management
- Add langfun/core/llms/minimax_test.py with 10 unit tests
- Update langfun/core/llms/__init__.py to export MiniMax classes
- Update README.md to mention MiniMax in supported providers list
@google-cla

google-cla bot commented Mar 28, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.
