
Add three NVIDIA NIM models: Qwen3.5-397B, MiniMax-M2.5, and Step-3.5-Flash #1069

Open
lcwecker wants to merge 1 commit into anomalyco:dev from lcwecker:add-nvidia-nim-models

Conversation


lcwecker commented on Mar 1, 2026

Summary

Add three new model configurations for the NVIDIA NIM provider, verified against official NVIDIA NIM documentation.

Changes

1. Qwen3.5-397B-A17B (providers/nvidia/models/qwen/qwen3.5-397b-a17b.toml): 397B MoE multimodal model, 262K context (see the sketch after this list)

2. MiniMax-M2.5 (providers/nvidia/models/minimaxai/minimax-m2.5.toml): 230B coding and reasoning model, 204K context

3. Step-3.5-Flash (providers/nvidia/models/stepfun-ai/step-3.5-flash.toml): 196B MoE reasoning model, 256K context
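
For illustration, here is a minimal sketch of what the first of these files could contain, using only the figures quoted in this PR. The field names and table layout are assumptions about the registry schema, not the file as committed:

```toml
# Sketch only: field names are assumed; values come from this PR's description.
name = "Qwen3.5-397B-A17B"

# Described in the commit message as a 397B MoE multimodal model (NVIDIA NIM).
[modalities]
input = ["text", "image"]  # "multimodal" per the PR; exact input types assumed
output = ["text"]

[limit]
context = 262144  # "262K context" per the PR; exact token count (256 * 1024) assumed
```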

Verification

  • All configurations validated with bun validate
  • Specifications verified against NVIDIA NIM official documentation
  • Model cards referenced from official sources

Notes

  • Model specifications (context length, modalities, release dates) sourced from official NVIDIA NIM documentation
  • Knowledge cutoff dates are estimates based on release dates where not explicitly stated by vendors

Add three NVIDIA NIM models: Qwen3.5-397B, MiniMax-M2.5, and Step-3.5-Flash

- Add qwen3.5-397b-a17b: 397B MoE multimodal model with 262K context (NVIDIA NIM)
- Add minimax-m2.5: 230B coding and reasoning model with 204K context (NVIDIA NIM)
- Add step-3.5-flash: 196B MoE reasoning model with 256K context (NVIDIA NIM)
- Create stepfun-ai provider directory for Step-3.5-Flash (resulting layout sketched below)
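
For orientation, this is the file layout the commit produces, reconstructed from the paths listed above; stepfun-ai is the newly created provider directory:

```
providers/nvidia/models/
├── qwen/qwen3.5-397b-a17b.toml
├── minimaxai/minimax-m2.5.toml
└── stepfun-ai/step-3.5-flash.toml
```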
