
[BUG] LiteLLM Moonshot AI Kimi K2 thinking fails on multi-turn conversations with 'Invalid part type: thinking' #1150

@westonbrown

Description


Checks

  • I have updated to the latest minor and patch version of Strands
  • I have checked the documentation and this is not expected behavior
  • I have searched issues and there are no duplicates

Strands Version

1.15.0

Python Version

3.10.18

Operating System

macOS (Darwin 25.0.0)

Installation Method

pip install strands==1.15.0

Steps to Reproduce

  1. Install Strands and configure Moonshot AI via LiteLLM:

from strands.models import LiteLLMModel

model = LiteLLMModel(
    model_id="moonshot/kimi-k2-thinking",
    client_args={}
)

  2. Make an initial request with tool calling:

messages = [{"role": "user", "content": [{"text": "Analyze this target"}]}]
response = await model.stream(messages=messages, tool_specs=tools)

  3. Append the response to the conversation history and make a second request:

# The assistant message contains reasoning content
messages.append(assistant_message_with_reasoning)
messages.append({"role": "user", "content": [{"text": "Continue"}]})
response2 = await model.stream(messages=messages, tool_specs=tools)

  4. Observe the error from the Moonshot API (a consolidated sketch of these steps follows below).
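
For convenience, here is a minimal end-to-end sketch stitching the steps above together. The tool spec shape and the hand-built assistant message are illustrative assumptions, not taken from the report, and the stream() usage simply mirrors the snippets above rather than asserting the exact Strands API:

import asyncio

from strands.models import LiteLLMModel

# Illustrative tool spec; any spec that triggers tool calling will do.
# The shape below is an assumption, not taken from the report.
tools = [
    {
        "name": "analyze_target",
        "description": "Analyze a target and return findings.",
        "inputSchema": {"json": {"type": "object", "properties": {}}},
    }
]

async def main() -> None:
    model = LiteLLMModel(
        model_id="moonshot/kimi-k2-thinking",
        client_args={},
    )

    # Turn 1: initial request with tool calling.
    messages = [{"role": "user", "content": [{"text": "Analyze this target"}]}]
    await model.stream(messages=messages, tool_specs=tools)

    # Hand-built stand-in for the first assistant response; in a real run
    # this would be assembled from the events of the first stream() call.
    assistant_message_with_reasoning = {
        "role": "assistant",
        "content": [
            {"reasoningContent": {"reasoningText": {"text": "Considering the target..."}}},
            {"text": "Initial analysis."},
        ],
    }

    # Turn 2: replaying the history triggers the Moonshot error.
    messages.append(assistant_message_with_reasoning)
    messages.append({"role": "user", "content": [{"text": "Continue"}]})
    await model.stream(messages=messages, tool_specs=tools)  # raises MoonshotException

asyncio.run(main())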

Expected Behavior

Multi-turn conversations with reasoning content should work seamlessly. Reasoning content should be preserved in a format compatible with Moonshot's OpenAI-compatible API.

Actual Behavior

Second request fails with:

MoonshotException - Invalid request: the message at position 2 with role 'assistant' 
contains an invalid part type: thinking

Additional Context

Root cause: In src/strands/models/litellm.py lines 95-102, format_request_message_content() converts reasoning content to:

{
    "type": "thinking",
    "thinking": reasoning_text["text"],
}

The Moonshot API only accepts the OpenAI-compatible content part types "text" and "image_url". The "thinking" type works with Claude via Bedrock but is rejected by Moonshot and other OpenAI-compatible providers.
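
To make the mismatch concrete, here is how the assistant turn ends up serialized. The surrounding message fields are illustrative, but the part types match the description above:

# What Strands currently sends for the assistant turn (rejected by Moonshot):
{
    "role": "assistant",
    "content": [
        {"type": "thinking", "thinking": "..."},  # not an OpenAI-compatible part type
        {"type": "text", "text": "Initial analysis."},
    ],
}

# What Moonshot's OpenAI-compatible API accepts: only "text" and "image_url" parts.
{
    "role": "assistant",
    "content": [
        {"type": "text", "text": "Initial analysis."},
    ],
}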

Possible Solution

Modify src/strands/models/litellm.py lines 95-102:

if "reasoningContent" in content:
    reasoning_text = content["reasoningContent"]["reasoningText"]
    # Use text type for OpenAI-compatible providers
    return {
        "type": "text",
        "text": reasoning_text["text"],
    }

This preserves the reasoning information while remaining compatible with OpenAI-compatible providers (Moonshot, DeepSeek R1, etc.).
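
If dropping the "thinking" type everywhere is undesirable, a provider-aware fallback would be another option. This is a hypothetical sketch, not existing Strands code; _supports_thinking_parts() and its prefix check are assumptions:

def _supports_thinking_parts(model_id: str) -> bool:
    # Hypothetical helper: assume only Anthropic-style providers
    # accept "thinking" content parts.
    return model_id.startswith(("anthropic/", "bedrock/"))

def format_reasoning_content(content: dict, model_id: str) -> dict:
    reasoning_text = content["reasoningContent"]["reasoningText"]
    if _supports_thinking_parts(model_id):
        return {"type": "thinking", "thinking": reasoning_text["text"]}
    # Fall back to a plain text part for OpenAI-compatible providers
    # (Moonshot, DeepSeek R1, etc.).
    return {"type": "text", "text": reasoning_text["text"]}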

Related Issues

Similar to #652, which handled filtering reasoningContent for DeepSeek on Bedrock.
