
feat: add tool calling support to m serve#850

Draft
markstur wants to merge 1 commit intogenerative-computing:mainfrom
markstur:issue_825

Conversation


@markstur markstur commented Apr 13, 2026

Misc PR

Type of PR

  • Bug Fix
  • New Feature
  • Documentation
  • Other

Description

Adds tool calling support to the m serve CLI with proper type annotations. Here's what was implemented:

Changes Made

1. Updated Models (cli/serve/models.py)

  • Added ToolCallFunction model for function details in tool calls
  • Added ChatCompletionMessageToolCall model for tool call structure
  • Extended ChatCompletionMessage to include optional tool_calls field
  • Updated Choice.finish_reason to support "tool_calls" value

2. Modified Server Logic (cli/serve/app.py)

  • Added json and Literal imports for proper typing
  • Imported new tool-related models
  • Updated _build_model_options() to pass through tools (mapped to ModelOption.TOOLS) and tool_choice parameters
  • Enhanced make_chat_endpoint() to:
    • Extract tool calls from ModelOutputThunk.tool_calls with proper type checking (isinstance(output.tool_calls, dict))
    • Generate unique IDs for each tool call in format call_<24-char-hex>
    • Serialize tool arguments to JSON
    • Set finish_reason with proper Literal type annotation
    • Return tool calls in OpenAI-compatible format

3. Comprehensive Tests (test/cli/test_serve_tool_calling.py)

  • 8 new tests covering:
    • Single and multiple tool calls
    • Tool call formatting and serialization
    • Complex nested arguments
    • Tool parameters passed to model_options
    • Backward compatibility (requests without tools)
    • Usage info alongside tool calls

4. Updated Existing Test (test/cli/test_serve.py)

  • Renamed test_tool_params_excluded_from_model_options to test_tool_params_passed_to_model_options
  • Updated assertions to verify tools and tool_choice are now passed through

5. Example Code

  • docs/examples/m_serve/m_serve_example_tool_calling.py: Complete server example with GetWeatherTool and CalculatorTool implementations
  • docs/examples/m_serve/client_tool_calling.py: Client demonstrating how to call the tool-enabled server with various scenarios

Key Features

OpenAI-Compatible: Follows OpenAI's tool calling API format
Type-Safe: Proper Literal type annotations for finish_reason
Robust Type Checking: Uses an isinstance(..., dict) check to avoid Mock object issues
Automatic Tool Call Detection: Extracts tool calls from ModelOutputThunk
Proper Finish Reasons: Returns "tool_calls" when tools are invoked, "stop" otherwise
Unique Tool Call IDs: Generates unique IDs in format call_<24-char-hex>
JSON Serialization: Properly serializes tool arguments to JSON strings
Backward Compatible: Works with existing code that doesn't use tools
Fully Tested: All 43 serve tests pass, including 8 new tool-specific tests
Type Checked: Passes mypy type checking
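The ID and serialization scheme above can be sketched as follows (helper names are illustrative, not the PR's actual functions):

```python
import json
import uuid


def new_tool_call_id() -> str:
    """Generate a unique tool call ID in the format call_<24-char-hex>."""
    return f"call_{uuid.uuid4().hex[:24]}"


def serialize_args(args: dict) -> str:
    """Serialize tool arguments to the JSON string the OpenAI format expects."""
    return json.dumps(args)
```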

Usage

Start server with tool support:

uv run m serve docs/examples/m_serve/m_serve_example_tool_calling.py

Call with tools from client:

import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": [...],  # Tool definitions
        "tool_choice": "auto",
    },
)
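The elided tools list takes standard OpenAI function-tool definitions. A hypothetical entry matching the weather scenario might look like this (the server's actual example tools are defined in docs/examples/m_serve/m_serve_example_tool_calling.py):

```python
# Hypothetical tool definition in the OpenAI function-tool format.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}
```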

The implementation properly handles tool calls from Mellea's ModelOutputThunk and formats them according to OpenAI's API specification with full type safety.

Testing

  • Tests added to the respective file if code was changed
  • New code has 100% coverage if code was added
  • Ensure existing tests and github automation passes (a maintainer will kick off the github automation when the rest of the PR is populated)

Signed-off-by: Mark Sturdevant <mark.sturdevant@ibm.com>
@markstur markstur requested a review from a team as a code owner April 13, 2026 23:38
@markstur markstur marked this pull request as draft April 13, 2026 23:38
@github-actions github-actions bot added the enhancement New feature or request label Apr 13, 2026
@github-actions

The PR description has been updated. Please fill out the template for your PR to be reviewed.

@planetf1

@markstur Do you want review comments yet or still WIP?

@markstur

@markstur Do you want review comments yet or still WIP?

Comments would be great! It is draft because I need to do more review/test myself on the generated code. I don't want to waste your time but comments early would be very welcome.


@psschwei psschwei left a comment


Code Review: feat: add tool calling support to m serve

Good feature PR — the core plumbing is correct and the OpenAI-compatible response format looks right. A couple of bugs to fix before merge, plus some improvements.

Summary

The implementation correctly wires tool calling through the serve endpoint: tools maps to ModelOption.TOOLS, tool_choice passes through as-is, and the response extracts tool calls from ModelOutputThunk into the OpenAI format. The Pydantic models mirror the OpenAI types well, and tests cover the main paths.

Two bugs need fixing (see inline comments):

  1. Empty tool_calls dict produces incorrect finish_reason: "tool_calls" with an empty array
  2. Client example's multi-turn loop duplicates the assistant message for each tool call

Other improvements (see inline comments):

  • Unused loop variable tool_name
  • eval() in example code with # noqa suppressing the security lint for copy-pasters
  • Missing test for the empty dict edge case
  • hasattr check is always true for ModelOutputThunk — defensive but masks upstream bugs

What's working well

  • Pydantic models (ToolCallFunction, ChatCompletionMessageToolCall) closely match OpenAI types
  • _build_model_options change is clean — tools removed from exclusion set, mapped to ModelOption.TOOLS
  • 8 well-structured tests covering single/multiple tool calls, finish reasons, model_options passthrough, complex args, usage info, and backward compat
  • Existing test updated consistently from "excluded" to "passed"

Comment on lines +180 to +206
tool_calls = None
finish_reason: Literal[
    "stop", "length", "content_filter", "tool_calls", "function_call"
] = "stop"
if (
    hasattr(output, "tool_calls")
    and output.tool_calls is not None
    and isinstance(output.tool_calls, dict)
):
    tool_calls = []
    for tool_name, model_tool_call in output.tool_calls.items():
        # Generate a unique ID for this tool call
        tool_call_id = f"call_{uuid.uuid4().hex[:24]}"

        # Serialize the arguments to JSON string
        args_json = json.dumps(model_tool_call.args)

        tool_calls.append(
            ChatCompletionMessageToolCall(
                id=tool_call_id,
                type="function",
                function=ToolCallFunction(
                    name=model_tool_call.name, arguments=args_json
                ),
            )
        )
    finish_reason = "tool_calls"

Empty tool_calls dict produces wrong finish_reason
When output.tool_calls is {}, the code passes the isinstance(dict) check, creates an empty tool_calls = [], and sets finish_reason = "tool_calls". Per the OpenAI API, this should be "stop" with no tool calls. Fix: check if tool_calls: after the loop before setting the finish reason.

    and isinstance(output.tool_calls, dict)
):
    tool_calls = []
    for tool_name, model_tool_call in output.tool_calls.items():

tool_name is never used. Use .values() instead.
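For example (illustrative dict, not the PR's data):

```python
# Iterating .values() avoids the unused tool_name loop variable
tool_calls_by_name = {
    "get_weather": {"name": "get_weather", "args": {"city": "Paris"}},
}
names = [call["name"] for call in tool_calls_by_name.values()]
```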


Could we add a test for output.tool_calls = {}? That's the trigger for the bug noted earlier.
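A possible shape for that test, with the guard logic mirrored in a local helper so the edge case is checkable in isolation (the real test would go through the serve endpoint):

```python
def extract_tool_calls(tool_calls_attr):
    """Mirror of the extraction guard, including the empty-dict fix."""
    calls = None
    finish_reason = "stop"
    if tool_calls_attr is not None and isinstance(tool_calls_attr, dict):
        calls = list(tool_calls_attr.values())
        if calls:  # an empty dict must NOT flip the finish reason
            finish_reason = "tool_calls"
    return calls, finish_reason


def test_empty_tool_calls_dict_keeps_stop_finish_reason():
    calls, reason = extract_tool_calls({})
    assert reason == "stop"
    assert calls == []
```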

tool_result = "Tool result"

# Add tool response to conversation
messages.append(

nit: if someone was using N tools, this would append the assistant message N times. could move it out of the tool call loop to prevent that (though it's an example, so not a major problem)
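A sketch of the corrected ordering, with placeholder message shapes:

```python
# Placeholder tool calls standing in for the server's response
tool_calls = [
    {"id": "call_1", "function": {"name": "get_weather"}},
    {"id": "call_2", "function": {"name": "calculator"}},
]
messages = []

# Append the assistant turn once, outside the per-tool-call loop
messages.append({"role": "assistant", "tool_calls": tool_calls})

for call in tool_calls:
    tool_result = "Tool result"  # placeholder, as in the example
    # One tool message per call, referencing its tool_call_id
    messages.append(
        {"role": "tool", "tool_call_id": call["id"], "content": tool_result}
    )
```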


Labels

enhancement New feature or request


Development

Successfully merging this pull request may close these issues.

m serve OpenAI API tool calling round-trip

3 participants