
Conversation

Contributor

@baonudesifeizhai baonudesifeizhai commented Nov 9, 2025

Purpose

Fix issue #28262: Restore missing channel metadata when converting Responses API output items back to Harmony Messages for multi-turn conversations.
Changes:
  • Set channel='commentary' for function_call_output type inputs
  • Set channel='analysis' or 'commentary' for reasoning type based on the following message (commentary if followed by a function_call, analysis otherwise)
  • Add a test to verify channel metadata is correctly preserved across conversation turns
  • Update parse_response_input() to accept an optional next_msg parameter for context-aware channel assignment

Test Plan

pytest tests/entrypoints/openai/test_response_api_with_harmony.py::test_function_call_with_previous_input_messages -v

Test Result

pass


Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request addresses an issue where channel metadata was being lost during multi-turn conversations in the Responses API. The changes correctly assign 'commentary' or 'analysis' channels to function_call_output and reasoning messages, respectively, based on the conversational context. The addition of tests to verify this behavior is a great step. I have one suggestion to simplify the conditional logic for setting the channel on reasoning messages, which will improve code clarity and maintainability.

Contributor

@alecsolder alecsolder left a comment

Thank you for this! Left a few comments.

Author.new(Role.TOOL, f"functions.{call_response.name}"),
response_msg["output"],
)
msg = msg.with_channel("commentary")
Contributor

Do you want to add the content_type change from the issue here too? Or are you just tackling one specific part of the issue?

Contributor Author

@baonudesifeizhai baonudesifeizhai Nov 11, 2025

msg = msg.with_channel("commentary")
msg = msg.with_content_type("json")

Is that enough?

@baonudesifeizhai
Contributor Author

pytest tests/entrypoints/openai/test_response_api_with_harmony.py::test_function_call_with_previous_input_messages -v passes.

@baonudesifeizhai
Contributor Author

pytest tests/entrypoints/openai/test_response_api_with_harmony.py -v -k "not test_code_interpreter" passes.

test_code_interpreter times out; I don't know why.


Labels

frontend gpt-oss Related to GPT-OSS models

Projects

Status: To Triage
