[Frontend] split append tool output #28333
Conversation
Signed-off-by: Andrew Xia <axia@fb.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
```diff
 class ConversationContext(ABC):
     @abstractmethod
-    def append_output(self, output) -> None:
+    def append_output(self, output: RequestOutput) -> None:
         pass

+    @abstractmethod
+    def append_tool_output(self, output) -> None:
+        pass
```
Add required append_tool_output implementation for test mock
The commit introduces a new abstract method append_tool_output on ConversationContext but only adds concrete implementations for the production contexts. tests/entrypoints/openai/test_serving_responses.py still defines MockConversationContext without overriding this method, so constructing the mock (MockConversationContext()) now raises TypeError: Can't instantiate abstract class … with abstract method append_tool_output and the test suite fails before any assertions run. Please add a stub implementation to the mock or adjust the base class contract.
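A minimal sketch of the suggested fix, assuming the mock only needs to satisfy the abstract contract. The class and method names are taken from the review comment above; the base class is simplified here for self-containment, and the mock's attribute names are hypothetical:

```python
from abc import ABC, abstractmethod


class ConversationContext(ABC):
    """Simplified stand-in for the base class in the diff above."""

    @abstractmethod
    def append_output(self, output) -> None:
        pass

    @abstractmethod
    def append_tool_output(self, output) -> None:
        pass


class MockConversationContext(ConversationContext):
    """Test double; without append_tool_output it cannot be instantiated."""

    def append_output(self, output) -> None:
        self.last_output = output

    def append_tool_output(self, output) -> None:
        # Stub satisfying the new abstract method so the mock instantiates.
        self.last_tool_output = output


# With the stub in place, construction no longer raises TypeError.
ctx = MockConversationContext()
```

Any subclass that omits `append_tool_output` still raises `TypeError` on construction, which is exactly the failure mode the review describes for the current test mock.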
Signed-off-by: Andrew Xia <axia@fb.com>
Signed-off-by: Andrew Xia <axia@fb.com> Co-authored-by: Andrew Xia <axia@fb.com> Signed-off-by: George D. Torres <gdavtor@gmail.com>
Signed-off-by: Andrew Xia <axia@fb.com> Co-authored-by: Andrew Xia <axia@fb.com> Signed-off-by: Bram Wasti <bwasti@meta.com>
Signed-off-by: Andrew Xia <axia@fb.com> Co-authored-by: Andrew Xia <axia@fb.com>
Signed-off-by: Andrew Xia <axia@fb.com> Co-authored-by: Andrew Xia <axia@fb.com> Signed-off-by: Xingyu Liu <charlotteliu12x@gmail.com>
Signed-off-by: Andrew Xia <axia@fb.com> Co-authored-by: Andrew Xia <axia@fb.com>
Purpose
append_output currently does two things: it takes output from the engine (of type RequestOutput), and it takes output from an MCP tool call. These are two fundamentally different functions, so this PR separates them into append_output and append_tool_output. No functional changes are expected.
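The separation can be sketched as follows. Method names match the diff; RequestOutput is replaced by a placeholder dataclass, and SimpleContext is a hypothetical concrete context for illustration (the real implementations live in the vLLM frontend):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class RequestOutput:
    """Placeholder for the engine's output type."""
    text: str


class ConversationContext(ABC):
    # Engine results go through append_output, now typed to RequestOutput...
    @abstractmethod
    def append_output(self, output: RequestOutput) -> None: ...

    # ...while MCP tool call results take a separate, clearly named path.
    @abstractmethod
    def append_tool_output(self, output) -> None: ...


@dataclass
class SimpleContext(ConversationContext):
    """Hypothetical concrete context recording both kinds of input."""
    messages: list = field(default_factory=list)

    def append_output(self, output: RequestOutput) -> None:
        self.messages.append(("engine", output.text))

    def append_tool_output(self, output) -> None:
        self.messages.append(("tool", output))


ctx = SimpleContext()
ctx.append_output(RequestOutput(text="hello"))
ctx.append_tool_output({"result": 42})
```

Splitting the methods lets the engine path carry a precise type annotation, while tool results are no longer funneled through an untyped overload of the same method.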
Test Plan
pytest -sv tests/entrypoints/openai/test_response_api_with_harmony.py::test_function_calling_multi_turn
Test Result
This passes.