fix: stabilize function call IDs across streaming events #4653
**Closed** · giulio-leone wants to merge 1 commit into `google:main` from `giulio-leone:fix/streaming-function-call-id-mismatch` (+232 −6)
The first changed file (the module imported in the tests below as `google.adk.flows.llm_flows.functions`):

```diff
@@ -181,12 +181,25 @@ def generate_client_function_call_id() -> str:
   return f'{AF_FUNCTION_CALL_ID_PREFIX}{uuid.uuid4()}'
 
 
-def populate_client_function_call_id(model_response_event: Event) -> None:
+def populate_client_function_call_id(
+    model_response_event: Event,
+    function_call_id_cache: Optional[dict[str, str]] = None,
+) -> None:
   if not model_response_event.get_function_calls():
     return
-  for function_call in model_response_event.get_function_calls():
+  for idx, function_call in enumerate(
+      model_response_event.get_function_calls()
+  ):
     if not function_call.id:
-      function_call.id = generate_client_function_call_id()
+      # Use (name, index) as cache key so that two calls to the same
+      # function in a single response keep separate stable IDs.
+      cache_key = f'{function_call.name}:{idx}'
+      if function_call_id_cache is not None and cache_key in function_call_id_cache:
+        function_call.id = function_call_id_cache[cache_key]
+      else:
+        function_call.id = generate_client_function_call_id()
+        if function_call_id_cache is not None:
+          function_call_id_cache[cache_key] = function_call.id
```
**Contributor** (commenting on lines +197 to +202): This logic for handling the cache can be simplified. Using …
```diff
 
 
 def remove_client_function_call_id(content: Optional[types.Content]) -> None:
```
**tests/unittests/flows/llm_flows/test_streaming_function_call_ids.py** (+196 −0)
```python
# Copyright 2026 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Tests that function call IDs stay stable across streaming events."""

from google.adk.events.event import Event
from google.adk.flows.llm_flows.base_llm_flow import _finalize_model_response_event
from google.adk.flows.llm_flows.functions import populate_client_function_call_id
from google.adk.models.llm_request import LlmRequest
from google.adk.models.llm_response import LlmResponse
from google.genai import types
import pytest


def _make_fc_response(
    name: str, args: dict | None = None, partial: bool = False
) -> LlmResponse:
  """Create an LlmResponse containing a single function call."""
  fc = types.FunctionCall(name=name, args=args or {})
  return LlmResponse(
      content=types.Content(role='model', parts=[types.Part(function_call=fc)]),
      partial=partial,
  )


def _make_multi_fc_response(
    calls: list[tuple[str, dict]], partial: bool = False
) -> LlmResponse:
  """Create an LlmResponse containing multiple function calls."""
  parts = [
      types.Part(function_call=types.FunctionCall(name=name, args=args))
      for name, args in calls
  ]
  return LlmResponse(
      content=types.Content(role='model', parts=parts),
      partial=partial,
  )


class TestPopulateClientFunctionCallIdWithCache:
  """Tests for populate_client_function_call_id with ID caching."""

  def test_generates_id_and_stores_in_cache(self):
    event = Event(author='agent')
    event.content = types.Content(
        role='model',
        parts=[
            types.Part(
                function_call=types.FunctionCall(name='get_weather', args={})
            )
        ],
    )
    cache: dict[str, str] = {}
    populate_client_function_call_id(event, cache)
    fc = event.get_function_calls()[0]
    assert fc.id.startswith('adk-')
    assert 'get_weather:0' in cache
    assert cache['get_weather:0'] == fc.id

  def test_reuses_cached_id(self):
    cache: dict[str, str] = {'get_weather:0': 'adk-cached-id-123'}

    event = Event(author='agent')
    event.content = types.Content(
        role='model',
        parts=[
            types.Part(
                function_call=types.FunctionCall(name='get_weather', args={})
            )
        ],
    )
    populate_client_function_call_id(event, cache)
    assert event.get_function_calls()[0].id == 'adk-cached-id-123'

  def test_no_cache_generates_new_id_each_time(self):
    event1 = Event(author='agent')
    event1.content = types.Content(
        role='model',
        parts=[
            types.Part(
                function_call=types.FunctionCall(name='get_weather', args={})
            )
        ],
    )
    event2 = Event(author='agent')
    event2.content = types.Content(
        role='model',
        parts=[
            types.Part(
                function_call=types.FunctionCall(name='get_weather', args={})
            )
        ],
    )
    populate_client_function_call_id(event1)
    populate_client_function_call_id(event2)
    assert event1.get_function_calls()[0].id != event2.get_function_calls()[0].id

  def test_multiple_calls_same_name_get_separate_ids(self):
    event = Event(author='agent')
    event.content = types.Content(
        role='model',
        parts=[
            types.Part(
                function_call=types.FunctionCall(name='search', args={'q': 'a'})
            ),
            types.Part(
                function_call=types.FunctionCall(name='search', args={'q': 'b'})
            ),
        ],
    )
    cache: dict[str, str] = {}
    populate_client_function_call_id(event, cache)
    fcs = event.get_function_calls()
    assert fcs[0].id != fcs[1].id
    assert cache['search:0'] == fcs[0].id
    assert cache['search:1'] == fcs[1].id

  def test_skips_function_calls_that_already_have_ids(self):
    event = Event(author='agent')
    event.content = types.Content(
        role='model',
        parts=[
            types.Part(
                function_call=types.FunctionCall(
                    name='get_weather', args={}, id='server-provided-id'
                )
            )
        ],
    )
    cache: dict[str, str] = {}
    populate_client_function_call_id(event, cache)
    assert event.get_function_calls()[0].id == 'server-provided-id'
    assert len(cache) == 0


class TestFinalizeModelResponseEventWithCache:
  """Tests that _finalize_model_response_event preserves IDs via cache."""

  def test_partial_and_final_share_same_function_call_id(self):
    model_response_event = Event(
        author='agent',
        invocation_id='inv-1',
    )
    llm_request = LlmRequest(model='mock', contents=[])
    cache: dict[str, str] = {}

    # Partial event
    partial_response = _make_fc_response('get_weather', partial=True)
    partial_event = _finalize_model_response_event(
        llm_request, partial_response, model_response_event, cache,
    )
    partial_id = partial_event.get_function_calls()[0].id
    assert partial_id.startswith('adk-')

    # Final event: the same function call must get the same ID
    final_response = _make_fc_response('get_weather', partial=False)
    final_event = _finalize_model_response_event(
        llm_request, final_response, model_response_event, cache,
    )
    final_id = final_event.get_function_calls()[0].id
    assert final_id == partial_id

  def test_without_cache_ids_differ(self):
    model_response_event = Event(
        author='agent',
        invocation_id='inv-1',
    )
    llm_request = LlmRequest(model='mock', contents=[])

    partial_response = _make_fc_response('get_weather', partial=True)
    partial_event = _finalize_model_response_event(
        llm_request, partial_response, model_response_event,
    )
    partial_id = partial_event.get_function_calls()[0].id

    final_response = _make_fc_response('get_weather', partial=False)
    final_event = _finalize_model_response_event(
        llm_request, final_response, model_response_event,
    )
    final_id = final_event.get_function_calls()[0].id

    # Without cache, IDs are different (this is the bug scenario)
    assert final_id != partial_id

  def test_multi_function_call_streaming_preserves_all_ids(self):
    model_response_event = Event(
        author='agent',
        invocation_id='inv-1',
    )
    llm_request = LlmRequest(model='mock', contents=[])
    cache: dict[str, str] = {}

    # Partial with two function calls
    partial_response = _make_multi_fc_response(
        [('search', {'q': 'weather'}), ('lookup', {'id': '42'})],
        partial=True,
    )
    partial_event = _finalize_model_response_event(
        llm_request, partial_response, model_response_event, cache,
    )
    partial_ids = [fc.id for fc in partial_event.get_function_calls()]

    # Final with same two function calls
    final_response = _make_multi_fc_response(
        [('search', {'q': 'weather'}), ('lookup', {'id': '42'})],
        partial=False,
    )
    final_event = _finalize_model_response_event(
        llm_request, final_response, model_response_event, cache,
    )
    final_ids = [fc.id for fc in final_event.get_function_calls()]

    assert partial_ids == final_ids
    assert partial_ids[0] != partial_ids[1]  # different calls have different IDs
```
**Review comment:** This change correctly introduces a cache to stabilize function call IDs for the SSE streaming mode handled by `_run_one_step_async`. However, the live/bidi-streaming mode handled by `run_live` appears to be missing this fix. The `run_live` method does not create or pass a `function_call_id_cache`, leading to unstable function call IDs in that streaming scenario. The caching mechanism should also be implemented for the `run_live` flow to ensure consistent behavior across all streaming modes.
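The shape of the fix the reviewer is asking for can be sketched abstractly. All names below are hypothetical stand-ins (the real `run_live` loop, its helpers, and their signatures in ADK may differ): the essential idea is one ID cache per model turn, shared by every partial and final event of that turn.

```python
import uuid


def finalize(call_name: str, cache: dict[str, str]) -> str:
  """Toy stand-in: resolve a stable ID for one streamed function call."""
  return cache.setdefault(call_name, f'adk-{uuid.uuid4()}')


def run_live_turn(streamed_call_names: list[str]):
  """Hypothetical outline of the missing wiring: create one cache per
  model turn so repeated sightings of a call (partial, then final)
  resolve to the same ID; the next turn starts with a fresh cache."""
  function_call_id_cache: dict[str, str] = {}
  for name in streamed_call_names:
    yield finalize(name, function_call_id_cache)


# The same call seen twice within one turn keeps one ID:
ids = list(run_live_turn(['get_weather', 'get_weather']))
assert ids[0] == ids[1]
```

This keys by name alone for brevity; the PR's actual scheme keys by `(name, index)` so two same-named calls in one response stay distinct.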
_run_one_step_async. However, the live/bidi-streaming mode handled byrun_liveappears to be missing this fix. Therun_livemethod does not create or pass afunction_call_id_cache, leading to unstable function call IDs in that streaming scenario. The caching mechanism should also be implemented for therun_liveflow to ensure consistent behavior across all streaming modes.