Commit 3cce24b

Python: Bug fix for azure ai agent truncate strategy. Add sample. (#11503)
### Motivation and Context

We are not handling the AzureAIAgent truncation strategy object correctly in AgentThreadActions. Fix this and add a sample showing how to configure the object during agent invocation.

### Description

Bug fix and sample.

### Contribution Checklist

- [X] The code builds clean without any errors or warnings
- [X] The PR follows the [SK Contribution Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md) and the [pre-submission formatting script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts) raises no violations
- [X] All unit tests pass, and I have added new tests where possible
- [X] I didn't break anyone 😄
1 parent 1285192 commit 3cce24b

File tree

3 files changed: +88 / -2 lines changed

python/samples/concepts/README.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -16,6 +16,7 @@
 - [Azure AI Agent Message Callback](./agents/azure_ai_agent/azure_ai_agent_message_callback.py)
 - [Azure AI Agent Streaming](./agents/azure_ai_agent/azure_ai_agent_streaming.py)
 - [Azure AI Agent Structured Outputs](./agents/azure_ai_agent/azure_ai_agent_structured_outputs.py)
+- [Azure AI Agent Truncation Strategy](./agents/azure_ai_agent/azure_ai_agent_truncation_strategy.py)

 #### [Bedrock Agent](../../semantic_kernel/agents/bedrock/bedrock_agent.py)
```

python/samples/concepts/agents/azure_ai_agent/azure_ai_agent_truncation_strategy.py

Lines changed: 85 additions & 0 deletions (new file)

```python
# Copyright (c) Microsoft. All rights reserved.

import asyncio

from azure.ai.projects.models import TruncationObject
from azure.identity.aio import DefaultAzureCredential

from semantic_kernel.agents import (
    AzureAIAgent,
    AzureAIAgentSettings,
    AzureAIAgentThread,
)

"""
The following sample demonstrates how to create an Azure AI Agent
and configure a truncation strategy for the agent.
"""

USER_INPUTS = [
    "Why is the sky blue?",
    "What is the speed of light?",
    "What have we been talking about?",
]


async def main() -> None:
    ai_agent_settings = AzureAIAgentSettings.create()

    async with (
        DefaultAzureCredential() as creds,
        AzureAIAgent.create_client(
            credential=creds,
            conn_str=ai_agent_settings.project_connection_string.get_secret_value(),
        ) as client,
    ):
        # Create the agent definition
        agent_definition = await client.agents.create_agent(
            model=ai_agent_settings.model_deployment_name,
            name="TruncateAgent",
            instructions="You are a helpful assistant that answers user questions in one sentence.",
        )

        # Create the AzureAI Agent
        agent = AzureAIAgent(
            client=client,
            definition=agent_definition,
        )

        thread: AzureAIAgentThread | None = None

        # Options are "auto" or "last_messages"
        # If using "last_messages", specify the number of messages to keep with the `last_messages` kwarg
        truncation_strategy = TruncationObject(type="last_messages", last_messages=2)

        try:
            for user_input in USER_INPUTS:
                print(f"# User: {user_input}")
                # Invoke the agent with the specified message and truncation strategy
                response = await agent.get_response(
                    messages=user_input, thread=thread, truncation_strategy=truncation_strategy
                )
                print(f"# {response.name}: {response}")
                thread = response.thread
        finally:
            # Cleanup: delete the thread and agent
            await thread.delete() if thread else None
            await client.agents.delete_agent(agent.id)


"""
Sample Output:

# User: Why is the sky blue?
# TruncateAgent: The sky appears blue because molecules in the Earth's atmosphere scatter sunlight in all
directions, and blue light is scattered more than other colors due to its shorter wavelength.
# User: What is the speed of light?
# TruncateAgent: The speed of light in a vacuum is approximately 299,792,458 meters per second
(or about 186,282 miles per second).
# User: What have we been talking about?
# TruncateAgent: I'm sorry, but I don't have access to previous interactions. Could you remind me what
we've been discussing?
"""


if __name__ == "__main__":
    asyncio.run(main())
```
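The sample pins the strategy to the last two messages; per the comment in the sample, the other supported option is `"auto"`. A minimal sketch of that variant follows (illustrative snippet, not part of this commit; it assumes the same `TruncationObject` model from `azure.ai.projects.models`):

```python
from azure.ai.projects.models import TruncationObject

# "auto" defers truncation to the service when the thread exceeds the
# model's context window; no explicit message count is supplied.
auto_truncation = TruncationObject(type="auto")

# Passed the same way as in the sample above (call shape shown for reference):
# response = await agent.get_response(
#     messages=user_input, thread=thread, truncation_strategy=auto_truncation
# )
```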

python/semantic_kernel/agents/azure_ai/agent_thread_actions.py

Lines changed: 2 additions & 2 deletions
```diff
@@ -725,7 +725,7 @@ def _merge_options(
     def _generate_options(cls: type[_T], **kwargs: Any) -> dict[str, Any]:
         """Generate a dictionary of options that can be passed directly to create_run."""
         merged = cls._merge_options(**kwargs)
-        trunc_count = merged.get("truncation_message_count", None)
+        truncation_strategy = merged.get("truncation_strategy", None)
         max_completion_tokens = merged.get("max_completion_tokens", None)
         max_prompt_tokens = merged.get("max_prompt_tokens", None)
         parallel_tool_calls = merged.get("parallel_tool_calls_enabled", None)
@@ -735,7 +735,7 @@ def _generate_options(cls: type[_T], **kwargs: Any) -> dict[str, Any]:
             "top_p": merged.get("top_p"),
             "response_format": merged.get("response_format"),
             "temperature": merged.get("temperature"),
-            "truncation_strategy": trunc_count,
+            "truncation_strategy": truncation_strategy,
             "metadata": merged.get("metadata"),
             "max_completion_tokens": max_completion_tokens,
             "max_prompt_tokens": max_prompt_tokens,
```
