Releases: strands-agents/sdk-python

v1.17.0

18 Nov 19:09
95ac650

Strands Agents SDK v1.17.0 Release Notes

Features

Configurable Timeout for MCP Agent Tools - PR#1184

You can now set custom timeout values when creating MCP (Model Context Protocol) agent tools, providing better control over tool execution time limits and improving reliability when working with external MCP servers.

from datetime import timedelta

from strands import Agent
from strands.tools.mcp import MCPAgentTool

# Create MCP tool with custom 30-second timeout
mcp_tool = MCPAgentTool(
    ...,
    timeout=timedelta(seconds=30)
)

agent = Agent(tools=[mcp_tool])

This feature is especially useful when working with MCP servers that may have varying response times, allowing you to fine-tune timeout behavior for different use cases.


Bug Fixes

  • Swarm Handoff Timing - PR#1147
    Fixed swarm handoff behavior to only switch to the handoff node after the current node completes execution. Previously, the switch occurred mid-execution, causing incorrect event emissions and invalid swarm state when tools were interrupted concurrently with handoff tools.

  • LiteLLM Stream Parameter Validation - PR#1183
    Added validation for the stream parameter in LiteLLM to prevent TypeError when stream=False is provided. The SDK now properly handles both streaming and non-streaming responses with clear error messaging.

  • Optional MetadataEvent Fields - PR#1187
    Fixed handling of MetadataEvents when custom model implementations omit optional usage or metrics fields. The SDK now provides sensible defaults, preventing KeyError exceptions and enabling greater flexibility for custom model providers.

  • A2A Protocol File Data Decoding - PR#1195
    Fixed A2A (Agent-to-Agent) executor to properly base64 decode file bytes from A2A messages before passing to Strands agents. Previously, agents were receiving base64-encoded strings instead of actual binary file content.
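
As a rough stdlib illustration of what this fix changes (not the SDK's actual executor code), file bytes arrive over A2A as base64 text and must be decoded back to raw bytes before an agent can use them:

```python
import base64

# A2A messages carry file content as base64 text (values here are illustrative).
encoded = base64.b64encode(b"%PNG raw image bytes").decode("ascii")

# Before the fix, agents received the base64 string itself;
# after the fix, they receive the decoded binary content.
raw = base64.b64decode(encoded)
assert raw == b"%PNG raw image bytes"
```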


All changes

  • feat: allow setting a timeout when creating MCPAgentTool by @AnirudhKonduru in #1184
  • fix(litellm): add validation for stream parameter in LiteLLM by @dbschmigelski in #1183
  • fix(event_loop): handle MetadataEvents without optional usage and metrics by @dbschmigelski in #1187
  • swarm - switch to handoff node only after current node stops by @pgrayy in #1147
  • fix(a2a): base64 decode byte data before placing in ContentBlocks by @dbschmigelski in #1195

Full Changelog: v1.16.0...v1.17.0

v1.16.0

12 Nov 21:06
8cae18c

Major Features

Async Hooks Support - PR#1119

Hooks now support asynchronous callbacks, allowing your hook code to run concurrently with other async tasks without blocking the event loop. This is particularly beneficial for async agent invocations and scenarios where hooks perform I/O operations.

import asyncio
from strands import Agent
from strands.hooks import BeforeInvocationEvent, HookProvider, HookRegistry

class AsyncHook(HookProvider):
    def register_hooks(self, registry: HookRegistry, **kwargs) -> None:
        registry.add_callback(BeforeInvocationEvent, self.async_callback)

    async def async_callback(self, event: BeforeInvocationEvent) -> None:
        # Perform async operations without blocking the event loop
        await asyncio.sleep(1)
        print("Hook executed asynchronously!")

async def main():
    agent = Agent(hooks=[AsyncHook()])
    await agent.invoke_async("Hello!")

asyncio.run(main())

Thread Context Sharing - PR#1146

Context variables (contextvars) are now automatically copied from the main thread to agent threads when using synchronous invocations. This ensures that context-dependent tools and hooks work correctly.

from contextvars import ContextVar
from strands import Agent, tool

request_id = ContextVar('request_id')

@tool
def my_tool():
    # Context variables are now accessible within tools
    current_request_id = request_id.get()
    return f"Processing request: {current_request_id}"

request_id.set("abc-123")
agent = Agent(tools=[my_tool])
response = agent("Use my tool")  # Context is propagated to the agent thread
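
The underlying mechanism is Python's contextvars: a context copied on the main thread can be run inside a worker thread, which otherwise starts empty. A minimal stdlib sketch (illustrative only, not the SDK's exact implementation):

```python
import contextvars
import threading

request_id = contextvars.ContextVar("request_id")
request_id.set("abc-123")

results = []

def worker(ctx: contextvars.Context) -> None:
    # Threads do not inherit context automatically; running inside the
    # copied context makes the main thread's value visible.
    results.append(ctx.run(request_id.get))

ctx = contextvars.copy_context()
thread = threading.Thread(target=worker, args=(ctx,))
thread.start()
thread.join()
# results == ["abc-123"]
```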

Enhanced Telemetry with Tool Definitions - PR#1113

Tool definitions can now be included in OpenTelemetry traces via a semantic-convention opt-in, providing better observability into agent tool usage in line with the OpenTelemetry GenAI semantic conventions.

Opt-in via environment variable:

OTEL_SEMCONV_STABILITY_OPT_IN=gen_ai_tool_definitions

from strands import Agent
from strands_tools import calculator

# Tool definition will appear in telemetry traces
agent = Agent(tools=[calculator])
agent("What is 5 + 3?")

String Descriptions in Annotated Tool Parameters - PR#1089

You can now use simple string descriptions directly in Annotated type hints for tool parameters, improving code readability and reducing boilerplate.

from typing import Annotated
from strands import tool

@tool
def get_weather(
    location: Annotated[str, "The city and state, e.g., San Francisco, CA"],
    units: Annotated[str, "Temperature units: 'celsius' or 'fahrenheit'"] = "celsius"
):
    """Get weather for a location."""
    return f"Weather in {location}: 72°{units[0].upper()}"

# Previously this required the more verbose Field() syntax or a docstring
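
For the curious, the description can be recovered from the Annotated metadata with standard typing tools; a sketch of roughly how such extraction works (not necessarily the SDK's exact implementation):

```python
from typing import Annotated, get_args, get_type_hints

def get_weather(location: Annotated[str, "The city and state, e.g., San Francisco, CA"]) -> str:
    return location

# include_extras=True preserves the Annotated metadata in the hints
hints = get_type_hints(get_weather, include_extras=True)
base_type, description = get_args(hints["location"])
assert base_type is str
assert description == "The city and state, e.g., San Francisco, CA"
```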

Major Bug Fixes

  • Anthropic "Prompt Too Long" Error Handling - PR#1137
    The SDK now properly handles and surfaces Anthropic's "prompt is too long" errors, making it easier to diagnose and fix context window issues.
  • MCP Server 5xx Error Resilience - PR#1169
    The SDK no longer hangs when Model Context Protocol (MCP) servers return 5xx errors, improving reliability when working with external services.
  • Gemini Non-JSON Error Message Handling - PR#1062
    The Gemini model provider now gracefully handles non-JSON error responses, preventing unexpected crashes.

All changes

  • fix(models/gemini): handle non-JSON error messages from Gemini API by @Ratish1 in #1062
  • fix: Handle "prompt is too long" from Anthropic by @zastrowm in #1137
  • feat(telemetry): Add tool definitions to traces via semconv opt-in by @Ratish1 in #1113
  • fix: Strip argument sections out of inputSpec top-level description by @zastrowm in #1142
  • share thread context by @pgrayy in #1146
  • async hooks by @pgrayy in #1119
  • feat(tools): Support string descriptions in Annotated parameters by @Ratish1 in #1089
  • chore(telemetry): updated opt-in attributes to internal by @poshinchen in #1152
  • feat(models): allow SystemContentBlocks in LiteLLMModel by @dbschmigelski in #1141
  • share interrupt state by @pgrayy in #1148
  • fix: Don't hang when MCP server returns 5xx by @zastrowm in #1169
  • fix(models): allow setter on system_prompt and system_prompt_content by @dbschmigelski in #1171

Full Changelog: v1.15.0...v1.16.0

v1.15.0

04 Nov 18:21
9f10595

Major Features

SystemContentBlock Support for Provider-Agnostic Caching - PR#1112

System prompts now support SystemContentBlock arrays, enabling provider-agnostic caching and advanced multi-prompt system configurations. Cache points can be defined explicitly within system content.

from strands import Agent
from strands.types.content import SystemContentBlock

# Define system content with cache points
system_content: list[SystemContentBlock] = [
    {"text": "You are a helpful assistant with extensive knowledge."},
    {"text": "Your responses should be concise and accurate."},
    {"text": "Always cite sources."},
    {"cachePoint": {"type": "default"}},
]

agent = Agent(system_prompt=system_content)
agent('What is the capital of France?')

Multi-Agent Session Management and Persistence - PR#1071, PR#1110

Multi-agent systems now support session management and persistence, enabling agents to maintain state across invocations and support long-running workflows.

from strands import Agent
from strands.multiagent import GraphBuilder
from strands.multiagent.base import Status
from strands.session import FileSessionManager

def build_graph(max_nodes: int):
    session_manager = FileSessionManager(session_id="my_session_1", storage_dir="./sessions")

    builder = GraphBuilder()
    builder.add_node(Agent(name="analyzer", system_prompt="Explain using 2 paragraphs. "))
    builder.add_node(Agent(name="summarizer", system_prompt="Summarize and be concise. 10 words or less."))

    builder.add_edge("analyzer", "summarizer")
    builder.set_entry_point("analyzer")
    builder.set_max_node_executions(max_nodes)
    builder.set_session_manager(session_manager)

    return builder.build()

# Simulate failure because after the first node we exceed max_nodes
result = await build_graph(max_nodes=1).invoke_async("Analyze why 2+2=4")
assert result.status == Status.FAILED

# Resume from the failure with a fresh graph over the same session - execution picks up at the summarizer
result = await build_graph(max_nodes=10).invoke_async("Analyze why 2+2=4")
assert result.status == Status.COMPLETED

Async Streaming for Multi-Agent Systems - PR#961

Multi-agent systems now support stream_async, enabling real-time streaming of events from agent teams as they collaborate.

from strands import Agent
from strands.multiagent import GraphBuilder

# Create multi-agent graph
analyzer = Agent(name="analyzer")
processor = Agent(name="processor")

builder = GraphBuilder()
builder.add_node(analyzer)
builder.add_node(processor)
builder.add_edge("analyzer", "processor")
builder.set_entry_point("analyzer")

graph = builder.build()

# Stream events as agents process
async for event in graph.stream_async("Analyze this data"):
    print(f"Event: {event.get('type', 'unknown')}")

Major Bug Fixes

  • Guardrails Redaction Fix - PR#1072
    Fixed input/output message redaction when guardrails_trace="enabled_full", ensuring sensitive data is properly protected in traces.

  • Tool Result Block Redaction - PR#1080
    Properly redact tool result blocks to prevent conversation corruption when using content filtering or PII redaction.

  • Orphaned Tool Use Fix - PR#1123
    Fixed broken conversations caused by orphaned toolUse blocks, improving reliability when tools fail or are interrupted.

  • Reasoning Content Handling - PR#1099
    Drop reasoningContent from requests to prevent errors with providers that don't support extended thinking modes.

  • Swarm Initialization Fix - PR#1107
    Don't initialize agents during Swarm construction, preventing unnecessary resource allocation and improving startup performance.

  • Structured Output Context Fix - PR#1128
    Allow None structured output context in tool executors, fixing edge cases where tools don't require structured responses.

What's Changed

  • fix: (bug): Drop reasoningContent from request by @mehtarac in #1099
  • fix: Dont initialize an agent on swarm init by @Unshure in #1107
  • feat: add multiagent session/repository management. by @JackYPCOnline in #1071
  • feat(multiagent): Add stream_async by @mkmeral in #961
  • Fix #1077: properly redact toolResult blocks to avoid corrupting the conversation by @leotac in #1080
  • linting by @pgrayy in #1120
  • Fix input/output message not redacted when guardrails_trace="enabled_full" by @leotac in #1072
  • fix: Allow none structured output context in tool executors by @mkmeral in #1128
  • fix: Fix broken converstaion with orphaned toolUse by @Unshure in #1123
  • feat: Enable multiagent session persistent in Graph/Swarm by @JackYPCOnline in #1110
  • feat(models): add SystemContentBlock support for provider-agnostic caching by @dbschmigelski in #1112

Full Changelog: v1.14.0...v1.15.0

v1.14.0

29 Oct 14:15
c2ba0f7

Major Features

Structured Output via Agentic Loop

Agents can now validate responses against predefined schemas using JSON Schema or Pydantic models. Validation occurs at response generation time with configurable retry behavior for non-conforming outputs.

from pydantic import BaseModel

from strands import Agent

class PersonInfo(BaseModel):  # example output schema
    name: str
    age: int

agent = Agent()
result = agent(
    "John Smith is a 30 year-old software engineer",
    structured_output_model=PersonInfo
)

# Access the structured output from the result
person_info: PersonInfo = result.structured_output

See more in the docs for Structured Output.

Interrupting Agents

Interrupts now provide first-class support for human-in-the-loop patterns in Strands. They can be raised in hooks or directly in tool definitions. Relatedly, MCP elicitation is now exposed on MCPClient.

import json
from typing import Any

from strands import Agent, tool
from strands.hooks import BeforeToolCallEvent, HookProvider, HookRegistry

from my_project import delete_files, inspect_files

class ApprovalHook(HookProvider):
    def __init__(self, app_name: str) -> None:
        self.app_name = app_name

    def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
        registry.add_callback(BeforeToolCallEvent, self.approve)

    def approve(self, event: BeforeToolCallEvent) -> None:
        if event.tool_use["name"] != "delete_files":
            return

        approval = event.interrupt(f"{self.app_name}-approval", reason={"paths": event.tool_use["input"]["paths"]})
        if approval.lower() != "y":
            event.cancel_tool = "User denied permission to delete files"


agent = Agent(
    hooks=[ApprovalHook("myapp")],
    system_prompt="You delete files older than 5 days",
    tools=[delete_files, inspect_files],
)

paths = ["a/b/c.txt", "d/e/f.txt"]
result = agent(f"paths=<{paths}>")

while result.stop_reason == "interrupt":

    responses = []
    for interrupt in result.interrupts:
        if interrupt.name == "myapp-approval":
            user_input = input(f"Do you want to delete {interrupt.reason['paths']} (y/N): ")
            responses.append({
                "interruptResponse": {
                    "interruptId": interrupt.id, 
                    "response": user_input
                }
            })

    result = agent(responses)

Managed MCP Connections

We've introduced MCP connections via ToolProviders, an experimental interface that removes the need to wrap MCP tools in context managers. The Agent now manages connection lifecycles automatically, enabling simpler syntax:

# stdio_mcp_client is an MCPClient instance; no `with` block is needed
agent = Agent(tools=[stdio_mcp_client])
agent("do something")

While this feature is experimental, we aim to mark it as stable soon and welcome user testing of this and other new features.

Agent Config

Users can now define and create agents using configuration files or dictionaries:

{
  "model": "us.anthropic.claude-3-5-sonnet-20241022-v2:0",
  "prompt": "You are a coding assistant. Help users write, debug, and improve their code. You have access to file operations and can execute shell commands when needed.",
  "tools": ["strands_tools.file_read", "strands_tools.editor", "strands_tools.shell"]
}

All Changes

  • models - litellm - start and stop reasoning by @pgrayy in #947
  • feat: add experimental AgentConfig with comprehensive tool management by @mr-lee in #935
  • fix(telemetry): make strands agent invoke_agent span as INTERNAL spanKind by @poshinchen in #1055
  • feat: add multiagent hooks, add serialize & deserialize function to multiagent base & agent result by @JackYPCOnline in #1070
  • feat: Add Structured Output as part of the agent loop by @afarntrog in #943
  • integ tests - interrupts - remove asyncio marker by @pgrayy in #1045
  • interrupt - docstring - fix formatting by @pgrayy in #1074
  • ci: add pr size labeler by @dbschmigelski in #1082
  • fix: Don't bail out if there are no tool_uses by @zastrowm in #1087
  • feat(mcp): add experimental agent managed connection via ToolProvider by @dbschmigelski in #895
  • fix (bug): retry on varying Bedrock throttlingexception cases by @mehtarac in #1096
  • feat: skip model invocation when latest message contains ToolUse by @Unshure in #1068
  • direct tool call - interrupt not allowed by @pgrayy in #1097
  • mcp elicitation by @pgrayy in #1094
  • fix(litellm): enhance structured output handling by @Arindam200 in #1021
  • Transform invalid tool usages on sending, not on initial detection by @zastrowm in #1091

Full Changelog: v1.13.0...v1.14.0

v1.13.0

17 Oct 18:56
26862e4

What's Changed

  • feat: replace kwargs with invocation_state in agent APIs by @JackYPCOnline in #966
  • feat(telemetry): updated semantic conventions, added timeToFirstByteMs into spans and metrics by @poshinchen in #997
  • chore(telemetry): added gen_ai.tool.description and gen_ai.tool.json_schema by @poshinchen in #1027
  • fix(tool/decorator): validate ToolContext parameter name and raise clear error by @Ratish1 in #1028
  • integ tests - fix flaky structured output test by @pgrayy in #1030
  • hooks - before tool call event - interrupt by @pgrayy in #987
  • multiagents - temporarily raise exception when interrupted by @pgrayy in #1038
  • feat: Support adding exception notes for Python 3.10 by @zastrowm in #1034
  • interrupts - decorated tools by @pgrayy in #1041

Full Changelog: v1.12.0...v1.13.0

v1.12.0

10 Oct 15:11
419de19

What's Changed

  • feat: Refactor and update tool loading to support modules by @Unshure in #989
  • Adding Development Tenets to CONTRIBUTING.md by @Unshure in #1009
  • Revert "feat: implement concurrent message reading for session managers (#897)" by @pgrayy in #1013
  • feat(models): use tool for litellm structured_output when supports_response_schema=false by @dbschmigelski in #957
  • Add EmbeddedResource support to mcp (read GitHub file contents blocker) by @KyMidd in #726
  • conversation manager - summarization - noop tool by @pgrayy in #1003
  • Fix additional_args passing in SageMakerAIModel by @athewsey in #983

Full Changelog: v1.11.0...v1.12.0

v1.11.0

08 Oct 16:01
2a26ffa

What's Changed

  • fix: GeminiModel argument in README by @tosi29 in #955
  • tool - executors - concurrent - remove no-op gather by @pgrayy in #954
  • feat(telemetry): updated traces to match OTEL v1.37 semantic conventions by @poshinchen in #952
  • event loop - handle model execution by @pgrayy in #958
  • feat: implement concurrent message reading for session managers by @vamgan in #897
  • hooks - before tool call event - cancel tool by @pgrayy in #964
  • fix(telemetry): removed double serialization for events by @poshinchen in #977
  • fix(litellm): map LiteLLM context window errors to ContextWindowOverflowException by @Ratish1 in #994

Full Changelog: v1.10.0...v1.11.0

v1.10.0

29 Sep 14:25
eef11cc

What's Changed

  • feat: add optional outputSchema support for tool specifications by @vamgan in #818
  • feat: add Gemini model provider by @notgitika in #725
  • Improve OpenAI error handling by @mkmeral in #918
  • ci: update sphinx-autodoc-typehints requirement from <2.0.0,>=1.12.0 to >=1.12.0,<4.0.0 by @dependabot[bot] in #903
  • ci: update sphinx requirement from <6.0.0,>=5.0.0 to >=5.0.0,<9.0.0 by @dependabot[bot] in #904
  • ci: update openai requirement from <1.108.0,>=1.68.0 to >=1.68.0,<1.110.0 by @dependabot[bot] in #916
  • ci: update pytest-asyncio requirement from <1.2.0,>=1.0.0 to >=1.0.0,<1.3.0 by @dependabot[bot] in #861
  • fix(gemini): Fix event loop closed error from Gemini asyncio by @mkmeral in #932
  • fix: Fix mcp timeout issue by @Unshure in #922
  • feat: add supports_hot_reload property to PythonAgentTool by @cagataycali in #928
  • feat(hooks): Mark ModelCall and ToolCall events as non-experimental by @zastrowm in #926
  • feat: Create a new HookEvent for Multiagent by @JackYPCOnline in #925

Full Changelog: v1.9.1...v1.10.0

v1.9.1

19 Sep 19:39
00a1f28

What's Changed

  • feat: decouple Strands ContentBlock and BedrockModel by @dbschmigelski in #836
  • fix: Invoke callback handler for structured_output by @zastrowm in #857
  • fix: Update prepare to use format instead of test-format by @zastrowm in #858
  • fix: add explicit permissions to auto-close workflow by @Unshure in #893
  • fix: make mcp_instrumentation idempotent to prevent recursion errors by @Unshure in #892
  • fix: Fix github workflow to use fmt instead of hatch run by @Unshure in #898
  • fix(models): make tool_choice an optional keyword arg instead positional by @dbschmigelski in #899

Full Changelog: v1.9.0...v1.9.1

v1.9.0

17 Sep 19:08
1f25512

What's Changed

  • feat(telemetry): add cache usage metrics to OpenTelemetry spans by @vamgan in #825
  • docs: improve docstring formatting by @waitasecant in #846
  • ci: bump actions/setup-python from 5 to 6 by @dependabot[bot] in #796
  • ci: bump actions/github-script from 7 to 8 by @dependabot[bot] in #801
  • ci: bump aws-actions/configure-aws-credentials from 4 to 5 by @dependabot[bot] in #795
  • fix: Add type to tool_input by @Unshure in #854
  • feat(swarm): Make entry point configurable by @mkmeral in #851
  • ci: update ruff requirement from <0.13.0,>=0.12.0 to >=0.12.0,<0.14.0 by @dependabot[bot] in #840
  • ci: update openai requirement from <1.102.0,>=1.68.0 to >=1.68.0,<1.108.0 by @dependabot[bot] in #827
  • feat: add automated issue auto-close workflows with dry-run testing by @yonib05 in #832
  • fix: Clean up pyproject.toml by @Unshure in #844
  • fix: Updating documentation in decorator.py by @prabhuteja12 in #852
  • models - openai - use client context by @pgrayy in #856
  • Feature: Handle Bedrock redactedContent by @afarntrog in #848
  • fix: correctly label tool result messages in OpenTelemetry events by @vamgan in #839
  • models - openai - client context comment by @pgrayy in #864
  • fix: litellm structured_output test with more descriptive model by @dbschmigelski in #871
  • fix(mcp): auto cleanup on exceptions occurring in enter by @dbschmigelski in #833
  • fix(mcp): do not verify _background_session is present in stop() by @dbschmigelski in #876
  • docs(README): fix links and imports by @awsarron in #837

Full Changelog: v1.8.0...v1.9.0