2 changes: 1 addition & 1 deletion src/strands/experimental/bidi/agent/loop.py
@@ -140,7 +140,7 @@ async def send(self, event: BidiInputEvent | ToolResultEvent) -> None:
await self._send_gate.wait()

if isinstance(event, BidiTextInputEvent):
-            message: Message = {"role": "user", "content": [{"text": event.text}]}
+            message: Message = {"role": event.role, "content": [{"text": event.text}]}
Member:
Can you elaborate a bit more on the use case here? Note that we do allow you to prefill the initial message history that is sent to the model with assistant messages (Agent(messages=...)). They are sent separately in start(). Curious though what it would mean for the user to send an assistant message to the model in the middle of a conversation.

@pgrayy (Member), Apr 8, 2026:
I also ran some tests against two models (one from Google and one from OpenAI). Google rejected the assistant message with an error response, and OpenAI seemed to ignore it.

Note that there is also some hardcoding of roles in the model providers themselves that I had to update for the tests.

Author:

@pgrayy good catch on the Google test. Admittedly, I'm using a custom adapter for an OpenAI-API-compatible server (sglang / llama.cpp), and at the risk of being a bit presumptuous, I take your point that this PR may be premature given that the officially supported clients for BidiAgent are limited.

The use case it addresses is pushing context from an async process as an input source, which, through some other magic, elicits unprompted engagement from the user's perspective. A bit atypical, I suppose. I can monkey-patch around it; not a big deal, it just took a bit of debugging to find out why the assistant message wasn't making it to the model. It was a bit of a bear trap :)

await self._agent._append_messages(message)

await self._agent.model.send(event)
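The author's use case can be sketched as follows. This is a self-contained illustration, not the strands API: `BidiTextInputEvent` here is a minimal stub whose fields (`text`, `role`) are assumed from the diff, and `send` is a local stand-in for the agent loop's send path. It shows what the one-line change enables: an async side process injecting an assistant-role message mid-conversation instead of every text input being forced to `"user"`.

```python
import asyncio
from dataclasses import dataclass


# Minimal stub standing in for strands' BidiTextInputEvent; the real class
# lives in strands.experimental.bidi and its exact fields are assumed here.
@dataclass
class BidiTextInputEvent:
    text: str
    role: str = "user"


def to_message(event: BidiTextInputEvent) -> dict:
    # Mirrors the patched line in loop.py: the role comes from the event
    # rather than being hardcoded to "user".
    return {"role": event.role, "content": [{"text": event.text}]}


async def context_pusher(send) -> None:
    # Async side process pushing assistant-role context into the
    # conversation, per the use case described in the author's comment.
    await send(BidiTextInputEvent(text="(background context)", role="assistant"))


async def main() -> list[dict]:
    history: list[dict] = []

    async def send(event: BidiTextInputEvent) -> None:
        # Local stand-in for the agent loop's send(): append to history.
        history.append(to_message(event))

    await send(BidiTextInputEvent(text="hello"))  # normal user turn
    await context_pusher(send)                    # out-of-band assistant turn
    return history


history = asyncio.run(main())
print([m["role"] for m in history])  # → ['user', 'assistant']
```

With the pre-patch hardcoded role, the second message would silently reach the model as `"user"`, which is the bear trap the author describes; as pgrayy's tests note, whether a mid-conversation assistant message is accepted at all varies by provider.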