
feat: add stop functionality for active agent sessions and improve handling of stop requests #5380

Merged
Soulter merged 3 commits into master from feat/stop-agent on Feb 23, 2026

Conversation


@Soulter Soulter commented Feb 23, 2026

closes: #5328

Modifications

  • This is NOT a breaking change.

Screenshots or Test Results


Checklist

  • 😊 If the PR adds new features, they have been discussed with the authors through issues/emails, etc.
  • 👀 My changes have been well tested, and "Verification Steps" and "Screenshots" have been provided above.
  • 🤓 I have ensured that no new dependencies are introduced, OR any new dependencies have been added to the appropriate locations in requirements.txt and pyproject.toml.
  • 😮 My changes do not introduce malicious code.

dosubot bot added the size:L label (This PR changes 100-499 lines, ignoring generated files.) on Feb 23, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello @Soulter, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances user control over active agent sessions by implementing a robust stop functionality. Users can now interrupt ongoing agent processes via a new chat command or a dedicated UI button, which gracefully terminates the agent's execution while preserving any partial output. This improves the responsiveness and usability of the system, especially for long-running or unintended agent interactions. The backend changes ensure proper state management and history recording for these interrupted sessions, while the frontend provides a clear and accessible way to trigger the stop action.

Highlights

  • New Stop Functionality for Agent Sessions: Introduced a new /stop command and a corresponding UI button to allow users to terminate active agent sessions gracefully. This prevents agents from continuing to process requests when no longer needed.
  • Improved Agent Stop Handling: The agent runner now includes mechanisms to detect and respond to stop requests, ensuring that partial outputs are preserved and the agent's state is correctly updated upon interruption.
  • Enhanced Conversation History Saving: The logic for saving conversation history has been updated to correctly handle scenarios where an agent run is aborted by the user, ensuring that any partial responses are still recorded.
  • Frontend Integration for Stopping Agents: The web dashboard's chat interface now dynamically displays a 'stop' button instead of the 'send' button when an agent is actively running, providing an intuitive way for users to interrupt ongoing operations.
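The cooperative-stop pattern described in the highlights (an external `request_stop()` signal, a flag checked inside the agent loop, and a `was_aborted()` query) can be sketched as follows. This is a minimal illustrative model, not AstrBot's actual runner:

```python
import asyncio


class ToolLoopSketch:
    """Minimal sketch of a cooperative stop; illustrative names only."""

    def __init__(self):
        self._stop_requested = False
        self._aborted = False
        self.partial_output = []

    def request_stop(self):
        # External components signal the runner to stop at the next safe point.
        self._stop_requested = True

    def was_aborted(self):
        return self._aborted

    async def step(self, chunks):
        for chunk in chunks:
            # Check the flag before each unit of work, so output produced
            # so far is preserved when a stop lands mid-run.
            if self._stop_requested:
                self._aborted = True
                break
            self.partial_output.append(chunk)
            await asyncio.sleep(0)  # yield control so a stop request can land


runner = ToolLoopSketch()


async def demo():
    task = asyncio.create_task(runner.step(["a", "b", "c", "d"]))
    await asyncio.sleep(0)   # let the runner produce some output
    runner.request_stop()    # simulate the user pressing stop
    await task


asyncio.run(demo())
```

Because the flag is only read between chunks, the stop is graceful: whatever was already appended to `partial_output` survives, matching the "partial output is preserved" behavior described above.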
Changelog
  • astrbot/builtin_stars/builtin_commands/commands/conversation.py
    • Added stop asynchronous method to handle agent session termination requests.
  • astrbot/builtin_stars/builtin_commands/main.py
    • Registered the new stop command to be accessible via the chat interface.
  • astrbot/core/agent/runners/tool_loop_agent_runner.py
    • Initialized _stop_requested and _aborted flags to manage agent state during termination.
    • Implemented logic to check for _stop_requested during agent steps and break the loop if a stop is requested.
    • Added comprehensive handling for _stop_requested to finalize LLM responses, set the agent to an aborted state, and trigger on_agent_done hooks.
    • Introduced request_stop and was_aborted methods to allow external components to signal and check for agent termination.
  • astrbot/core/astr_agent_run_util.py
    • Added _should_stop_agent utility function to check for agent stop signals.
    • Integrated a stop_watcher task to continuously monitor for agent stop signals during run_agent execution.
    • Modified run_agent to call agent_runner.request_stop() when a stop signal is detected and to handle aborted responses.
    • Ensured stop_watcher is cancelled and awaited in various exit paths of run_agent to prevent resource leaks.
    • Introduced _watch_agent_stop_signal asynchronous function to actively poll for agent stop requests.
  • astrbot/core/pipeline/process_stage/method/agent_sub_stages/internal.py
    • Updated history saving conditions to include agent_runner.was_aborted() to ensure partial results are saved.
    • Modified _save_to_history to accept a user_aborted flag and adjust LLM response handling for aborted sessions.
  • astrbot/core/utils/active_event_registry.py
    • Added request_agent_stop_all method to signal a stop request to all active agent events for a given UMO without fully stopping the event propagation.
  • astrbot/dashboard/routes/chat.py
    • Imported MessageType and active_event_registry for new functionality.
    • Added a new POST route /chat/stop to handle requests for stopping active chat sessions.
    • Implemented stop_session method to process stop requests, validate session ownership, and call active_event_registry.request_agent_stop_all.
  • dashboard/src/components/chat/Chat.vue
    • Passed is-running prop to ChatInput to indicate if an agent is active.
    • Added @stop event listener to ChatInput to trigger handleStopMessage.
  • dashboard/src/components/chat/ChatInput.vue
    • Modified the send button to conditionally display a 'stop' button with an error color when isRunning is true.
    • Added isRunning to the component's props interface.
    • Added stop to the component's emitted events.
  • dashboard/src/components/chat/StandaloneChat.vue
    • Passed is-running prop to ChatInput for standalone chat.
    • Added @stop event listener to ChatInput for standalone chat to trigger handleStopMessage.
  • dashboard/src/composables/useMessages.ts
    • Added new reactive references: currentRequestController, currentReader, currentRunningSessionId, and userStopRequested to manage stop state.
    • Updated sendMessage to set userStopRequested to false and currentRunningSessionId before sending.
    • Integrated AbortController to allow cancellation of fetch requests.
    • Stored the response.body reader in currentReader for potential cancellation.
    • Modified error handling in sendMessage to suppress console errors if userStopRequested is true.
    • Ensured cleanup of currentReader, currentRequestController, currentRunningSessionId, and userStopRequested in the finally block of sendMessage.
    • Implemented stopMessage function to send a stop request to the backend, cancel the current reader, and abort the fetch request.
    • Exported stopMessage from the composable.
  • dashboard/src/i18n/locales/en-US/features/chat.json
    • Added 'stopGenerating' translation key.
  • dashboard/src/i18n/locales/zh-CN/features/chat.json
    • Added '停止生成' (Stop generating) translation key.
  • tests/test_tool_loop_agent_runner.py
    • Added MockAbortableStreamProvider to simulate an LLM provider that can be aborted during streaming.
    • Added test_stop_signal_returns_aborted_and_persists_partial_message to verify agent stop functionality, ensuring partial messages are saved and the agent is marked as aborted.
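The changelog entries for astr_agent_run_util.py describe a background watcher task that polls for the stop signal and is always cancelled and awaited on exit. A sketch of that shape, where the function names, polling interval, and state layout are assumptions rather than AstrBot's actual code:

```python
import asyncio


async def watch_stop_signal(should_stop, request_stop, interval=0.01):
    # Poll a stop predicate and forward it to the runner exactly once.
    while True:
        if should_stop():
            request_stop()
            return
        await asyncio.sleep(interval)


async def run_with_watcher():
    state = {"stop_flag": False, "stop_requested": False}

    watcher = asyncio.create_task(
        watch_stop_signal(
            should_stop=lambda: state["stop_flag"],
            request_stop=lambda: state.update(stop_requested=True),
        )
    )
    try:
        state["stop_flag"] = True  # simulate the user's stop signal arriving
        await asyncio.sleep(0.05)  # give the watcher a poll cycle
    finally:
        # Cancel and await in one place so the watcher can never leak,
        # whichever exit path the caller takes.
        watcher.cancel()
        try:
            await watcher
        except asyncio.CancelledError:
            pass
    return state["stop_requested"]


stop_was_requested = asyncio.run(run_with_watcher())
```

Centralizing the cancel-and-await in a `finally` block mirrors the changelog's point about cleaning the watcher up on every exit path of `run_agent`.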
Activity
  • The pull request introduces a new feature to stop active agent sessions.
  • The author has confirmed that the changes are not breaking.
  • The author has provided a summary of the modifications in the PR description.
  • The PR includes a checklist for review, covering aspects like new features discussion, testing, dependency management, and malicious code checks.
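The dashboard flow summarized above (a POST to /chat/stop that validates session ownership and then requests a graceful stop for the session's events) could look roughly like the following. Every name and data shape here is hypothetical; only the overall flow comes from the changelog:

```python
def handle_stop_request(active_agents, sessions, session_id, user_id):
    """Hypothetical sketch of stop-request handling: validate ownership,
    then flag all active agent events for the session's UMO."""
    session = sessions.get(session_id)
    if session is None:
        return {"status": "error", "message": "session not found"}
    if session["owner"] != user_id:
        # Ownership check prevents users from stopping other users' sessions.
        return {"status": "error", "message": "permission denied"}

    count = 0
    for event in active_agents.get(session["umo"], []):
        # Graceful stop: flag the event instead of killing propagation,
        # so history saving and other downstream stages still run.
        event["stop_requested"] = True
        count += 1
    return {"status": "ok", "stopped": count}


sessions = {"s1": {"owner": "alice", "umo": "umo1"}}
active_agents = {"umo1": [{"stop_requested": False}]}

result = handle_stop_request(active_agents, sessions, "s1", "alice")
```

The key design point the PR makes is that this endpoint flags the events rather than terminating them, which is what lets partial responses still reach the history-saving stage.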

dosubot bot added the area:core (astrbot's core, backend) and feature:chatui (astrbot's chatui, webchat) labels on Feb 23, 2026

dosubot bot commented Feb 23, 2026

Documentation Updates

1 document was updated by changes in this PR:

Changes from pr4697
@@ -131,7 +131,7 @@
 ### 6. Active agent session stop feature
 
 #### Feature description
-The active agent system lets users interrupt a running Agent task mid-conversation, covering both the built-in Agent Runner and third-party Agent Runners. The stop feature can be triggered either from the command line or from the UI.
+The active agent system lets users interrupt a running Agent task mid-conversation, covering both the built-in Agent Runner and third-party Agent Runners. The stop feature can be triggered either via the `/stop` command or from the Dashboard UI.
 
 #### Usage
 
@@ -143,32 +143,43 @@
 - After a successful stop, the number of stopped tasks is shown: "已请求停止 {count} 个运行中的任务。" ("Requested to stop {count} running tasks.")
 - If the current session has no running tasks, it shows: "当前会话没有运行中的任务。" ("The current session has no running tasks.")
 
+The `/stop` command is part of the built-in conversation commands and, like `/reset`, `/his`, and others, works on any chat platform (Telegram, QQ, WeChat, Discord, etc.).
+
 ##### Stop button in the Dashboard chat interface
 In the Dashboard chat interface, a stop button is shown in the chat input box while an Agent is running:
 
-- The stop button (red, mdi-stop icon) appears automatically while the Agent is running, replacing the send button
+- The stop button (mdi-stop icon) appears automatically while the Agent is running, replacing the send button
 - Clicking the button interrupts the in-progress response generation
-- The button uses a red/error color scheme, clearly marking it as a stop action
+- After stopping, an API call requests that the current session's Agent task be aborted
 
-#### Stop behavior
+#### Stop mechanism
+
+The stop mechanism differs depending on the Agent Runner type:
 
 ##### Built-in Agent Runner (tool-loop Agent Runner)
 For the built-in Agent Runner, stop requests are graceful and preserve partial output produced before the interruption:
 
+- Uses the `request_agent_stop_all()` method, which does not interrupt event propagation, so downstream steps (such as history saving) still run
 - A system message is shown: "[SYSTEM: User actively interrupted the response generation. Partial output before interruption is preserved.]"
 - The Agent transitions to the DONE state and the on_agent_done hooks fire
-- Conversation history and session state are preserved (unlike a hard event stop)
+- Conversation history and session state are preserved
 - The response type is marked as "aborted"
+- The Agent periodically checks the `_stop_requested` flag during execution and can exit gracefully
 
-##### Third-party Agent Runner
-For third-party Agent Runners, a stop request fully interrupts event propagation via the `stop_all()` method.
+##### Third-party Agent Runners (e.g. Dify, Coze)
+For third-party Agent Runners, a stop request fully interrupts event propagation:
+
+- Uses the `stop_all()` method, which stops the event flow entirely
+- No intermediate state is preserved; task execution terminates immediately
 
 #### Key implementation points
 
-- `request_agent_stop_all()` method (active_event_registry): used for graceful stops; does not interrupt event propagation, so downstream steps (such as history saving) still run
-- `request_stop()` method (tool-loop Agent Runner): sets the stop flag
-- The Agent checks the `_stop_requested` flag during execution and can exit gracefully
-- Difference from `stop_all()`: `stop_all()` fully interrupts event propagation, while `request_agent_stop_all()` lets downstream steps continue
+- `request_agent_stop_all()` method (active_event_registry): used for graceful stops; sets the stop flag without interrupting event propagation
+- `stop_all()` method (active_event_registry): used for hard stops; fully interrupts event propagation
+- `request_stop()` method (tool-loop Agent Runner): sets the `_stop_requested` flag
+- `was_aborted()` method (tool-loop Agent Runner): checks whether the task was actively aborted by the user
+- A background watcher task (`_watch_agent_stop_signal`) checks for the stop signal while the Agent runs
+- The Dashboard triggers stop requests through the `/api/chat/stop` API endpoint
 
 ---
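The graceful/hard distinction the documentation draws between `request_agent_stop_all()` and `stop_all()` can be modeled in a few lines. This is an illustrative contract only; the event structure and `register()` helper are assumptions, and only the two stop method names come from the PR:

```python
class ActiveEventRegistrySketch:
    """Illustrative contrast of graceful vs. hard stop modes."""

    def __init__(self):
        self._events = {}  # umo -> list of active event records

    def register(self, umo, event):
        self._events.setdefault(umo, []).append(event)

    def request_agent_stop_all(self, umo):
        # Graceful stop: set a flag on each event but keep it registered,
        # so downstream stages (e.g. history saving) still run.
        events = self._events.get(umo, [])
        for event in events:
            event["stop_requested"] = True
        return len(events)

    def stop_all(self, umo):
        # Hard stop: drop the events entirely, fully interrupting propagation.
        events = self._events.pop(umo, [])
        for event in events:
            event["alive"] = False
        return len(events)


registry = ActiveEventRegistrySketch()
registry.register("umo1", {"stop_requested": False, "alive": True})

graceful_count = registry.request_agent_stop_all("umo1")
event = registry._events["umo1"][0]
```

Note that after the graceful call the event is still alive and registered; only the hard `stop_all()` removes it, which is why the built-in runner can still persist partial output after a graceful stop.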
 



@sourcery-ai sourcery-ai bot left a comment


Hey - I found 5 issues and left some high-level feedback:

  • In run_agent, the stop logic is handled both via _watch_agent_stop_signal and per-iteration _should_stop_agent checks; consider consolidating this into a single mechanism to avoid redundant request_stop() calls and multiple continue branches, which make the control flow harder to follow.
  • In _save_to_history, the commented-out user_aborted history-marker block adds noise; either delete the commented code or implement a clear, explicit representation of user-aborted conversations if you intend to persist that state in history.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `run_agent`, the stop logic is handled both via `_watch_agent_stop_signal` and per-iteration `_should_stop_agent` checks; consider consolidating this into a single mechanism to avoid redundant `request_stop()` calls and `continue` branches that can make the control flow harder to follow.
- In `_save_to_history`, the commented-out `user_aborted` history marker block adds noise; either remove it or implement a clear, explicit representation of user-aborted conversations if you intend to persist that state in history.

## Individual Comments

### Comment 1
<location> `astrbot/core/agent/runners/tool_loop_agent_runner.py:334-338` </location>
<code_context>
                         ),
                     )
+                if self._stop_requested:
+                    llm_resp_result = LLMResponse(
+                        role="assistant",
+                        completion_text="[SYSTEM: User actively interrupted the response generation. Partial output before interruption is preserved.]",
+                        reasoning_content=llm_response.reasoning_content,
+                        reasoning_signature=llm_response.reasoning_signature,
+                    )
+                    break
</code_context>

<issue_to_address>
**suggestion:** The user-interruption system message string is duplicated; consider centralizing it.

This interruption marker string is hard-coded both here and in the `_stop_requested` handling after the loop. Please extract it into a shared constant or helper (e.g., `make_user_abort_response(...)`) so the text stays consistent and easier to maintain across both paths.

Suggested implementation:

```python
                if self._stop_requested:
                    llm_resp_result = LLMResponse(
                        role="assistant",
                        completion_text=USER_ABORT_SYSTEM_MESSAGE,
                        reasoning_content=llm_response.reasoning_content,
                        reasoning_signature=llm_response.reasoning_signature,
                    )
                    break

```

```python
        if not llm_resp_result:
            if self._stop_requested:
                llm_resp_result = LLMResponse(
                    role="assistant",
                    completion_text=USER_ABORT_SYSTEM_MESSAGE,
                )
            else:
                return

```

To fully implement the suggestion, also:

1. Define the shared constant `USER_ABORT_SYSTEM_MESSAGE` once in this module, for example near the top of `tool_loop_agent_runner.py`:

```python
USER_ABORT_SYSTEM_MESSAGE = (
    "[SYSTEM: User actively interrupted the response generation. "
    "Partial output before interruption is preserved.]"
)
```

2. If there are any other occurrences of the same interruption message string elsewhere in this file (or related runners), replace them with `USER_ABORT_SYSTEM_MESSAGE` so the text is centralized and consistent.

If you prefer a helper instead of a bare constant (e.g., to preserve reasoning fields when available), you can define a function like `make_user_abort_response(llm_response: LLMResponse | None) -> LLMResponse` in this module and have both call sites use it instead of constructing `LLMResponse` directly.
</issue_to_address>

### Comment 2
<location> `tests/test_tool_loop_agent_runner.py:419-420` </location>
<code_context>
     assert fallback_provider.call_count == 1


+@pytest.mark.asyncio
+async def test_stop_signal_returns_aborted_and_persists_partial_message(
+    runner, provider_request, mock_tool_executor, mock_hooks
+):
</code_context>

<issue_to_address>
**suggestion (testing):** Add tests for non-streaming mode and for stop requested before any chunks are produced

The new test covers the streaming case after a partial chunk has been delivered, but two important cases are still untested:

1. **Non-streaming mode (`streaming=False`)**: Add a test using `MockAbortableStreamProvider` (or similar) with `streaming=False` and `runner.request_stop()` that asserts:
   - An `aborted` response is yielded.
   - `runner.was_aborted()` is `True` and `final_llm_resp` is set correctly.
   - Partial output is preserved in `run_context.messages`.

2. **Stop before any chunks**: Add a test that calls `runner.request_stop()` *before* consuming from `step()` and asserts:
   - The generator yields an `aborted` response (or otherwise terminates as designed).
   - `final_llm_resp` matches the intended contract for the new branch (e.g. empty assistant message when `llm_resp_result` is missing).
   - `run_context.messages` matches that behavior.

These will exercise the new stop logic for both non-streaming runs and immediate-stop scenarios.

Suggested implementation:

```python
@pytest.mark.asyncio
async def test_stop_signal_returns_aborted_and_persists_partial_message(
    runner, provider_request, mock_tool_executor, mock_hooks
):
    provider = MockAbortableStreamProvider()

    await runner.reset(
        provider=provider,
        request=provider_request,
        run_context=ContextWrapper(context=None),
        tool_executor=mock_tool_executor,
        agent_hooks=mock_hooks,
        streaming=True,
    )

    # Start a streaming step and consume at least one partial chunk
    step_gen = runner.step()
    partial_steps = []
    async for step in step_gen:
        partial_steps.append(step)
        # Simulate user requesting stop after first partial assistant chunk
        if len(partial_steps) == 1:
            runner.request_stop()
        # Once stop has been requested and an aborted response is observed, break
        if getattr(step, "status", None) == "aborted" or getattr(
            getattr(step, "response", None), "status", None
        ) == "aborted":
            break

    # We should have observed at least one step before aborting
    assert partial_steps
    # Runner should record that it was aborted
    assert getattr(runner, "was_aborted")() is True
    # Final LLM response should be recorded
    assert getattr(runner, "final_llm_resp", None) is not None

    # Partial assistant output should be preserved in the run context messages
    run_context = getattr(runner, "run_context", None)
    assert run_context is not None
    messages = getattr(run_context, "messages", [])
    assert messages
    # There should be at least one assistant message (partial content)
    assert any(getattr(m, "role", None) == "assistant" for m in messages)


@pytest.mark.asyncio
async def test_stop_signal_non_streaming_persists_partial_message_and_sets_final_resp(
    runner, provider_request, mock_tool_executor, mock_hooks
):
    """Non-streaming mode: stop requested during the run should yield an aborted result
    and preserve partial assistant output in the run context messages.
    """
    provider = MockAbortableStreamProvider()

    await runner.reset(
        provider=provider,
        request=provider_request,
        run_context=ContextWrapper(context=None),
        tool_executor=mock_tool_executor,
        agent_hooks=mock_hooks,
        streaming=False,
    )

    step_gen = runner.step()

    # Consume the first step to allow the provider to start producing output.
    # This is where partial output may have been generated internally.
    first_step = await step_gen.__anext__()
    assert first_step is not None

    # Request stop after the run has started in non-streaming mode.
    runner.request_stop()

    # Collect remaining steps; one of them should reflect the aborted state.
    remaining_steps = [first_step]
    async for step in step_gen:
        remaining_steps.append(step)

    # Runner should be marked as aborted
    assert getattr(runner, "was_aborted")() is True
    # Final LLM response should be set
    assert getattr(runner, "final_llm_resp", None) is not None

    # There should be an aborted outcome in the collected steps
    assert any(
        getattr(s, "status", None) == "aborted"
        or getattr(getattr(s, "response", None), "status", None) == "aborted"
        for s in remaining_steps
    )

    # Partial assistant output should be preserved in the run context messages
    run_context = getattr(runner, "run_context", None)
    assert run_context is not None
    messages = getattr(run_context, "messages", [])
    assert messages
    assert any(getattr(m, "role", None) == "assistant" for m in messages)


@pytest.mark.asyncio
async def test_stop_requested_before_any_chunks_yields_aborted_and_empty_assistant(
    runner, provider_request, mock_tool_executor, mock_hooks
):
    """Stop requested before any chunks are consumed should still yield an aborted
    response and produce an appropriate final_llm_resp and messages.
    """
    provider = MockAbortableStreamProvider()

    await runner.reset(
        provider=provider,
        request=provider_request,
        run_context=ContextWrapper(context=None),
        tool_executor=mock_tool_executor,
        agent_hooks=mock_hooks,
        streaming=True,
    )

    # Create the generator but request stop before consuming any chunks
    step_gen = runner.step()
    runner.request_stop()

    steps = [step async for step in step_gen]

    # There should be at least one step representing the aborted outcome
    assert steps
    assert any(
        getattr(s, "status", None) == "aborted"
        or getattr(getattr(s, "response", None), "status", None) == "aborted"
        for s in steps
    )

    # Runner should be marked aborted and final_llm_resp should be consistent
    assert getattr(runner, "was_aborted")() is True
    final_llm_resp = getattr(runner, "final_llm_resp", None)
    assert final_llm_resp is not None

    # When stop happens before any chunks, assistant content should be empty
    # or equivalent to the "no output" contract in this codebase.
    assistant_messages = [
        m for m in getattr(runner.run_context, "messages", []) if getattr(m, "role", None) == "assistant"
    ]
    if assistant_messages:
        # If an assistant message exists, it should have empty/whitespace-only content
        assert all(
            not getattr(m, "content", "") or str(getattr(m, "content", "")).strip() == ""
            for m in assistant_messages
        )
    else:
        # Alternatively, no assistant messages at all is also an acceptable "empty" contract.
        assert assistant_messages == []

```

These tests assume:

1. `runner.was_aborted()` is a callable method returning a bool, `runner.final_llm_resp` holds the final LLM response, and `runner.run_context.messages` is a list of message objects with `role` and `content` attributes.
2. The step objects yielded by `runner.step()` either:
   - expose a `.status` attribute directly, or
   - have a `.response` attribute with a `.status` field set to `"aborted"` on abort.

If your actual API differs (e.g., different attribute names or response shapes), you will need to:
- Update the `getattr(..., "status", ...)` checks to match your real step/response model.
- Adjust how messages are accessed from `run_context` (e.g., `runner.run_context.context.messages` instead of `runner.run_context.messages`).
- Align the “empty assistant output” assertions with the concrete structure of your `final_llm_resp` and message types.

You may also want to align docstrings and test names to your project’s existing test naming conventions if they differ.
</issue_to_address>

### Comment 3
<location> `tests/test_tool_loop_agent_runner.py:440-444` </location>
<code_context>
+
+    runner.request_stop()
+
+    rest_responses = []
+    async for response in step_iter:
+        rest_responses.append(response)
+
+    assert any(resp.type == "aborted" for resp in rest_responses)
+    assert runner.was_aborted() is True
+
</code_context>

<issue_to_address>
**suggestion (testing):** Tighten assertions on the aborted response and exercise hook behavior

Currently the test only asserts that an aborted response exists. To better lock in the expected behavior:

1. Explicitly locate the aborted response and assert on its payload (e.g., `MessageChain(type="aborted")`) so changes to the aborted response shape are caught.
2. Add an assertion that the appropriate hook (e.g., `mock_hooks.on_agent_done`) is called once with the final LLM response on abort, to ensure the stop path still runs cleanup/post-processing logic.

Suggested implementation:

```python
    runner.request_stop()

    # Collect all remaining responses from the stream after requesting stop
    rest_responses = [first_resp]
    async for response in step_iter:
        rest_responses.append(response)

    # Locate the explicit aborted response and validate its payload
    aborted_resp = next(resp for resp in rest_responses if resp.type == "aborted")
    assert getattr(aborted_resp, "output", None) is not None
    # The aborted response should carry a MessageChain-style payload with type="aborted"
    assert aborted_resp.output.type == "aborted"

    # The runner should reflect that it was aborted
    assert runner.was_aborted() is True

    # The agent_done hook should still be called once with the final non-aborted response
    non_aborted_responses = [resp for resp in rest_responses if resp.type != "aborted"]
    final_response = non_aborted_responses[-1]
    mock_hooks.on_agent_done.assert_called_once_with(final_response)

```

1. If `aborted_resp.output` is a `MessageChain` (or similar) type that is not yet imported in this test module, add the appropriate import at the top of the file, e.g.:
   - `from inspect_ai.schema import MessageChain`
2. Ensure the `mock_hooks` fixture (or factory) passed into `agent_hooks` exposes `on_agent_done` as a `Mock`/`MagicMock`:
   - e.g., `mock_hooks.on_agent_done = mocker.Mock()` or similar, if not already present.
3. If the actual attribute name for the payload differs from `output` (e.g., `message` or `data`), adjust `aborted_resp.output` accordingly to match the existing response schema.
4. If the hook’s expected signature is `(runner, final_response)` or includes additional parameters, update `assert_called_once_with` to match that exact signature (you can inspect other tests in this file that assert on `mock_hooks.on_agent_done` for consistency).
</issue_to_address>

### Comment 4
<location> `astrbot/core/astr_agent_run_util.py:55` </location>
<code_context>
                     )
                 )

+        stop_watcher = asyncio.create_task(
+            _watch_agent_stop_signal(agent_runner, astr_event),
+        )
</code_context>

<issue_to_address>
**issue (complexity):** Consider restructuring the stop handling so a single watcher owns `request_stop()` and shared helper logic cleans up `stop_watcher` in one place.

You can simplify the new stop logic without losing any behavior by:

1. Having exactly one place that calls `agent_runner.request_stop()`.
2. De-duplicating the `stop_watcher` cancellation logic.

### 1. Single authority for `request_stop`

Right now:

- `_watch_agent_stop_signal` calls `agent_runner.request_stop()`.
- The main loop also calls `agent_runner.request_stop()` based on `_should_stop_agent`.

That makes it harder to reason about the lifecycle. Pick one authority. For example: let the watcher be the only place that calls `request_stop()`, and let the loop only *observe* the stop state:

```python
async for resp in agent_runner.step():
    # Only observe stop state; do not call request_stop() here
    if _should_stop_agent(astr_event):
        if resp.type == "aborted":
            # special aborted handling
            astr_event.set_extra("agent_user_aborted", True)
            astr_event.set_extra("agent_stop_requested", False)
            break
        # cooperative stop: just stop consuming further responses
        break

    if resp.type == "aborted":
        astr_event.set_extra("agent_user_aborted", True)
        astr_event.set_extra("agent_stop_requested", False)
        break

    if resp.type == "tool_call_result":
        ...
```

And keep `request_stop()` only in the watcher:

```python
async def _watch_agent_stop_signal(agent_runner: AgentRunner, astr_event) -> None:
    while not agent_runner.done():
        if _should_stop_agent(astr_event):
            agent_runner.request_stop()
            return
        await asyncio.sleep(0.5)
```

This keeps the same semantics but makes it clear that the watcher owns the stop request, and the loop only reacts to the stop state.

### 2. Factor out repeated `stop_watcher` cancellation

The `stop_watcher` cancellation pattern is repeated in three places. Extract a tiny helper and/or use `try/finally` so it’s centralized.

Helper:

```python
async def _cancel_task_safely(task: asyncio.Task | None) -> None:
    if not task or task.done():
        return
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
```

Then in `run_agent`:

```python
stop_watcher: asyncio.Task | None = None
try:
    stop_watcher = asyncio.create_task(
        _watch_agent_stop_signal(agent_runner, astr_event),
    )

    async for resp in agent_runner.step():
        ...
        if resp.type == "aborted":
            astr_event.set_extra("agent_user_aborted", True)
            astr_event.set_extra("agent_stop_requested", False)
            break
        ...

    if agent_runner.done():
        ...
        break

except Exception as e:
    await _cancel_task_safely(stop_watcher)
    ...
finally:
    # ensure watcher is always cleaned up, even on normal completion
    await _cancel_task_safely(stop_watcher)
```

This removes the per-branch duplication and makes the lifetime of `stop_watcher` obvious: created once, always cancelled/awaited in one place.
</issue_to_address>

### Comment 5
<location> `astrbot/core/agent/runners/tool_loop_agent_runner.py:333` </location>
<code_context>
                             ),
                         ),
                     )
+                if self._stop_requested:
+                    llm_resp_result = LLMResponse(
+                        role="assistant",
</code_context>

<issue_to_address>
**issue (complexity):** Consider extracting helpers for building the interrupted LLM response and finalizing aborts so the stop/abort logic in step() is centralized and less repetitive.

You can centralize the stop/abort path and remove duplication by extracting two small helpers: one to construct the “interrupted” `LLMResponse`, and one to finalize the abort (state, stats, messages, hooks, yield). That keeps `step()`’s control flow simpler while preserving behavior.

### 1. Extract a helper to build the interrupted `LLMResponse`

Right now the “user interrupted” system string and normalization are duplicated in multiple places. Wrap that into a single helper:

```python
SYSTEM_INTERRUPTED_TEXT = (
    "[SYSTEM: User actively interrupted the response generation. "
    "Partial output before interruption is preserved.]"
)

def _build_interrupted_response(
    self,
    base_resp: LLMResponse | None,
) -> LLMResponse:
    """
    Normalize an interrupted response into an assistant LLMResponse.

    - If base_resp is provided, preserve reasoning fields / partial content.
    - Ensure role='assistant'.
    - Ensure completion_text carries the system interruption message
      when appropriate.
    """
    if base_resp is None:
        return LLMResponse(role="assistant", completion_text="")

    if base_resp.role != "assistant":
        return LLMResponse(
            role="assistant",
            completion_text=SYSTEM_INTERRUPTED_TEXT,
            reasoning_content=base_resp.reasoning_content,
            reasoning_signature=base_resp.reasoning_signature,
        )

    # base_resp is already assistant; ensure completion_text has something
    if not base_resp.completion_text:
        return LLMResponse(
            role="assistant",
            completion_text=SYSTEM_INTERRUPTED_TEXT,
            reasoning_content=base_resp.reasoning_content,
            reasoning_signature=base_resp.reasoning_signature,
        )

    return base_resp
```

Now both the in-loop and post-loop stop branches can reuse this instead of hard-coding strings and logic.

### 2. Extract a focused abort finalizer

All the stop-path side effects (state, stats, messages, hook, and yielding `AgentResponse`) can move into a single private method:

```python
async def _finalize_abort(self, llm_resp: LLMResponse) -> AsyncIterator[AgentResponse]:
    logger.info("Agent execution was requested to stop by user.")

    self.final_llm_resp = llm_resp
    self._aborted = True
    self._transition_state(AgentState.DONE)
    self.stats.end_time = time.time()

    parts: list[Part] = []
    if llm_resp.reasoning_content or llm_resp.reasoning_signature:
        parts.append(
            ThinkPart(
                think=llm_resp.reasoning_content,
                encrypted=llm_resp.reasoning_signature,
            )
        )
    if llm_resp.completion_text:
        parts.append(TextPart(text=llm_resp.completion_text))
    if parts:
        self.run_context.messages.append(
            Message(role="assistant", content=parts)
        )

    try:
        await self.agent_hooks.on_agent_done(self.run_context, llm_resp)
    except Exception as e:
        logger.error("Error in on_agent_done hook: %s", e, exc_info=True)

    yield AgentResponse(
        type="aborted",
        data=AgentResponseData(chain=MessageChain(type="aborted")),
    )
```

### 3. Simplify `step()` to a single authoritative abort path

With those helpers, `step()` only needs:

- A normalized `llm_resp_result` set when stop is requested (either in-loop or post-loop).
- A single early exit that delegates to `_finalize_abort`.

For example (showing just the relevant parts around your changes):

```python
# inside the streaming loop
for llm_response in llm_responses:
    ...
    if self._stop_requested:
        # Use the latest chunk to build an interrupted response
        llm_resp_result = self._build_interrupted_response(llm_response)
        break

    continue

llm_resp_result = llm_response
...

# after the loop
if not llm_resp_result:
    if self._stop_requested:
        # No final chunk; still need a normalized interrupted response
        llm_resp_result = self._build_interrupted_response(None)
    else:
        return

if self._stop_requested:
    async for aborted_resp in self._finalize_abort(llm_resp_result):
        yield aborted_resp
    return
```

This keeps:

- The stop behavior and stats/state/messaging identical.
- A single, clearly visible abort branch.
- All “user interrupted” string and normalization logic centralized in `_build_interrupted_response`.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
Original comment in English

Hey - I've found 5 issues, and left some high level feedback:

  - In `run_agent`, the stop logic is handled both via `_watch_agent_stop_signal` and per-iteration `_should_stop_agent` checks; consider consolidating this into a single mechanism to avoid redundant `request_stop()` calls and `continue` branches that can make the control flow harder to follow.
  - In `_save_to_history`, the commented-out `user_aborted` history marker block adds noise; either remove it or implement a clear, explicit representation of user-aborted conversations if you intend to persist that state in history.
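The consolidation suggested in the first bullet (one watcher that owns the stop request, a loop that only observes it) can be sketched with a bare `asyncio.Event`; all names below are illustrative, not the project's API:

```python
import asyncio


async def agent_step(i: int) -> str:
    # Stand-in for one agent iteration.
    await asyncio.sleep(0.01)
    return f"step-{i}"


async def run_agent(stop_requested: asyncio.Event) -> list[str]:
    # The loop only *observes* the stop flag; nothing here sets it.
    out: list[str] = []
    for i in range(100):
        if stop_requested.is_set():
            out.append("aborted")
            break
        out.append(await agent_step(i))
    return out


async def main() -> list[str]:
    stop = asyncio.Event()

    # The watcher is the single authority that requests a stop.
    async def watcher() -> None:
        await asyncio.sleep(0.03)
        stop.set()

    w = asyncio.create_task(watcher())
    result = await run_agent(stop)
    await w
    return result


result = asyncio.run(main())
assert result[-1] == "aborted"
```

Because only the watcher ever sets the flag, there is exactly one place to audit when reasoning about how and when a stop is requested.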
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `run_agent`, the stop logic is handled both via `_watch_agent_stop_signal` and per-iteration `_should_stop_agent` checks; consider consolidating this into a single mechanism to avoid redundant `request_stop()` calls and `continue` branches that can make the control flow harder to follow.
- In `_save_to_history`, the commented-out `user_aborted` history marker block adds noise; either remove it or implement a clear, explicit representation of user-aborted conversations if you intend to persist that state in history.

## Individual Comments

### Comment 1
<location> `astrbot/core/agent/runners/tool_loop_agent_runner.py:334-338` </location>
<code_context>
                         ),
                     )
+                if self._stop_requested:
+                    llm_resp_result = LLMResponse(
+                        role="assistant",
+                        completion_text="[SYSTEM: User actively interrupted the response generation. Partial output before interruption is preserved.]",
+                        reasoning_content=llm_response.reasoning_content,
+                        reasoning_signature=llm_response.reasoning_signature,
+                    )
+                    break
</code_context>

<issue_to_address>
**suggestion:** The user-interruption system message string is duplicated; consider centralizing it.

This interruption marker string is hard-coded both here and in the `_stop_requested` handling after the loop. Please extract it into a shared constant or helper (e.g., `make_user_abort_response(...)`) so the text stays consistent and easier to maintain across both paths.

Suggested implementation:

```python
                if self._stop_requested:
                    llm_resp_result = LLMResponse(
                        role="assistant",
                        completion_text=USER_ABORT_SYSTEM_MESSAGE,
                        reasoning_content=llm_response.reasoning_content,
                        reasoning_signature=llm_response.reasoning_signature,
                    )
                    break

```

```python
        if not llm_resp_result:
            if self._stop_requested:
                llm_resp_result = LLMResponse(
                    role="assistant",
                    completion_text=USER_ABORT_SYSTEM_MESSAGE,
                )
            else:
                return

```

To fully implement the suggestion, also:

1. Define the shared constant `USER_ABORT_SYSTEM_MESSAGE` once in this module, for example near the top of `tool_loop_agent_runner.py`:

```python
USER_ABORT_SYSTEM_MESSAGE = (
    "[SYSTEM: User actively interrupted the response generation. "
    "Partial output before interruption is preserved.]"
)
```

2. If there are any other occurrences of the same interruption message string elsewhere in this file (or related runners), replace them with `USER_ABORT_SYSTEM_MESSAGE` so the text is centralized and consistent.

If you prefer a helper instead of a bare constant (e.g., to preserve reasoning fields when available), you can define a function like `make_user_abort_response(llm_response: LLMResponse | None) -> LLMResponse` in this module and have both call sites use it instead of constructing `LLMResponse` directly.
</issue_to_address>

### Comment 2
<location> `tests/test_tool_loop_agent_runner.py:419-420` </location>
<code_context>
     assert fallback_provider.call_count == 1


+@pytest.mark.asyncio
+async def test_stop_signal_returns_aborted_and_persists_partial_message(
+    runner, provider_request, mock_tool_executor, mock_hooks
+):
</code_context>

<issue_to_address>
**suggestion (testing):** Add tests for non-streaming mode and for stop requested before any chunks are produced

The new test covers the streaming case after a partial chunk has been delivered, but two important cases are still untested:

1. **Non-streaming mode (`streaming=False`)**: Add a test using `MockAbortableStreamProvider` (or similar) with `streaming=False` and `runner.request_stop()` that asserts:
   - An `aborted` response is yielded.
   - `runner.was_aborted()` is `True` and `final_llm_resp` is set correctly.
   - Partial output is preserved in `run_context.messages`.

2. **Stop before any chunks**: Add a test that calls `runner.request_stop()` *before* consuming from `step()` and asserts:
   - The generator yields an `aborted` response (or otherwise terminates as designed).
   - `final_llm_resp` matches the intended contract for the new branch (e.g. empty assistant message when `llm_resp_result` is missing).
   - `run_context.messages` matches that behavior.

These will exercise the new stop logic for both non-streaming runs and immediate-stop scenarios.

Suggested implementation:

```python
@pytest.mark.asyncio
async def test_stop_signal_returns_aborted_and_persists_partial_message(
    runner, provider_request, mock_tool_executor, mock_hooks
):
    provider = MockAbortableStreamProvider()

    await runner.reset(
        provider=provider,
        request=provider_request,
        run_context=ContextWrapper(context=None),
        tool_executor=mock_tool_executor,
        agent_hooks=mock_hooks,
        streaming=True,
    )

    # Start a streaming step and consume at least one partial chunk
    step_gen = runner.step()
    partial_steps = []
    async for step in step_gen:
        partial_steps.append(step)
        # Simulate user requesting stop after first partial assistant chunk
        if len(partial_steps) == 1:
            runner.request_stop()
        # Once stop has been requested and an aborted response is observed, break
        if getattr(step, "status", None) == "aborted" or getattr(
            getattr(step, "response", None), "status", None
        ) == "aborted":
            break

    # We should have observed at least one step before aborting
    assert partial_steps
    # Runner should record that it was aborted
    assert getattr(runner, "was_aborted")() is True
    # Final LLM response should be recorded
    assert getattr(runner, "final_llm_resp", None) is not None

    # Partial assistant output should be preserved in the run context messages
    run_context = getattr(runner, "run_context", None)
    assert run_context is not None
    messages = getattr(run_context, "messages", [])
    assert messages
    # There should be at least one assistant message (partial content)
    assert any(getattr(m, "role", None) == "assistant" for m in messages)


@pytest.mark.asyncio
async def test_stop_signal_non_streaming_persists_partial_message_and_sets_final_resp(
    runner, provider_request, mock_tool_executor, mock_hooks
):
    """Non-streaming mode: stop requested during the run should yield an aborted result
    and preserve partial assistant output in the run context messages.
    """
    provider = MockAbortableStreamProvider()

    await runner.reset(
        provider=provider,
        request=provider_request,
        run_context=ContextWrapper(context=None),
        tool_executor=mock_tool_executor,
        agent_hooks=mock_hooks,
        streaming=False,
    )

    step_gen = runner.step()

    # Consume the first step to allow the provider to start producing output.
    # This is where partial output may have been generated internally.
    first_step = await step_gen.__anext__()
    assert first_step is not None

    # Request stop after the run has started in non-streaming mode.
    runner.request_stop()

    # Collect remaining steps; one of them should reflect the aborted state.
    remaining_steps = [first_step]
    async for step in step_gen:
        remaining_steps.append(step)

    # Runner should be marked as aborted
    assert getattr(runner, "was_aborted")() is True
    # Final LLM response should be set
    assert getattr(runner, "final_llm_resp", None) is not None

    # There should be an aborted outcome in the collected steps
    assert any(
        getattr(s, "status", None) == "aborted"
        or getattr(getattr(s, "response", None), "status", None) == "aborted"
        for s in remaining_steps
    )

    # Partial assistant output should be preserved in the run context messages
    run_context = getattr(runner, "run_context", None)
    assert run_context is not None
    messages = getattr(run_context, "messages", [])
    assert messages
    assert any(getattr(m, "role", None) == "assistant" for m in messages)


@pytest.mark.asyncio
async def test_stop_requested_before_any_chunks_yields_aborted_and_empty_assistant(
    runner, provider_request, mock_tool_executor, mock_hooks
):
    """Stop requested before any chunks are consumed should still yield an aborted
    response and produce an appropriate final_llm_resp and messages.
    """
    provider = MockAbortableStreamProvider()

    await runner.reset(
        provider=provider,
        request=provider_request,
        run_context=ContextWrapper(context=None),
        tool_executor=mock_tool_executor,
        agent_hooks=mock_hooks,
        streaming=True,
    )

    # Create the generator but request stop before consuming any chunks
    step_gen = runner.step()
    runner.request_stop()

    steps = [step async for step in step_gen]

    # There should be at least one step representing the aborted outcome
    assert steps
    assert any(
        getattr(s, "status", None) == "aborted"
        or getattr(getattr(s, "response", None), "status", None) == "aborted"
        for s in steps
    )

    # Runner should be marked aborted and final_llm_resp should be consistent
    assert getattr(runner, "was_aborted")() is True
    final_llm_resp = getattr(runner, "final_llm_resp", None)
    assert final_llm_resp is not None

    # When stop happens before any chunks, assistant content should be empty
    # or equivalent to the "no output" contract in this codebase.
    assistant_messages = [
        m for m in getattr(runner.run_context, "messages", []) if getattr(m, "role", None) == "assistant"
    ]
    if assistant_messages:
        # If an assistant message exists, it should have empty/whitespace-only content
        assert all(
            not getattr(m, "content", "") or str(getattr(m, "content", "")).strip() == ""
            for m in assistant_messages
        )
    else:
        # Alternatively, no assistant messages at all is also an acceptable "empty" contract.
        assert assistant_messages == []

```

These tests assume:

1. `runner.was_aborted()` is a callable method returning a bool, `runner.final_llm_resp` holds the final LLM response, and `runner.run_context.messages` is a list of message objects with `role` and `content` attributes.
2. The step objects yielded by `runner.step()` either:
   - expose a `.status` attribute directly, or
   - have a `.response` attribute with a `.status` field set to `"aborted"` on abort.

If your actual API differs (e.g., different attribute names or response shapes), you will need to:
- Update the `getattr(..., "status", ...)` checks to match your real step/response model.
- Adjust how messages are accessed from `run_context` (e.g., `runner.run_context.context.messages` instead of `runner.run_context.messages`).
- Align the “empty assistant output” assertions with the concrete structure of your `final_llm_resp` and message types.

You may also want to align docstrings and test names to your project’s existing test naming conventions if they differ.
</issue_to_address>

### Comment 3
<location> `tests/test_tool_loop_agent_runner.py:440-444` </location>
<code_context>
+
+    runner.request_stop()
+
+    rest_responses = []
+    async for response in step_iter:
+        rest_responses.append(response)
+
+    assert any(resp.type == "aborted" for resp in rest_responses)
+    assert runner.was_aborted() is True
+
</code_context>

<issue_to_address>
**suggestion (testing):** Tighten assertions on the aborted response and exercise hook behavior

Currently the test only asserts that an aborted response exists. To better lock in the expected behavior:

1. Explicitly locate the aborted response and assert on its payload (e.g., `MessageChain(type="aborted")`) so changes to the aborted response shape are caught.
2. Add an assertion that the appropriate hook (e.g., `mock_hooks.on_agent_done`) is called once with the final LLM response on abort, to ensure the stop path still runs cleanup/post-processing logic.

Suggested implementation:

```python
    runner.request_stop()

    # Collect all remaining responses from the stream after requesting stop
    rest_responses = [first_resp]
    async for response in step_iter:
        rest_responses.append(response)

    # Locate the explicit aborted response and validate its payload
    aborted_resp = next(resp for resp in rest_responses if resp.type == "aborted")
    assert getattr(aborted_resp, "output", None) is not None
    # The aborted response should carry a MessageChain-style payload with type="aborted"
    assert aborted_resp.output.type == "aborted"

    # The runner should reflect that it was aborted
    assert runner.was_aborted() is True

    # The agent_done hook should still be called once with the final non-aborted response
    non_aborted_responses = [resp for resp in rest_responses if resp.type != "aborted"]
    final_response = non_aborted_responses[-1]
    mock_hooks.on_agent_done.assert_called_once_with(final_response)

```

1. If `aborted_resp.output` is a `MessageChain` (or similar) type that is not yet imported in this test module, add the appropriate import at the top of the file, e.g.:
   - `from inspect_ai.schema import MessageChain`
2. Ensure the `mock_hooks` fixture (or factory) passed into `agent_hooks` exposes `on_agent_done` as a `Mock`/`MagicMock`:
   - e.g., `mock_hooks.on_agent_done = mocker.Mock()` or similar, if not already present.
3. If the actual attribute name for the payload differs from `output` (e.g., `message` or `data`), adjust `aborted_resp.output` accordingly to match the existing response schema.
4. If the hook’s expected signature is `(runner, final_response)` or includes additional parameters, update `assert_called_once_with` to match that exact signature (you can inspect other tests in this file that assert on `mock_hooks.on_agent_done` for consistency).
</issue_to_address>
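One caveat for point 2: the runner awaits `on_agent_done` (see `await self.agent_hooks.on_agent_done(...)` in the suggested `_finalize_abort`), so a plain `Mock` would not be awaitable; `unittest.mock.AsyncMock` is. A minimal sketch, with a hypothetical fixture shape that may differ from the suite's real `mock_hooks`:

```python
import asyncio
from unittest.mock import AsyncMock


def make_mock_hooks() -> AsyncMock:
    # Hypothetical fixture shape; the real mock_hooks fixture may differ.
    hooks = AsyncMock()
    hooks.on_agent_done = AsyncMock(return_value=None)
    return hooks


async def demo() -> None:
    hooks = make_mock_hooks()
    # The runner awaits this hook, so the mock must be awaitable.
    await hooks.on_agent_done("ctx", "final-response")
    hooks.on_agent_done.assert_awaited_once_with("ctx", "final-response")


asyncio.run(demo())
```

`assert_awaited_once_with` (rather than `assert_called_once_with`) also verifies the coroutine was actually awaited, not just created.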

### Comment 4
<location> `astrbot/core/astr_agent_run_util.py:55` </location>
<code_context>
                     )
                 )

+        stop_watcher = asyncio.create_task(
+            _watch_agent_stop_signal(agent_runner, astr_event),
+        )
</code_context>

<issue_to_address>
**issue (complexity):** Consider restructuring the stop handling so a single watcher owns `request_stop()` and shared helper logic cleans up `stop_watcher` in one place.

You can simplify the new stop logic without losing any behavior by:

1. Having exactly one place that calls `agent_runner.request_stop()`.
2. De-duplicating the `stop_watcher` cancellation logic.

### 1. Single authority for `request_stop`

Right now:

- `_watch_agent_stop_signal` calls `agent_runner.request_stop()`.
- The main loop also calls `agent_runner.request_stop()` based on `_should_stop_agent`.

That makes it harder to reason about the lifecycle. Pick one authority. For example: let the watcher be the only place that calls `request_stop()`, and let the loop only *observe* the stop state:

```python
async for resp in agent_runner.step():
    # Only observe stop state; do not call request_stop() here
    if _should_stop_agent(astr_event):
        if resp.type == "aborted":
            # special aborted handling
            astr_event.set_extra("agent_user_aborted", True)
            astr_event.set_extra("agent_stop_requested", False)
            break
        # cooperative stop: just stop consuming further responses
        break

    if resp.type == "aborted":
        astr_event.set_extra("agent_user_aborted", True)
        astr_event.set_extra("agent_stop_requested", False)
        break

    if resp.type == "tool_call_result":
        ...
```

And keep `request_stop()` only in the watcher:

```python
async def _watch_agent_stop_signal(agent_runner: AgentRunner, astr_event) -> None:
    while not agent_runner.done():
        if _should_stop_agent(astr_event):
            agent_runner.request_stop()
            return
        await asyncio.sleep(0.5)
```

This keeps the same semantics but makes it clear that the watcher owns the stop request, and the loop only reacts to the stop state.

### 2. Factor out repeated `stop_watcher` cancellation

The `stop_watcher` cancellation pattern is repeated in three places. Extract a tiny helper and/or use `try/finally` so it’s centralized.

Helper:

```python
async def _cancel_task_safely(task: asyncio.Task | None) -> None:
    if not task or task.done():
        return
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
```

Then in `run_agent`:

```python
stop_watcher: asyncio.Task | None = None
try:
    stop_watcher = asyncio.create_task(
        _watch_agent_stop_signal(agent_runner, astr_event),
    )

    async for resp in agent_runner.step():
        ...
        if resp.type == "aborted":
            astr_event.set_extra("agent_user_aborted", True)
            astr_event.set_extra("agent_stop_requested", False)
            break
        ...

    if agent_runner.done():
        ...
        break

except Exception as e:
    await _cancel_task_safely(stop_watcher)
    ...
finally:
    # ensure watcher is always cleaned up, even on normal completion
    await _cancel_task_safely(stop_watcher)
```

This removes the per-branch duplication and makes the lifetime of `stop_watcher` obvious: created once, always cancelled/awaited in one place.
</issue_to_address>



Comment on lines +334 to +338

```python
llm_resp_result = LLMResponse(
    role="assistant",
    completion_text="[SYSTEM: User actively interrupted the response generation. Partial output before interruption is preserved.]",
    reasoning_content=llm_response.reasoning_content,
    reasoning_signature=llm_response.reasoning_signature,
```


**suggestion:** The user-interruption system message string is duplicated; consider centralizing it.

This interruption marker string is hard-coded both here and in the `_stop_requested` handling after the loop. Please extract it into a shared constant or helper (e.g., `make_user_abort_response(...)`) so the text stays consistent and easier to maintain across both paths.

Suggested implementation:

```python
                if self._stop_requested:
                    llm_resp_result = LLMResponse(
                        role="assistant",
                        completion_text=USER_ABORT_SYSTEM_MESSAGE,
                        reasoning_content=llm_response.reasoning_content,
                        reasoning_signature=llm_response.reasoning_signature,
                    )
                    break
```

```python
        if not llm_resp_result:
            if self._stop_requested:
                llm_resp_result = LLMResponse(
                    role="assistant",
                    completion_text=USER_ABORT_SYSTEM_MESSAGE,
                )
            else:
                return
```

To fully implement the suggestion, also:

1. Define the shared constant `USER_ABORT_SYSTEM_MESSAGE` once in this module, for example near the top of `tool_loop_agent_runner.py`:

```python
USER_ABORT_SYSTEM_MESSAGE = (
    "[SYSTEM: User actively interrupted the response generation. "
    "Partial output before interruption is preserved.]"
)
```

2. If there are any other occurrences of the same interruption message string elsewhere in this file (or related runners), replace them with `USER_ABORT_SYSTEM_MESSAGE` so the text is centralized and consistent.

If you prefer a helper instead of a bare constant (e.g., to preserve reasoning fields when available), you can define a function like `make_user_abort_response(llm_response: LLMResponse | None) -> LLMResponse` in this module and have both call sites use it instead of constructing `LLMResponse` directly.

Comment on lines +419 to +420

```python
@pytest.mark.asyncio
async def test_stop_signal_returns_aborted_and_persists_partial_message(
```

**suggestion (testing):** Add tests for non-streaming mode and for stop requested before any chunks are produced

The new test covers the streaming case after a partial chunk has been delivered, but two important cases are still untested:

1. **Non-streaming mode (`streaming=False`)**: Add a test using `MockAbortableStreamProvider` (or similar) with `streaming=False` and `runner.request_stop()` that asserts:
   - An `aborted` response is yielded.
   - `runner.was_aborted()` is `True` and `final_llm_resp` is set correctly.
   - Partial output is preserved in `run_context.messages`.

2. **Stop before any chunks**: Add a test that calls `runner.request_stop()` *before* consuming from `step()` and asserts:
   - The generator yields an `aborted` response (or otherwise terminates as designed).
   - `final_llm_resp` matches the intended contract for the new branch (e.g. empty assistant message when `llm_resp_result` is missing).
   - `run_context.messages` matches that behavior.

These will exercise the new stop logic for both non-streaming runs and immediate-stop scenarios.

        streaming=False,
    )

    step_gen = runner.step()

    # Consume the first step to allow the provider to start producing output.
    # This is where partial output may have been generated internally.
    first_step = await step_gen.__anext__()
    assert first_step is not None

    # Request stop after the run has started in non-streaming mode.
    runner.request_stop()

    # Collect remaining steps; one of them should reflect the aborted state.
    remaining_steps = [first_step]
    async for step in step_gen:
        remaining_steps.append(step)

    # Runner should be marked as aborted
    assert getattr(runner, "was_aborted")() is True
    # Final LLM response should be set
    assert getattr(runner, "final_llm_resp", None) is not None

    # There should be an aborted outcome in the collected steps
    assert any(
        getattr(s, "status", None) == "aborted"
        or getattr(getattr(s, "response", None), "status", None) == "aborted"
        for s in remaining_steps
    )

    # Partial assistant output should be preserved in the run context messages
    run_context = getattr(runner, "run_context", None)
    assert run_context is not None
    messages = getattr(run_context, "messages", [])
    assert messages
    assert any(getattr(m, "role", None) == "assistant" for m in messages)


@pytest.mark.asyncio
async def test_stop_requested_before_any_chunks_yields_aborted_and_empty_assistant(
    runner, provider_request, mock_tool_executor, mock_hooks
):
    """Stop requested before any chunks are consumed should still yield an aborted
    response and produce an appropriate final_llm_resp and messages.
    """
    provider = MockAbortableStreamProvider()

    await runner.reset(
        provider=provider,
        request=provider_request,
        run_context=ContextWrapper(context=None),
        tool_executor=mock_tool_executor,
        agent_hooks=mock_hooks,
        streaming=True,
    )

    # Create the generator but request stop before consuming any chunks
    step_gen = runner.step()
    runner.request_stop()

    steps = [step async for step in step_gen]

    # There should be at least one step representing the aborted outcome
    assert steps
    assert any(
        getattr(s, "status", None) == "aborted"
        or getattr(getattr(s, "response", None), "status", None) == "aborted"
        for s in steps
    )

    # Runner should be marked aborted and final_llm_resp should be consistent
    assert getattr(runner, "was_aborted")() is True
    final_llm_resp = getattr(runner, "final_llm_resp", None)
    assert final_llm_resp is not None

    # When stop happens before any chunks, assistant content should be empty
    # or equivalent to the "no output" contract in this codebase.
    assistant_messages = [
        m for m in getattr(runner.run_context, "messages", []) if getattr(m, "role", None) == "assistant"
    ]
    if assistant_messages:
        # If an assistant message exists, it should have empty/whitespace-only content
        assert all(
            not getattr(m, "content", "") or str(getattr(m, "content", "")).strip() == ""
            for m in assistant_messages
        )
    else:
        # Alternatively, no assistant messages at all is also an acceptable "empty" contract.
        assert assistant_messages == []

These tests assume:

  1. runner.was_aborted() is a callable method returning a bool, runner.final_llm_resp holds the final LLM response, and runner.run_context.messages is a list of message objects with role and content attributes.
  2. The step objects yielded by runner.step() either:
    • expose a .status attribute directly, or
    • have a .response attribute with a .status field set to "aborted" on abort.

If your actual API differs (e.g., different attribute names or response shapes), you will need to:

  • Update the getattr(..., "status", ...) checks to match your real step/response model.
  • Adjust how messages are accessed from run_context (e.g., runner.run_context.context.messages instead of runner.run_context.messages).
  • Align the “empty assistant output” assertions with the concrete structure of your final_llm_resp and message types.

You may also want to align docstrings and test names to your project’s existing test naming conventions if they differ.

Original comment in English

suggestion (testing): Add tests for non-streaming mode and for stop requested before any chunks are produced

The new test covers the streaming case after a partial chunk has been delivered, but two important cases are still untested:

  1. Non-streaming mode (streaming=False): Add a test using MockAbortableStreamProvider (or similar) with streaming=False and runner.request_stop() that asserts:

    • An aborted response is yielded.
    • runner.was_aborted() is True and final_llm_resp is set correctly.
    • Partial output is preserved in run_context.messages.
  2. Stop before any chunks: Add a test that calls runner.request_stop() before consuming from step() and asserts:

    • The generator yields an aborted response (or otherwise terminates as designed).
    • final_llm_resp matches the intended contract for the new branch (e.g. empty assistant message when llm_resp_result is missing).
    • run_context.messages matches that behavior.

These will exercise the new stop logic for both non-streaming runs and immediate-stop scenarios.

Suggested implementation:

@pytest.mark.asyncio
async def test_stop_signal_returns_aborted_and_persists_partial_message(
    runner, provider_request, mock_tool_executor, mock_hooks
):
    provider = MockAbortableStreamProvider()

    await runner.reset(
        provider=provider,
        request=provider_request,
        run_context=ContextWrapper(context=None),
        tool_executor=mock_tool_executor,
        agent_hooks=mock_hooks,
        streaming=True,
    )

    # Start a streaming step and consume at least one partial chunk
    step_gen = runner.step()
    partial_steps = []
    async for step in step_gen:
        partial_steps.append(step)
        # Simulate user requesting stop after first partial assistant chunk
        if len(partial_steps) == 1:
            runner.request_stop()
        # Once stop has been requested and an aborted response is observed, break
        if getattr(step, "status", None) == "aborted" or getattr(
            getattr(step, "response", None), "status", None
        ) == "aborted":
            break

    # We should have observed at least one step before aborting
    assert partial_steps
    # Runner should record that it was aborted
    assert getattr(runner, "was_aborted")() is True
    # Final LLM response should be recorded
    assert getattr(runner, "final_llm_resp", None) is not None

    # Partial assistant output should be preserved in the run context messages
    run_context = getattr(runner, "run_context", None)
    assert run_context is not None
    messages = getattr(run_context, "messages", [])
    assert messages
    # There should be at least one assistant message (partial content)
    assert any(getattr(m, "role", None) == "assistant" for m in messages)


@pytest.mark.asyncio
async def test_stop_signal_non_streaming_persists_partial_message_and_sets_final_resp(
    runner, provider_request, mock_tool_executor, mock_hooks
):
    """Non-streaming mode: stop requested during the run should yield an aborted result
    and preserve partial assistant output in the run context messages.
    """
    provider = MockAbortableStreamProvider()

    await runner.reset(
        provider=provider,
        request=provider_request,
        run_context=ContextWrapper(context=None),
        tool_executor=mock_tool_executor,
        agent_hooks=mock_hooks,
        streaming=False,
    )

    step_gen = runner.step()

    # Consume the first step to allow the provider to start producing output.
    # This is where partial output may have been generated internally.
    first_step = await step_gen.__anext__()
    assert first_step is not None

    # Request stop after the run has started in non-streaming mode.
    runner.request_stop()

    # Collect remaining steps; one of them should reflect the aborted state.
    remaining_steps = [first_step]
    async for step in step_gen:
        remaining_steps.append(step)

    # Runner should be marked as aborted
    assert getattr(runner, "was_aborted")() is True
    # Final LLM response should be set
    assert getattr(runner, "final_llm_resp", None) is not None

    # There should be an aborted outcome in the collected steps
    assert any(
        getattr(s, "status", None) == "aborted"
        or getattr(getattr(s, "response", None), "status", None) == "aborted"
        for s in remaining_steps
    )

    # Partial assistant output should be preserved in the run context messages
    run_context = getattr(runner, "run_context", None)
    assert run_context is not None
    messages = getattr(run_context, "messages", [])
    assert messages
    assert any(getattr(m, "role", None) == "assistant" for m in messages)


@pytest.mark.asyncio
async def test_stop_requested_before_any_chunks_yields_aborted_and_empty_assistant(
    runner, provider_request, mock_tool_executor, mock_hooks
):
    """Stop requested before any chunks are consumed should still yield an aborted
    response and produce an appropriate final_llm_resp and messages.
    """
    provider = MockAbortableStreamProvider()

    await runner.reset(
        provider=provider,
        request=provider_request,
        run_context=ContextWrapper(context=None),
        tool_executor=mock_tool_executor,
        agent_hooks=mock_hooks,
        streaming=True,
    )

    # Create the generator but request stop before consuming any chunks
    step_gen = runner.step()
    runner.request_stop()

    steps = [step async for step in step_gen]

    # There should be at least one step representing the aborted outcome
    assert steps
    assert any(
        getattr(s, "status", None) == "aborted"
        or getattr(getattr(s, "response", None), "status", None) == "aborted"
        for s in steps
    )

    # Runner should be marked aborted and final_llm_resp should be consistent
    assert getattr(runner, "was_aborted")() is True
    final_llm_resp = getattr(runner, "final_llm_resp", None)
    assert final_llm_resp is not None

    # When stop happens before any chunks, assistant content should be empty
    # or equivalent to the "no output" contract in this codebase.
    assistant_messages = [
        m for m in getattr(runner.run_context, "messages", []) if getattr(m, "role", None) == "assistant"
    ]
    if assistant_messages:
        # If an assistant message exists, it should have empty/whitespace-only content
        assert all(
            not getattr(m, "content", "") or str(getattr(m, "content", "")).strip() == ""
            for m in assistant_messages
        )
    else:
        # Alternatively, no assistant messages at all is also an acceptable "empty" contract.
        assert assistant_messages == []

These tests assume:

  1. runner.was_aborted() is a callable method returning a bool, runner.final_llm_resp holds the final LLM response, and runner.run_context.messages is a list of message objects with role and content attributes.
  2. The step objects yielded by runner.step() either:
    • expose a .status attribute directly, or
    • have a .response attribute with a .status field set to "aborted" on abort.

If your actual API differs (e.g., different attribute names or response shapes), you will need to:

  • Update the getattr(..., "status", ...) checks to match your real step/response model.
  • Adjust how messages are accessed from run_context (e.g., runner.run_context.context.messages instead of runner.run_context.messages).
  • Align the “empty assistant output” assertions with the concrete structure of your final_llm_resp and message types.

You may also want to align docstrings and test names to your project’s existing test naming conventions if they differ.

Comment on lines +440 to +444
rest_responses = []
async for response in step_iter:
rest_responses.append(response)

assert any(resp.type == "aborted" for resp in rest_responses)

suggestion (testing): Tighten assertions on the aborted response and exercise hook behavior

Currently the test only asserts that an aborted response exists. To better lock in the expected behavior:

  1. Explicitly locate the aborted response and assert on its payload (e.g., MessageChain(type="aborted")) so changes to the aborted response shape are caught.
  2. Add an assertion that the appropriate hook (e.g., mock_hooks.on_agent_done) is called once with the final LLM response on abort, to ensure the stop path still runs cleanup/post-processing logic.

Suggested implementation:

    runner.request_stop()

    # Collect all remaining responses from the stream after requesting stop
    rest_responses = [first_resp]
    async for response in step_iter:
        rest_responses.append(response)

    # Locate the explicit aborted response and validate its payload
    aborted_resp = next(resp for resp in rest_responses if resp.type == "aborted")
    assert getattr(aborted_resp, "output", None) is not None
    # The aborted response should carry a MessageChain-style payload with type="aborted"
    assert aborted_resp.output.type == "aborted"

    # The runner should reflect that it was aborted
    assert runner.was_aborted() is True

    # The agent_done hook should still be called once with the final non-aborted response
    non_aborted_responses = [resp for resp in rest_responses if resp.type != "aborted"]
    final_response = non_aborted_responses[-1]
    mock_hooks.on_agent_done.assert_called_once_with(final_response)
  1. If aborted_resp.output is a MessageChain (or similar) type that is not yet imported in this test module, add the appropriate import at the top of the file, e.g.:
    • from inspect_ai.schema import MessageChain
  2. Ensure the mock_hooks fixture (or factory) passed into agent_hooks exposes on_agent_done as a Mock/MagicMock:
    • e.g., mock_hooks.on_agent_done = mocker.Mock() or similar, if not already present.
  3. If the actual attribute name for the payload differs from output (e.g., message or data), adjust aborted_resp.output accordingly to match the existing response schema.
  4. If the hook’s expected signature is (runner, final_response) or includes additional parameters, update assert_called_once_with to match that exact signature (you can inspect other tests in this file that assert on mock_hooks.on_agent_done for consistency).

)
)

stop_watcher = asyncio.create_task(

issue (complexity): Consider restructuring the stop handling so a single watcher owns request_stop() and shared helper logic cleans up stop_watcher in one place.

You can simplify the new stop logic without losing any behavior by:

  1. Having exactly one place that calls agent_runner.request_stop().
  2. De-duplicating the stop_watcher cancellation logic.

1. Single authority for request_stop

Right now:

  • _watch_agent_stop_signal calls agent_runner.request_stop().
  • The main loop also calls agent_runner.request_stop() based on _should_stop_agent.

That makes it harder to reason about the lifecycle. Pick one authority. For example: let the watcher be the only place that calls request_stop(), and let the loop only observe the stop state:

async for resp in agent_runner.step():
    # Only observe stop state; do not call request_stop() here
    if _should_stop_agent(astr_event):
        if resp.type == "aborted":
            # special aborted handling
            astr_event.set_extra("agent_user_aborted", True)
            astr_event.set_extra("agent_stop_requested", False)
            break
        # cooperative stop: just stop consuming further responses
        break

    if resp.type == "aborted":
        astr_event.set_extra("agent_user_aborted", True)
        astr_event.set_extra("agent_stop_requested", False)
        break

    if resp.type == "tool_call_result":
        ...

And keep request_stop() only in the watcher:

async def _watch_agent_stop_signal(agent_runner: AgentRunner, astr_event) -> None:
    while not agent_runner.done():
        if _should_stop_agent(astr_event):
            agent_runner.request_stop()
            return
        await asyncio.sleep(0.5)

This keeps the same semantics but makes it clear that the watcher owns the stop request, and the loop only reacts to the stop state.

2. Factor out repeated stop_watcher cancellation

The stop_watcher cancellation pattern is repeated in three places. Extract a tiny helper and/or use try/finally so it’s centralized.

Helper:

async def _cancel_task_safely(task: asyncio.Task | None) -> None:
    if not task or task.done():
        return
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass

Then in run_agent:

stop_watcher: asyncio.Task | None = None
try:
    stop_watcher = asyncio.create_task(
        _watch_agent_stop_signal(agent_runner, astr_event),
    )

    async for resp in agent_runner.step():
        ...
        if resp.type == "aborted":
            astr_event.set_extra("agent_user_aborted", True)
            astr_event.set_extra("agent_stop_requested", False)
            break
        ...

    if agent_runner.done():
        ...
        break

except Exception as e:
    await _cancel_task_safely(stop_watcher)
    ...
finally:
    # ensure watcher is always cleaned up, even on normal completion
    await _cancel_task_safely(stop_watcher)

This removes the per-branch duplication and makes the lifetime of stop_watcher obvious: created once, always cancelled/awaited in one place.

),
),
)
if self._stop_requested:

issue (complexity): Consider extracting helpers for building the interrupted LLM response and finalizing aborts so the stop/abort logic in step() is centralized and less repetitive.

You can centralize the stop/abort path and remove duplication by extracting two small helpers: one to construct the “interrupted” LLMResponse, and one to finalize the abort (state, stats, messages, hooks, yield). That keeps step()’s control flow simpler while preserving behavior.

1. Extract a helper to build the interrupted LLMResponse

Right now the “user interrupted” system string and normalization are duplicated in multiple places. Wrap that into a single helper:

SYSTEM_INTERRUPTED_TEXT = (
    "[SYSTEM: User actively interrupted the response generation. "
    "Partial output before interruption is preserved.]"
)

def _build_interrupted_response(
    self,
    base_resp: LLMResponse | None,
) -> LLMResponse:
    """
    Normalize an interrupted response into an assistant LLMResponse.

    - If base_resp is provided, preserve reasoning fields / partial content.
    - Ensure role='assistant'.
    - Ensure completion_text carries the system interruption message
      when appropriate.
    """
    if base_resp is None:
        return LLMResponse(role="assistant", completion_text="")

    if base_resp.role != "assistant":
        return LLMResponse(
            role="assistant",
            completion_text=SYSTEM_INTERRUPTED_TEXT,
            reasoning_content=base_resp.reasoning_content,
            reasoning_signature=base_resp.reasoning_signature,
        )

    # base_resp is already assistant; ensure completion_text has something
    if not base_resp.completion_text:
        return LLMResponse(
            role="assistant",
            completion_text=SYSTEM_INTERRUPTED_TEXT,
            reasoning_content=base_resp.reasoning_content,
            reasoning_signature=base_resp.reasoning_signature,
        )

    return base_resp

Now both the in-loop and post-loop stop branches can reuse this instead of hard-coding strings and logic.

2. Extract a focused abort finalizer

All the stop-path side effects (state, stats, messages, hook, and yielding AgentResponse) can move into a single private method:

async def _finalize_abort(self, llm_resp: LLMResponse) -> AsyncIterator[AgentResponse]:
    logger.info("Agent execution was requested to stop by user.")

    self.final_llm_resp = llm_resp
    self._aborted = True
    self._transition_state(AgentState.DONE)
    self.stats.end_time = time.time()

    parts: list[Part] = []
    if llm_resp.reasoning_content or llm_resp.reasoning_signature:
        parts.append(
            ThinkPart(
                think=llm_resp.reasoning_content,
                encrypted=llm_resp.reasoning_signature,
            )
        )
    if llm_resp.completion_text:
        parts.append(TextPart(text=llm_resp.completion_text))
    if parts:
        self.run_context.messages.append(
            Message(role="assistant", content=parts)
        )

    try:
        await self.agent_hooks.on_agent_done(self.run_context, llm_resp)
    except Exception as e:
        logger.error("Error in on_agent_done hook: %s", e, exc_info=True)

    yield AgentResponse(
        type="aborted",
        data=AgentResponseData(chain=MessageChain(type="aborted")),
    )

3. Simplify step() to a single authoritative abort path

With those helpers, step() only needs:

  • A normalized llm_resp_result set when stop is requested (either in-loop or post-loop).
  • A single early exit that delegates to _finalize_abort.

For example (showing just the relevant parts around your changes):

# inside the streaming loop
for llm_response in llm_responses:
    ...
    if self._stop_requested:
        # Use the latest chunk to build an interrupted response
        llm_resp_result = self._build_interrupted_response(llm_response)
        break

    # Track the latest response so the post-loop code can use it
    llm_resp_result = llm_response
    ...

# after the loop
if not llm_resp_result:
    if self._stop_requested:
        # No final chunk; still need a normalized interrupted response
        llm_resp_result = self._build_interrupted_response(None)
    else:
        return

if self._stop_requested:
    async for aborted_resp in self._finalize_abort(llm_resp_result):
        yield aborted_resp
    return

This keeps:

  • The stop behavior and stats/state/messaging identical.
  • A single, clearly visible abort branch.
  • All “user interrupted” strings and normalization logic centralized in _build_interrupted_response.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

The pull request introduces valuable 'stop' functionality for active agent sessions, letting users interrupt long-running tasks across the backend, utilities, and frontend. However, the command-line 'stop' command has a critical security gap: it performs no permission checks, so any user in a group chat could disrupt another user's interaction. The implementation could also be improved by avoiding per-step task creation in run_agent's stop-signal monitoring, reducing the latency introduced by the watcher's 0.5s polling interval, and accumulating streamed chunks in the runner so that partial output is correctly preserved in the database history.

Comment on lines +334 to +339
llm_resp_result = LLMResponse(
    role="assistant",
    completion_text="[SYSTEM: User actively interrupted the response generation. Partial output before interruption is preserved.]",
    reasoning_content=llm_response.reasoning_content,
    reasoning_signature=llm_response.reasoning_signature,
)
high

The runner currently does not accumulate streaming chunks. When a stop is requested, llm_resp_result is created using only the data from the current chunk. Consequently, all text and reasoning content yielded in previous chunks will be missing from the final response saved to the conversation history. To correctly preserve partial output as intended, you should accumulate completion_text and reasoning_content in buffers throughout the streaming loop and use those buffers when constructing the aborted LLMResponse.

Comment on lines +105 to +117
async def stop(self, message: AstrMessageEvent) -> None:
    """Stop the agent currently running in this session."""
    cfg = self.context.get_config(umo=message.unified_msg_origin)
    agent_runner_type = cfg["provider_settings"]["agent_runner_type"]
    umo = message.unified_msg_origin

    if agent_runner_type in THIRD_PARTY_AGENT_RUNNER_KEY:
        stopped_count = active_event_registry.stop_all(umo, exclude=message)
    else:
        stopped_count = active_event_registry.request_agent_stop_all(
            umo,
            exclude=message,
        )
security (medium)

The stop command lacks permission checks, allowing any user in a shared session (such as a group chat) to interrupt an active agent task initiated by another user. In contrast, other destructive or disruptive commands in the same file, such as reset and del_conv, implement permission checks that default to requiring administrator privileges in group settings. This inconsistency allows a regular member to perform a denial-of-service-like action against other members' interactions with the bot.
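A minimal sketch of such a gate, assuming hypothetical `is_group`/`is_admin` fields — AstrBot's real event and permission API may differ:

```python
from dataclasses import dataclass


@dataclass
class FakeMessage:
    # Stand-in for the real message event; field names are assumptions
    sender_id: str
    is_group: bool
    is_admin: bool


def may_stop(message: FakeMessage) -> bool:
    """In group chats, only administrators may stop another user's agent run,
    mirroring how reset/del_conv are gated."""
    if message.is_group and not message.is_admin:
        return False
    return True
```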

Comment on lines +55 to +57
stop_watcher = asyncio.create_task(
    _watch_agent_stop_signal(agent_runner, astr_event),
)
medium

Creating and cancelling the stop_watcher task inside the while loop for every step is inefficient. If an agent run takes many steps, this results in frequent task creation and destruction. It is recommended to create a single watcher task for the entire duration of the run_agent execution and clean it up in a finally block.
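The single-watcher shape could look roughly like this self-contained sketch; `FakeRunner` and its `_watch` method are illustrative stand-ins, not the PR's `run_agent`:

```python
import asyncio


class FakeRunner:
    """Stand-in for the agent runner: runs steps until stopped."""

    def __init__(self) -> None:
        self.stop_requested = False
        self.steps_run = 0

    async def run(self, stop_event: asyncio.Event, max_steps: int = 100) -> None:
        # One watcher task for the whole run, instead of one per step
        watcher = asyncio.create_task(self._watch(stop_event))
        try:
            for _ in range(max_steps):
                if self.stop_requested:
                    break
                self.steps_run += 1
                await asyncio.sleep(0)  # stands in for one agent step
        finally:
            # Single cleanup point in `finally`
            if not watcher.done():
                watcher.cancel()
                try:
                    await watcher
                except asyncio.CancelledError:
                    pass

    async def _watch(self, stop_event: asyncio.Event) -> None:
        await stop_event.wait()
        self.stop_requested = True
```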

if _should_stop_agent(astr_event):
    agent_runner.request_stop()
    return
await asyncio.sleep(0.5)
medium

A 0.5-second polling interval introduces a noticeable delay between the user clicking the stop button and the agent actually stopping. Reducing this interval to 0.1s would make the functionality feel much more responsive.

Suggested change
-    await asyncio.sleep(0.5)
+    await asyncio.sleep(0.1)
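Beyond shrinking the interval, the polling could be avoided entirely if the stop request set an `asyncio.Event` that the watcher awaits — a sketch of that alternative, assuming the stop path can be restructured to set an event:

```python
import asyncio


async def watch_stop(stop_event: asyncio.Event, on_stop) -> None:
    # Wakes as soon as the event is set: no polling lag at all
    await stop_event.wait()
    on_stop()
```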

Comment on lines +383 to +389
# if user_aborted:
#     message_to_save.append(
#         Message(
#             role="assistant",
#             content="[User aborted this request. Partial output before abort was preserved.]",
#         ).model_dump()
#     )
medium

This block of commented-out code should be removed to maintain a clean codebase.

Comment on lines +631 to +634
umo = (
    f"{session.platform_id}:{message_type}:"
    f"{session.platform_id}!{username}!{session_id}"
)
medium

The logic for constructing the umo (Unified Message Origin) string is duplicated here and in the delete_webchat_session method. This duplication is fragile; if the UMO format requirements change, it's easy to miss one of these locations. Consider extracting this logic into a helper method or a property on the session model.
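One way to centralize it, sketched with an assumed cut-down session model (field names taken from the excerpt, the rest hypothetical):

```python
from dataclasses import dataclass


@dataclass
class WebChatSession:
    # Cut-down stand-in for the real session model
    platform_id: str

    def build_umo(self, message_type: str, username: str, session_id: str) -> str:
        """Single owner of the UMO format, shared by both call sites."""
        return (
            f"{self.platform_id}:{message_type}:"
            f"{self.platform_id}!{username}!{session_id}"
        )
```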

Comment on lines +64 to +69
if not stop_watcher.done():
    stop_watcher.cancel()
    try:
        await stop_watcher
    except asyncio.CancelledError:
        pass
medium

This cleanup logic for the stop_watcher is repeated three times in this file (lines 64-69, 145-150, and 164-169). Extracting this into a small helper function would improve maintainability and reduce code duplication.
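The three copies could collapse into one small helper, roughly:

```python
import asyncio


async def cancel_task_quietly(task: asyncio.Task) -> None:
    """Cancel a task and swallow the resulting CancelledError, if any."""
    if not task.done():
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            pass
```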

@Soulter Soulter merged commit e357d9d into master Feb 23, 2026
6 checks passed
astrbot-doc-agent bot pushed a commit to AstrBotDevs/AstrBot-docs that referenced this pull request Feb 23, 2026
@astrbot-doc-agent

Generated docs update PR (pending manual review):
AstrBotDevs/AstrBot-docs#135
Trigger: PR merged


AI change summary:

  • Added documentation for the /stop command to zh/use/command.md and en/use/command.md, describing how to interrupt the current Agent task.
  • Added a chat section to zh/use/webui.md and en/use/webui.md describing the stop button to the right of the input box.
  • The Chinese and English docs have been updated in sync.

Experimental bot notice:

  • This output is generated by AstrBot-Doc-Agent for review only.
  • It does not represent the final documentation form.

LIghtJUNction added a commit that referenced this pull request Feb 27, 2026
* feat: add bocha web search tool (#4902)

* add bocha web search tool

* Revert "add bocha web search tool"

This reverts commit 1b36d75a17b4c4751828f31f6759357cd2d4000a.

* add bocha web search tool

* fix: correct temporary_cache spelling and update supported tools for web search

* ruff

---------

Co-authored-by: Soulter <905617992@qq.com>

* fix: messages[x] assistant content must contain at least one part (#4928)

* fix: messages[x] assistant content must contain at least one part

fixes: #4876

* ruff format

* chore: bump version to 4.14.5 (#4930)

* feat: implement feishu / lark media file handling utilities for file, audio and video processing (#4938)

* feat: implement media file handling utilities for audio and video processing

* feat: refactor file upload handling for audio and video in LarkMessageEvent

* feat: add cleanup for failed audio and video conversion outputs in media_utils

* feat: add utility methods for sending messages and uploading files in LarkMessageEvent

* fix: correct spelling of 'temporary' in SharedPreferences class

* perf: optimize webchat and wecom ai queue lifecycle (#4941)

* perf: optimize webchat and wecom ai queue lifecycle

* perf: enhance webchat back queue management with conversation ID support

* fix: localize provider source config UI (#4933)

* fix: localize provider source ui

* feat: localize provider metadata keys

* chore: add provider metadata translations

* chore: format provider i18n changes

* fix: preserve metadata fields in i18n conversion

* fix: internationalize platform config and dialog

* fix: add Weixin official account platform icon

---------

Co-authored-by: Soulter <905617992@qq.com>

* chore: bump version to 4.14.6

* feat: add provider-souce-level proxy (#4949)

* feat: add Provider-level proxy support and request-failure logging

* refactor: simplify provider source configuration structure

* refactor: move env proxy fallback logic to log_connection_failure

* refactor: update client proxy handling and add terminate method for cleanup

* refactor: update no_proxy configuration to remove redundant subnet

---------

Co-authored-by: Soulter <905617992@qq.com>

* feat(ComponentPanel):  implement permission management for dashboard (#4887)

* feat(backend): add permission update api

* feat(useCommandActions): add updatePermission action and translations

* feat(dashboard): implement permission editing ui

* style: fix import sorting in command.py

* refactor(backend): extract permission update logic to service

* feat(i18n): add success and failure messages for command updates

---------

Co-authored-by: Soulter <905617992@qq.com>

* feat: allow the LLM to preview images returned by tools and decide on its own whether to send them (#4895)

* feat: allow the LLM to preview images returned by tools and decide on its own whether to send them

* Reuse send_message_to_user instead of a standalone image-sending tool

* feat: implement _HandleFunctionToolsResult class for improved tool response handling

* docs: add path handling guidelines to AGENTS.md

---------

Co-authored-by: Soulter <905617992@qq.com>

* feat(telegram): add media group (album) support (#4893)

* feat(telegram): add media group (album) support

## Feature description
Supports Telegram media group messages (albums): multiple photos/videos are merged and handled as a single message instead of being scattered across several messages.

## Main changes

### 1. Initialize the media group cache (__init__)
- Add a `media_group_cache` dict to store pending media group messages
- Collect media group messages with a 2.5-second timeout (based on community best practice)
- Cap the maximum wait at 10 seconds (to prevent waiting forever)

### 2. Message handling flow (message_handler)
- Check `media_group_id` to detect media group messages
- Media group messages take a dedicated path to avoid being processed piecemeal

### 3. Media group message caching (handle_media_group_message)
- Cache incoming media group messages
- Use APScheduler to implement a debounce mechanism
- Reset the timeout timer each time a new message arrives
- Trigger unified processing once the timeout fires

### 4. Media group merge processing (process_media_group)
- Take all media items out of the cache
- Use the first message as the base (preserving its text, reply info, etc.)
- Append every photo, video, and document to the message chain in order
- Send the merged message into the processing pipeline

## Rationale for the technical approach

Design constraints of the Telegram Bot API when handling media groups:
1. Each message in a media group is delivered as an independent update
2. Every update carries the same `media_group_id`
3. The API provides **no** group total, end marker, or way to receive the complete group at once

A bot therefore has to collect the messages itself and wait for possibly delayed messages via a hard-coded timeout/delay.
This is currently the only reliable approach, and it is widely adopted by the official implementation, mainstream frameworks, and the developer community.

### Official and community evidence:
- **Telegram Bot API server implementation (tdlib)**: explicitly notes the lack of an end marker or total count
  https://github.com/tdlib/telegram-bot-api/issues/643

- **Telegram Bot API server issue**: discusses the awkwardness of media group handling and recommends a timeout mechanism
  https://github.com/tdlib/telegram-bot-api/issues/339

- **Telegraf (Node.js framework)**: its dedicated media group middleware uses a timeout to control the wait
  https://github.com/DieTime/telegraf-media-group

- **StackOverflow discussion**: all files of a media group cannot be fetched at once; they must be collected manually
  https://stackoverflow.com/questions/50180048/telegram-api-get-all-uploaded-photos-by-media-group-id

- **python-telegram-bot community**: confirms media group messages arrive individually and must be handled manually
  https://github.com/python-telegram-bot/python-telegram-bot/discussions/3143

- **Telegram Bot API official docs**: only define `media_group_id` as an optional field; no interface returns the complete group
  https://core.telegram.org/bots/api#message

## Implementation details
- Collect media group messages with a 2.5-second timeout (based on community best practice)
- Cap the maximum wait at 10 seconds (to prevent waiting forever)
- Debounce mechanism: reset the timer on every new message
- Use APScheduler for delayed processing and task scheduling

## Testing
- ✅ Sent a 5-photo album; it was merged into a single message
- ✅ Original caption text and reply info preserved
- ✅ Mixed media groups of photos, videos, and documents supported
- ✅ Logs show: Processing media group <media_group_id> with 5 items

## Code changes
- File: astrbot/core/platform/sources/telegram/tg_adapter.py
- Lines added: 124
- New methods: handle_media_group_message(), process_media_group()
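The debounce-with-maximum-wait idea described in this commit message can be sketched with plain asyncio (the adapter itself uses APScheduler; `MediaGroupCollector` and its API are illustrative only):

```python
import asyncio
import time


class MediaGroupCollector:
    """Debounce collector: flush a group after `debounce` seconds of quiet,
    but never later than `max_wait` seconds after its first item."""

    def __init__(self, debounce: float = 2.5, max_wait: float = 10.0) -> None:
        self.debounce = debounce
        self.max_wait = max_wait
        self._groups: dict[str, dict] = {}

    async def add(self, group_id: str, item, process) -> None:
        loop = asyncio.get_running_loop()
        group = self._groups.setdefault(
            group_id, {"items": [], "first_seen": time.monotonic(), "timer": None}
        )
        group["items"].append(item)
        # Debounce: every new item resets the timer...
        if group["timer"] is not None:
            group["timer"].cancel()
        # ...but never wait longer than max_wait from the first item.
        elapsed = time.monotonic() - group["first_seen"]
        delay = min(self.debounce, max(0.0, self.max_wait - elapsed))
        group["timer"] = loop.call_later(
            delay, lambda: loop.create_task(self._flush(group_id, process))
        )

    async def _flush(self, group_id: str, process) -> None:
        group = self._groups.pop(group_id, None)
        if group:
            await process(group["items"])
```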

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* refactor(telegram): optimize media group handling performance and reliability

Improvements based on code review feedback:

1. Implement media_group_max_wait to prevent unbounded delay
   - Track each media group's creation time and process it immediately once the maximum wait is exceeded
   - Worst case, processing happens within 10 seconds, so continuously arriving messages cannot delay it forever

2. Remove the manual job lookup to improve performance
   - Drop the O(N) get_jobs() loop scan
   - Rely on replace_existing=True to replace jobs automatically

3. Reuse convert_message to reduce code duplication
   - Unify the conversion logic for all media types
   - Adding a new media type in the future only requires changing one place

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix(telegram): handle missing message in media group processing and improve logging messages

---------

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Soulter <905617992@qq.com>

* feat: add welcome feature with localized content and onboarding steps

* fix: correct height attribute to max-height for dialog component

* feat: supports electron app (#4952)

* feat: add desktop wrapper with frontend-only packaging

* docs: add desktop build docs and track dashboard lockfile

* fix: track desktop lockfile for npm ci

* fix: allow custom install directory for windows installer

* chore: migrate desktop workflow to pnpm

* fix(desktop): build AppImage only on Linux

* fix(desktop): harden packaged startup and backend bundling

* fix(desktop): adapt packaged restart and plugin dependency flow

* fix(desktop): prevent backend respawn race on quit

* fix(desktop): prefer pyproject version for desktop packaging

* fix(desktop): improve startup loading UX and reduce flicker

* ci: add desktop multi-platform release workflow

* ci: fix desktop release build and mac runner labels

* ci: disable electron-builder auto publish in desktop build

* ci: avoid electron-builder publish path in build matrix

* ci: normalize desktop release artifact names

* ci: exclude blockmap files from desktop release assets

* ci: prefix desktop release assets with AstrBot and purge blockmaps

* feat: add electron bridge types and expose backend control methods in preload script

* Update startup screen assets and styles

- Changed the icon from PNG to SVG format for better scalability.
- Updated the border color from #d0d0d0 to #eeeeee for a softer appearance.
- Adjusted the width of the startup screen from 460px to 360px for improved responsiveness.

* Update .gitignore to include package.json

* chore: remove desktop gitkeep ignore exceptions

* docs: update desktop troubleshooting for current runtime behavior

* refactor(desktop): modularize runtime and harden startup flow

---------

Co-authored-by: Soulter <905617992@qq.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* fix: dedupe preset messages (#4961)

* feat: enhance package.json with resource filters and compression settings

* chore: update Python version requirements to 3.12 (#4963)

* chore: bump version to 4.14.7

* feat: refactor release workflow and add special update handling for electron app (#4969)

* chore: bump version to 4.14.8 and bump faiss-cpu version up to date

* chore: auto ann fix by ruff (#4903)

* chore: auto fix by ruff

* refactor: unify return type annotations to None/bool to match the implementations

* refactor: make _get_next_page async and remove the redundant request-error raise

* refactor: change get_client's return type to object

* style: add None return type annotations to the relevant LarkMessageEvent methods

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* fix: prepare OpenSSL via vcpkg for Windows ARM64

* ci: change ghcr namespace

* chore: update pydantic dependency version (#4980)

* feat: add delete button to persona management dialog (#4978)

* Initial plan

* feat: add delete button to persona management dialog

- Added delete button to PersonaForm dialog (only visible when editing)
- Implemented deletePersona method with confirmation dialog
- Connected delete event to PersonaManager for proper handling
- Button positioned on left side of dialog actions for clear separation
- Uses existing i18n translations for delete button and messages

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* fix: use finally block to ensure saving state is reset

- Moved `this.saving = false` to finally block in deletePersona
- Ensures UI doesn't stay in saving state after errors
- Follows best practices for state management

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* feat: enhance Dingtalk adapter with active push message and image, video, audio message type (#4986)

* fix: handle pip install execution in frozen runtime (#4985)

* fix: handle pip install execution in frozen runtime

* fix: harden pip subprocess fallback handling

* fix: collect certifi data in desktop backend build (#4995)

* feat: support proactive message push for the WeCom app, and improve audio handling for the WeCom app, WeChat Official Accounts, and WeChat Customer Service (#4998)

* feat: support proactive message push for the WeCom AI bot, plus sending video, file, and other message types (#4999)

* feat: enhance WecomAIBotAdapter and WecomAIBotMessageEvent for improved streaming message handling (#5000)

fixes: #3965

* feat: enhance persona tool management and update UI localization for subagent orchestration (#4990)

* feat: enhance persona tool management and update UI localization for subagent orchestration

* fix: remove debug logging for final ProviderRequest in build_main_agent function

* perf: stabilize pip install behavior in source and Electron-packaged environments, and fix the redirect dialog shown when clicking the WebUI update button outside Electron (#4996)

* fix: handle pip install execution in frozen runtime

* fix: harden pip subprocess fallback handling

* fix: scope global data root to packaged electron runtime

* refactor: inline frozen runtime check for electron guard

* fix: prefer current interpreter for source pip installs

* fix: avoid resolving venv python symlink for pip

* refactor: share runtime environment detection utilities

* fix: improve error message when pip module is unavailable

* fix: raise ImportError when pip module is unavailable

* fix: preserve ImportError semantics for missing pip

* fix: stop showing the Electron update dialog when updating outside the Electron app

---------

Co-authored-by: Soulter <905617992@qq.com>

* fix: 'HandoffTool' object has no attribute 'agent' (#5005)

* fix: move the agent assignment to after super().__init__

* add: add a clarifying comment

* chore(deps): bump the github-actions group with 2 updates (#5006)

Bumps the github-actions group with 2 updates: [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) and [actions/download-artifact](https://github.com/actions/download-artifact).


Updates `astral-sh/setup-uv` from 6 to 7
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](https://github.com/astral-sh/setup-uv/compare/v6...v7)

Updates `actions/download-artifact` from 6 to 7
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v6...v7)

---
updated-dependencies:
- dependency-name: astral-sh/setup-uv
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: actions/download-artifact
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix: stabilize packaged runtime pip/ssl behavior and mac font fallback (#5007)

* fix: patch pip distlib finder for frozen electron runtime

* fix: use certifi CA bundle for runtime SSL requests

* fix: configure certifi CA before core imports

* fix: improve mac font fallback for dashboard text

* fix: harden frozen pip patch and unify TLS connector

* refactor: centralize dashboard CJK font fallback stacks

* perf: reuse TLS context and avoid repeated frozen pip patch

* refactor: bootstrap TLS setup before core imports

* fix: use async confirm dialog for provider deletions

* fix: replace native confirm dialogs in dashboard

- Add shared confirm helper in dashboard/src/utils/confirmDialog.ts for async dialog usage with safe fallback.

- Migrate provider, chat, config, session, platform, persona, MCP, backup, and knowledge-base delete/close confirmations to use the shared helper.

- Remove scattered inline confirm handling to keep behavior consistent and avoid native blocking dialog focus/caret issues in Electron.

* fix: capture runtime bootstrap logs after logger init

- Add bootstrap record buffer in runtime_bootstrap for early TLS patch logs before logger is ready.

- Flush buffered bootstrap logs to astrbot logger at process startup in main.py.

- Include concrete exception details for TLS bootstrap failures to improve diagnosis.

* fix: harden runtime bootstrap and unify confirm handling

- Simplify bootstrap log buffering and add a public initialize hook for non-main startup paths.

- Guard aiohttp TLS patching with feature/type checks and keep graceful fallback when internals are unavailable.

- Standardize dashboard confirmation flow via shared confirm helpers across composition and options API components.

* refactor: simplify runtime tls bootstrap and tighten confirm typing

* refactor: align ssl helper namespace and confirm usage

* fix: fix backend restart failure in the packaged Windows build (#5009)

* fix: patch pip distlib finder for frozen electron runtime

* fix: use certifi CA bundle for runtime SSL requests

* fix: configure certifi CA before core imports

* fix: improve mac font fallback for dashboard text

* fix: harden frozen pip patch and unify TLS connector

* refactor: centralize dashboard CJK font fallback stacks

* perf: reuse TLS context and avoid repeated frozen pip patch

* refactor: bootstrap TLS setup before core imports

* fix: use async confirm dialog for provider deletions

* fix: replace native confirm dialogs in dashboard

- Add shared confirm helper in dashboard/src/utils/confirmDialog.ts for async dialog usage with safe fallback.

- Migrate provider, chat, config, session, platform, persona, MCP, backup, and knowledge-base delete/close confirmations to use the shared helper.

- Remove scattered inline confirm handling to keep behavior consistent and avoid native blocking dialog focus/caret issues in Electron.

* fix: capture runtime bootstrap logs after logger init

- Add bootstrap record buffer in runtime_bootstrap for early TLS patch logs before logger is ready.

- Flush buffered bootstrap logs to astrbot logger at process startup in main.py.

- Include concrete exception details for TLS bootstrap failures to improve diagnosis.

* fix: harden runtime bootstrap and unify confirm handling

- Simplify bootstrap log buffering and add a public initialize hook for non-main startup paths.

- Guard aiohttp TLS patching with feature/type checks and keep graceful fallback when internals are unavailable.

- Standardize dashboard confirmation flow via shared confirm helpers across composition and options API components.

* refactor: simplify runtime tls bootstrap and tighten confirm typing

* refactor: align ssl helper namespace and confirm usage

* fix: avoid frozen restart crash from multiprocessing import

* fix: include missing frozen dependencies for windows backend

* fix: use execv for stable backend reboot args

* Revert "fix: use execv for stable backend reboot args"

This reverts commit 9cc27becffeba0e117fea26aa5c2e1fe7afc6e36.

* Revert "fix: include missing frozen dependencies for windows backend"

This reverts commit 52554bea1fa61045451600c64447b7bf38cf6c92.

* Revert "fix: avoid frozen restart crash from multiprocessing import"

This reverts commit 10548645b0ba1e19b64194878ece478a48067959.

* fix: reset pyinstaller onefile env before reboot

* fix: unify electron restart path and tray-exit backend cleanup

* fix: stabilize desktop restart detection and frozen reboot args

* fix: make dashboard restart wait detection robust

* fix: revert dashboard restart waiting interaction tweaks

* fix: pass auth token for desktop graceful restart

* fix: avoid false failure during graceful restart wait

* fix: start restart waiting before electron restart call

* fix: harden restart waiting and reboot arg parsing

* fix: parse start_time as numeric timestamp

* fix: fix in-app restart errors, show the restart prompt immediately when restarting in-app, and refresh the UI promptly once the backend is ready (#5013)

* fix: patch pip distlib finder for frozen electron runtime

* fix: use certifi CA bundle for runtime SSL requests

* fix: configure certifi CA before core imports

* fix: improve mac font fallback for dashboard text

* fix: harden frozen pip patch and unify TLS connector

* refactor: centralize dashboard CJK font fallback stacks

* perf: reuse TLS context and avoid repeated frozen pip patch

* refactor: bootstrap TLS setup before core imports

* fix: use async confirm dialog for provider deletions

* fix: replace native confirm dialogs in dashboard

- Add shared confirm helper in dashboard/src/utils/confirmDialog.ts for async dialog usage with safe fallback.

- Migrate provider, chat, config, session, platform, persona, MCP, backup, and knowledge-base delete/close confirmations to use the shared helper.

- Remove scattered inline confirm handling to keep behavior consistent and avoid native blocking dialog focus/caret issues in Electron.

* fix: capture runtime bootstrap logs after logger init

- Add bootstrap record buffer in runtime_bootstrap for early TLS patch logs before logger is ready.

- Flush buffered bootstrap logs to astrbot logger at process startup in main.py.

- Include concrete exception details for TLS bootstrap failures to improve diagnosis.

* fix: harden runtime bootstrap and unify confirm handling

- Simplify bootstrap log buffering and add a public initialize hook for non-main startup paths.

- Guard aiohttp TLS patching with feature/type checks and keep graceful fallback when internals are unavailable.

- Standardize dashboard confirmation flow via shared confirm helpers across composition and options API components.

* refactor: simplify runtime tls bootstrap and tighten confirm typing

* refactor: align ssl helper namespace and confirm usage

* fix: avoid frozen restart crash from multiprocessing import

* fix: include missing frozen dependencies for windows backend

* fix: use execv for stable backend reboot args

* Revert "fix: use execv for stable backend reboot args"

This reverts commit 9cc27becffeba0e117fea26aa5c2e1fe7afc6e36.

* Revert "fix: include missing frozen dependencies for windows backend"

This reverts commit 52554bea1fa61045451600c64447b7bf38cf6c92.

* Revert "fix: avoid frozen restart crash from multiprocessing import"

This reverts commit 10548645b0ba1e19b64194878ece478a48067959.

* fix: reset pyinstaller onefile env before reboot

* fix: unify electron restart path and tray-exit backend cleanup

* fix: stabilize desktop restart detection and frozen reboot args

* fix: make dashboard restart wait detection robust

* fix: revert dashboard restart waiting interaction tweaks

* fix: pass auth token for desktop graceful restart

* fix: avoid false failure during graceful restart wait

* fix: start restart waiting before electron restart call

* fix: harden restart waiting and reboot arg parsing

* fix: parse start_time as numeric timestamp

* fix: preserve windows frozen reboot argv quoting

* fix: align restart waiting with electron restart timing

* fix: tighten graceful restart and unmanaged kill safety

* chore: bump version to 4.15.0 (#5003)

* fix: add reminder for v4.14.8 users regarding manual redeployment due to a bug

* fix: harden plugin dependency loading in frozen app runtime (#5015)

* fix: compare plugin versions semantically in market updates

* fix: prioritize plugin site-packages for in-process pip

* fix: reload starlette from plugin target site-packages

* fix: harden plugin dependency import precedence in frozen runtime

* fix: improve plugin dependency conflict handling

* refactor: simplify plugin conflict checks and version utils

* fix: expand transitive plugin dependencies for conflict checks

* fix: recover conflicting plugin dependencies during module prefer

* fix: reuse renderer restart flow for tray backend restart

* fix: add recoverable plugin dependency conflict handling

* revert: remove plugin version comparison changes

* fix: add missing tray restart backend labels

* feat: adding support for media and quoted message attachments for feishu (#5018)

* docs: add AUR installation method (#4879)

* docs: sync system package manager installation instructions to all languages

* Update README.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update README.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* fix/typo

* refactor: update system package manager installation instructions for Arch Linux across multiple language README files

* feat: add installation command for AstrBot in multiple language README files

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>

* fix(desktop): add size-based rotation for Electron and backend logs (#5029)

* fix(desktop): rotate electron and backend logs

* refactor(desktop): centralize log rotation defaults and debug fs errors

* fix(desktop): harden rotation fs ops and buffer backend log writes

* refactor(desktop): extract buffered logger and reduce sync stat calls

* refactor(desktop): simplify rotation flow and harden logger config

* fix(desktop): make app logging async and flush-safe

* fix: harden app log path switching and debug-gated rotation errors

* fix: cap buffered log chunk size during path switch

* feat: add first notice feature with multilingual support and UI integration

* fix: improve packaged desktop startup stability and optimize plugin dependency handling (#5031)

* fix(desktop): rotate electron and backend logs

* refactor(desktop): centralize log rotation defaults and debug fs errors

* fix(desktop): harden rotation fs ops and buffer backend log writes

* refactor(desktop): extract buffered logger and reduce sync stat calls

* refactor(desktop): simplify rotation flow and harden logger config

* fix(desktop): make app logging async and flush-safe

* fix: harden app log path switching and debug-gated rotation errors

* fix: cap buffered log chunk size during path switch

* fix: avoid redundant plugin reinstall and upgrade electron

* fix: stop webchat tasks cleanly and bind packaged backend to localhost

* fix: unify platform shutdown and await webchat listener cleanup

* fix: improve startup logs for dashboard and onebot listeners

* fix: revert extra startup service logs

* fix: harden plugin import recovery and webchat listener cleanup

* fix: pin dashboard ci node version to 24.13.0

* fix: avoid duplicate webchat listener cleanup on terminate

* refactor: clarify platform task lifecycle management

* fix: continue platform shutdown when terminate fails

* feat: temporary file handling and introduce TempDirCleaner (#5026)

* feat: temporary file handling and introduce TempDirCleaner

- Updated various modules to use `get_astrbot_temp_path()` instead of `get_astrbot_data_path()` for temporary file storage.
- Renamed temporary files for better identification and organization.
- Introduced `TempDirCleaner` to manage the size of the temporary directory, ensuring it does not exceed a specified limit by deleting the oldest files.
- Added configuration option for maximum temporary directory size in the dashboard.
- Implemented tests for `TempDirCleaner` to verify cleanup functionality and size management.

* ruff

* fix: close unawaited reset coroutine on early return (#5033)

When an OnLLMRequestEvent hook stops event propagation, the
reset_coro created by build_main_agent was never awaited, causing
a RuntimeWarning. Close the coroutine explicitly before returning.

Fixes #5032

Co-authored-by: Limitless2023 <limitless@users.noreply.github.com>
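The fix above can be sketched as follows (hypothetical names; `reset_context` stands in for the coroutine built by `build_main_agent`). Closing an un-awaited coroutine object releases it without emitting `RuntimeWarning: coroutine ... was never awaited`:

```python
import asyncio

async def reset_context():
    """Hypothetical stand-in for the reset coroutine from build_main_agent."""
    await asyncio.sleep(0)

async def handle_request(stop_propagation: bool) -> str:
    reset_coro = reset_context()  # created up front, may never be awaited
    if stop_propagation:
        # A hook stopped event propagation: close the coroutine explicitly
        # so no "never awaited" RuntimeWarning is emitted on early return.
        reset_coro.close()
        return "stopped"
    await reset_coro
    return "completed"

print(asyncio.run(handle_request(True)))   # → stopped
print(asyncio.run(handle_request(False)))  # → completed
```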

* fix: update error logging message for connection failures

* docs: clean and sync README (#5014)

* fix: close missing div in README

* fix: sync README_zh-TW with README

* fix: sync README

* fix: correct typo

correct url in README_en README_fr README_ru

* docs: sync README_en with README

* Update README_en.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* fix: provider extra param dialog key display error

* chore: ruff format

* feat: add send_chat_action for Telegram platform adapter (#5037)

* feat: add send_chat_action for Telegram platform adapter

Add typing/upload indicator when sending messages via Telegram.
- Added _send_chat_action helper method for sending chat actions
- Send appropriate action (typing, upload_photo, upload_document, upload_voice)
  before sending different message types
- Support streaming mode with typing indicator
- Support supergroup with message_thread_id

* refactor(telegram): extract chat action helpers and add throttling

- Add ACTION_BY_TYPE mapping for message type to action priority
- Add _get_chat_action_for_chain() to determine action from message chain
- Add _send_media_with_action() for upload → send → restore typing pattern
- Add _ensure_typing() helper for typing status
- Add chat action throttling (0.5s) in streaming mode to avoid rate limits
- Update type annotation to ChatAction | str for better static checking

* feat(telegram): implement send_typing method for Telegram platform

---------

Co-authored-by: Soulter <905617992@qq.com>
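The streaming-mode throttling mentioned above can be sketched roughly like this (hypothetical class, not the adapter's actual code): a chat action is only re-sent once at least 0.5 s has passed since the previous one, which keeps the bot under Telegram's rate limits.

```python
import time

class ChatActionThrottle:
    """Hypothetical 0.5 s throttle for typing indicators during streaming."""

    def __init__(self, interval: float = 0.5):
        self.interval = interval
        self._last = float("-inf")

    def should_send(self) -> bool:
        # Allow a chat action only if the interval has elapsed.
        now = time.monotonic()
        if now - self._last >= self.interval:
            self._last = now
            return True
        return False

throttle = ChatActionThrottle(interval=0.5)
print(throttle.should_send())  # → True (first action always allowed)
print(throttle.should_send())  # → False (too soon after the last one)
```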

* fix: resolve double scrollbar issue in the changelog and official docs dialogs (#5060)

* docs: sync and fix readme typo (#5055)

* docs: fix index typo

* docs: fix typo in README_en.md

- Remove Russian text that accidentally appeared in the English README and replace it with English

* docs: fix html typo

- remove unused '</p>'

* docs: sync table with README

* docs: sync README header format

- keep the README header format consistent

* doc: sync key features

* style: format files

- Fix formatting issues from previous PR

* fix: correct md anchor link

* docs: correct typo in README_fr.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* docs: correct typo in README_zh-TW.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* fix: add the missing persona folder mapping during backup (#5042)

* feat: QQ official bot platform: support proactive message push and receiving files in private chats (#5066)

* feat: QQ official bot platform: support proactive message push and receiving files in private chats

* feat: enhance QQOfficialWebhook to remember session scenes for group, channel, and friend messages

* perf: optimize initialization logic for segmented reply intervals (#5068)

fixes: #5059

* fix: chunk err when using openrouter deepseek (#5069)

* feat: add i18n supports for custom platform adapters (#5045)

* Feat: provide a data path for metadata & i18n of plugin-provided adapters

* chore: update docstrings with pull request references

Added references to pull request 5045 in docstrings.

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* fix: improve forwarded-quote parsing and image fallback, with configurable control (#5054)

* feat: support fallback image parsing for quoted messages

* fix: fallback parse quoted images when reply chain has placeholders

* style: format network utils with ruff

* test: expand quoted parser coverage and improve fallback diagnostics

* fix: fallback to text-only retry when image requests fail

* fix: tighten image fallback and resolve nested quoted forwards

* refactor: simplify quoted message extraction and dedupe images

* fix: harden quoted parsing and openai error candidates

* fix: harden quoted image ref normalization

* refactor: organize quoted parser settings and logging

* fix: cap quoted fallback images and avoid retry loops

* refactor: split quoted message parser into focused modules

* refactor: share onebot segment parsing logic

* refactor: unify quoted message parsing flow

* feat: move quoted parser tuning to provider settings

* fix: add missing i18n metadata for quoted parser settings

* chore: refine forwarded message setting labels

* fix: add config tabs and routing for normal and system configurations

* chore: bump version to 4.16.0 (#5074)

* feat: add LINE platform support with adapter and configuration (#5085)

* fix: correct FIRST_NOTICE.md locale path resolution (#5083) (#5082)

* fix: change the configuration file directory

* fix: add a fallback FIRST_NOTICE.zh-CN.md for compatibility

* fix: remove unnecessary frozen flag from requirements export in Dockerfile

fixes: #5089

* fix #5089: add uv lock step in Dockerfile before export (#5091)

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* feat: support hot reload after plugin load failure (#5043)

* add: support hot reload after plugin load failure

* Apply suggestions from code review

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* fix: reformat code

* fix: reformat code

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* feat: add fallback chat model chain in tool loop runner (#5109)

* feat: implement fallback provider support for chat models and update configuration

* feat: enhance provider selection display with count and chips for selected providers

* feat: update fallback chat providers to use provider settings and add warning for non-list fallback models

* feat: add Afdian support card to resources section in WelcomePage

* feat: replace colorlog with loguru for enhanced logging support (#5115)

* feat: add SSL configuration options for WebUI and update related logging (#5117)

* chore: bump version to 4.17.0

* fix: handle list format content from OpenAI-compatible APIs (#5128)

* fix: handle list format content from OpenAI-compatible APIs

Some LLM providers (e.g., GLM-4.5V via SiliconFlow) return content as
list[dict] format like [{'type': 'text', 'text': '...'}] instead of
plain string. This causes the raw list representation to be displayed
to users.

Changes:
- Add _normalize_content() helper to extract text from various content formats
- Use json.loads instead of ast.literal_eval for safer parsing
- Add size limit check (8KB) before attempting JSON parsing
- Only convert lists that match OpenAI content-part schema (has 'type': 'text')
  to avoid collapsing legitimate list-literal replies like ['foo', 'bar']
- Add strip parameter to preserve whitespace in streaming chunks
- Clean up orphan </think> tags that may leak from some models

Fixes #5124

* fix: improve content normalization safety

- Try json.loads first, fallback to ast.literal_eval for single-quoted
  Python literals to avoid corrupting apostrophes (e.g., "don't")
- Coerce text values to str to handle null or non-string text fields
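The normalization logic described above might look roughly like this (a simplified sketch with a hypothetical name, not the actual `_normalize_content` implementation; the real helper also falls back to `ast.literal_eval` for single-quoted literals and strips orphan `</think>` tags):

```python
import json

def normalize_content(content) -> str:
    """Extract plain text from string / dict / content-part payloads."""

    def join_text_parts(parts):
        return "".join(
            str(p.get("text") or "")
            for p in parts
            if isinstance(p, dict) and p.get("type") == "text"
        )

    def is_content_part_list(value):
        return isinstance(value, list) and any(
            isinstance(p, dict) and p.get("type") == "text" for p in value
        )

    if content is None:
        return ""
    if isinstance(content, dict) and content.get("type") == "text":
        return str(content.get("text") or "")
    if is_content_part_list(content):
        return join_text_parts(content)
    if isinstance(content, str):
        stripped = content.strip()
        # Size-capped parse; only collapse lists matching the OpenAI
        # content-part schema, so replies like "['foo', 'bar']" survive.
        if stripped.startswith("[") and len(stripped) <= 8 * 1024:
            try:
                parsed = json.loads(stripped)
            except ValueError:
                return content
            if is_content_part_list(parsed):
                return join_text_parts(parsed)
        return content
    return str(content)
```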

* fix: update retention logic in LogManager to handle backup count correctly

* chore: bump version to 4.17.1

* docs: Added instructions for deploying AstrBot using AstrBot Launcher. (#5136)

Added instructions for deploying AstrBot using AstrBot Launcher.

* fix: add MCP tools to function tool set in _plugin_tool_fix (#5144)

* fix: add support for collecting data from builtin stars in electron pyinstaller build (#5145)

* chore: bump version to 4.17.1

* chore: ruff format

* fix: prevent updates for AstrBot launched via launcher

* fix(desktop): include runtime deps for builtin plugins in backend build (#5146)

* fix: 'Plain' object has no attribute 'text' when using python 3.14 (#5154)

* fix: enhance plugin metadata handling by injecting attributes before instantiation (#5155)

* fix: enhance handle_result to support event context and webchat image sending

* chore: bump version to 4.17.3

* chore: ruff format

* feat: add NVIDIA provider template (#5157)

fixes: #5156

* feat: enhance provider sources panel with styled menu and mobile support

* fix: improve permission denied message for local execution in Python and shell tools

* feat: enhance PersonaForm component with responsive design and improved styling (#5162)

fix: #5159

* ui(CronJobPage): fix action column buttons overlapping in CronJobPage (#5163)

- Before: the action column container only used `d-flex`; when the page width narrowed, the children (the switch and the delete button) would visually overlap or even stack as their widths were squeezed.
- After:
    1. Added `flex-nowrap` to the container to forbid child wrapping.
    2. Set `min-width: 140px` so the column keeps a fixed protected width and is not squeezed by other long-text columns.
    3. Added `gap: 12px` spacing to improve action legibility and the click experience.

* feat: add unsaved changes notice to configuration page and update messages

* feat: implement search functionality in configuration components and update UI (#5168)

* feat: add FAQ link to vertical sidebar and update navigation for localization

* feat: add announcement section to WelcomePage and localize announcement title

* chore: bump version to 4.17.4

* feat: support sending markdown messages in qqofficial (#5173)

* feat: support sending markdown messages in qqofficial

closes: #1093 #918 #4180 #4264

* ruff format

* fix: prevent duplicate error message when all LLM providers fail (#5183)

* fix: fix incorrect config profile shown after selecting a profile, opening the profile management dialog, and closing it directly (#5174)

* feat: add MarketPluginCard component and integrate random plugin feature in ExtensionPage (#5190)

* feat: add MarketPluginCard component and integrate random plugin feature in ExtensionPage

* feat: update random plugin selection logic to use pluginMarketData and refresh on relevant events

* feat: support aihubmix

* docs: update readme

* chore: ruff format

* feat: add LINE support to multiple language README files

* feat(core): add plugin error hook for custom error routing (#5192)

* feat(core): add plugin error hook for custom error routing

* fix(core): align plugin error suppression with event stop state

* refactor: extract Voice_messages_forbidden fallback into shared helper with typed BadRequest exception (#5204)

- Add _send_voice_with_fallback helper to deduplicate voice forbidden handling
- Catch telegram.error.BadRequest instead of bare Exception with string matching
- Add text field to Record component to preserve TTS source text
- Store original text in Record during TTS conversion for use as document caption
- Skip _send_chat_action when chat_id is empty to avoid unnecessary warnings

* chore: bump version to 4.17.5

* feat: add admin permission checks for Python and Shell execution (#5214)

* fix: improve WeChat Official Account passive reply handling with buffering and chunked replies, and optimize timeout behavior (#5224)

* Fix WeChat Official Account passive reply functionality

* ruff format

---------

Co-authored-by: Soulter <905617992@qq.com>

* fix: fix empty-message reply error when sending only JSON message segments (#5208)

* Fix Register_Stage

· Complete the JSON message-type check, fixing the "message is empty, skipping send stage" error when sending JSON messages.
· Also complete checks for other message types.
Co-authored-by: Pizero <zhaory200707@outlook.com>

* Fix formatting and comments in stage.py

* Format stage.py

---------

Co-authored-by: Pizero <zhaory200707@outlook.com>

* docs: update related repo links

* fix(core): terminate active events on reset/new/del to prevent stale responses (#5225)

* fix(core): terminate active events on reset/new/del to prevent stale responses

Closes #5222

* style: fix import sorting in scheduler.py

* chore: remove Electron desktop pipeline and switch to tauri repo (#5226)

* ci: remove Electron desktop build from release pipeline

* chore: remove electron desktop and switch to tauri release trigger

* ci: remove desktop workflow dispatch trigger

* refactor: migrate data paths to astrbot_path helpers

* fix: point desktop update prompt to AstrBot-desktop releases

* fix: update feature request template for clarity and consistency in English and Chinese

* Feat/config leave confirm (#5249)

* feat: add an unsaved-changes prompt dialog to the configuration page

* fix: remove unsavedChangesDialog plugin and implement the dialog as a component

* feat: add support for plugin astrbot-version and platform requirement checks (#5235)

* feat: add support for plugin astrbot-version and platform requirement checks

* fix: remove unsupported platform and version constraints from metadata.yaml

* fix: remove restriction on 'v' in astrbot_version specification format

* ruff format

* feat: add password confirmation when changing password (#5247)

* feat: add password confirmation when changing password

Fixes #5177

Adds a password confirmation field to prevent accidental password typos.

Changes:
- Backend: validate confirm_password matches new_password
- Frontend: add confirmation input with validation
- i18n: add labels and error messages for password mismatch

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(auth): improve error message for password confirmation mismatch

* fix(auth): update password hashing logic and improve confirmation validation

---------

Co-authored-by: whatevertogo <whatevertogo@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(provider): fix JSON residue caused by dict-format content (#5250)

* fix(provider): fix JSON residue caused by dict-format content

Fix _normalize_content not handling dict-type content. When the LLM
returns content in the {"type": "text", "text": "..."} format, the
text field is now extracted correctly instead of the dict being
stringified.

Also improve the fallback behavior to return an empty string for None.

Fixes #5244

* Update warning message for unexpected dict format

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* chore: remove outdated heihe.md documentation file

* fix: all mcp tools exposed to main agent (#5252)

* fix: enhance PersonaForm layout and improve tool selection display

* fix: update tool status display and add localization for inactive tools

* fix: remove additionalProperties from tool schema properties (#5253)

fixes: #5217

* fix: simplify error messages for account edit validation

* fix: streamline error response for empty new username and password in account edit

* chore: bump version to 4.17.6

* feat: add OpenRouter provider support and icon

* chore: ruff format

* refactor(dashboard): replace legacy isElectron bridge fields with isDesktop (#5269)

* refactor dashboard desktop bridge fields from isElectron to isDesktop

* refactor dashboard runtime detection into shared helper

* fix: update contributor avatar image URL to include max size and columns (#5268)

* feat: astrbot http api (#5280)

* feat: astrbot http api

* Potential fix for code scanning alert no. 34: Use of a broken or weak cryptographic hashing algorithm on sensitive data

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* fix: improve error handling for missing attachment path in file upload

* feat: implement paginated retrieval of platform sessions for creators

* feat: refactor attachment directory handling in ChatRoute

* feat: update API endpoint paths for file and message handling

* feat: add documentation link to API key management section in settings

* feat: update API key scopes and related configurations in API routes and tests

* feat: enhance API key expiration options and add warning for permanent keys

* feat: add UTC normalization and serialization for API key timestamps

* feat: implement chat session management and validation for usernames

* feat: ignore session_id type chunks in message processing

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* feat(dashboard): improve plugin platform support display and mobile accessibility (#5271)

* feat(dashboard): improve plugin platform support display and mobile accessibility

- Replace hover-based tooltips with interactive click menus for platform support information.
- Fix mobile touch issues by introducing explicit state control for status capsules.
- Enhance UI aesthetics with platform-specific icons and a structured vertical list layout.
- Add dynamic chevron icons to provide clear visual cues for expandable content.

* refactor(dashboard): refactor market card with computed properties for performance

* refactor(dashboard): unify plugin platform support UI with new reusable chip component

- Create shared 'PluginPlatformChip' component to encapsulate platform meta display.
- Fix mobile interaction bugs by simplifying menu triggers and event handling.
- Add stacked platform icon previews and dynamic chevron indicators within capsules.
- Improve information hierarchy using structured vertical lists for platform details.
- Optimize rendering efficiency with computed properties across both card views.

* fix: qq official guild message send error (#5287)

* fix: qq official guild message send error

* Update astrbot/core/platform/sources/qqofficial/qqofficial_message_event.py

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* docs: update README with desktop app notes and move the section forward (#5297)

* docs: update desktop deployment section in README

* docs: refine desktop and launcher deployment descriptions

* Update README.md

* feat: add Anthropic Claude Code OAuth provider and adaptive thinking support (#5209)

* feat: add Anthropic Claude Code OAuth provider and adaptive thinking support

* fix: add defensive guard for metadata overrides and align budget condition with docs

* refactor: adopt sourcery-ai suggestions for OAuth provider

- Use use_api_key=False in OAuth subclass to avoid redundant
  API-key client construction before replacing with auth_token client
- Generalize metadata override helper to merge all dict keys
  instead of only handling 'limit', improving extensibility

* Feat/telegram command alias register  #5233 (#5234)

* feat: support registering command aliases for Telegram

Now when registering commands with aliases, all aliases will be
registered as Telegram bot commands in addition to the main command.

Example:
    @register_command(command_name="draw", alias={"画", "gen"})
Now /draw, /画, and /gen will all appear in the Telegram command menu.

* feat(telegram): add duplicate command name warning when registering commands

Log a warning when duplicate command names are detected during Telegram
command registration to help identify configuration conflicts.

* refactor: remove Anthropic OAuth provider implementation and related metadata overrides

* fix: fix config binding failure when creating a new conversation due to a missing session ID (#5292)

* fix: attempt a fix

* fix: add detailed logging

* fix: make detailed changes and add logging

* fix: remove all logging

* fix: add safe access helpers

- Wrap localStorage access in try/catch plus an availability check: dashboard/src/utils/chatConfigBinding.ts:13
- Add getFromLocalStorage/setToLocalStorage (fall back or ignore on exceptions under restricted storage or private browsing)
- getStoredDashboardUsername() / getStoredSelectedChatConfigId() now go through the safe readers: dashboard/src/utils/chatConfigBinding.ts:36
- Add setStoredSelectedChatConfigId(), silently ignoring write failures: dashboard/src/utils/chatConfigBinding.ts:44
- Replace all direct localStorage.getItem/setItem calls in ConfigSelector.vue with the safe helpers above: dashboard/src/components/chat/ConfigSelector.vue:81
- Re-ran pnpm run typecheck; it passes.

* rm: remove personal documentation files

* Revert "rm: remove personal documentation files"

This reverts commit 0fceee05434cfbcb11e45bb967a77d5fa93196bf.

* rm: remove personal documentation files

* rm: remove personal documentation files

* chore: bump version to 4.18.0

* fix(SubAgentPage): Flex layout squeezes the right-side control buttons when the middle description text is very long (#5306)

* fix: fix plugins showing as blank in the new plugin market; correct installed plugin card layout and unify card sizes (#5309)

* fix(ExtensionCard): resolve inconsistent plugin card sizes

* fix(MarketPluginCard): resolve plugin market not loading plugins (#5303)

* feat: support spawning a subagent as a background task that does not block the main agent workflow (#5081)

* feat: add a background-task parameter for subagents

* ruff

* fix: update terminology from 'handoff mission' to 'background task' and refactor related logic

* fix: update terminology from 'background_mission' to 'background_task' in HandoffTool and related logic

* fix(HandoffTool): update background_task description for clarity on usage

---------

Co-authored-by: Soulter <905617992@qq.com>

* cho

* fix: fix qq-botpy errors caused by a too-new aiohttp version (#5316)

* chore: ruff format

* fix: remove hard-coded 6s timeout from tavily request

* fix: remove changelogs directory from .dockerignore

* feat(dashboard): make release redirect base URL configurable (#5330)

* feat(dashboard): make desktop release base URL configurable

* refactor(dashboard): use generic release base URL env with upstream default

* fix(dashboard): guard release base URL normalization when env is unset

* refactor(dashboard): use generic release URL helpers and avoid latest suffix duplication

* feat: add stop functionality for active agent sessions and improve handling of stop requests (#5380)

* feat: add stop functionality for active agent sessions and improve handling of stop requests

* feat: update stop button icon and tooltip in ChatInput component

* fix: correct indentation in tool call handling within ChatRoute class

* fix: chatui cannot persist file segment (#5386)

* fix(plugin): update plugin directory handling for reserved plugins (#5369)

* fix(plugin): update plugin directory handling for reserved plugins

* fix(plugin): add warning logs for missing plugin name, object, directory, and changelog

* chore(README): sync localized READMEs with README.md (#5375)

* chore(README): sync localized READMEs with README.md

* Update README_fr.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* Update README_zh-TW.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* feat: add image urls / paths supports for subagent (#5348)

* fix: fix PR #5081's issue where subagent background tasks did not use the system-configured streaming/non-streaming request mode (#5081)

* feat: add remote image URL parameter support for subagents

* fix: update description for image_urls parameter in HandoffTool to clarify usage in multimodal tasks

* ruff format

---------

Co-authored-by: Soulter <905617992@qq.com>

* feat: add hot reload when plugins fail to load (#5334)

* feat: add hot reload when plugins fail to load

* apply bot suggestions

* fix(chatui): add copy rollback path and error message. (#5352)

* fix(chatui): add copy rollback path and error message.

* fix(chatui): fixed textarea leak in the copy button.

* fix(chatui): use color styles from the component library.

* fix: handle UTF-8 BOM encoding in configuration files (#5376)

* fix(config): handle UTF-8 BOM in configuration file loading

Problem:
On Windows, some text editors (like Notepad) automatically add UTF-8 BOM
to JSON files when saving. This causes json.decoder.JSONDecodeError:
"Unexpected UTF-8 BOM" and AstrBot fails to start when cmd_config.json
contains BOM.

Solution:
Add defensive check to strip UTF-8 BOM (\ufeff) if present before
parsing JSON configuration file.

Impact:
- Improves robustness and cross-platform compatibility
- No breaking changes to existing functionality
- Fixes startup failure when configuration file has UTF-8 BOM encoding

Relates-to: Windows editor compatibility issues

* style: fix code formatting with ruff

Fix single quote to double quote to comply with project code style.
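The defensive check described above might look like this minimal sketch (hypothetical function name; equivalently, opening the file with encoding `utf-8-sig` strips the BOM automatically):

```python
import json

def load_json_config(path: str) -> dict:
    """Load a JSON config, tolerating a UTF-8 BOM added by Windows editors."""
    with open(path, "r", encoding="utf-8") as f:
        text = f.read()
    # Notepad and friends may prepend U+FEFF; strip it before parsing
    # to avoid json.decoder.JSONDecodeError: "Unexpected UTF-8 BOM".
    if text.startswith("\ufeff"):
        text = text.lstrip("\ufeff")
    return json.loads(text)
```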

* feat: add plugin load&unload hook (#5331)

* Add hook events for plugin load-complete and unload-complete

* Add hook events for plugin load-complete and unload-complete

* format code with ruff

* ruff format

---------

Co-authored-by: Soulter <905617992@qq.com>

* test: enhance test framework with comprehensive fixtures and mocks (#5354)

* test: enhance test framework with comprehensive fixtures and mocks

- Add shared mock builders for aiocqhttp, discord, telegram
- Add test helpers for platform configs and mock objects
- Expand conftest.py with test profile support
- Update coverage test workflow configuration

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor(tests): move and refactor the mock LLM response and message component helpers

* fix(tests): improve marker-check logic in pytest_runtest_setup

---------

Co-authored-by: whatevertogo <whatevertogo@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

* test: add comprehensive tests for message event handling (#5355)

* test: add comprehensive tests for message event handling

- Add AstrMessageEvent unit tests (688 lines)
- Add AstrBotMessage unit tests
- Enhance smoke tests with message event scenarios

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: improve message type handling and add defensive tests

---------

Co-authored-by: whatevertogo <whatevertogo@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add support for showing tool call results in agent execution (#5388)

closes: #5329

* fix: resolve pipeline and star import cycles (#5353)

* fix: resolve pipeline and star import cycles

- Add bootstrap.py and stage_order.py to break circular dependencies
- Export Context, PluginManager, StarTools from star module
- Update pipeline __init__ to defer imports
- Split pipeline initialization into separate bootstrap module

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: add logging for get_config() failure in Star class

* fix: reorder logger initialization in base.py

---------

Co-authored-by: whatevertogo <whatevertogo@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: enable computer-use tools for subagent handoff (#5399)

* fix: enforce admin guard for sandbox file transfer tools (#5402)

* fix: enforce admin guard for sandbox file transfer tools

* refactor: deduplicate computer tools admin permission checks

* fix: add missing space in permission error message

* fix(core): improve File component handling and harden OneBot driver-layer path compatibility (#5391)

* fix(core): improve File component handling and harden OneBot driver-layer path compatibility

Necessity:
1. Core consistency: the AstrBot core's Record and Video components already recognize the `file:///` scheme prefix, but the File component lacked this logic, so behavior was inconsistent.
2. OneBot protocol compliance: the OneBot 11 standard requires local file paths to use the `file:///` scheme. The driver layer previously did not convert bare paths automatically, so sending local files often triggered retcode 1200 (URL recognition failed).
3. Container environments: in path-isolated environments such as Docker, bare paths are more likely to fail due to parsing ambiguity on the driver or protocol side.

Changes:
- [astrbot/core/message/components.py]:
  - Recognize and strip the `file:///` prefix in File.get_file(), aligning it with the Record/Video components.
- [astrbot/core/platform/sources/aiocqhttp/aiocqhttp_message_event.py]:
  - Auto-correct paths before sending files: if a path is absolute and carries no scheme, the driver layer prepends the `file:///` prefix.
  - Leave http, base64, and already-prefixed inputs untouched so existing transfer logic is unaffected.

Impact:
- Hardens file sending in a fully backward-compatible way.
- Fixes send failures caused by nonstandard path formats when plugins send locally generated archives such as log bundles.

* refactor(core): normalize file URI generation and parsing per code-review suggestions, improving cross-platform compatibility

Necessity:
1. Fix the asymmetric conversion between native paths and URIs on Windows.
2. Normalize file: scheme handling so it follows the RFC and switches robustly between Linux and Windows.
3. Improve scheme-detection accuracy to avoid mishandling ordinary absolute paths.

Changes:
- [astrbot/core/platform/sources/aiocqhttp]:
  - Replace manual concatenation with `pathlib.Path.as_uri()` to generate standard URIs.
  - Change scheme detection from prefix matching to a containment check ("://").
- [astrbot/core/message/components]:
  - Rework `File.get_file` parsing to handle the 2- and 3-slash forms symmetrically.
  - Auto-correct the `file:///C:/` form on Windows to keep `os.path` recognition working.
- [data/plugins/astrbot_plugin_logplus]:
  - Apply the same URI normalization in direct API calls.

Impact:
- Fixes the "URL recognition failed" error in Docker environments caused by nonstandard paths.
- Improves the core framework's file-operation robustness on Windows.
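The path normalization above can be sketched as follows (hypothetical helper name; the adapter's actual logic lives in the aiocqhttp event module): URLs and base64 payloads pass through untouched, while bare absolute paths become standard `file://` URIs via `Path.as_uri()`.

```python
from pathlib import Path

def normalize_onebot_file(path_or_url: str) -> str:
    """Return a value OneBot 11 accepts for the `file` field.

    Hypothetical sketch: anything with a scheme (http://, https://,
    file://, base64://) passes through; bare absolute paths become
    standard file:// URIs via pathlib.
    """
    if "://" in path_or_url:
        return path_or_url
    p = Path(path_or_url)
    if p.is_absolute():
        return p.as_uri()
    return path_or_url

print(normalize_onebot_file("/tmp/report.zip"))  # → file:///tmp/report.zip (on POSIX)
```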

* i18n(SubAgentPage): complete internationalization for subagent orchestration page (#5400)

* i18n: complete internationalization for subagent orchestration page

- Replace hardcoded English strings in [SubAgentPage.vue] with i18n keys.
- Update `en-US` and `zh-CN` locales with missing hints, validation messages, and empty state translations.
- Fix translation typos and improve consistency across the SubAgent orchestration UI.

* fix(bug_risk): avoid using || 'Close' as a fallback for translation calls in templates.

* fix(aiocqhttp): enhance shutdown process for aiocqhttp adapter (#5412)

* fix: pass embedding dimensions to provider apis (#5411)

* fix(context): log warning when platform not found for session

* fix(context): improve logging for platform not found in session

* chore: bump version to 4.18.2

* chore: bump version to 4.18.2

* chore: bump version to 4.18.2

* fix: Telegram voice message format (OGG instead of WAV) causing issues with OpenAI STT API (#5389)

* chore: ruff format

* feat(dashboard): add generic desktop app updater bridge (#5424)

* feat(dashboard): add generic desktop app updater bridge

* fix(dashboard): address updater bridge review feedback

* fix(dashboard): unify updater bridge types and error logging

* fix(dashboard): consolidate updater bridge typings

* fix(conversation): retain existing persona_id when updating conversation

* fix(dashboard): fix copy failure after creating a new API key on the settings page (#5439)

* Fix: GitHub proxy not displaying correctly in WebUI (#5438)

* fix(dashboard): preserve custom GitHub proxy setting on reload

* fix(dashboard): keep github proxy selection persisted in settings

* fix(persona): enhance persona resolution logic for conversations and sessions

* fix: ensure tool call/response pairing in context truncation (#5417)

* fix: ensure tool call/response pairing in context truncation

* refactor: simplify fix_messages to single-pass state machine
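A minimal sketch of the pairing invariant (hypothetical shape; the real `fix_messages` handles more cases): after truncation, a `tool` message is only kept if it follows an assistant message that declared its `tool_call_id`, so orphan responses are dropped.

```python
def fix_messages(messages: list[dict]) -> list[dict]:
    """Drop orphan tool responses left behind by context truncation."""
    fixed = []
    pending = set()  # tool_call ids declared by the latest assistant message
    for msg in messages:
        role = msg.get("role")
        if role == "assistant" and msg.get("tool_calls"):
            pending = {c["id"] for c in msg["tool_calls"]}
            fixed.append(msg)
        elif role == "tool":
            if msg.get("tool_call_id") in pending:
                pending.discard(msg["tool_call_id"])
                fixed.append(msg)
            # else: its call was truncated away, so drop the orphan response
        else:
            pending = set()
            fixed.append(msg)
    return fixed
```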

* perf(cron): enhance future task session isolation

fixes: #5392

* feat: add useExtensionPage composable for managing plugin extensions

- Implemented a new composable `useExtensionPage` to handle various functionalities related to plugin management, including fetching extensions, handling updates, and managing UI states.
- Added support for conflict checking, plugin installation, and custom source management.
- Integrated search and filtering capabilities for plugins in the market.
- Enhanced user experience with dialogs for confirmations and notifications.
- Included pagination and sorting features for better plugin visibility.

* fix: clear markdown field when sending media messages via QQ Official Platform (#5445)

* fix: clear markdown field when sending media messages via QQ Official API

* refactor: use pop() to remove markdown key instead of setting None

* fix: cannot automatically get embedding dim when create embedding provider (#5442)

* fix(dashboard): harden cleanup of the temporary node used for API key copying

* fix(embedding): switch auto-detection to probing the maximum available OpenAI embedding dimension

* fix: normalize openai embedding base url and add hint key

* i18n: add embedding_api_base hint translations

* i18n: localize provider embedding/proxy metadata hints

* fix: show provider-specific embedding API Base URL hint as field subtitle

* fix(embedding): cap OpenAI detect_dim probes with early short-circuit

* fix(dashboard): return generic error on provider adapter import failure

* Revert the detection logic

* fix: fix Pyright static type-check errors (#5437)

* refactor: fix SQLite queries, download callbacks, interface refactoring, and type adjustments

* feat: add a CallAction protocol and async call support to OneBotClient

* fix(telegram): avoid duplicate message_thread_id in streaming (#5430)

* perf: batch metadata query in KB retrieval to fix N+1 problem (#5463)

* perf: batch metadata query in KB retrieval to fix N+1 problem

Replace N sequential get_document_with_metadata() calls with a single
get_documents_with_metadata_batch() call using SQL IN clause.

Benchmark results (local SQLite):
- 10 docs: 10.67ms → 1.47ms (7.3x faster)
- 20 docs: 26.00ms → 2.68ms (9.7x faster)
- 50 docs: 63.87ms → 2.79ms (22.9x faster)

* refactor: use set[str] param type and chunk IN clause for SQLite safety

Address review feedback:
- Change doc_ids param from list[str] to set[str] to avoid unnecessary conversion
- Chunk IN clause into batches of 900 to stay under SQLite's 999 parameter limit
- Remove list() wrapping at call site, pass set directly
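The chunked `IN` query described above can be sketched like this (hypothetical table and column names; the 900 batch size stays safely under SQLite's default 999-parameter limit):

```python
import sqlite3

def fetch_metadata_batch(conn: sqlite3.Connection, doc_ids: set) -> list:
    """Fetch rows for many ids with one query per 900-id chunk."""
    ids = list(doc_ids)
    rows: list = []
    for i in range(0, len(ids), 900):
        chunk = ids[i : i + 900]
        placeholders = ",".join("?" * len(chunk))
        rows.extend(
            conn.execute(
                f"SELECT id, metadata FROM documents WHERE id IN ({placeholders})",
                chunk,
            ).fetchall()
        )
    return rows
```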

* fix: fix the issue where incomplete cleanup of residual plugins occurs… (#5462)

* fix: fix the issue where incomplete cleanup of residual plugins occurs when plugin loading fails

* fix: ruff format, apply bot suggestions

* Apply suggestion from @gemini-code-assist[bot]

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* chore: add TYPE_CHECKING imports and stage type references for type checking (#5474)

* fix(line): line adapter does not appear in the add platform dialog

fixes: #5477

* [bug] fix the LINE setup tutorial link navigating to the wrong page (#5479)

Fixes #5478

* chore: bump version to 4.18.3

* feat: implement follow-up message handling in ToolLoopAgentRunner (#5484)

* feat: implement follow-up message handling in ToolLoopAgentRunner

* fix: correct import path for follow-up module in InternalAgentSubStage

* feat: implement websockets transport mode selection for chat (#5410)

* feat: implement websockets transport mode selection for chat

- Added transport mode selection (SSE/WebSocket) in the chat component.
- Updated conversation sidebar to include transport mode options.
- Integrated transport mode handling in message sending logic.
- Refactored message sending functions to support both SSE and WebSocket.
- Enhanced WebSocket connection management and message handling.
- Updated localization files for transport mode labels.
- Configured Vite to support WebSocket proxying.

* feat(webchat): refactor message parsing logic and integrate new parsing function

* feat(chat): add websocket API key extraction and scope validation

* Revert "optional backend, decoupling frontend and backend" (#5536)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: can <51474963+weijintaocode@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: letr <123731298+letr007@users.noreply.github.com>
Co-authored-by: 搁浅 <id6543156918@gmail.com>
Co-authored-by: Helian Nuits <sxp20061207@163.com>
Co-authored-by: Gao Jinzhe <2968474907@qq.com>
Co-authored-by: DD斩首 <155905740+DDZS987@users.noreply.github.com>
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: エイカク <62183434+zouyonghe@users.noreply.github.com>
Co-authored-by: 鸦羽 <Raven95676@gmail.com>
Co-authored-by: Dt8333 <25431943+Dt8333@users.noreply.github.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Li-shi-ling <114913764+Li-shi-ling@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: Limitless <127183162+Limitless2023@users.noreply.github.com>
Co-authored-by: Limitless2023 <limitless@users.noreply.github.com>
Co-authored-by: evpeople <54983536+evpeople@users.noreply.github.com>
Co-authored-by: SnowNightt <127504703+SnowNightt@users.noreply.github.com>
Co-authored-by: xzj0898 <62733743+xzj0898@users.noreply.github.com>
Co-authored-by: stevessr <89645372+stevessr@users.noreply.github.com>
Co-authored-by: Waterwzy <2916963017@qq.com>
Co-authored-by: NayukiMeko <MekoNayuki@outlook.com>
Co-authored-by: 時壹 <137363396+KBVsent@users.noreply.github.com>
Co-authored-by: sanyekana <Clhikari@qq.com>
Co-authored-by: Chiu Chun-Hsien <95356121+911218sky@users.noreply.github.com>
Co-authored-by: Dream Tokenizer <60459821+Trance-0@users.noreply.github.com>
Co-authored-by: NanoRocky <76585834+NanoRocky@users.noreply.github.com>
Co-authored-by: Pizero <zhaory200707@outlook.com>
Co-authored-by: 雪語 <167516635+YukiRa1n@users.noreply.github.com>
Co-authored-by: whatevertogo <1879483647@qq.com>
Co-authored-by: whatevertogo <whatevertogo@users.noreply.github.com>
Co-authored-…
LIghtJUNction added a commit that referenced this pull request Feb 27, 2026
* feat: add bocha web search tool (#4902)

* add bocha web search tool

* Revert "add bocha web search tool"

This reverts commit 1b36d75a17b4c4751828f31f6759357cd2d4000a.

* add bocha web search tool

* fix: correct temporary_cache spelling and update supported tools for web search

* ruff

---------

Co-authored-by: Soulter <905617992@qq.com>

* fix: messages[x] assistant content must contain at least one part (#4928)

* fix: messages[x] assistant content must contain at least one part

fixes: #4876

* ruff format

* chore: bump version to 4.14.5 (#4930)

* feat: implement feishu / lark media file handling utilities for file, audio and video processing (#4938)

* feat: implement media file handling utilities for audio and video processing

* feat: refactor file upload handling for audio and video in LarkMessageEvent

* feat: add cleanup for failed audio and video conversion outputs in media_utils

* feat: add utility methods for sending messages and uploading files in LarkMessageEvent

* fix: correct spelling of 'temporary' in SharedPreferences class

* perf: optimize webchat and wecom ai queue lifecycle (#4941)

* perf: optimize webchat and wecom ai queue lifecycle

* perf: enhance webchat back queue management with conversation ID support

* fix: localize provider source config UI (#4933)

* fix: localize provider source ui

* feat: localize provider metadata keys

* chore: add provider metadata translations

* chore: format provider i18n changes

* fix: preserve metadata fields in i18n conversion

* fix: internationalize platform config and dialog

* fix: add Weixin official account platform icon

---------

Co-authored-by: Soulter <905617992@qq.com>

* chore: bump version to 4.14.6

* feat: add provider-souce-level proxy (#4949)

* feat: add provider-level proxy support and request-failure logging

* refactor: simplify provider source configuration structure

* refactor: move env proxy fallback logic to log_connection_failure

* refactor: update client proxy handling and add terminate method for cleanup

* refactor: update no_proxy configuration to remove redundant subnet

---------

Co-authored-by: Soulter <905617992@qq.com>

* feat(ComponentPanel): implement permission management for dashboard (#4887)

* feat(backend): add permission update api

* feat(useCommandActions): add updatePermission action and translations

* feat(dashboard): implement permission editing ui

* style: fix import sorting in command.py

* refactor(backend): extract permission update logic to service

* feat(i18n): add success and failure messages for command updates

---------

Co-authored-by: Soulter <905617992@qq.com>

* feat: allow the LLM to preview tool-returned images and decide whether to send them (#4895)

* feat: allow the LLM to preview tool-returned images and decide whether to send them

* reuse send_message_to_user instead of a standalone image-sending tool

* feat: implement _HandleFunctionToolsResult class for improved tool response handling

* docs: add path handling guidelines to AGENTS.md

---------

Co-authored-by: Soulter <905617992@qq.com>

* feat(telegram): add media group (album) support (#4893)

* feat(telegram): add media group (album) support

## Feature description
Support Telegram media group messages (albums): merge multiple photos/videos into a single message for processing instead of handling them as scattered separate messages.

## Main changes

### 1. Initialize the media group cache (__init__)
- Add a `media_group_cache` dict to store pending media group messages
- Use a 2.5-second timeout to collect media group messages (based on community best practice)
- Cap the maximum wait at 10 seconds (prevents waiting forever)

### 2. Message handling flow (message_handler)
- Detect `media_group_id` to decide whether a message belongs to a media group
- Route media group messages through a dedicated path to avoid scattered handling

### 3. Media group message caching (handle_media_group_message)
- Cache incoming media group messages
- Use APScheduler to implement a debounce mechanism
- Reset the timeout timer every time a new message arrives
- Trigger unified processing once the timeout fires

### 4. Media group merge processing (process_media_group)
- Pop all media items from the cache
- Use the first message as the base (preserving its text, reply info, etc.)
- Append every photo, video, and document to the message chain in order
- Dispatch the merged message into the processing pipeline

## Rationale for the approach

Design limitations of the Telegram Bot API when handling media groups:
1. Each message in a media group is delivered as an independent update
2. Every update carries the same `media_group_id`
3. There is **no** group total, end marker, or any mechanism to fetch the whole group at once

Therefore a bot must collect the messages itself and wait for possibly delayed ones via a hard-coded timeout/delay.
This is currently the only reliable approach, and it is widely adopted by the official implementation, mainstream frameworks, and the developer community.

### Official and community evidence:
- **Telegram Bot API server implementation (tdlib)**: explicitly notes the lack of an end marker or total count
  https://github.com/tdlib/telegram-bot-api/issues/643

- **Telegram Bot API server issue**: discusses the inconvenience of media group handling; a timeout mechanism is recommended
  https://github.com/tdlib/telegram-bot-api/issues/339

- **Telegraf (Node.js framework)**: its dedicated media group middleware uses a timeout to control the wait
  https://github.com/DieTime/telegraf-media-group

- **StackOverflow discussion**: all files of a media group cannot be fetched at once; they must be collected manually
  https://stackoverflow.com/questions/50180048/telegram-api-get-all-uploaded-photos-by-media-group-id

- **python-telegram-bot community**: confirms media group messages arrive individually and need manual handling
  https://github.com/python-telegram-bot/python-telegram-bot/discussions/3143

- **Telegram Bot API official docs**: only define `media_group_id` as an optional field and provide no API to fetch the full group
  https://core.telegram.org/bots/api#message

## Implementation details
- Use a 2.5-second timeout to collect media group messages (based on community best practice)
- Cap the maximum wait at 10 seconds (prevents waiting forever)
- Apply a debounce mechanism: every new message resets the timer
- Use APScheduler for delayed processing and task scheduling

## Testing
- ✅ Sending a 5-photo album is merged into a single message
- ✅ Original caption text and reply info are preserved
- ✅ Mixed media groups of photos, videos, and documents are supported
- ✅ Logs show Processing media group <media_group_id> with 5 items

## Code changes
- File: astrbot/core/platform/sources/telegram/tg_adapter.py
- Lines added: 124
- New methods: handle_media_group_message(), process_media_group()

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* refactor(telegram): improve media group handling performance and reliability

Improvements based on code review feedback:

1. Implement media_group_max_wait to prevent unbounded delay
   - Track each media group's creation time and process immediately once the maximum wait is exceeded
   - Guarantees processing within 10 seconds in the worst case, preventing endless deferral while messages keep arriving

2. Remove the manual job lookup to improve performance
   - Drop the O(N) get_jobs() loop scan
   - Rely on replace_existing=True to replace jobs automatically

3. Reuse convert_message to reduce duplication
   - Unify conversion logic for all media types
   - Adding a new media type in the future only requires one change

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix(telegram): handle missing message in media group processing and improve logging messages

---------

Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Soulter <905617992@qq.com>

* feat: add welcome feature with localized content and onboarding steps

* fix: correct height attribute to max-height for dialog component

* feat: supports electron app (#4952)

* feat: add desktop wrapper with frontend-only packaging

* docs: add desktop build docs and track dashboard lockfile

* fix: track desktop lockfile for npm ci

* fix: allow custom install directory for windows installer

* chore: migrate desktop workflow to pnpm

* fix(desktop): build AppImage only on Linux

* fix(desktop): harden packaged startup and backend bundling

* fix(desktop): adapt packaged restart and plugin dependency flow

* fix(desktop): prevent backend respawn race on quit

* fix(desktop): prefer pyproject version for desktop packaging

* fix(desktop): improve startup loading UX and reduce flicker

* ci: add desktop multi-platform release workflow

* ci: fix desktop release build and mac runner labels

* ci: disable electron-builder auto publish in desktop build

* ci: avoid electron-builder publish path in build matrix

* ci: normalize desktop release artifact names

* ci: exclude blockmap files from desktop release assets

* ci: prefix desktop release assets with AstrBot and purge blockmaps

* feat: add electron bridge types and expose backend control methods in preload script

* Update startup screen assets and styles

- Changed the icon from PNG to SVG format for better scalability.
- Updated the border color from #d0d0d0 to #eeeeee for a softer appearance.
- Adjusted the width of the startup screen from 460px to 360px for improved responsiveness.

* Update .gitignore to include package.json

* chore: remove desktop gitkeep ignore exceptions

* docs: update desktop troubleshooting for current runtime behavior

* refactor(desktop): modularize runtime and harden startup flow

---------

Co-authored-by: Soulter <905617992@qq.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* fix: dedupe preset messages (#4961)

* feat: enhance package.json with resource filters and compression settings

* chore: update Python version requirements to 3.12 (#4963)

* chore: bump version to 4.14.7

* feat: refactor release workflow and add special update handling for electron app (#4969)

* chore: bump version to 4.14.8 and bump faiss-cpu version up to date

* chore: auto ann fix by ruff (#4903)

* chore: auto fix by ruff

* refactor: correct return type annotations to None/bool to match implementations

* refactor: make _get_next_page async and remove the redundant request-error raise

* refactor: change get_client's return type to object

* style: add None return type annotations to the relevant LarkMessageEvent methods

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* fix: prepare OpenSSL via vcpkg for Windows ARM64

* ci: change ghcr namespace

* chore: update pydantic dependency version (#4980)

* feat: add delete button to persona management dialog (#4978)

* Initial plan

* feat: add delete button to persona management dialog

- Added delete button to PersonaForm dialog (only visible when editing)
- Implemented deletePersona method with confirmation dialog
- Connected delete event to PersonaManager for proper handling
- Button positioned on left side of dialog actions for clear separation
- Uses existing i18n translations for delete button and messages

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* fix: use finally block to ensure saving state is reset

- Moved `this.saving = false` to finally block in deletePersona
- Ensures UI doesn't stay in saving state after errors
- Follows best practices for state management

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* feat: enhance Dingtalk adapter with active push message and image, video, audio message type (#4986)

* fix: handle pip install execution in frozen runtime (#4985)

* fix: handle pip install execution in frozen runtime

* fix: harden pip subprocess fallback handling

* fix: collect certifi data in desktop backend build (#4995)

* feat: support proactive message push for WeCom apps, and improve audio handling for WeCom apps, WeChat Official Accounts, and WeChat Customer Service (#4998)

* feat: support proactive message push and sending video, file, and other message types for the WeCom AI bot (#4999)

* feat: enhance WecomAIBotAdapter and WecomAIBotMessageEvent for improved streaming message handling (#5000)

fixes: #3965

* feat: enhance persona tool management and update UI localization for subagent orchestration (#4990)

* feat: enhance persona tool management and update UI localization for subagent orchestration

* fix: remove debug logging for final ProviderRequest in build_main_agent function

* perf: stabilize pip install behavior in both source and Electron-packaged environments, and fix the redirect dialog appearing when clicking the WebUI update button outside Electron (#4996)

* fix: handle pip install execution in frozen runtime

* fix: harden pip subprocess fallback handling

* fix: scope global data root to packaged electron runtime

* refactor: inline frozen runtime check for electron guard

* fix: prefer current interpreter for source pip installs

* fix: avoid resolving venv python symlink for pip

* refactor: share runtime environment detection utilities

* fix: improve error message when pip module is unavailable

* fix: raise ImportError when pip module is unavailable

* fix: preserve ImportError semantics for missing pip

* fix: stop showing the Electron update dialog when updating outside the Electron app

---------

Co-authored-by: Soulter <905617992@qq.com>

* fix: 'HandoffTool' object has no attribute 'agent' (#5005)

* fix: move the agent assignment to after super().__init__

* add: add an explanatory comment

* chore(deps): bump the github-actions group with 2 updates (#5006)

Bumps the github-actions group with 2 updates: [astral-sh/setup-uv](https://github.com/astral-sh/setup-uv) and [actions/download-artifact](https://github.com/actions/download-artifact).


Updates `astral-sh/setup-uv` from 6 to 7
- [Release notes](https://github.com/astral-sh/setup-uv/releases)
- [Commits](https://github.com/astral-sh/setup-uv/compare/v6...v7)

Updates `actions/download-artifact` from 6 to 7
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v6...v7)

---
updated-dependencies:
- dependency-name: astral-sh/setup-uv
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
- dependency-name: actions/download-artifact
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix: stabilize packaged runtime pip/ssl behavior and mac font fallback (#5007)

* fix: patch pip distlib finder for frozen electron runtime

* fix: use certifi CA bundle for runtime SSL requests

* fix: configure certifi CA before core imports

* fix: improve mac font fallback for dashboard text

* fix: harden frozen pip patch and unify TLS connector

* refactor: centralize dashboard CJK font fallback stacks

* perf: reuse TLS context and avoid repeated frozen pip patch

* refactor: bootstrap TLS setup before core imports

* fix: use async confirm dialog for provider deletions

* fix: replace native confirm dialogs in dashboard

- Add shared confirm helper in dashboard/src/utils/confirmDialog.ts for async dialog usage with safe fallback.

- Migrate provider, chat, config, session, platform, persona, MCP, backup, and knowledge-base delete/close confirmations to use the shared helper.

- Remove scattered inline confirm handling to keep behavior consistent and avoid native blocking dialog focus/caret issues in Electron.

* fix: capture runtime bootstrap logs after logger init

- Add bootstrap record buffer in runtime_bootstrap for early TLS patch logs before logger is ready.

- Flush buffered bootstrap logs to astrbot logger at process startup in main.py.

- Include concrete exception details for TLS bootstrap failures to improve diagnosis.

* fix: harden runtime bootstrap and unify confirm handling

- Simplify bootstrap log buffering and add a public initialize hook for non-main startup paths.

- Guard aiohttp TLS patching with feature/type checks and keep graceful fallback when internals are unavailable.

- Standardize dashboard confirmation flow via shared confirm helpers across composition and options API components.

* refactor: simplify runtime tls bootstrap and tighten confirm typing

* refactor: align ssl helper namespace and confirm usage

* fix: 修复 Windows 打包版后端重启失败问题 (#5009)

* fix: patch pip distlib finder for frozen electron runtime

* fix: use certifi CA bundle for runtime SSL requests

* fix: configure certifi CA before core imports

* fix: improve mac font fallback for dashboard text

* fix: harden frozen pip patch and unify TLS connector

* refactor: centralize dashboard CJK font fallback stacks

* perf: reuse TLS context and avoid repeated frozen pip patch

* refactor: bootstrap TLS setup before core imports

* fix: use async confirm dialog for provider deletions

* fix: replace native confirm dialogs in dashboard

- Add shared confirm helper in dashboard/src/utils/confirmDialog.ts for async dialog usage with safe fallback.

- Migrate provider, chat, config, session, platform, persona, MCP, backup, and knowledge-base delete/close confirmations to use the shared helper.

- Remove scattered inline confirm handling to keep behavior consistent and avoid native blocking dialog focus/caret issues in Electron.

* fix: capture runtime bootstrap logs after logger init

- Add bootstrap record buffer in runtime_bootstrap for early TLS patch logs before logger is ready.

- Flush buffered bootstrap logs to astrbot logger at process startup in main.py.

- Include concrete exception details for TLS bootstrap failures to improve diagnosis.

* fix: harden runtime bootstrap and unify confirm handling

- Simplify bootstrap log buffering and add a public initialize hook for non-main startup paths.

- Guard aiohttp TLS patching with feature/type checks and keep graceful fallback when internals are unavailable.

- Standardize dashboard confirmation flow via shared confirm helpers across composition and options API components.

* refactor: simplify runtime tls bootstrap and tighten confirm typing

* refactor: align ssl helper namespace and confirm usage

* fix: avoid frozen restart crash from multiprocessing import

* fix: include missing frozen dependencies for windows backend

* fix: use execv for stable backend reboot args

* Revert "fix: use execv for stable backend reboot args"

This reverts commit 9cc27becffeba0e117fea26aa5c2e1fe7afc6e36.

* Revert "fix: include missing frozen dependencies for windows backend"

This reverts commit 52554bea1fa61045451600c64447b7bf38cf6c92.

* Revert "fix: avoid frozen restart crash from multiprocessing import"

This reverts commit 10548645b0ba1e19b64194878ece478a48067959.

* fix: reset pyinstaller onefile env before reboot

* fix: unify electron restart path and tray-exit backend cleanup

* fix: stabilize desktop restart detection and frozen reboot args

* fix: make dashboard restart wait detection robust

* fix: revert dashboard restart waiting interaction tweaks

* fix: pass auth token for desktop graceful restart

* fix: avoid false failure during graceful restart wait

* fix: start restart waiting before electron restart call

* fix: harden restart waiting and reboot arg parsing

* fix: parse start_time as numeric timestamp

* fix: fix in-app restart failures, show the restart prompt immediately when restart is clicked in the app, and refresh the UI promptly once the backend is ready (#5013)

* fix: patch pip distlib finder for frozen electron runtime

* fix: use certifi CA bundle for runtime SSL requests

* fix: configure certifi CA before core imports

* fix: improve mac font fallback for dashboard text

* fix: harden frozen pip patch and unify TLS connector

* refactor: centralize dashboard CJK font fallback stacks

* perf: reuse TLS context and avoid repeated frozen pip patch

* refactor: bootstrap TLS setup before core imports

* fix: use async confirm dialog for provider deletions

* fix: replace native confirm dialogs in dashboard

- Add shared confirm helper in dashboard/src/utils/confirmDialog.ts for async dialog usage with safe fallback.

- Migrate provider, chat, config, session, platform, persona, MCP, backup, and knowledge-base delete/close confirmations to use the shared helper.

- Remove scattered inline confirm handling to keep behavior consistent and avoid native blocking dialog focus/caret issues in Electron.

* fix: capture runtime bootstrap logs after logger init

- Add bootstrap record buffer in runtime_bootstrap for early TLS patch logs before logger is ready.

- Flush buffered bootstrap logs to astrbot logger at process startup in main.py.

- Include concrete exception details for TLS bootstrap failures to improve diagnosis.

* fix: harden runtime bootstrap and unify confirm handling

- Simplify bootstrap log buffering and add a public initialize hook for non-main startup paths.

- Guard aiohttp TLS patching with feature/type checks and keep graceful fallback when internals are unavailable.

- Standardize dashboard confirmation flow via shared confirm helpers across composition and options API components.

* refactor: simplify runtime tls bootstrap and tighten confirm typing

* refactor: align ssl helper namespace and confirm usage

* fix: avoid frozen restart crash from multiprocessing import

* fix: include missing frozen dependencies for windows backend

* fix: use execv for stable backend reboot args

* Revert "fix: use execv for stable backend reboot args"

This reverts commit 9cc27becffeba0e117fea26aa5c2e1fe7afc6e36.

* Revert "fix: include missing frozen dependencies for windows backend"

This reverts commit 52554bea1fa61045451600c64447b7bf38cf6c92.

* Revert "fix: avoid frozen restart crash from multiprocessing import"

This reverts commit 10548645b0ba1e19b64194878ece478a48067959.

* fix: reset pyinstaller onefile env before reboot

* fix: unify electron restart path and tray-exit backend cleanup

* fix: stabilize desktop restart detection and frozen reboot args

* fix: make dashboard restart wait detection robust

* fix: revert dashboard restart waiting interaction tweaks

* fix: pass auth token for desktop graceful restart

* fix: avoid false failure during graceful restart wait

* fix: start restart waiting before electron restart call

* fix: harden restart waiting and reboot arg parsing

* fix: parse start_time as numeric timestamp

* fix: preserve windows frozen reboot argv quoting

* fix: align restart waiting with electron restart timing

* fix: tighten graceful restart and unmanaged kill safety

* chore: bump version to 4.15.0 (#5003)

* fix: add reminder for v4.14.8 users regarding manual redeployment due to a bug

* fix: harden plugin dependency loading in frozen app runtime (#5015)

* fix: compare plugin versions semantically in market updates

* fix: prioritize plugin site-packages for in-process pip

* fix: reload starlette from plugin target site-packages

* fix: harden plugin dependency import precedence in frozen runtime

* fix: improve plugin dependency conflict handling

* refactor: simplify plugin conflict checks and version utils

* fix: expand transitive plugin dependencies for conflict checks

* fix: recover conflicting plugin dependencies during module prefer

* fix: reuse renderer restart flow for tray backend restart

* fix: add recoverable plugin dependency conflict handling

* revert: remove plugin version comparison changes

* fix: add missing tray restart backend labels

* feat: adding support for media and quoted message attachments for feishu (#5018)

* docs: add AUR installation method (#4879)

* docs: sync system package manager installation instructions to all languages

* Update README.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update README.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* fix/typo

* refactor: update system package manager installation instructions for Arch Linux across multiple language README files

* feat: add installation command for AstrBot in multiple language README files

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>

* fix(desktop): add size-based rotation for Electron and backend logs (#5029)

* fix(desktop): rotate electron and backend logs

* refactor(desktop): centralize log rotation defaults and debug fs errors

* fix(desktop): harden rotation fs ops and buffer backend log writes

* refactor(desktop): extract buffered logger and reduce sync stat calls

* refactor(desktop): simplify rotation flow and harden logger config

* fix(desktop): make app logging async and flush-safe

* fix: harden app log path switching and debug-gated rotation errors

* fix: cap buffered log chunk size during path switch

* feat: add first notice feature with multilingual support and UI integration

* fix: improve packaged desktop startup stability and plugin dependency handling (#5031)

* fix(desktop): rotate electron and backend logs

* refactor(desktop): centralize log rotation defaults and debug fs errors

* fix(desktop): harden rotation fs ops and buffer backend log writes

* refactor(desktop): extract buffered logger and reduce sync stat calls

* refactor(desktop): simplify rotation flow and harden logger config

* fix(desktop): make app logging async and flush-safe

* fix: harden app log path switching and debug-gated rotation errors

* fix: cap buffered log chunk size during path switch

* fix: avoid redundant plugin reinstall and upgrade electron

* fix: stop webchat tasks cleanly and bind packaged backend to localhost

* fix: unify platform shutdown and await webchat listener cleanup

* fix: improve startup logs for dashboard and onebot listeners

* fix: revert extra startup service logs

* fix: harden plugin import recovery and webchat listener cleanup

* fix: pin dashboard ci node version to 24.13.0

* fix: avoid duplicate webchat listener cleanup on terminate

* refactor: clarify platform task lifecycle management

* fix: continue platform shutdown when terminate fails

* feat: temporary file handling and introduce TempDirCleaner (#5026)

* feat: temporary file handling and introduce TempDirCleaner

- Updated various modules to use `get_astrbot_temp_path()` instead of `get_astrbot_data_path()` for temporary file storage.
- Renamed temporary files for better identification and organization.
- Introduced `TempDirCleaner` to manage the size of the temporary directory, ensuring it does not exceed a specified limit by deleting the oldest files.
- Added configuration option for maximum temporary directory size in the dashboard.
- Implemented tests for `TempDirCleaner` to verify cleanup functionality and size management.
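
A minimal sketch of the oldest-first cleanup idea (the class and method names below are assumptions, not the actual `TempDirCleaner` API):

```python
from pathlib import Path

class TempDirCleaner:
    """Cap a temp directory's size by deleting the oldest files first (sketch)."""

    def __init__(self, temp_dir: Path, max_bytes: int):
        self.temp_dir = Path(temp_dir)
        self.max_bytes = max_bytes

    def clean(self) -> int:
        """Delete oldest files until the total size fits under max_bytes; return bytes freed."""
        files = [p for p in self.temp_dir.rglob("*") if p.is_file()]
        total = sum(p.stat().st_size for p in files)
        freed = 0
        # Oldest first, by modification time.
        for path in sorted(files, key=lambda p: p.stat().st_mtime):
            if total - freed <= self.max_bytes:
                break
            size = path.stat().st_size
            path.unlink()
            freed += size
        return freed
```

Running `clean()` periodically (or after large downloads) keeps the temp directory bounded without touching recently used files.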

* ruff

* fix: close unawaited reset coroutine on early return (#5033)

When an OnLLMRequestEvent hook stops event propagation, the
reset_coro created by build_main_agent was never awaited, causing
a RuntimeWarning. Close the coroutine explicitly before returning.

Fixes #5032

Co-authored-by: Limitless2023 <limitless@users.noreply.github.com>

* fix: update error logging message for connection failures

* docs: clean and sync README (#5014)

* fix: close missing div in README

* fix: sync README_zh-TW with README

* fix: sync README

* fix: correct typo

correct url in README_en README_fr README_ru

* docs: sync README_en with README

* Update README_en.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* fix: provider extra param dialog key display error

* chore: ruff format

* feat: add send_chat_action for Telegram platform adapter (#5037)

* feat: add send_chat_action for Telegram platform adapter

Add typing/upload indicator when sending messages via Telegram.
- Added _send_chat_action helper method for sending chat actions
- Send appropriate action (typing, upload_photo, upload_document, upload_voice)
  before sending different message types
- Support streaming mode with typing indicator
- Support supergroup with message_thread_id

* refactor(telegram): extract chat action helpers and add throttling

- Add ACTION_BY_TYPE mapping for message type to action priority
- Add _get_chat_action_for_chain() to determine action from message chain
- Add _send_media_with_action() for upload → send → restore typing pattern
- Add _ensure_typing() helper for typing status
- Add chat action throttling (0.5s) in streaming mode to avoid rate limits
- Update type annotation to ChatAction | str for better static checking

* feat(telegram): implement send_typing method for Telegram platform

---------

Co-authored-by: Soulter <905617992@qq.com>

* fix: fix double scrollbars in the changelog and official docs dialogs (#5060)

* docs: sync and fix readme typo (#5055)

* docs: fix index typo

* docs: fix typo in README_en.md

- remove Russian text that accidentally appeared in the English README and replace it with English

* docs: fix html typo

- remove unused '</p>'

* docs: sync table with README

* docs: sync README header format

- keep the README header format consistent

* doc: sync key features

* style: format files

- Fix formatting issues from previous PR

* fix: correct md anchor link

* docs: correct typo in README_fr.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* docs: correct typo in README_zh-TW.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* fix: add the missing persona folder mapping during backup (#5042)

* feat: support proactive message push on the official QQ bot platform and receiving files in private chats (#5066)

* feat: support proactive message push on the official QQ bot platform and receiving files in private chats

* feat: enhance QQOfficialWebhook to remember session scenes for group, channel, and friend messages

* perf: improve the initialization logic for segmented-reply intervals (#5068)

fixes: #5059

* fix: chunk err when using openrouter deepseek (#5069)

* feat: add i18n supports for custom platform adapters (#5045)

* feat: provide a data path for the metadata & i18n of plugin-provided adapters

* chore: update docstrings with pull request references

Added references to pull request 5045 in docstrings.

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* fix: improve forwarded-quote parsing and image fallback, with configurable control (#5054)

* feat: support fallback image parsing for quoted messages

* fix: fallback parse quoted images when reply chain has placeholders

* style: format network utils with ruff

* test: expand quoted parser coverage and improve fallback diagnostics

* fix: fallback to text-only retry when image requests fail

* fix: tighten image fallback and resolve nested quoted forwards

* refactor: simplify quoted message extraction and dedupe images

* fix: harden quoted parsing and openai error candidates

* fix: harden quoted image ref normalization

* refactor: organize quoted parser settings and logging

* fix: cap quoted fallback images and avoid retry loops

* refactor: split quoted message parser into focused modules

* refactor: share onebot segment parsing logic

* refactor: unify quoted message parsing flow

* feat: move quoted parser tuning to provider settings

* fix: add missing i18n metadata for quoted parser settings

* chore: refine forwarded message setting labels

* fix: add config tabs and routing for normal and system configurations

* chore: bump version to 4.16.0 (#5074)

* feat: add LINE platform support with adapter and configuration (#5085)

* fix-correct-FIRST_NOTICE.md-locale-path-resolution (#5083) (#5082)

* fix: change the configuration file directory

* fix: add a fallback FIRST_NOTICE.zh-CN.md for compatibility

* fix: remove unnecessary frozen flag from requirements export in Dockerfile

fixes: #5089

* fix #5089: add uv lock step in Dockerfile before export (#5091)

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* feat: support hot reload after plugin load failure (#5043)

* add: support hot reload after plugin load failure

* Apply suggestions from code review

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* fix: reformat code

* fix: reformat code

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* feat: add fallback chat model chain in tool loop runner (#5109)

* feat: implement fallback provider support for chat models and update configuration

* feat: enhance provider selection display with count and chips for selected providers

* feat: update fallback chat providers to use provider settings and add warning for non-list fallback models

* feat: add Afdian support card to resources section in WelcomePage

* feat: replace colorlog with loguru for enhanced logging support (#5115)

* feat: add SSL configuration options for WebUI and update related logging (#5117)

* chore: bump version to 4.17.0

* fix: handle list format content from OpenAI-compatible APIs (#5128)

* fix: handle list format content from OpenAI-compatible APIs

Some LLM providers (e.g., GLM-4.5V via SiliconFlow) return content as
list[dict] format like [{'type': 'text', 'text': '...'}] instead of
plain string. This causes the raw list representation to be displayed
to users.

Changes:
- Add _normalize_content() helper to extract text from various content formats
- Use json.loads instead of ast.literal_eval for safer parsing
- Add size limit check (8KB) before attempting JSON parsing
- Only convert lists that match OpenAI content-part schema (has 'type': 'text')
  to avoid collapsing legitimate list-literal replies like ['foo', 'bar']
- Add strip parameter to preserve whitespace in streaming chunks
- Clean up orphan </think> tags that may leak from some models

Fixes #5124

* fix: improve content normalization safety

- Try json.loads first, fallback to ast.literal_eval for single-quoted
  Python literals to avoid corrupting apostrophes (e.g., "don't")
- Coerce text values to str to handle null or non-string text fields

* fix: update retention logic in LogManager to handle backup count correctly

* chore: bump version to 4.17.1

* docs: Added instructions for deploying AstrBot using AstrBot Launcher. (#5136)

Added instructions for deploying AstrBot using AstrBot Launcher.

* fix: add MCP tools to function tool set in _plugin_tool_fix (#5144)

* fix: add support for collecting data from builtin stars in electron pyinstaller build (#5145)

* chore: bump version to 4.17.1

* chore: ruff format

* fix: prevent updates for AstrBot launched via launcher

* fix(desktop): include runtime deps for builtin plugins in backend build (#5146)

* fix: 'Plain' object has no attribute 'text' when using python 3.14 (#5154)

* fix: enhance plugin metadata handling by injecting attributes before instantiation (#5155)

* fix: enhance handle_result to support event context and webchat image sending

* chore: bump version to 4.17.3

* chore: ruff format

* feat: add NVIDIA provider template (#5157)

fixes: #5156

* feat: enhance provider sources panel with styled menu and mobile support

* fix: improve permission denied message for local execution in Python and shell tools

* feat: enhance PersonaForm component with responsive design and improved styling (#5162)

fix: #5159

* ui(CronJobPage): fix action column buttons overlapping in CronJobPage (#5163)

- Before: the action column container only used `d-flex`; when the page narrowed, its children (the switch and the delete button) were squeezed and visually overlapped, even stacking on top of each other.
- After:
    1. Added `flex-nowrap` to the container to forbid child wrapping.
    2. Set `min-width: 140px` so the column keeps a fixed protected width and cannot be squeezed by other long-text columns.
    3. Added `gap: 12px` spacing, making the actions easier to distinguish and click.

* feat: add unsaved changes notice to configuration page and update messages

* feat: implement search functionality in configuration components and update UI (#5168)

* feat: add FAQ link to vertical sidebar and update navigation for localization

* feat: add announcement section to WelcomePage and localize announcement title

* chore: bump version to 4.17.4

* feat: supports send markdown message in qqofficial (#5173)

* feat: supports send markdown message in qqofficial

closes: #1093 #918 #4180 #4264

* ruff format

* fix: prevent duplicate error message when all LLM providers fail (#5183)

* fix: wrong profile displayed when the profile management dialog is opened from a selected profile and then closed directly (#5174)

* feat: add MarketPluginCard component and integrate random plugin feature in ExtensionPage (#5190)

* feat: add MarketPluginCard component and integrate random plugin feature in ExtensionPage

* feat: update random plugin selection logic to use pluginMarketData and refresh on relevant events

* feat: supports aihubmix

* docs: update readme

* chore: ruff format

* feat: add LINE support to multiple language README files

* feat(core): add plugin error hook for custom error routing (#5192)

* feat(core): add plugin error hook for custom error routing

* fix(core): align plugin error suppression with event stop state

* refactor: extract Voice_messages_forbidden fallback into shared helper with typed BadRequest exception (#5204)

- Add _send_voice_with_fallback helper to deduplicate voice forbidden handling
- Catch telegram.error.BadRequest instead of bare Exception with string matching
- Add text field to Record component to preserve TTS source text
- Store original text in Record during TTS conversion for use as document caption
- Skip _send_chat_action when chat_id is empty to avoid unnecessary warnings

* chore: bump version to 4.17.5

* feat: add admin permission checks for Python and Shell execution (#5214)

* fix: improve the WeChat Official Account passive-reply mechanism with buffering and chunked replies, and optimize timeout behavior (#5224)

* fix the WeChat Official Account passive-reply feature

* ruff format

---------

Co-authored-by: Soulter <905617992@qq.com>

* fix: empty-message error when sending only JSON message segments (#5208)

* Fix Register_Stage

· Add the missing JSON message check, fixing the "message is empty, skipping send stage" error when sending JSON messages.
· Also complete the checks for other message types.
Co-authored-by: Pizero <zhaory200707@outlook.com>

* Fix formatting and comments in stage.py

* Format stage.py

---------

Co-authored-by: Pizero <zhaory200707@outlook.com>

* docs: update related repo links

* fix(core): terminate active events on reset/new/del to prevent stale responses (#5225)

* fix(core): terminate active events on reset/new/del to prevent stale responses

Closes #5222

* style: fix import sorting in scheduler.py

* chore: remove Electron desktop pipeline and switch to tauri repo (#5226)

* ci: remove Electron desktop build from release pipeline

* chore: remove electron desktop and switch to tauri release trigger

* ci: remove desktop workflow dispatch trigger

* refactor: migrate data paths to astrbot_path helpers

* fix: point desktop update prompt to AstrBot-desktop releases

* fix: update feature request template for clarity and consistency in English and Chinese

* Feat/config leave confirm (#5249)

* feat: add an unsaved-changes confirmation dialog to the config page

* fix: remove the unsavedChangesDialog plugin and implement the dialog as a component

* feat: add support for plugin astrbot-version and platform requirement checks (#5235)

* feat: add support for plugin astrbot-version and platform requirement checks

* fix: remove unsupported platform and version constraints from metadata.yaml

* fix: remove restriction on 'v' in astrbot_version specification format

* ruff format

* feat: add password confirmation when changing password (#5247)

* feat: add password confirmation when changing password

Fixes #5177

Adds a password confirmation field to prevent accidental password typos.

Changes:
- Backend: validate confirm_password matches new_password
- Frontend: add confirmation input with validation
- i18n: add labels and error messages for password mismatch

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(auth): improve error message for password confirmation mismatch

* fix(auth): update password hashing logic and improve confirmation validation

---------

Co-authored-by: whatevertogo <whatevertogo@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(provider): fix residual JSON caused by dict-format content (#5250)

* fix(provider): fix residual JSON caused by dict-format content

Fix _normalize_content not handling dict-type content.
When the LLM returns content in the {"type": "text", "text": "..."} format,
the text field is now correctly extracted instead of being cast to a string.

Also improve the fallback behavior: None values now return an empty string.

Fixes #5244

* Update warning message for unexpected dict format

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>

* chore: remove outdated heihe.md documentation file

* fix: all mcp tools exposed to main agent (#5252)

* fix: enhance PersonaForm layout and improve tool selection display

* fix: update tool status display and add localization for inactive tools

* fix: remove additionalProperties from tool schema properties (#5253)

fixes: #5217

* fix: simplify error messages for account edit validation

* fix: streamline error response for empty new username and password in account edit

* chore: bump version to 4.17.6

* feat: add OpenRouter provider support and icon

* chore: ruff format

* refactor(dashboard): replace legacy isElectron bridge fields with isDesktop (#5269)

* refactor dashboard desktop bridge fields from isElectron to isDesktop

* refactor dashboard runtime detection into shared helper

* fix: update contributor avatar image URL to include max size and columns (#5268)

* feat: astrbot http api (#5280)

* feat: astrbot http api

* Potential fix for code scanning alert no. 34: Use of a broken or weak cryptographic hashing algorithm on sensitive data

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* fix: improve error handling for missing attachment path in file upload

* feat: implement paginated retrieval of platform sessions for creators

* feat: refactor attachment directory handling in ChatRoute

* feat: update API endpoint paths for file and message handling

* feat: add documentation link to API key management section in settings

* feat: update API key scopes and related configurations in API routes and tests

* feat: enhance API key expiration options and add warning for permanent keys

* feat: add UTC normalization and serialization for API key timestamps

* feat: implement chat session management and validation for usernames

* feat: ignore session_id type chunks in message processing

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* feat(dashboard): improve plugin platform support display and mobile accessibility (#5271)

* feat(dashboard): improve plugin platform support display and mobile accessibility

- Replace hover-based tooltips with interactive click menus for platform support information.
- Fix mobile touch issues by introducing explicit state control for status capsules.
- Enhance UI aesthetics with platform-specific icons and a structured vertical list layout.
- Add dynamic chevron icons to provide clear visual cues for expandable content.

* refactor(dashboard): refactor market card with computed properties for performance

* refactor(dashboard): unify plugin platform support UI with new reusable chip component

- Create shared 'PluginPlatformChip' component to encapsulate platform meta display.
- Fix mobile interaction bugs by simplifying menu triggers and event handling.
- Add stacked platform icon previews and dynamic chevron indicators within capsules.
- Improve information hierarchy using structured vertical lists for platform details.
- Optimize rendering efficiency with computed properties across both card views.

* fix: qq official guild message send error (#5287)

* fix: qq official guild message send error

* Update astrbot/core/platform/sources/qqofficial/qqofficial_message_event.py

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* docs: update the README, add a desktop app section, and move it earlier in the document (#5297)

* docs: update desktop deployment section in README

* docs: refine desktop and launcher deployment descriptions

* Update README.md

* feat: add Anthropic Claude Code OAuth provider and adaptive thinking support (#5209)

* feat: add Anthropic Claude Code OAuth provider and adaptive thinking support

* fix: add defensive guard for metadata overrides and align budget condition with docs

* refactor: adopt sourcery-ai suggestions for OAuth provider

- Use use_api_key=False in OAuth subclass to avoid redundant
  API-key client construction before replacing with auth_token client
- Generalize metadata override helper to merge all dict keys
  instead of only handling 'limit', improving extensibility

* Feat/telegram command alias register  #5233 (#5234)

* feat: support registering command aliases for Telegram

Now when registering commands with aliases, all aliases will be
registered as Telegram bot commands in addition to the main command.

Example:
    @register_command(command_name="draw", alias={"画", "gen"})
Now /draw, /画, and /gen will all appear in the Telegram command menu.

* feat(telegram): add duplicate command name warning when registering commands

Log a warning when duplicate command names are detected during Telegram
command registration to help identify configuration conflicts.
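The alias-expansion and duplicate-warning behavior above can be sketched as a small pure-Python helper (the function name and data shape here are hypothetical, not AstrBot's actual implementation):

```python
import logging

logger = logging.getLogger("telegram_commands")


def expand_command_names(commands: dict[str, set[str]]) -> list[str]:
    """Expand {main_command: aliases} into the flat list of names to
    register as Telegram bot commands, warning on duplicates (sketch)."""
    seen: set[str] = set()
    names: list[str] = []
    for main, aliases in commands.items():
        # Register the main command first, then its aliases.
        for name in [main, *sorted(aliases)]:
            if name in seen:
                logger.warning("duplicate Telegram command name: %s", name)
                continue
            seen.add(name)
            names.append(name)
    return names
```

In the real adapter the resulting names would then be passed to the Bot API's `setMyCommands`; this sketch only shows the expansion and conflict detection.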

* refactor: remove Anthropic OAuth provider implementation and related metadata overrides

* fix: config binding failure when creating a new conversation due to a missing session ID (#5292)

* fix: attempt a change

* fix: add detailed logging

* fix: make detailed changes and add logging

* fix: remove all logging

* fix: add safe access helpers

- Wrap localStorage access in try/catch with an availability check: dashboard/src/utils/chatConfigBinding.ts:13
- Add getFromLocalStorage/setToLocalStorage (fall back or ignore on exceptions under restricted storage or private mode)
- getStoredDashboardUsername() / getStoredSelectedChatConfigId() now go through the safe read path: dashboard/src/utils/chatConfigBinding.ts:36
- Add setStoredSelectedChatConfigId(); write failures are silently ignored: dashboard/src/utils/chatConfigBinding.ts:44
- Replace all direct localStorage.getItem/setItem calls in ConfigSelector.vue with the safe helpers above: dashboard/src/components/chat/ConfigSelector.vue:81
- Re-ran pnpm run typecheck; it passes.

* rm: delete personal documentation files

* Revert "rm: delete personal documentation files"

This reverts commit 0fceee05434cfbcb11e45bb967a77d5fa93196bf.

* rm: delete personal documentation files

* rm: delete personal documentation files

* chore: bump version to 4.18.0

* fix(SubAgentPage): when the middle description text is very long, the flex layout squeezes the control button area on the right (#5306)

* fix: plugins displayed as blank in the new plugin market; correct installed-plugin card layout and unify card sizes (#5309)

* fix(ExtensionCard): fix inconsistent plugin card sizes

* fix(MarketPluginCard): fix the plugin market not loading plugins (#5303)

* feat: supports spawn subagent as a background task that not block the main agent workflow (#5081)

* feat: add a background-task parameter to subagent

* ruff

* fix: update terminology from 'handoff mission' to 'background task' and refactor related logic

* fix: update terminology from 'background_mission' to 'background_task' in HandoffTool and related logic

* fix(HandoffTool): update background_task description for clarity on usage

---------

Co-authored-by: Soulter <905617992@qq.com>

* cho

* fix: qq-botpy errors caused by overly new aiohttp versions (#5316)

* chore: ruff format

* fix: remove hard-coded 6s timeout from tavily request

* fix: remove changelogs directory from .dockerignore

* feat(dashboard): make release redirect base URL configurable (#5330)

* feat(dashboard): make desktop release base URL configurable

* refactor(dashboard): use generic release base URL env with upstream default

* fix(dashboard): guard release base URL normalization when env is unset

* refactor(dashboard): use generic release URL helpers and avoid latest suffix duplication

* feat: add stop functionality for active agent sessions and improve handling of stop requests (#5380)

* feat: add stop functionality for active agent sessions and improve handling of stop requests

* feat: update stop button icon and tooltip in ChatInput component

* fix: correct indentation in tool call handling within ChatRoute class

* fix: chatui cannot persist file segment (#5386)

* fix(plugin): update plugin directory handling for reserved plugins (#5369)

* fix(plugin): update plugin directory handling for reserved plugins

* fix(plugin): add warning logs for missing plugin name, object, directory, and changelog

* chore(README): sync translated READMEs with README.md (#5375)

* chore(README): sync translated READMEs with README.md

* Update README_fr.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* Update README_zh-TW.md

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>

* feat: add image urls / paths supports for subagent (#5348)

* fix: subagent background tasks from PR #5081 did not honor the system-configured streaming/non-streaming request mode (#5081)

* feat: add remote image URL parameter support for subagents

* fix: update description for image_urls parameter in HandoffTool to clarify usage in multimodal tasks

* ruff format

---------

Co-authored-by: Soulter <905617992@qq.com>

* feat: add hot reload when failed to load plugins (#5334)

* feat: add hot reload when plugins fail to load

* apply bot suggestions

* fix(chatui): add copy rollback path and error message. (#5352)

* fix(chatui): add copy rollback path and error message.

* fix(chatui): fixed textarea leak in the copy button.

* fix(chatui): use color styles from the component library.

* fix: handle UTF-8 BOM encoding in configuration files (#5376)

* fix(config): handle UTF-8 BOM in configuration file loading

Problem:
On Windows, some text editors (like Notepad) automatically add UTF-8 BOM
to JSON files when saving. This causes json.decoder.JSONDecodeError:
"Unexpected UTF-8 BOM" and AstrBot fails to start when cmd_config.json
contains BOM.

Solution:
Add defensive check to strip UTF-8 BOM (\ufeff) if present before
parsing JSON configuration file.

Impact:
- Improves robustness and cross-platform compatibility
- No breaking changes to existing functionality
- Fixes startup failure when configuration file has UTF-8 BOM encoding

Relates-to: Windows editor compatibility issues
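A minimal sketch of BOM-tolerant config loading, assuming the standard `utf-8-sig` codec (the function name is illustrative, not AstrBot's actual loader):

```python
import json
from pathlib import Path


def load_json_config(path: str) -> dict:
    # "utf-8-sig" transparently consumes a leading BOM if present,
    # and behaves exactly like plain UTF-8 otherwise.
    text = Path(path).read_text(encoding="utf-8-sig")
    # Defensive: also strip a stray \ufeff that survived decoding.
    return json.loads(text.lstrip("\ufeff"))
```

With this, a `cmd_config.json` saved by Notepad with a BOM parses the same as one written without it.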

* style: fix code formatting with ruff

Fix single quote to double quote to comply with project code style.

* feat: add plugin load&unload hook (#5331)

* Add hook events fired when a plugin finishes loading and unloading

* Add hook events fired when a plugin finishes loading and unloading

* format code with ruff

* ruff format

---------

Co-authored-by: Soulter <905617992@qq.com>

* test: enhance test framework with comprehensive fixtures and mocks (#5354)

* test: enhance test framework with comprehensive fixtures and mocks

- Add shared mock builders for aiocqhttp, discord, telegram
- Add test helpers for platform configs and mock objects
- Expand conftest.py with test profile support
- Update coverage test workflow configuration

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor(tests): move and refactor the mock LLM response and message component helpers

* fix(tests): improve the marker checks in pytest_runtest_setup

---------

Co-authored-by: whatevertogo <whatevertogo@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

* test: add comprehensive tests for message event handling (#5355)

* test: add comprehensive tests for message event handling

- Add AstrMessageEvent unit tests (688 lines)
- Add AstrBotMessage unit tests
- Enhance smoke tests with message event scenarios

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: improve message type handling and add defensive tests

---------

Co-authored-by: whatevertogo <whatevertogo@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add support for showing tool call results in agent execution (#5388)

closes: #5329

* fix: resolve pipeline and star import cycles (#5353)

* fix: resolve pipeline and star import cycles

- Add bootstrap.py and stage_order.py to break circular dependencies
- Export Context, PluginManager, StarTools from star module
- Update pipeline __init__ to defer imports
- Split pipeline initialization into separate bootstrap module

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: add logging for get_config() failure in Star class

* fix: reorder logger initialization in base.py

---------

Co-authored-by: whatevertogo <whatevertogo@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: enable computer-use tools for subagent handoff (#5399)

* fix: enforce admin guard for sandbox file transfer tools (#5402)

* fix: enforce admin guard for sandbox file transfer tools

* refactor: deduplicate computer tools admin permission checks

* fix: add missing space in permission error message

* fix(core): improve File component handling and harden OneBot driver-layer path compatibility (#5391)

* fix(core): improve File component handling and harden OneBot driver-layer path compatibility

Why (Necessity):
1. Core consistency: the Record and Video components both recognize the `file:///` scheme prefix, but the File component lacked this logic, leading to inconsistent behavior.
2. OneBot protocol compliance: the OneBot 11 standard requires local file paths to carry the `file:///` prefix. The driver layer previously did not convert bare paths, so sending local files often failed with retcode 1200 (URL recognition failed).
3. Container environments: in path-isolated environments such as Docker, bare paths are more likely to fail due to parsing ambiguity in the driver or protocol endpoint.

Changes:
- [astrbot/core/message/components.py]:
  - File.get_file() now recognizes and strips the `file:///` prefix, aligning its behavior with the Record/Video components.
- [astrbot/core/platform/sources/aiocqhttp/aiocqhttp_message_event.py]:
  - Auto-correction before sending files: if a path is absolute and carries no scheme, the driver layer prepends `file:///`.
  - http, base64, and already-schemed inputs are left untouched so existing transfers keep working.

Impact:
- Makes file sending more robust in a fully backward-compatible way.
- Fixes failures when plugins send locally generated archives (such as log bundles) whose paths were not in the expected format.

* refactor(core): per code review, normalize file URI generation and parsing and improve cross-platform compatibility

Why (Necessity):
1. Fix the asymmetry between native paths and URI conversion on Windows.
2. Normalize file: scheme handling so it follows the RFC and switches robustly between Linux and Windows.
3. Make scheme detection more accurate so plain absolute paths are not mishandled.

Changes:
- [astrbot/core/platform/sources/aiocqhttp]:
  - Drop manual string concatenation in favor of `pathlib.Path.as_uri()` for standard URIs.
  - Change scheme detection from prefix matching to containment checking ("://").
- [astrbot/core/message/components]:
  - Rework `File.get_file` parsing to handle the 2- and 3-slash forms symmetrically.
  - Auto-correct the `file:///C:/` form on Windows so `os.path` does not misparse it.
- [data/plugins/astrbot_plugin_logplus]:
  - Apply the same URI normalization in direct API calls.

Impact:
- Fixes the "URL recognition failed" error caused by non-standard paths in Docker environments.
- Improves the framework's file-handling robustness on Windows.
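On POSIX, the URI normalization described in this commit might be sketched like this (`to_file_uri`/`from_file_uri` are hypothetical names; the real logic lives in the aiocqhttp adapter and the File component, and Windows drive letters need extra care not shown here):

```python
from pathlib import Path
from urllib.parse import unquote, urlparse


def to_file_uri(path: str) -> str:
    """Convert a bare absolute path to a file:// URI; leave anything that
    already carries a scheme (http://, file://, base64://) untouched."""
    if "://" in path:  # containment check rather than prefix matching
        return path
    # Path.as_uri() handles percent-encoding (e.g. spaces) per the RFC.
    return Path(path).as_uri()


def from_file_uri(uri: str) -> str:
    """Symmetric parse: accept both file:///abs and file://host/abs forms."""
    if not uri.startswith("file:"):
        return uri
    return unquote(urlparse(uri).path)
```

The containment check is what keeps http and base64 inputs untouched while still catching bare absolute paths.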

* i18n(SubAgentPage): complete internationalization for subagent orchestration page (#5400)

* i18n: complete internationalization for subagent orchestration page

- Replace hardcoded English strings in [SubAgentPage.vue] with i18n keys.
- Update `en-US` and `zh-CN` locales with missing hints, validation messages, and empty state translations.
- Fix translation typos and improve consistency across the SubAgent orchestration UI.

* fix(bug_risk): avoid using || 'Close' as a fallback for translation calls in templates.

* fix(aiocqhttp): enhance shutdown process for aiocqhttp adapter (#5412)

* fix: pass embedding dimensions to provider apis (#5411)

* fix(context): log warning when platform not found for session

* fix(context): improve logging for platform not found in session

* chore: bump version to 4.18.2

* chore: bump version to 4.18.2

* chore: bump version to 4.18.2

* fix: Telegram voice message format (OGG instead of WAV) causing issues with OpenAI STT API (#5389)

* chore: ruff format

* feat(dashboard): add generic desktop app updater bridge (#5424)

* feat(dashboard): add generic desktop app updater bridge

* fix(dashboard): address updater bridge review feedback

* fix(dashboard): unify updater bridge types and error logging

* fix(dashboard): consolidate updater bridge typings

* fix(conversation): retain existing persona_id when updating conversation

* fix(dashboard): copy failure after creating a new API Key on the settings page (#5439)

* Fix: GitHub proxy not displaying correctly in WebUI (#5438)

* fix(dashboard): preserve custom GitHub proxy setting on reload

* fix(dashboard): keep github proxy selection persisted in settings

* fix(persona): enhance persona resolution logic for conversations and sessions

* fix: ensure tool call/response pairing in context truncation (#5417)

* fix: ensure tool call/response pairing in context truncation

* refactor: simplify fix_messages to single-pass state machine
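A single-pass repair of the kind described might look roughly like this (field names follow the OpenAI chat format; `fix_messages` here is a simplified sketch, not the PR's exact logic):

```python
def fix_messages(messages: list[dict]) -> list[dict]:
    """Drop 'tool' messages whose matching assistant tool_call was cut off
    by context truncation, in a single pass over the history."""
    fixed: list[dict] = []
    pending: set[str] = set()  # tool_call ids awaiting a response
    for msg in messages:
        role = msg.get("role")
        if role == "assistant":
            # A new assistant turn defines which tool responses are valid.
            pending = {tc["id"] for tc in msg.get("tool_calls") or []}
            fixed.append(msg)
        elif role == "tool":
            # Keep a tool response only if its call survived truncation.
            if msg.get("tool_call_id") in pending:
                fixed.append(msg)
        else:
            pending = set()
            fixed.append(msg)
    return fixed
```

An orphan tool response at the start of a truncated window (its assistant turn having been cut away) is silently dropped, which is what keeps providers from rejecting the request.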

* perf(cron): enhance future task session isolation

fixes: #5392

* feat: add useExtensionPage composable for managing plugin extensions

- Implemented a new composable `useExtensionPage` to handle various functionalities related to plugin management, including fetching extensions, handling updates, and managing UI states.
- Added support for conflict checking, plugin installation, and custom source management.
- Integrated search and filtering capabilities for plugins in the market.
- Enhanced user experience with dialogs for confirmations and notifications.
- Included pagination and sorting features for better plugin visibility.

* fix: clear markdown field when sending media messages via QQ Official Platform (#5445)

* fix: clear markdown field when sending media messages via QQ Official API

* refactor: use pop() to remove markdown key instead of setting None

* fix: cannot automatically get embedding dim when create embedding provider (#5442)

* fix(dashboard): harden cleanup of the temporary node used for API Key copying

* fix(embedding): auto-detection now probes the maximum usable OpenAI embedding dimension

* fix: normalize openai embedding base url and add hint key

* i18n: add embedding_api_base hint translations

* i18n: localize provider embedding/proxy metadata hints

* fix: show provider-specific embedding API Base URL hint as field subtitle

* fix(embedding): cap OpenAI detect_dim probes with early short-circuit

* fix(dashboard): return generic error on provider adapter import failure

* revert the detection logic

* fix: fix Pyright static type checking errors (#5437)

* refactor: correct SQLite queries, download callbacks, interface refactoring, and type adjustments

* feat: add a CallAction protocol and async invocation support to OneBotClient

* fix(telegram): avoid duplicate message_thread_id in streaming (#5430)

* perf: batch metadata query in KB retrieval to fix N+1 problem (#5463)

* perf: batch metadata query in KB retrieval to fix N+1 problem

Replace N sequential get_document_with_metadata() calls with a single
get_documents_with_metadata_batch() call using SQL IN clause.

Benchmark results (local SQLite):
- 10 docs: 10.67ms → 1.47ms (7.3x faster)
- 20 docs: 26.00ms → 2.68ms (9.7x faster)
- 50 docs: 63.87ms → 2.79ms (22.9x faster)

* refactor: use set[str] param type and chunk IN clause for SQLite safety

Address review feedback:
- Change doc_ids param from list[str] to set[str] to avoid unnecessary conversion
- Chunk IN clause into batches of 900 to stay under SQLite's 999 parameter limit
- Remove list() wrapping at call site, pass set directly
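The batched IN-clause pattern with chunking, as described above, might be sketched like this (the table and column names are illustrative, not the KB store's actual schema):

```python
import sqlite3
from itertools import islice

SQLITE_MAX_PARAMS = 900  # stay under SQLite's default 999-parameter limit


def get_documents_batch(conn: sqlite3.Connection, doc_ids: set[str]) -> dict[str, str]:
    """Fetch many rows with chunked IN clauses instead of one query per id."""
    ids = iter(doc_ids)
    result: dict[str, str] = {}
    # Consume the id set in chunks so each query stays within the limit.
    while chunk := list(islice(ids, SQLITE_MAX_PARAMS)):
        placeholders = ",".join("?" * len(chunk))
        rows = conn.execute(
            f"SELECT id, metadata FROM documents WHERE id IN ({placeholders})",
            chunk,
        )
        result.update(rows.fetchall())
    return result
```

Ids missing from the table are simply absent from the result, so callers can distinguish found from not-found documents.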

* fix: fix the issue where incomplete cleanup of residual plugins occurs when plugin loading fails (#5462)

* fix: fix incomplete cleanup of residual plugins when plugin loading fails

* fix: ruff format, apply bot suggestions

* Apply suggestion from @gemini-code-assist[bot]

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* chore: add TYPE_CHECKING imports and stage type references for type checking (#5474)

* fix(line): line adapter does not appear in the add platform dialog

fixes: #5477

* [bug] fix navigating to the wrong page from the LINE integration tutorial link (#5479)

Fixes #5478

* chore: bump version to 4.18.3

* feat: implement follow-up message handling in ToolLoopAgentRunner (#5484)

* feat: implement follow-up message handling in ToolLoopAgentRunner

* fix: correct import path for follow-up module in InternalAgentSubStage

* feat: implement websockets transport mode selection for chat (#5410)

* feat: implement websockets transport mode selection for chat

- Added transport mode selection (SSE/WebSocket) in the chat component.
- Updated conversation sidebar to include transport mode options.
- Integrated transport mode handling in message sending logic.
- Refactored message sending functions to support both SSE and WebSocket.
- Enhanced WebSocket connection management and message handling.
- Updated localization files for transport mode labels.
- Configured Vite to support WebSocket proxying.

* feat(webchat): refactor message parsing logic and integrate new parsing function

* feat(chat): add websocket API key extraction and scope validation

* Revert "可选后端,实现前后端分离" (optional backend, frontend/backend separation) (#5536)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: can <51474963+weijintaocode@users.noreply.github.com>
Co-authored-by: Soulter <905617992@qq.com>
Co-authored-by: Soulter <37870767+Soulter@users.noreply.github.com>
Co-authored-by: letr <123731298+letr007@users.noreply.github.com>
Co-authored-by: 搁浅 <id6543156918@gmail.com>
Co-authored-by: Helian Nuits <sxp20061207@163.com>
Co-authored-by: Gao Jinzhe <2968474907@qq.com>
Co-authored-by: DD斩首 <155905740+DDZS987@users.noreply.github.com>
Co-authored-by: Ubuntu <ubuntu@localhost.localdomain>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: エイカク <62183434+zouyonghe@users.noreply.github.com>
Co-authored-by: 鸦羽 <Raven95676@gmail.com>
Co-authored-by: Dt8333 <25431943+Dt8333@users.noreply.github.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Li-shi-ling <114913764+Li-shi-ling@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
Co-authored-by: Limitless <127183162+Limitless2023@users.noreply.github.com>
Co-authored-by: Limitless2023 <limitless@users.noreply.github.com>
Co-authored-by: evpeople <54983536+evpeople@users.noreply.github.com>
Co-authored-by: SnowNightt <127504703+SnowNightt@users.noreply.github.com>
Co-authored-by: xzj0898 <62733743+xzj0898@users.noreply.github.com>
Co-authored-by: stevessr <89645372+stevessr@users.noreply.github.com>
Co-authored-by: Waterwzy <2916963017@qq.com>
Co-authored-by: NayukiMeko <MekoNayuki@outlook.com>
Co-authored-by: 時壹 <137363396+KBVsent@users.noreply.github.com>
Co-authored-by: sanyekana <Clhikari@qq.com>
Co-authored-by: Chiu Chun-Hsien <95356121+911218sky@users.noreply.github.com>
Co-authored-by: Dream Tokenizer <60459821+Trance-0@users.noreply.github.com>
Co-authored-by: NanoRocky <76585834+NanoRocky@users.noreply.github.com>
Co-authored-by: Pizero <zhaory200707@outlook.com>
Co-authored-by: 雪語 <167516635+YukiRa1n@users.noreply.github.com>
Co-authored-by: whatevertogo <1879483647@qq.com>
Co-authored-by: whatevertogo <whatevertogo@users.noreply.github.com>
Co-authore…
miaoxutao123 pushed a commit to miaoxutao123/AstrBot that referenced this pull request Feb 28, 2026
…ndling of stop requests (AstrBotDevs#5380)

* feat: add stop functionality for active agent sessions and improve handling of stop requests

* feat: update stop button icon and tooltip in ChatInput component

* fix: correct indentation in tool call handling within ChatRoute class

Labels

area:core The bug / feature is about astrbot's core, backend
feature:chatui The bug / feature is about astrbot's chatui, webchat
size:L This PR changes 100-499 lines, ignoring generated files.


Development

Successfully merging this pull request may close these issues.

[Feature] Continuous agent execution cannot be interrupted; there is currently no way to stop an in-progress agent run.
