
fix: remove hardcoded assistant ack after auto_compact to fix 400#118

Open
deanbear wants to merge 2 commits into shareAI-lab:main from deanbear:auto_compact_opt

Conversation


@deanbear deanbear commented Mar 23, 2026

Summary

  • After compression, auto_compact returned a hardcoded assistant message, so in the auto_compact path `messages` ended with an assistant turn and the next API call failed with a 400 ("conversation must end with a user message")
  • After manual compact was triggered there was no return, so the loop continued and likewise called the API with a history ending in an assistant message

Changes

  • Remove the fake assistant message from the auto_compact return value (s06_context_compact.py, s_full.py)
  • Add a return after manual compact, handing control back to the outer REPL to wait for user input
  • Update the code examples in the three-language docs accordingly (docs/zh, docs/en, docs/ja)
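
The two changes above can be sketched as follows. This is a minimal illustration, not the actual code from s06_context_compact.py: the function names, the summarizer stub, and the command-dispatch shape are assumptions.

```python
# Hypothetical sketch of the fix; names are illustrative, not from the repo.

def compact(messages, summarize=lambda msgs: "summary of earlier turns"):
    """Compress history into a single user-role message.

    Before the fix, the returned list also ended with a hardcoded
    assistant ack, so the next API call saw a history ending in an
    assistant turn and failed with a 400. After the fix, only the
    user-role summary is returned, so the conversation still ends
    with a user message.
    """
    summary = summarize(messages)
    return [{"role": "user", "content": f"[compacted context]\n{summary}"}]

def handle_command(cmd, history):
    """Command dispatch sketch: manual /compact must hand control back
    to the outer REPL instead of falling through to the API call."""
    if cmd == "/compact":
        history[:] = compact(history)
        return True   # the fix: return here and wait for the next user input
    return False
```

The key invariant in both paths is the same: whatever compaction produces, the last message in the history must have role `"user"` before the next `client.messages.create` call.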

Test plan

  • Run python agents/s06_context_compact.py, type /compact, and confirm the 400 no longer occurs
  • Trigger auto_compact (token count over the threshold) and confirm the conversation can continue
  • Verify that after manual compact the REPL waits for user input instead of continuing the loop
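
A small guard like the following (not part of the PR; a hypothetical helper) would catch this whole class of bug in tests, by asserting the invariant before each API call:

```python
def assert_ends_with_user(messages):
    """Guard to run before each messages.create call: on this model the
    API rejects a history whose last message is an assistant turn."""
    if not messages or messages[-1]["role"] != "user":
        last = messages[-1]["role"] if messages else "empty"
        raise ValueError(f"history must end with a user message, got {last}")
```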

Additional fix: remove hardcoded assistant acks (s08–s11)

  • The same API error 400 ("conversation must end with a user message") showed up again in later tutorials
  • Also removed the fake assistant messages ("Noted background results." / "Noted inbox messages.") that immediately followed the background-results and inbox injections
  • These messages have no functional value, and two consecutive user messages work fine on the target model (claude sonnet 4.6)
  • Affects s08_background_tasks.py, s09_agent_teams.py, s10_team_protocols.py, s11_autonomous_agents.py, s_full.py, and the three-language docs
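
The injection pattern after this fix can be sketched like so. The function name and message formatting are illustrative, not taken from s08–s11; the point is that results are appended as user messages with no assistant ack in between, which the PR reports works fine on the target model:

```python
# Illustrative sketch of the s08–s11 pattern after the fix.

def inject_background_results(messages, results):
    """Append each background result as its own user-role message.

    Before the fix, each injection was followed by a hardcoded
    assistant ack ("Noted background results."), which could leave
    the history ending in an assistant turn. Consecutive user
    messages are accepted, so the ack is simply dropped.
    """
    for r in results:
        messages.append({"role": "user", "content": f"[background result] {r}"})
        # removed: messages.append({"role": "assistant",
        #                           "content": "Noted background results."})
    return messages
```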
Error observed before the fix (typing /compact in s06_context_compact.py):

> compact: Compressing...
[manual compact]
[transcript saved: /Users/bear/github/learn-claude-code/.transcripts/transcript_1774252544.jsonl]
Traceback (most recent call last):
  File "/Users/bear/github/learn-claude-code/agents/s06_context_compact.py", line 242, in <module>
    agent_loop(history)
    ~~~~~~~~~~^^^^^^^^^
  File "/Users/bear/github/learn-claude-code/agents/s06_context_compact.py", line 203, in agent_loop
    response = client.messages.create(
        model=MODEL, system=SYSTEM, messages=messages,
        tools=TOOLS, max_tokens=8000,
    )
  File "/Users/bear/github/learn-claude-code/.venv/lib/python3.14/site-packages/anthropic/_utils/_utils.py", line 282, in wrapper
    return func(*args, **kwargs)
  File "/Users/bear/github/learn-claude-code/.venv/lib/python3.14/site-packages/anthropic/resources/messages/messages.py", line 996, in create
    return self._post(
           ~~~~~~~~~~^
        "/v1/messages",
        ^^^^^^^^^^^^^^^
    ...<30 lines>...
        stream_cls=Stream[RawMessageStreamEvent],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/Users/bear/github/learn-claude-code/.venv/lib/python3.14/site-packages/anthropic/_base_client.py", line 1364, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/bear/github/learn-claude-code/.venv/lib/python3.14/site-packages/anthropic/_base_client.py", line 1137, in request
    raise self._make_status_error_from_response(err.response) from None
anthropic.BadRequestError: Error code: 400 - {'error': {'message': '{"error":{"code":"400","type":"invalid_params","message":"This model does not support assistant message prefill. The conversation must end with a user message."}}. Received Model Group=claude-sonnet-4-6\nAvailable Model Group Fallbacks=None', 'type': 'None', 'param': 'None', 'code': '400'}}

vercel bot commented Mar 23, 2026

@deanbear is attempting to deploy a commit to the crazyboym's projects Team on Vercel.

A member of the Team first needs to authorize it.

  s08/s09/s10/s11/s_full inject background-results and inbox as user
  messages before LLM calls. The paired hardcoded assistant "Noted..."
  responses were unnecessary — consecutive user messages work fine and
  the fake acks added noise with no functional value.

  Affected: agents/s08, s09, s10, s11, s_full + docs (zh/en/ja s08, s09)
