
[Cherry-Pick][BugFix] redefine tmp_workspace using full tensor in append_attn(#6999)#7002

Open
lizhenyun01 wants to merge 9 commits into PaddlePaddle:feature/rl/cpu-cache-20250324 from lizhenyun01:0324_buffer

Conversation

@lizhenyun01
Collaborator

Motivation

In the split_kv path of the append_attn operator, change the tmp_workspace, tmp_m, and tmp_d buffers so that they are passed in by the backend and shared across layers, instead of being allocated per layer.
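The buffer-sharing idea above can be sketched as follows. This is a minimal illustration only, not the actual FastDeploy/Paddle implementation: the class and method names (`AttentionBackend`, `layer_buffers`) and the buffer shapes are assumptions, and NumPy stands in for device tensors.

```python
import numpy as np

class AttentionBackend:
    """Hypothetical sketch: the backend owns the split_kv scratch
    buffers once, sized for the full tensor (worst case), instead of
    each attention layer allocating its own copies."""

    def __init__(self, max_splits, max_tokens, num_heads, head_dim):
        # Allocated once at the worst-case ("full tensor") size.
        self.tmp_workspace = np.zeros(
            (max_splits, max_tokens, num_heads, head_dim), dtype=np.float32)
        # Per-split softmax max and denominator used by the reduce step.
        self.tmp_m = np.zeros((max_splits, max_tokens, num_heads), dtype=np.float32)
        self.tmp_d = np.zeros((max_splits, max_tokens, num_heads), dtype=np.float32)

    def layer_buffers(self, num_tokens):
        # Every layer receives views into the same shared storage,
        # trimmed to the current token count.
        return (self.tmp_workspace[:, :num_tokens],
                self.tmp_m[:, :num_tokens],
                self.tmp_d[:, :num_tokens])
```

Because each layer gets a view of the same backend-owned arrays, peak scratch memory no longer scales with the number of layers.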

Modifications

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code and run pre-commit before committing.
  • Add unit tests, or explain in this PR why they are not included.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

Copilot AI and others added 9 commits March 1, 2026 13:47
… in API server (PaddlePaddle#6551) (PaddlePaddle#6554)

* Initial plan

* [BugFix][APIServer] Add control_socket_disable to gunicorn options (cherry-pick of PaddlePaddle#6551)

Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>
…refix_tree_status_signal not initialized(PaddlePaddle#6531) (PaddlePaddle#6559)

* fix mtp acceptance rate decline

* [BugFix] Fix AttributeError in recycle_gpu_blocks when prefix_tree_status_signal not initialized

- Add hasattr check before accessing prefix_tree_status_signal
- The signal is only initialized in launch_cache_messager, not in __init__
- Fixes CI test failure in test_prefix_cache_manager.py

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [BugFix] Reset prefix cache when model weights are updating

- Call self.reset() before setting status to NORMAL in UPDATING state
- Ensure cache consistency when model weights change
- Consistent with CLEARING state handling

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
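The hasattr guard described in the commit above can be sketched like this. It is an illustrative stub, not the real PrefixCacheManager: the signal is modeled as a plain dict, and the recycle logic is omitted.

```python
class PrefixCacheManager:
    """Minimal sketch of the guard; names follow the commit message,
    everything else is a stand-in."""

    def launch_cache_messager(self):
        # As in the commit message: the signal is only created here,
        # never in __init__.
        self.prefix_tree_status_signal = {"status": "NORMAL"}

    def recycle_gpu_blocks(self, block_id):
        # hasattr guard avoids an AttributeError when the messager
        # was never launched (e.g. in unit tests).
        if hasattr(self, "prefix_tree_status_signal"):
            self.prefix_tree_status_signal["status"] = "RECYCLING"
        # ... actual block-recycling logic would go here ...
        return block_id
```

The guard makes the method safe to call in both configurations: with the messager launched it updates the signal, without it the method simply skips the update.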
…Paddle#6597)

Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com>
…#6655)

When `is_dummy_run=True`, calling `empty_input_forward` can cause
unexpected behavior. Add `and not is_dummy_run` guard for both
`_propose_cuda` and `_propose_xpu` paths.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
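The `and not is_dummy_run` guard described in the commit above can be illustrated with a small stub. The names (`DraftModel`, `propose`) are hypothetical; only the shape of the condition follows the commit message.

```python
class DraftModel:
    """Minimal stub that counts empty-input forward passes."""
    def __init__(self):
        self.empty_forward_calls = 0

    def empty_input_forward(self):
        self.empty_forward_calls += 1


def propose(model, has_input, is_dummy_run=False):
    # The fix: the empty-input forward pass only runs on real steps,
    # guarded by `... and not is_dummy_run`, so warm-up (dummy) runs
    # never trigger it.
    if not has_input and not is_dummy_run:
        model.empty_input_forward()
```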
@paddle-bot

paddle-bot bot commented Mar 24, 2026

Thanks for your contribution!

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
6 out of 7 committers have signed the CLA.

✅ liyonghua0910
✅ kevincheng2
✅ gongshaotian
✅ lizhenyun01
✅ yuanlehome
✅ Deleter-D
❌ Copilot



8 participants