[None][fix] Port KV cache V2 follow-up fixes #13488
yizhang-nv wants to merge 1 commit into NVIDIA:main
Conversation
Signed-off-by: Yi Zhang <187001205+yizhang-nv@users.noreply.github.com>
/bot run
📝 Walkthrough
The changes extend workspace-sizing methods across C++ and Python layers by adding a
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/tensorrt_llm/common/attentionOp.cpp`:
- Around line 806-811: total_kv_len may be negative and casting it directly to
size_t causes wrap/large values; before computing kv_buf_tokens (used to size
fp8_k_buf_size and fp8_v_buf_size and matched to enqueueContext), clamp or check
total_kv_len to ensure non-negative and then cast: compute a safe size_t kv_len
= (total_kv_len > 0 ? static_cast<size_t>(total_kv_len) : 0) and use
std::max(kv_len, static_cast<size_t>(mChunkPrefillBufferBatchSize) *
max_num_tokens) when assigning kv_buf_tokens, so fp8_k_buf_size and
fp8_v_buf_size are computed from a valid non-wrapped value.
In `@tensorrt_llm/_torch/pyexecutor/resource_manager.py`:
- Around line 1932-1952: The retry path that constructs a GPU-only config after
KVCacheManagerPy init fails can still leave the scheduler using
CapacitySchedulerPolicy.MAX_UTILIZATION (a suspending policy), causing
deadlocks; in the except block where you call _build_cache_config and set
self.kv_cache_manager_py_config and self.impl, modify the new config to use a
non-suspending scheduler policy (i.e. explicitly set the scheduler policy to a
non-suspending alternative instead of CapacitySchedulerPolicy.MAX_UTILIZATION)
before constructing KVCacheManagerPy, or alternatively re-raise the original
error instead of retrying; update the retry branch around KVCacheManagerPy,
_build_cache_config, and kv_cache_manager_py_config to implement one of these
two behaviors.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: 82ada20d-56a1-4ef8-af86-ba785a7734fe
📒 Files selected for processing (6)
cpp/tensorrt_llm/common/attentionOp.cpp
cpp/tensorrt_llm/common/attentionOp.h
cpp/tensorrt_llm/thop/attentionOp.cpp
tensorrt_llm/_torch/pyexecutor/_util.py
tensorrt_llm/_torch/pyexecutor/resource_manager.py
tensorrt_llm/runtime/kv_cache_manager_v2/_storage_manager.py
```cpp
// Use total_kv_len when available (KV cache reuse causes total_kv_len >> max_num_tokens).
// enqueueContext sizes these buffers by total_kv_len, so workspace must match.
size_t const kv_buf_tokens = std::max(
    static_cast<size_t>(total_kv_len), static_cast<size_t>(mChunkPrefillBufferBatchSize) * max_num_tokens);
fp8_k_buf_size = kv_buf_tokens * static_cast<size_t>(total_k_dim_all_heads);
fp8_v_buf_size = kv_buf_tokens * static_cast<size_t>(total_v_dim_all_heads);
```
Guard total_kv_len before casting to size_t.
static_cast<size_t>(total_kv_len) will wrap if total_kv_len < 0, which can explode workspace size calculations.
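For concreteness, here is a minimal Python analogy of the hazard (not TensorRT-LLM code): `ctypes` reproduces the same two's-complement wrap that `static_cast<size_t>` performs on a negative 64-bit value.

```python
# Illustrative only: mimic C++'s static_cast<size_t>(total_kv_len) on a 64-bit build.
import ctypes

total_kv_len = -3                              # imagine a sentinel / not-yet-populated length
wrapped = ctypes.c_size_t(total_kv_len).value  # wraps to 18446744073709551613
clamped = max(total_kv_len, 0)                 # the guarded equivalent: 0
print(f"wrapped={wrapped} clamped={clamped}")
```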
💡 Suggested fix
```diff
- size_t const kv_buf_tokens = std::max(
-     static_cast<size_t>(total_kv_len), static_cast<size_t>(mChunkPrefillBufferBatchSize) * max_num_tokens);
+ int32_t const safeTotalKvLen = std::max(total_kv_len, 0);
+ size_t const kv_bufTokensByChunk
+     = static_cast<size_t>(mChunkPrefillBufferBatchSize) * static_cast<size_t>(max_num_tokens);
+ size_t const kv_buf_tokens = std::max(static_cast<size_t>(safeTotalKvLen), kv_bufTokensByChunk);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/tensorrt_llm/common/attentionOp.cpp` around lines 806 - 811, total_kv_len
may be negative and casting it directly to size_t causes wrap/large values;
before computing kv_buf_tokens (used to size fp8_k_buf_size and fp8_v_buf_size
and matched to enqueueContext), clamp or check total_kv_len to ensure
non-negative and then cast: compute a safe size_t kv_len = (total_kv_len > 0 ?
static_cast<size_t>(total_kv_len) : 0) and use std::max(kv_len,
static_cast<size_t>(mChunkPrefillBufferBatchSize) * max_num_tokens) when
assigning kv_buf_tokens, so fp8_k_buf_size and fp8_v_buf_size are computed from
a valid non-wrapped value.
```python
try:
    self.impl = KVCacheManagerPy(config)
except (CuError, KVCacheOutOfMemoryError):
    if len(cache_tiers) > 1:
        logger.warning(
            "Failed to initialize KV cache manager with host cache "
            "tier (cuMemHostRegister may have failed). "
            "Retrying without host cache tier.")
        cache_tiers_gpu_only = [
            t for t in cache_tiers if isinstance(t, GpuCacheTierConfig)
        ]
        config = self._build_cache_config(
            kv_cache_config,
            tokens_per_block=tokens_per_block,
            vocab_size=vocab_size,
            cache_tiers=cache_tiers_gpu_only,
        )
        self.kv_cache_manager_py_config = config
        self.impl = KVCacheManagerPy(config)
    else:
        raise
```
GPU-only fallback can still deadlock the default V2 scheduler.
This retry drops the host tier, but _util.py still defaults V2 to CapacitySchedulerPolicy.MAX_UTILIZATION. Your own comment above says that policy relies on suspend/resume succeeding; without a host tier, resume eventually fails once the GPU tier fills, so this turns a startup-time host-registration failure into a latent runtime hang. Please either downgrade to a non-suspending policy when this fallback fires or surface the init failure instead of silently continuing.
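A rough sketch of the first remediation (switch to a non-suspending policy when the host tier is gone). The helper, its wiring, and the choice of `GUARANTEED_NO_EVICT` as the fallback are illustrative assumptions, not the actual `_util.py`/`resource_manager.py` code:

```python
# Hypothetical sketch only: the enum mirrors TensorRT-LLM's CapacitySchedulerPolicy
# names, but this helper and how it would be wired into the GPU-only fallback are
# assumptions for illustration.
from enum import Enum


class CapacitySchedulerPolicy(Enum):
    MAX_UTILIZATION = "max_utilization"          # may suspend/resume requests
    GUARANTEED_NO_EVICT = "guaranteed_no_evict"  # never suspends running requests


def pick_scheduler_policy(requested: CapacitySchedulerPolicy,
                          host_tier_available: bool) -> CapacitySchedulerPolicy:
    """Downgrade MAX_UTILIZATION when there is no host tier to resume from."""
    if (requested is CapacitySchedulerPolicy.MAX_UTILIZATION
            and not host_tier_available):
        return CapacitySchedulerPolicy.GUARANTEED_NO_EVICT
    return requested


# In the fallback branch one would call something like:
policy = pick_scheduler_policy(CapacitySchedulerPolicy.MAX_UTILIZATION,
                               host_tier_available=False)
assert policy is CapacitySchedulerPolicy.GUARANTEED_NO_EVICT
```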
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tensorrt_llm/_torch/pyexecutor/resource_manager.py` around lines 1932 - 1952,
The retry path that constructs a GPU-only config after KVCacheManagerPy init
fails can still leave the scheduler using
CapacitySchedulerPolicy.MAX_UTILIZATION (a suspending policy), causing
deadlocks; in the except block where you call _build_cache_config and set
self.kv_cache_manager_py_config and self.impl, modify the new config to use a
non-suspending scheduler policy (i.e. explicitly set the scheduler policy to a
non-suspending alternative instead of CapacitySchedulerPolicy.MAX_UTILIZATION)
before constructing KVCacheManagerPy, or alternatively re-raise the original
error instead of retrying; update the retry branch around KVCacheManagerPy,
_build_cache_config, and kv_cache_manager_py_config to implement one of these
two behaviors.
PR_Github #45650 [ run ] triggered by Bot. Commit:
PR_Github #45650 [ run ] completed with state
Summary by CodeRabbit
Bug Fixes
Improvements
Description
Port the KV cache V2 follow-up fixes from #12837 that are independent of enabling V2 by default.
This PR keeps the `use_kv_cache_manager_v2` default unchanged and excludes test waives/skips and cleanup workarounds. It includes:
- Use `max_batch_size` as the V2 scheduler per-iteration request budget.
- Size the FP8 KV workspace buffers by `total_kv_len` when KV cache reuse makes it larger than `max_num_tokens`.
- Respect `RLIMIT_MEMLOCK`, and retry without the host tier when host registration fails (a rough sketch of the limit check follows below).
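As a sketch of the `RLIMIT_MEMLOCK` item above (assuming the standard-library `resource` module; the helper name, headroom heuristic, and call site are illustrative, not the actual `_storage_manager.py` code):

```python
# Illustrative only: cap the pinned host-cache size by the process memlock limit
# so that host registration (e.g. cuMemHostRegister) is less likely to fail.
import resource


def cap_host_cache_bytes(requested_bytes: int) -> int:
    """Return a host-cache size that respects RLIMIT_MEMLOCK (soft limit)."""
    soft, _hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)
    if soft == resource.RLIM_INFINITY:
        return requested_bytes
    # Leave some headroom for other pinned allocations (heuristic).
    headroom = 64 * 1024 * 1024
    return min(requested_bytes, max(soft - headroom, 0))


if __name__ == "__main__":
    print(cap_host_cache_bytes(8 << 30))  # request an 8 GiB host tier
```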
Test Coverage
- `pre-commit run --files cpp/tensorrt_llm/common/attentionOp.cpp cpp/tensorrt_llm/common/attentionOp.h cpp/tensorrt_llm/thop/attentionOp.cpp tensorrt_llm/_torch/pyexecutor/_util.py tensorrt_llm/_torch/pyexecutor/resource_manager.py tensorrt_llm/runtime/kv_cache_manager_v2/_storage_manager.py`
- `python3 -m py_compile tensorrt_llm/_torch/pyexecutor/_util.py tensorrt_llm/_torch/pyexecutor/resource_manager.py tensorrt_llm/runtime/kv_cache_manager_v2/_storage_manager.py`
- `git diff --check`
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment
/bot help.