
[https://nvbugs/6105768][fix] Runtime GPU detection inside the test function: when total_memory < 80 GiB #13471

Open
tensorrt-cicd wants to merge 1 commit into NVIDIA:main from tensorrt-cicd:repair-bot-bug6105768

Conversation

Collaborator

@tensorrt-cicd tensorrt-cicd commented Apr 26, 2026

Summary

  • Root cause: DeepSeek-V3-Lite bf16 requires ~37 GiB per disaggregated worker. With two workers (context + generation) sharing a single L40S GPU (44.4 GiB), the total memory needed is ~74 GiB, causing OOM during model._apply(init_meta_tensor). The test was designed for H100 (80 GiB) with workers on separate GPUs but had no memory guard.
  • Fix: Runtime GPU detection inside the test function: when total_memory < 80 GiB and device_count < 2, fall back to TinyLlama-1.1B-Chat-v1.0 (~2 GiB) with a dedicated small-model config (disagg_config_cancel_stress_test_small.yaml) that uses conservative free_gpu_memory_fraction values (0.2/0.3 vs 0.3/0.85) to prevent KV cache allocation races on shared GPUs. The parametrize ID [DeepSeek-V3-Lite-bf16] is preserved since it comes from the fixture parameter, not the model actually loaded. On H100 or multi-GPU systems, the original DeepSeek-V3-Lite bf16 path is unchanged.
  • Automated fix generated by repair-bot
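The fallback described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the helper name and the non-small config filename are assumptions, while the model names, thresholds, and small-config filename come from the PR description.

```python
def select_model_and_config():
    """Illustrative sketch of the runtime GPU detection described above:
    fall back to a small model when a single GPU with < 80 GiB is detected."""
    try:
        import torch
        device_count = torch.cuda.device_count()
        total_gib = (
            torch.cuda.get_device_properties(0).total_memory / (1024 ** 3)
            if device_count > 0 else 0.0
        )
    except ImportError:
        device_count, total_gib = 0, 0.0

    if device_count < 2 and total_gib < 80:
        # Shared single GPU (e.g. L40S, 44.4 GiB): two ~37 GiB
        # DeepSeek-V3-Lite workers would OOM, so use TinyLlama (~2 GiB).
        return ("TinyLlama-1.1B-Chat-v1.0",
                "disagg_config_cancel_stress_test_small.yaml")
    # H100 (80 GiB) or multi-GPU: keep the original model path unchanged.
    # (The non-small config filename here is a guess for illustration.)
    return ("DeepSeek-V3-Lite-bf16", "disagg_config_cancel_stress_test.yaml")
```

Because the decision happens at runtime inside the test body rather than at collection time, the parametrize ID stays [DeepSeek-V3-Lite-bf16] regardless of which model is actually loaded.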

Test plan

  • Verify fix on the same GPU type as the original failure
  • Check for regressions in related tests

Links

Summary by CodeRabbit

  • Tests

    • Added a new small-scale cancellation stress test configuration.
    • Enhanced cancellation test logic to intelligently select model and configuration based on available GPU memory.
    • Improved support for single-GPU systems with lower memory configurations.
  • Chores

    • Re-enabled a previously skipped large-context cancellation test.

…M on L40S

The test_disaggregated_cancel_large_context_requests test fails with OOM
on L40S (44.4 GiB) because DeepSeek-V3-Lite bf16 requires ~37 GiB per
disaggregated worker, and two workers sharing a single GPU need ~74 GiB.

Add runtime GPU memory detection to fall back to TinyLlama with a
smaller config on single-GPU systems with <80 GiB memory. This preserves
the test's cancellation stress-test coverage while fitting within L40S
memory constraints. On H100 or multi-GPU systems, the original
DeepSeek-V3-Lite bf16 model is still used.

Also adds a TinyLlama-compatible disagg config with conservative memory
fractions to prevent KV cache allocation races on shared GPUs, and
removes the test waiver from waives.txt.

Signed-off-by: tensorrt-cicd <90828364+tensorrt-cicd@users.noreply.github.com>
@tensorrt-cicd tensorrt-cicd requested a review from a team as a code owner April 26, 2026 07:42
@coderabbitai
Contributor

coderabbitai Bot commented Apr 26, 2026

📝 Walkthrough


Introduces a new disaggregated cancellation stress-test configuration using a TinyLlama model. Updates the test runner with a prompt_len_range parameter and GPU-aware fallback logic that selects the model and configuration based on available GPU memory. Removes an existing test waiver directive.

Changes

Test Configuration
tests/integration/defs/disaggregated/test_configs/disagg_config_cancel_stress_test_small.yaml
New YAML configuration file for disaggregated cancellation stress testing with the TinyLlama-1.1B model, including tensor/pipeline parallelism, KV cache reuse settings, cache transceiver configuration, and CUDA graph options.

Test Logic
tests/integration/defs/disaggregated/test_disaggregated.py
Adds a prompt_len_range parameter to the run_disaggregated_cancel_test() function and implements conditional logic that detects GPU count and memory, falling back to TinyLlama with a reduced prompt range on single-GPU systems with <80 GiB of memory.

Waiver Updates
tests/integration/test_lists/waives.txt
Removes the SKIP waiver for test_disaggregated_cancel_large_context_requests[DeepSeek-V3-Lite-bf16].
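Per the PR summary, the key property of the new small-model config is its conservative free_gpu_memory_fraction split (0.2 for the context server, 0.3 for the generation server, versus 0.3/0.85 in the original), which keeps two workers from racing for KV cache on a shared GPU. A hypothetical fragment of what such a config might contain (field names are modeled on typical disaggregated-serving configs and are illustrative; only the fractions and the model come from the PR text):

```yaml
# Hypothetical sketch of disagg_config_cancel_stress_test_small.yaml;
# structure is illustrative, not the actual file contents.
model: TinyLlama-1.1B-Chat-v1.0
context_servers:
  num_instances: 1
  tensor_parallel_size: 1
  pipeline_parallel_size: 1
  kv_cache_config:
    free_gpu_memory_fraction: 0.2   # conservative: workers share one GPU
generation_servers:
  num_instances: 1
  kv_cache_config:
    free_gpu_memory_fraction: 0.3   # vs 0.85 in the original H100 config
```

Keeping the combined fractions well under 1.0 leaves headroom so that whichever worker allocates its KV cache second still finds enough free memory.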

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 5 passed
  • Title check ✅ Passed: The title is mostly related to the main change, identifying the runtime GPU detection logic as the key fix, though it is somewhat verbose, with extra asterisks and incomplete sentence structure.
  • Description check ✅ Passed: The PR description provides clear context on the root cause, the fix implementation, and the testing approach, though it lacks explicit coverage of the PR checklist items from the template.
  • Docstring Coverage ✅ Passed: Docstring coverage is 100.00%, which meets the required 80.00% threshold.
  • Linked Issues check ✅ Passed: Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check ✅ Passed: Check skipped because no linked issues were found for this pull request.



Contributor

@coderabbitai coderabbitai Bot left a comment
🧹 Nitpick comments (1)
tests/integration/defs/disaggregated/test_disaggregated.py (1)

1-1: Consider updating copyright year.

The copyright header shows 2022-2024, but since this file is being modified in 2026, it should be updated to 2022-2026 per coding guidelines. As per coding guidelines: "update year on modified files".

📝 Proposed fix
-# SPDX-FileCopyrightText: Copyright (c) 2022-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2022-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/defs/disaggregated/test_disaggregated.py` at line 1, Update
the copyright header year range in test_disaggregated.py from "2022-2024" to
"2022-2026" so the file reflects the current modification year; locate the
SPDX/header comment at the top of the file and change the year substring
accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 5baeccdf-3fcb-478d-926d-8942b9c81d28

📥 Commits

Reviewing files that changed from the base of the PR and between eeba2eb and ba07791.

📒 Files selected for processing (3)
  • tests/integration/defs/disaggregated/test_configs/disagg_config_cancel_stress_test_small.yaml
  • tests/integration/defs/disaggregated/test_disaggregated.py
  • tests/integration/test_lists/waives.txt
💤 Files with no reviewable changes (1)
  • tests/integration/test_lists/waives.txt

