[None][test] Waive failed cases for main in QA CI #13504
crazydemo wants to merge 6 commits into NVIDIA:main from
Conversation
Bug(s): 6105765, 6106174, 6112497, 6112500 Requested by: qa@nvidia.com Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_TEST/2118/ Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
Bug(s): 5921674, 6087946, 6112497, 6112502, 6112503 Requested by: qa@nvidia.com Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_TEST/2117/ Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
Bug(s): 6011317, 6094102, 6114139, 6114140, 6114141, 6114142, 6114464 Requested by: qa@nvidia.com Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_CLUSTER_TEST/1394/ Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
Bug(s): 5705199, 6114464, 6114610, 6114612 Requested by: qa@nvidia.com Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_CLUSTER_TEST/1403/ Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
Bug(s): 5702795, 6114608, 6115560, 6115562 Requested by: qa@nvidia.com Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_CLUSTER_TEST/1395/ Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
Bug(s): 5981122, 6112497 Requested by: qa@nvidia.com Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_TEST/2135/ Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com> Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
/bot run --skip-test
📝 Walkthrough
Updates the test waiver list by adding 19 new SKIP entries for various test scenarios, including autodeploy accuracy, disaggregated workflows, DWDP serving, multimodal MoE dtype, and E2E multi-node evaluation cases across different model families and configurations.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/test_lists/waives.txt`:
- Line 436: Remove the duplicate waiver line "accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_bfloat16_4gpus[pp4-attn_backend=TRTLLM-torch_compile=False] SKIP (https://nvbugs/6112497)" from tests/integration/test_lists/waives.txt (the identical entry already exists earlier in the file around the original waiver for the same node id), leaving only the existing waiver entry to avoid redundancy and NVBugs tracking ambiguity.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: 6e047414-1d90-4422-a35b-dbfe2b001a7d
📒 Files selected for processing (1)
tests/integration/test_lists/waives.txt
accuracy/test_llm_api_pytorch.py::TestGPTOSS::test_eagle3_4gpus[v2_kv_cache-trtllm-one_model-no_overlap_scheduler] SKIP (https://nvbugs/6114821)
accuracy/test_llm_api_pytorch.py::TestDeepSeekR1::test_nvfp4_multi_gpus[throughput_tp4] SKIP (https://nvbugs/6110074)
accuracy/test_llm_api_autodeploy.py::TestNemotronNanoV3::test_accuracy[fp8-4-trtllm] SKIP (https://nvbugs/6112500)
accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_bfloat16_4gpus[pp4-attn_backend=TRTLLM-torch_compile=False] SKIP (https://nvbugs/6112497)
Duplicate waiver entry already exists earlier in this file.
The exact same node id is already waived at Line 169, so this new line is redundant and can cause tracking ambiguity across NVBugs.
Suggested cleanup
-accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_bfloat16_4gpus[pp4-attn_backend=TRTLLM-torch_compile=False] SKIP (https://nvbugs/6112497)

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
accuracy/test_llm_api_pytorch.py::TestLlama3_1_8BInstruct::test_bfloat16_4gpus[pp4-attn_backend=TRTLLM-torch_compile=False] SKIP (https://nvbugs/6112497)
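Duplicates like the one flagged above can be caught mechanically before committing. A minimal sketch, assuming waiver lines follow the `<test node id> SKIP (<nvbugs url>)` shape shown in this PR; the `find_duplicate_waivers` helper name is illustrative and not part of the repository:

```python
import re
from collections import Counter

def find_duplicate_waivers(path="tests/integration/test_lists/waives.txt"):
    """Return test node ids that are waived more than once in a waives file.

    Assumes each waiver line has the shape seen in this PR:
        <test node id> SKIP (<nvbugs url>)
    Non-matching lines (comments, blanks) are ignored.
    """
    pattern = re.compile(r"^(?P<node>\S+)\s+SKIP\b")
    node_ids = []
    with open(path) as fh:
        for line in fh:
            match = pattern.match(line.strip())
            if match:
                node_ids.append(match.group("node"))
    # Counter preserves first-seen order, so duplicates report in file order.
    return [node for node, count in Counter(node_ids).items() if count > 1]
```

Running such a check in a pre-commit hook would have flagged line 436 against the earlier waiver of the same node id automatically.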
PR_Github #45721 [ run ] triggered by Bot. Commit:
/bot run --skip-test
PR_Github #45727 [ run ] triggered by Bot. Commit:
PR_Github #45727 [ run ] completed with state
Auto-generated Waive PR
Created by: TensorRT LLM CI Report (requested by qa@nvidia.com)
Target branch: main, 8xA100
Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_TEST/2117/
Bug(s): 6087946, 6112497

Target branch: main, 8xL40S
Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_TEST/2118/
Bug(s): 6112497, 6112500

Target branch: main, 8xH100
Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_TEST/2135/
Bug(s): 5981122, 6112497

Target branch: main, B200
Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_CLUSTER_TEST/1403/
Bug(s): 5705199, 6114464, 6114610, 6114612
Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_CLUSTER_TEST/1404/
Bug(s): 6114608

Target branch: main, GB200
Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_CLUSTER_TEST/1394/
Bug(s): 6011317, 6094102, 6114139, 6114140, 6114141, 6114142, 6114464

Target branch: main, B300
Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_CLUSTER_TEST/1395/
Bug(s): 5702795, 6114608, 6115560, 6115562

Target branch: main, GB300
Jenkins build: https://prod.blsm.nvidia.com/swqa-tensorrt-qa-test/job/LLM_FUNCTION_CLUSTER_TEST/1401/
Bug(s): 6114139, 6114140, 6114141, 6114142
Waive entries added
Summary by CodeRabbit
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update the tava architecture diagram if there is a significant design change in this PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.