tests: skip local NVML runtime mismatches while preserving CI failures #1739

cpcloud wants to merge 4 commits into NVIDIA:main
Conversation
Auto-sync is disabled for ready-for-review pull requests in this repository. Workflows must be run manually.
Driver upgrades without a reboot can temporarily leave NVML in a driver/library mismatch state, which is a common local developer scenario. Route NVML-dependent checks through shared fixtures/helpers so local runs skip cleanly while CI still fails fast on real NVML init/load regressions. Made-with: Cursor
Run repository hooks and keep the NVML fixture changes compliant by applying ruff import ordering and formatting adjustments. Made-with: Cursor
Apply hook-driven import ordering/spacing updates introduced by rebasing onto upstream/main so pre-commit passes cleanly. Made-with: Cursor
Force-pushed from 3ae1ece to 4272269
/ok to test
mdboom left a comment
This looks great, in terms of centralizing all of this logic.
However, I'm not sure why some tests unrelated to NVML now have this tagging.
And within system/test_system_system.py, we need to keep most of those tests still running even when NVML is totally missing.
cuda_core/tests/test_memory.py
Outdated
@pytest.mark.parametrize("change_device", [True, False])
@pytest.mark.usefixtures("require_nvml_runtime_or_skip_local")
Why do the tests here now require a working NVML? These tests predate NVML in cuda_bindings... What's the root cause?
I'll look into why this ends up being required.
get_num_devices will use NVML if it's available. So, yes, these are pre-existing tests, but they're hitting other APIs now.
Now that they route through NVML when it's available, they're a place where we need to skip if NVML is present but fails in an expected way.
What's the root cause?
The root cause is that I upgraded the driver without rebooting. Since NVML is a driver library, I can no longer use it without a reboot.
I don't want to reboot to keep working in the repo, especially if I'm working on something unrelated to any of this code.
from .conftest import skip_if_nvml_unsupported

pytestmark = skip_if_nvml_unsupported
Most of the tests in this file are expected to run, other than test_gpu_driver_version, even without an NVML available.
I'll look into why this ends up being required.
Remove the module-level pytestmark from test_system_system.py and the per-test require_nvml_runtime_or_skip_local markers from test_memory.py. These tests don't inherently need NVML; the NVML-specific tests already have individual @skip_if_nvml_unsupported decorators. Made-with: Cursor
Summary

Route NVML-dependent checks through shared require_nvml_runtime_or_skip_local fixtures in cuda_bindings and cuda_core tests.

Test plan

- pixi run --manifest-path cuda_bindings pytest cuda_bindings/tests --override-ini norecursedirs=examples -k "not test_cufile"
- CI=1 pixi run --manifest-path cuda_bindings pytest cuda_bindings/tests/nvml/test_init.py::test_init_ref_count (expected error on NVML mismatch in CI mode)
- pixi run --manifest-path cuda_core test (currently blocked in this workspace by an unrelated import mismatch: cuda.core._resource_handles does not export expected C function create_culink_handle)
- CI=1 pixi run --manifest-path cuda_core pytest cuda_core/tests/system/test_system_system.py::test_num_devices (same unrelated import mismatch blocker)

Made with Cursor