breaking: stop providing CUDA 11 pre-built wheels #5080
Conversation
CUDA 11 is very old; TensorFlow, PyTorch, and JAX all dropped support for CUDA 11 long ago.
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##            devel    #5080   +/-   ##
=======================================
  Coverage   84.28%   84.28%
=======================================
  Files         709      709
  Lines       70561    70561
  Branches     3618     3619    +1
=======================================
+ Hits        59472    59473    +1
  Misses       9923     9923
+ Partials     1166     1165    -1
```
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
Pull request overview
This PR removes support for CUDA 11 pre-built wheels, since CUDA 11 is outdated and no longer supported by major frameworks such as TensorFlow, PyTorch, and JAX.
- Removed CUDA 11 dependency groups and wheel build configurations
- Cleaned up CI/CD workflows to remove CUDA 11 build matrices and setup steps
- Updated documentation to remove CUDA 11 installation instructions
Reviewed changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| source/install/docker/Dockerfile | Removed conditional PyTorch backend selection for CUDA 11 during wheel installation |
| pyproject.toml | Removed cu11 dependency group, updated manylinux image to latest, and cleaned up CUDA 11-specific build configuration |
| doc/install/install-from-source.md | Removed CUDA 11.8/cu118 PaddlePaddle installation instructions and Paddle C++ inference library link |
| doc/install/install-from-c-library.md | Updated to reflect only CUDA 12.2 library availability |
| doc/install/easy-install.md | Removed CUDA 11/11.8 installation tabs for TensorFlow, PyTorch, and PaddlePaddle backends |
| doc/install/easy-install-dev.md | Removed reference to devel_cu11 Docker tag |
| backend/find_tensorflow.py | Removed CUDA 11 version detection logic and TensorFlow 2.14.1 requirement |
| backend/find_pytorch.py | Removed CUDA 11 version detection logic and PyTorch 2.3.1 version pinning |
| .github/workflows/package_c.yml | Removed TensorFlow 2.14 build matrix entry for CUDA 11 C library |
| .github/workflows/build_wheel.yml | Removed CUDA 11.8 build matrix entry and QEMU/setuptools_scm setup steps |
| .github/workflows/build_cc.yml | Removed CUDA 11.8 variant from build matrix and associated CUDA toolkit installation |
Walkthrough
This PR systematically removes CUDA 11 support across CI workflows, backend version-detection logic, documentation, and build configuration. It consolidates the build matrix to CUDA 12.x and CPU variants, removes conditional CUDA 11 branching from the PyTorch/TensorFlow version selectors, and eliminates related build steps (QEMU, UV, AlmaLinux RPM imports).