Commit 09486c5

breaking: stop providing CUDA 11 pre-built wheels (#5080)

CUDA 11 is very old: TensorFlow, PyTorch, and JAX all dropped support for CUDA 11 long ago.

## Summary by CodeRabbit

* **Chores**
  * Removed CUDA 11 support across the build matrix; CUDA 12.8 is now the minimum supported version.
  * Dropped TensorFlow 2.14 support from C library builds; only TensorFlow 2.18 is available.
  * Simplified the build toolchain and updated base image specifications.
* **Documentation**
  * Removed CUDA 11.8 installation guides and references.
  * Updated installation documentation to reflect current CUDA and TensorFlow requirements.

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>

1 parent: e816d01

File tree

11 files changed: +5 −101 lines changed


.github/workflows/build_cc.yml

Lines changed: 0 additions & 8 deletions

```diff
@@ -20,8 +20,6 @@ jobs:
       include:
         - variant: cpu
           dp_variant: cpu
-        - variant: cuda
-          dp_variant: cuda
         - variant: cuda120
           dp_variant: cuda
         - variant: rocm
@@ -36,12 +34,6 @@ jobs:
       - uses: lukka/get-cmake@latest
       - run: python -m pip install uv
       - run: source/install/uv_with_retry.sh pip install --system --group pin_tensorflow_cpu --group pin_pytorch_cpu --torch-backend cpu
-      - run: |
-          wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb \
-            && sudo dpkg -i cuda-keyring_1.0-1_all.deb \
-            && sudo apt-get update \
-            && sudo apt-get -y install cuda-cudart-dev-11-8 cuda-nvcc-11-8
-        if: matrix.variant == 'cuda'
       - run: |
           wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb \
             && sudo dpkg -i cuda-keyring_1.0-1_all.deb \
```

.github/workflows/build_wheel.yml

Lines changed: 1 addition & 17 deletions

```diff
@@ -29,13 +29,7 @@ jobs:
           python: 311
           platform_id: manylinux_x86_64
           dp_variant: cuda
-          cuda_version: 12.2
-        - os: ubuntu-latest
-          python: 311
-          platform_id: manylinux_x86_64
-          dp_variant: cuda
-          cuda_version: 11.8
-          dp_pkg_name: deepmd-kit-cu11
+          cuda_version: 12.8
         # macos-x86-64
         - os: macos-15-intel
           python: 311
@@ -64,14 +58,6 @@ jobs:
       - name: Install uv
         run: curl --proto '=https' --tlsv1.2 -LsSf https://github.com/astral-sh/uv/releases/download/0.2.24/uv-installer.sh | sh
         if: runner.os != 'Linux'
-      - uses: docker/setup-qemu-action@v3
-        name: Setup QEMU
-        if: matrix.platform_id == 'manylinux_aarch64' && matrix.os == 'ubuntu-latest'
-      # detect version in advance. See #3168
-      - run: |
-          echo "SETUPTOOLS_SCM_PRETEND_VERSION=$(pipx run uv tool run --from setuptools_scm python -m setuptools_scm)" >> $GITHUB_ENV
-          rm -rf .git
-        if: matrix.dp_pkg_name == 'deepmd-kit-cu11'
       - name: Build wheels
         uses: pypa/cibuildwheel@v3.3
         env:
@@ -126,8 +112,6 @@ jobs:
       include:
         - variant: ""
           cuda_version: "12"
-        - variant: "_cu11"
-          cuda_version: "11"
     steps:
       - name: Delete huge unnecessary tools folder
         run: rm -rf /opt/hostedtoolcache
```

.github/workflows/package_c.yml

Lines changed: 0 additions & 3 deletions

```diff
@@ -24,9 +24,6 @@ jobs:
         - tensorflow_build_version: "2.18"
           tensorflow_version: ""
           filename: libdeepmd_c.tar.gz
-        - tensorflow_build_version: "2.14"
-          tensorflow_version: ">=2.5.0,<2.15"
-          filename: libdeepmd_c_cu11.tar.gz
     steps:
       - name: Free Disk Space (Ubuntu)
         uses: insightsengineering/disk-space-reclaimer@v1
```

backend/find_pytorch.py

Lines changed: 0 additions & 3 deletions

```diff
@@ -116,9 +116,6 @@ def get_pt_requirement(pt_version: str = "") -> dict:
             cibw_requirement = read_dependencies_from_dependency_group(
                 "pin_pytorch_cpu"
             )
-        elif cuda_version in SpecifierSet(">=11,<12"):
-            # CUDA 11.8, cudnn 8
-            pt_version = "2.3.1"
         else:
             raise RuntimeError("Unsupported CUDA version") from None
     if pt_version == "":
```
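The surviving branch in `get_pt_requirement` dispatches on the detected CUDA version via `packaging`'s `SpecifierSet`. A minimal standalone sketch of that check, with CUDA 12 as the only remaining branch (`pick_pt_version` is a hypothetical helper for illustration, not DeePMD-kit's API):

```python
# Sketch of SpecifierSet-based CUDA dispatch, mirroring the branch that
# remains in backend/find_pytorch.py after CUDA 11 removal.
# pick_pt_version is a hypothetical name, not part of the repository.
from packaging.specifiers import SpecifierSet
from packaging.version import Version


def pick_pt_version(cuda_version: str) -> str:
    """Return a PyTorch version pin for the detected CUDA version."""
    if Version(cuda_version) in SpecifierSet(">=12,<13"):
        # CUDA 12.x: no explicit pin needed; any compatible PyTorch works
        return ""
    # The CUDA 11 branch (which pinned pt_version = "2.3.1") was removed
    raise RuntimeError("Unsupported CUDA version")


print(pick_pt_version("12.8"))  # accepted: prints an empty pin
```

The `Version(...) in SpecifierSet(...)` membership test is the same mechanism the backend uses to match a CUDA release against a version range.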

backend/find_tensorflow.py

Lines changed: 0 additions & 8 deletions

```diff
@@ -95,14 +95,6 @@ def find_tensorflow() -> tuple[str | None, list[str]]:
             requires.extend(
                 read_dependencies_from_dependency_group("pin_tensorflow_cpu")
             )
-        elif cuda_version in SpecifierSet(">=11,<12"):
-            # CUDA 11.8, cudnn 8
-            requires.extend(
-                [
-                    "tensorflow-cpu>=2.5.0,<2.15; platform_machine=='x86_64' and platform_system == 'Linux'",
-                ]
-            )
-            tf_version = "2.14.1"
         else:
             raise RuntimeError("Unsupported CUDA version") from None
     requires.extend(get_tf_requirement(tf_version)["cpu"])
```
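The deleted CUDA 11 branch pinned TensorFlow with a PEP 508 requirement string carrying an environment marker. A small sketch of how such a string is interpreted, using the `packaging` library (`req` is a local illustration variable, not the repository's code):

```python
# Parse the requirement string removed from backend/find_tensorflow.py
# and show how its environment marker gates installation by platform.
from packaging.requirements import Requirement

req = Requirement(
    "tensorflow-cpu>=2.5.0,<2.15; "
    "platform_machine=='x86_64' and platform_system == 'Linux'"
)
print(req.name)       # tensorflow-cpu
print(req.specifier)  # the version constraint (>=2.5.0,<2.15)
# The marker evaluates against a (partial) environment dict;
# unspecified keys fall back to the current interpreter's defaults.
print(req.marker.evaluate({"platform_machine": "x86_64",
                           "platform_system": "Linux"}))   # True
print(req.marker.evaluate({"platform_machine": "arm64",
                           "platform_system": "Darwin"}))  # False
```

This is why the old pin only took effect on x86_64 Linux wheels: pip evaluates the marker at install time and skips the requirement elsewhere.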

doc/install/easy-install-dev.md

Lines changed: 0 additions & 2 deletions

````diff
@@ -12,8 +12,6 @@ The [`devel` tag](https://github.com/deepmodeling/deepmd-kit/pkgs/container/deep
 docker pull ghcr.io/deepmodeling/deepmd-kit:devel
 ```
 
-For CUDA 11.8 support, use the `devel_cu11` tag.
-
 ## Install with pip
 
 Follow [the documentation for the stable version](easy-install.md#install-python-interface-with-pip), but add `--pre` and `--extra-index-url` options like below:
````

doc/install/easy-install.md

Lines changed: 0 additions & 29 deletions

````diff
@@ -96,14 +96,6 @@ pip install deepmd-kit[gpu,cu12]
 
 ::::
 
-::::{tab-item} CUDA 11
-
-```bash
-pip install deepmd-kit-cu11[gpu,cu11]
-```
-
-::::
-
 ::::{tab-item} CPU
 
 ```bash
@@ -128,15 +120,6 @@ pip install deepmd-kit[torch]
 
 ::::
 
-::::{tab-item} CUDA 11.8
-
-```bash
-pip install torch --index-url https://download.pytorch.org/whl/cu118
-pip install deepmd-kit-cu11
-```
-
-::::
-
 ::::{tab-item} CPU
 
 ```bash
@@ -194,18 +177,6 @@ pip install deepmd-kit
 
 ::::
 
-::::{tab-item} CUDA 11.8
-
-```bash
-# release version
-pip install paddlepaddle-gpu==3.1.1 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
-# nightly-build version
-# pip install --pre paddlepaddle-gpu -i https://www.paddlepaddle.org.cn/packages/nightly/cu118/
-pip install deepmd-kit
-```
-
-::::
-
 ::::{tab-item} CPU
 
 ```bash
````

doc/install/install-from-c-library.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -12,7 +12,7 @@ wget https://github.com/deepmodeling/deepmd-kit/releases/latest/download/libdeep
 tar xzf libdeepmd_c.tar.gz
 ```
 
-The library is built in Linux (GLIBC 2.17) with CUDA 12.2 (`libdeepmd_c.tar.gz`) or 11.8 (`libdeepmd_c_cu11.tar.gz`). It's noted that this package does not contain CUDA Toolkit and cuDNN, so one needs to download them from the NVIDIA website.
+The library is built in Linux (GLIBC 2.17) with CUDA 12.2 (`libdeepmd_c.tar.gz`). It's noted that this package does not contain CUDA Toolkit and cuDNN, so one needs to download them from the NVIDIA website.
 
 ## Use Pre-compiled C Library to build the LAMMPS plugin, i-PI driver, and GROMACS patch
````

doc/install/install-from-source.md

Lines changed: 1 addition & 9 deletions

```diff
@@ -104,12 +104,6 @@ pip install paddlepaddle-gpu==3.1.1 -i https://www.paddlepaddle.org.cn/packages/
 # nightly-build version
 # pip install --pre paddlepaddle-gpu -i https://www.paddlepaddle.org.cn/packages/nightly/cu126/
 
-# cu118
-# release version
-pip install paddlepaddle-gpu==3.1.1 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
-# nightly-build version
-# pip install --pre paddlepaddle-gpu -i https://www.paddlepaddle.org.cn/packages/nightly/cu118/
-
 # cpu
 # release version
 pip install paddlepaddle==3.1.1 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
@@ -361,9 +355,7 @@ download the TensorFlow C library from [this page](https://www.tensorflow.org/in
 
 If you want to use C++ interface of Paddle, you need to compile the Paddle inference library(C++ interface) manually from the [linux-compile-by-make](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/install/compile/linux-compile-by-make.html), then use the `.so` and `.a` files in `Paddle/build/paddle_inference_install_dir/`.
 
-We also provide a weekly-build Paddle C++ inference library for Linux x86_64 with CUDA 11.8/12.3/CPU below:
-
-CUDA 11.8: [Cuda118_cudnn860_Trt8531_D1/latest/paddle_inference.tgz](https://paddle-qa.bj.bcebos.com/paddle-pipeline/GITHUB_Docker_Compile_Test_Cuda118_cudnn860_Trt8531_D1/latest/paddle_inference.tgz)
+We also provide a weekly-build Paddle C++ inference library for Linux x86_64 with CUDA 12.3/CPU below:
 
 CUDA 12.3: [Cuda123_cudnn900_Trt8616_D1/latest/paddle_inference.tgz](https://paddle-qa.bj.bcebos.com/paddle-pipeline/GITHUB_Docker_Compile_Test_Cuda123_cudnn900_Trt8616_D1/latest/paddle_inference.tgz)
```

pyproject.toml

Lines changed: 1 addition & 19 deletions

```diff
@@ -116,16 +116,6 @@ ipi = [
 gui = [
     "dpgui",
 ]
-cu11 = [
-    "nvidia-cuda-runtime-cu11",
-    "nvidia-cublas-cu11",
-    "nvidia-cufft-cu11",
-    "nvidia-curand-cu11",
-    "nvidia-cusolver-cu11",
-    "nvidia-cusparse-cu11",
-    "nvidia-cudnn-cu11<9",
-    "nvidia-cuda-nvcc-cu11",
-]
 cu12 = [
     "nvidia-cuda-runtime-cu12",
     "nvidia-cublas-cu12",
@@ -254,9 +244,7 @@ test-command = [
 test-extras = ["cpu", "test", "lmp", "ipi", "torch", "paddle"]
 build = ["cp311-*"]
 skip = ["*-win32", "*-manylinux_i686", "*-musllinux*"]
-# TODO: uncomment to use the latest image when CUDA 11 is deprecated
-# manylinux-x86_64-image = "manylinux_2_28"
-manylinux-x86_64-image = "quay.io/pypa/manylinux_2_28_x86_64:2022-11-19-1b19e81"
+manylinux-x86_64-image = "manylinux_2_28"
 manylinux-aarch64-image = "manylinux_2_28"
 
 [tool.cibuildwheel.macos]
@@ -288,15 +276,9 @@ environment-pass = [
 ]
 before-all = [
     """if [ ! -z "${DP_PKG_NAME}" ]; then sed -i "s/name = \\"deepmd-kit\\"/name = \\"${DP_PKG_NAME}\\"/g" pyproject.toml; fi""",
-    # https://almalinux.org/blog/2023-12-20-almalinux-8-key-update/
-    """rpm --import https://repo.almalinux.org/almalinux/RPM-GPG-KEY-AlmaLinux""",
     """{ if [ "$(uname -m)" = "x86_64" ] ; then yum config-manager --add-repo http://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo && yum install -y cuda-nvcc-${CUDA_VERSION/./-} cuda-cudart-devel-${CUDA_VERSION/./-}; fi }""",
-    # uv is not available in the old manylinux image
-    """{ if [ "$(uname -m)" = "x86_64" ] ; then pipx install uv; fi }""",
 ]
 before-build = [
-    # old build doesn't support uv
-    """{ if [ "$(uname -m)" = "x86_64" ] ; then uv pip install --system -U build; fi }""",
 ]
 [tool.cibuildwheel.linux.environment]
 PIP_PREFER_BINARY = "1"
```
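The retained `before-all` step derives RPM package names from `CUDA_VERSION` with bash pattern substitution: `${CUDA_VERSION/./-}` replaces the first `.` with `-`, so `12.8` yields package names like `cuda-nvcc-12-8`. A standalone sketch of that expansion (illustrative only; nothing is installed):

```shell
#!/usr/bin/env bash
# Demonstrate the ${VAR/pattern/replacement} expansion used in before-all.
# "12.8" -> "12-8": the dot is replaced so the string matches the
# versioned RPM naming scheme of the NVIDIA CUDA repository.
CUDA_VERSION="12.8"
echo "cuda-nvcc-${CUDA_VERSION/./-}"          # cuda-nvcc-12-8
echo "cuda-cudart-devel-${CUDA_VERSION/./-}"  # cuda-cudart-devel-12-8
```

Only the first match is replaced, which is sufficient here because CUDA versions in this config carry a single dot.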
