breaking: stop providing CUDA 11 pre-built wheels (#5080)
CUDA 11 is very old; TensorFlow, PyTorch, and JAX dropped support for CUDA 11 long ago.
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Chores**
  * Removed CUDA 11 support across the build matrix; CUDA 12.8 is now the minimum supported version.
  * Dropped TensorFlow 2.14 support from C library builds; only TensorFlow 2.18 is available.
  * Simplified the build toolchain and updated the base image specifications.
* **Documentation**
  * Removed CUDA 11.8 installation guides and references.
  * Updated installation documentation to reflect the current CUDA and TensorFlow requirements.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: Jinzhe Zeng <jinzhe.zeng@ustc.edu.cn>
`doc/install/easy-install-dev.md` (0 additions, 2 deletions)
````diff
@@ -12,8 +12,6 @@ The [`devel` tag](https://github.com/deepmodeling/deepmd-kit/pkgs/container/deep
 docker pull ghcr.io/deepmodeling/deepmd-kit:devel
 ```
 
-For CUDA 11.8 support, use the `devel_cu11` tag.
-
 ## Install with pip
 
 Follow [the documentation for the stable version](easy-install.md#install-python-interface-with-pip), but add `--pre` and `--extra-index-url` options like below:
````
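As a hedged illustration of the `--pre` and `--extra-index-url` pattern mentioned above: the snippet below only prints the pip command it would run, and the index URL is a placeholder (the real URL is given in the easy-install documentation, not here).

```shell
#!/bin/sh
# Hedged sketch of the pip invocation pattern for devel builds.
# DEVEL_INDEX is a PLACEHOLDER, not the real index URL; take the actual
# URL from easy-install.md. --pre lets pip consider pre-release versions.
DEVEL_INDEX="https://example.invalid/simple"
CMD="pip install --pre deepmd-kit --extra-index-url ${DEVEL_INDEX}"
# Printed rather than executed, since the install is environment-dependent.
echo "${CMD}"
```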
```diff
-The library is built in Linux (GLIBC 2.17) with CUDA 12.2 (`libdeepmd_c.tar.gz`) or 11.8 (`libdeepmd_c_cu11.tar.gz`). It's noted that this package does not contain CUDA Toolkit and cuDNN, so one needs to download them from the NVIDIA website.
+The library is built in Linux (GLIBC 2.17) with CUDA 12.2 (`libdeepmd_c.tar.gz`). It's noted that this package does not contain CUDA Toolkit and cuDNN, so one needs to download them from the NVIDIA website.
 
 ## Use Pre-compiled C Library to build the LAMMPS plugin, i-PI driver, and GROMACS patch
```
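Since the pre-built C library targets GLIBC 2.17, a quick local sanity check can save a confusing linker error later. This is a minimal sketch, assuming a glibc-based system where `ldd --version` reports the version on its first line (musl-based systems behave differently).

```shell
#!/bin/sh
# Hedged sketch: check that the local glibc meets the 2.17 floor that the
# pre-built libdeepmd_c.tar.gz targets. Assumes glibc's ldd; musl's ldd
# prints a different banner and this parse would not apply.
ver=$(ldd --version 2>/dev/null | head -n1 | grep -o '[0-9][0-9]*\.[0-9][0-9]*' | tail -n1)
maj=${ver%%.*}
min=${ver#*.}
if [ "$maj" -gt 2 ] || { [ "$maj" -eq 2 ] && [ "$min" -ge 17 ]; }; then
  echo "glibc $ver: OK for libdeepmd_c (>= 2.17)"
else
  echo "glibc $ver: too old for libdeepmd_c (need >= 2.17)"
fi
```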
```diff
@@ -361,9 +355,7 @@ download the TensorFlow C library from [this page](https://www.tensorflow.org/in
 
 If you want to use C++ interface of Paddle, you need to compile the Paddle inference library(C++ interface) manually from the [linux-compile-by-make](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/install/compile/linux-compile-by-make.html), then use the `.so` and `.a` files in `Paddle/build/paddle_inference_install_dir/`.
 
-We also provide a weekly-build Paddle C++ inference library for Linux x86_64 with CUDA 11.8/12.3/CPU below:
-
-CUDA 11.8: [Cuda118_cudnn860_Trt8531_D1/latest/paddle_inference.tgz](https://paddle-qa.bj.bcebos.com/paddle-pipeline/GITHUB_Docker_Compile_Test_Cuda118_cudnn860_Trt8531_D1/latest/paddle_inference.tgz)
+We also provide a weekly-build Paddle C++ inference library for Linux x86_64 with CUDA 12.3/CPU below:
 
 CUDA 12.3: [Cuda123_cudnn900_Trt8616_D1/latest/paddle_inference.tgz](https://paddle-qa.bj.bcebos.com/paddle-pipeline/GITHUB_Docker_Compile_Test_Cuda123_cudnn900_Trt8616_D1/latest/paddle_inference.tgz)
```
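For the surviving CUDA 12.3 Paddle inference archive, the fetch-and-unpack steps might look like the sketch below. The commands are printed rather than executed, since the download is large and network-dependent; how the extracted directory is then passed to the build is covered by the install-from-source documentation, not shown here.

```shell
#!/bin/sh
# Hedged sketch: steps one might run to fetch and unpack the weekly
# CUDA 12.3 Paddle inference library linked above. Printed, not executed.
URL="https://paddle-qa.bj.bcebos.com/paddle-pipeline/GITHUB_Docker_Compile_Test_Cuda123_cudnn900_Trt8616_D1/latest/paddle_inference.tgz"
ARCHIVE=$(basename "$URL")
echo "wget -q $URL"
echo "tar xzf $ARCHIVE"
```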