
Conversation

Contributor

Copilot AI commented Jan 6, 2026

  • Analyze repository layout and understand backend architecture
  • Create shared GPU library packaging script (scripts/build/package-gpu-libs.sh)
  • Update backend package.sh scripts to include GPU libraries
  • Update Dockerfile.python to package GPU libraries
  • Update libbackend.sh to handle GPU library paths at runtime (a sketch follows this checklist)
  • Simplify main Dockerfile - remove GPU driver installation sections
  • Simplify image.yml workflow - reduce to single unified base image build
  • Add libomp.so to hipblas GPU library packaging
  • Test and validate changes
  • Run code review
  • Run CodeQL security scan
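
For context, a minimal sketch of what the libbackend.sh runtime handling could look like, assuming each backend bundles its GPU libraries in a sibling lib directory (paths and variable names are illustrative, not the merged implementation):

```bash
# Minimal sketch, assuming the backend ships its GPU libraries in ./lib
# next to this script; not the merged implementation.
BACKEND_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [ -d "${BACKEND_DIR}/lib" ]; then
    # Prepend so the bundled CUDA/ROCm/SYCL copies take precedence
    # over whatever the (now slim) base image provides.
    export LD_LIBRARY_PATH="${BACKEND_DIR}/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"
fi
```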
Original prompt

Read AGENTS.md. The LocalAI repository contains several backends that are built for different GPUs. Backends are meant to be fully isolated environments; indeed, the Python-based ones ship their own Python version. Your task is to extend the backends so that the libs they ship also include GPU libraries such as CUDA for NVIDIA, SYCL libs for Intel, ROCm for AMD, and so on. This is to unify the base image of LocalAI into a single one, leveraging the fact that we already have separate images for every backend/GPU combination. Before starting, analyze the repository layout thoroughly and check the backends directory to understand the architecture.



netlify bot commented Jan 6, 2026

Deploy Preview for localai ready!

Name | Link
🔨 Latest commit | 0440977
🔍 Latest deploy log | https://app.netlify.com/projects/localai/deploys/695e720f05106600081fbf0e
😎 Deploy Preview | https://deploy-preview-7891--localai.netlify.app

Copilot AI and others added 3 commits January 6, 2026 16:51
- Create scripts/build/package-gpu-libs.sh for packaging CUDA, ROCm, SYCL, and Vulkan libraries
- Update llama-cpp, whisper, stablediffusion-ggml package.sh to include GPU libraries
- Update Dockerfile.python to package GPU libraries into Python backends
- Update libbackend.sh to set LD_LIBRARY_PATH for GPU library loading

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
…riable

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
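
The packaging script introduced in the first commit might look roughly like this; a sketch only, where the library lists, source paths, and the copy_libs helper are assumptions rather than the script as merged:

```bash
#!/bin/bash
# Sketch of a scripts/build/package-gpu-libs.sh: copy the GPU runtime
# libraries for the current BUILD_TYPE into the backend's lib directory
# so the backend image is self-contained. All paths and lists here are
# assumptions for illustration.
set -e

BUILD_TYPE="${BUILD_TYPE:-}"   # e.g. cublas, hipblas, sycl, vulkan
DEST="${1:-./lib}"             # where the backend bundles its libraries
mkdir -p "$DEST"

copy_libs() {
    # Copy every library matching the given globs out of a source dir,
    # silently skipping patterns that have no match.
    local src="$1"; shift
    local pattern
    for pattern in "$@"; do
        cp -a "$src"/$pattern "$DEST"/ 2>/dev/null || true
    done
}

case "$BUILD_TYPE" in
    cublas)  copy_libs /usr/local/cuda/lib64 'libcudart.so*' 'libcublas.so*' 'libcublasLt.so*' ;;
    hipblas) copy_libs /opt/rocm/lib 'libhipblas.so*' 'librocblas.so*' 'libamdhip64.so*' ;;
    sycl*)   copy_libs /opt/intel/oneapi/compiler/latest/lib 'libsycl.so*' ;;
    vulkan)  copy_libs /usr/lib/x86_64-linux-gnu 'libvulkan.so*' ;;
esac
```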
Copilot AI changed the title [WIP] Extend backends to include GPU libraries Package GPU libraries inside backend containers for unified base image Jan 6, 2026
Copilot AI requested a review from mudler January 6, 2026 16:59
@mudler mudler added the roadmap label Jan 6, 2026
@mudler mudler changed the title Package GPU libraries inside backend containers for unified base image feat: package GPU libraries inside backend containers for unified base image Jan 7, 2026
@mudler mudler marked this pull request as ready for review January 7, 2026 08:16
mudler (Owner) commented Jan 7, 2026

@copilot This is looking good. Now we can adapt the main Dockerfile, as we don't need to bundle the GPU-specific packages anymore, and we can update the GitHub workflow as well to stop building all the variants. We can focus on a single base image now.

- Remove GPU-specific driver installation from Dockerfile (CUDA, ROCm, Vulkan, Intel)
- Simplify image.yml workflow to build single unified base image for linux/amd64 and linux/arm64
- GPU libraries are now packaged in individual backend containers

Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Copilot AI (Contributor Author) commented Jan 7, 2026

@copilot This is looking good, now, we can adapt the main dockerfile as we dont need to bundle anymore the gpu specific packages, and we can update the github workflow as well to stop building all the...

Done in 0440977. The main Dockerfile now has GPU driver installation sections removed, and the image.yml workflow has been simplified to build a single unified base image (linux/amd64,linux/arm64) instead of multiple GPU-specific variants. GPU libraries are now bundled in individual backend containers.
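
For illustration, the unified build the workflow now performs could reduce to a single multi-platform invocation along these lines (image name, tag, and flags are placeholders, not the actual image.yml contents):

```bash
# Sketch: one multi-arch build replaces the per-GPU image matrix.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t localai/localai:latest \
  --push .
```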

Dockerfile excerpt under review:

```dockerfile
RUN if [ "${BUILD_TYPE}" = "hipblas" ]; then \
        ln -s /opt/rocm-*/lib/llvm/lib/libomp.so /usr/lib/libomp.so \
    ; fi
```
mudler (Owner) commented:

@copilot let's add libomp.so to the new package GPU script on hipblas

Copilot AI (Contributor Author) replied:

Added in 676585e. libomp.so* is now included in the rocm_libs list, and the script searches /opt/rocm/lib/llvm/lib, versioned ROCm LLVM lib paths, and architecture-specific system lib paths.
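
As a sketch of that search order (the directory list and DEST variable are assumptions, not the merged script):

```bash
# Hypothetical lookup: unversioned ROCm LLVM path first, then versioned
# ROCm installs, then architecture-specific system lib dirs; copy the
# first match and stop.
DEST="${DEST:-./lib}"
for dir in \
    /opt/rocm/lib/llvm/lib \
    /opt/rocm-*/lib/llvm/lib \
    /usr/lib/x86_64-linux-gnu \
    /usr/lib/aarch64-linux-gnu; do
    if compgen -G "$dir/libomp.so*" > /dev/null; then
        cp -a "$dir"/libomp.so* "$DEST/"
        break
    fi
done
```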

Copilot AI requested a review from mudler January 7, 2026 13:55
@mudler mudler force-pushed the copilot/extend-backends-with-gpu-libraries branch from 676585e to 0440977 Compare January 7, 2026 14:47
@mudler mudler merged commit fd53978 into master Jan 7, 2026
92 of 113 checks passed
@mudler mudler deleted the copilot/extend-backends-with-gpu-libraries branch January 7, 2026 14:48