feat(docker): add Dockerfile.arm64 for Linux aarch64 / SBC builds #9095
Open

Steve235lab wants to merge 1 commit into invoke-ai:main
Conversation
PyTorch does not publish `+cpu` wheels for linux/aarch64 on the pytorch.org WHL index, so the existing Dockerfile fails on Raspberry Pi and other ARM SBCs with:

    Distribution `torchvision==0.22.1+cpu` can't be installed because it doesn't have a source distribution or wheel for the current platform
    hint: manylinux_2_28_x86_64 / win_amd64 only

Dockerfile.arm64 works around this by:

1. COPYing `pyproject.toml` as a writable file instead of a read-only bind-mount.
2. Using awk/sed to strip the explicit per-extra pytorch-index source overrides (`[tool.uv.sources]` for torch, torchvision, pytorch-triton-rocm) so uv falls back to PyPI for all three packages.
3. Replacing all platform-specific version pins (`+cpu` / `+cu128` / `+rocm*`) in every optional extra with bare PyPI-compatible specifiers, because uv resolves all extras simultaneously and any unresolvable extra fails the whole build.
4. Running `uv sync --extra cpu` without `--frozen` (the lock file was generated on x86_64 and cannot be used verbatim on aarch64).

Tested on Raspberry Pi 5 (linux/aarch64, Ubuntu 24.04). Installs torch==2.7.1 and torchvision==0.22.1 from PyPI aarch64 wheels.

Also updates docker/README.md to document the ARM64 build path and corrects the "only x86_64 is supported" note.
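Steps 2 and 3 can be sketched roughly as follows. This is an illustrative assumption, not the exact contents of the Dockerfile: it assumes the overrides live in a dedicated `[tool.uv.sources]` section and that the pins use standard local-version suffixes; the real awk/sed programs may be more targeted.

```shell
# Step 2 (sketch): drop the [tool.uv.sources] section so uv resolves
# torch/torchvision/pytorch-triton-rocm from PyPI instead of the
# pytorch.org index. Assumes the overrides sit in their own section.
awk '
  /^\[tool\.uv\.sources\]/ { skip = 1; next }  # start skipping at the header
  skip && /^\[/            { skip = 0 }        # next [section] ends the skip
  !skip
' pyproject.toml > pyproject.patched.toml

# Step 3 (sketch): strip local-version suffixes (+cpu / +cu128 / +rocm*)
# from dependency specifiers so plain PyPI wheels can satisfy them.
sed -E -i 's/\+(cpu|cu[0-9]+|rocm[0-9.]+)//g' pyproject.patched.toml
```

For example, a pin like `torch==2.7.1+rocm6.3` becomes `torch==2.7.1`, which PyPI can satisfy on aarch64.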
Summary
- Adds `docker/Dockerfile.arm64` to support building InvokeAI on Linux ARM64 devices (Raspberry Pi, SBCs, etc.)
- Updates `docker/README.md` with build instructions and corrects the "only x86_64 is supported" note

Why it's needed: The existing Dockerfile fails on linux/aarch64 because PyTorch's `+cpu` wheels on download.pytorch.org/whl/cpu are only published for manylinux_2_28_x86_64 and win_amd64. There are no aarch64 wheels on that index.

How it works:
The new Dockerfile.arm64 diverges from the upstream Dockerfile in one key section, the dependency install step:

- `pyproject.toml` is COPYed as a writable file rather than a read-only bind-mount.
- awk strips the explicit `[tool.uv.sources]` index overrides for torch, torchvision, and pytorch-triton-rocm, so uv falls back to PyPI for those packages.
- sed replaces all platform-specific version pins (`+cpu` / `+cu128` / `+rocm*`) across all optional extras with bare PyPI-compatible specifiers. This is necessary because `uv sync` resolves all extras simultaneously: any unresolvable extra (e.g. rocm still pinning `torch==2.7.1+rocm6.3`) fails the entire build even if that extra isn't being installed.
- `uv sync --extra cpu` is run without `--frozen` since the lock file was generated on x86_64.

PyPI ships torch==2.7.1 and torchvision==0.22.1 with linux_aarch64 wheels, so the install succeeds cleanly.

Tested on: Raspberry Pi 5, linux/aarch64, Ubuntu 24.04, Docker 27.x. Full build succeeds and InvokeAI starts correctly.

Related Issues / Discussions
The upstream Dockerfile already contains a comment acknowledging this gap.

QA Instructions
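The image can be built and started with commands along these lines. The tag, volume name, and mount path are placeholder assumptions for illustration, not taken from this PR:

```shell
# Build the ARM64 image from the repo root using the new Dockerfile
# ("invokeai:arm64" is a placeholder tag).
docker build -f docker/Dockerfile.arm64 -t invokeai:arm64 .

# Run it, publishing the web UI port and persisting data in a named volume
# (the /invokeai mount path is an assumption).
docker run --rm -it -p 9090:9090 -v invokeai_data:/invokeai invokeai:arm64
```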
On any linux/aarch64 machine with Docker installed, build and run the image, then navigate to http://localhost:9090 and verify the UI loads. API-backed models (GPT Image, Gemini) can be configured and used without a GPU.

Checklist