Conversation
…architecture support
…gging and installation
I do want to slim this down further, but not change so much that it becomes much harder to use. The rough plan was to install docker tooling inside this container and share the
…and updating image descriptions
…ilding and pushing the image
this is the code I have been working on locally to create a display container that is **separate** and **not ros enabled**. The idea with it not being ros enabled and running on debian:trixie-slim is to keep the size of this display container down and reduce complexity.
@marc-hanheide do we want to rebase this onto the
I'm not convinced by making an assumption of access to the docker socket. That's a deployment question that shouldn't have an impact on the container. We can use devcontainer labels to facilitate attaching to it.
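As a sketch of the label idea: the Dev Containers tooling reads a `devcontainer.metadata` label on a running container when attaching to it. The exact schema below is an assumption to be checked against the devcontainer spec, not something verified in this repo:

```yaml
services:
  vnc:
    labels:
      # Assumed label and payload: consumed by Dev Containers tooling on
      # attach to pick the remote user etc. Verify the schema before use.
      devcontainer.metadata: '[{"remoteUser": "lcas"}]'
```

This keeps the attach behaviour a property of the deployment rather than baking docker-socket access into the image.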
Yes, feel free to merge #1 if happy with it and bring it all together.
Hey @marc-hanheide, this is ready for some more oversight. I have a working demo for you to look at: vnc-demo.mp4

If you want to replicate this, this is the structure you need:

File Structure

`compose.yaml`:

```yaml
services:
  vnc:
    build:
      context: ./aoc_container_base
      dockerfile: vnc.dockerfile
    user: "lcas"
    ports:
      - "5801:5801"
    volumes:
      - x11:/tmp/.X11-unix
      - /var/run/docker.sock:/var/run/docker.sock
    devices:
      - /dev/dri:/dev/dri # GPU access for VirtualGL
    shm_size: '2gb'
    stdin_open: true
    tty: true
    networks:
      - aoc_ros
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu, compute, utility, graphics]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all

  simulator:
    build:
      context: ./aoc_robot_simulator
      dockerfile: .devcontainer/Dockerfile
    user: "ros"
    runtime: nvidia
    volumes:
      - x11:/tmp/.X11-unix
    depends_on:
      - vnc
    stdin_open: true
    tty: true
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu, compute, utility, graphics]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
      - DISPLAY=:1.0
      - ROS_DOMAIN_ID=1
      - ROBOT_MODEL=hunter
      - ROBOT_NAME=hunter_001
      - USE_SIM=true
      - WORLD=person_walking
    networks:
      - aoc_ros

  hri:
    build:
      context: ./aoc_hri
      dockerfile: .devcontainer/Dockerfile
    user: "ros"
    runtime: nvidia
    stdin_open: true
    tty: true
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu, compute, utility, graphics]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
      - DISPLAY=:1.0
      - USE_SIM=true
    volumes:
      - x11:/tmp/.X11-unix
    depends_on:
      - vnc
      - simulator
    networks:
      - aoc_ros

volumes:
  x11:

networks:
  aoc_ros:
```

Questions to answer:
I propose all ROS work should happen in a separate container, and if required all debugging tasks should be run in a separate container which is connected to both the network (so it can get ros topics and data) and the x11 socket. Would this debugging container take the role of being the terminals? Do we provide that with code server?
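As a sketch, such a debugging container could be wired into the same compose file like this (the service and image names are hypothetical placeholders, not part of this PR):

```yaml
services:
  debug:
    image: some-ros-debug-image:latest  # hypothetical; any ROS-enabled image with the tools you need
    environment:
      - DISPLAY=:1.0        # render GUI tools onto the shared VNC X server
      - ROS_DOMAIN_ID=1     # same domain as the simulator, so topics are visible
    volumes:
      - x11:/tmp/.X11-unix  # shared X11 socket from the vnc service
    networks:
      - aoc_ros             # same network, so DDS discovery works
    stdin_open: true
    tty: true
```

With the shared socket and network it can both subscribe to topics and open windows on the VNC desktop, which is what would let it act as the "terminals" container.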
Doing this allows extra flexibility, letting consumers pick and choose which tools they need. It also lets us use this exact same image as a base for building other containers in the same format, chopping and changing bits easily.
Pull request overview
Adds a new lightweight, non-ROS VNC/noVNC “display” container (Debian trixie-slim based) plus a devtools variant, and wires both into the existing multi-image GitHub Actions build pipeline.
Changes:
- Introduce `vnc` image (TurboVNC + XFCE + noVNC) with a runtime entrypoint and wallpapers.
- Add `vnc_devtools` image layer with Docker CLI tooling for interacting with sibling containers.
- Update CI workflows: split ROS builds into a dedicated reusable workflow and add build/push jobs for the VNC images.
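The CI split means each image build becomes a thin caller of a reusable workflow. A sketch of what such a caller job might look like (the input names `image_name` and `dockerfile` are assumptions, not the actual inputs of `_build-image.yaml` in this repo):

```yaml
# Hypothetical caller in docker-build-and-push.yaml
jobs:
  build-vnc:
    uses: ./.github/workflows/_build-image.yaml
    with:
      image_name: vnc
      dockerfile: vnc.dockerfile
```

Keeping tag and cache logic inside the reusable workflow is what lets the ROS and non-ROS pipelines diverge without duplicating the build steps.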
Reviewed changes
Copilot reviewed 7 out of 11 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| `vnc.dockerfile` | Defines the Debian-based VNC/noVNC desktop image (TurboVNC, XFCE, noVNC). |
| `vnc_devtools.dockerfile` | Extends the VNC base image with Docker CLI and extra tooling. |
| `docker/vnc-entrypoint.sh` | Starts VNC server, XFCE session, and noVNC proxy; applies xhost and wallpaper. |
| `docker/wallpapers/aoc.jpg` | Adds wallpaper asset used by the VNC container. |
| `docker/wallpapers/lcas.jpg` | Adds wallpaper asset used by the VNC container. |
| `.github/workflows/docker-build-and-push.yaml` | Adds vnc/vnc-devtools build jobs; switches ROS jobs to ROS-specific reusable workflow. |
| `.github/workflows/_build-ros-image.yaml` | New reusable workflow for ROS images with ROS-distro-specific tags/caching. |
| `.github/workflows/_build-image.yaml` | Simplifies non-ROS image workflow inputs/tags/caching. |
| `README.md` | Documents the new lcas.lincoln.ac.uk/vnc image. |
```sh
screen -dmS turbovnc bash -c '/opt/TurboVNC/bin/vncserver :1 -depth 24 -noxstartup -securitytypes TLSNone,X509None,None 2>&1 | tee /tmp/vnc.log; read -p "Press any key to continue..."'

DISPLAY=:1 xhost +local: 2>/dev/null
echo "xhost +local: applied to :1"
echo "xfce4 up"

echo "starting novnc ${NOVNC_VERSION}"
screen -dmS novnc bash -c '/usr/local/novnc/noVNC-${NOVNC_VERSION}/utils/novnc_proxy --vnc localhost:5901 --listen 5801 2>&1 | tee /tmp/novnc.log'
```
`vnc_devtools.dockerfile` (outdated):

```dockerfile
groupadd -g $DOCKER_GID docker && \
    usermod -aG docker ${username}
```
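For context, a common way this group mapping is parameterised is via a build arg so the in-container `docker` group GID can match the GID that owns the host's `/var/run/docker.sock`. This is a sketch of that pattern, not the actual contents of `vnc_devtools.dockerfile`:

```dockerfile
# Sketch: align the container's docker group with the host socket's GID.
# Override at build time, e.g.:
#   --build-arg DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
ARG DOCKER_GID=999
ARG username=lcas           # assumed to match the base image's user

USER root
RUN groupadd --gid "${DOCKER_GID}" docker && \
    usermod -aG docker "${username}"
USER ${username}
```

If the GIDs do not match, the non-root user inside the container gets permission-denied errors on the mounted socket, which is why the value has to come from the host at build (or entrypoint) time.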
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>

* Initial plan
* Remove unused VNC_PORT env var from vnc.dockerfile

Co-authored-by: cooperj <28831674+cooperj@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: cooperj <28831674+cooperj@users.noreply.github.com>
Regarding authentication and listen addresses, I did spend some time previously trying to get this working. The way I read copilot's suggestion is to give the dev the option of setting a password on the noVNC interface, disabled by default. What I think we should do is one of the following options:

I feel we should go in order of preference 2 → 1 → 3, with options 1 and 2 being the easiest to implement.
…inal tidy: run formatter and linter on all the dockerfiles to make them nicer
VNC auth is insecure anyway. So we should just disable it, and for applications where we need protection we do this by protecting the route (e.g. using caddy auth with OIDC). Let's not put a VNC password on it.
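A sketch of what protecting the route could look like (hostnames and the auth service are placeholders; `forward_auth` hands each request to an external authenticator, such as an OIDC-aware proxy, which is one way to realise the Caddy-plus-OIDC idea):

```Caddyfile
vnc.example.org {
    # Every request is checked against an external auth service first;
    # only authenticated requests are proxied through to noVNC.
    forward_auth auth:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Email
    }
    reverse_proxy vnc:5801
}
```

The container itself then stays auth-free, keeping the security decision at the deployment edge as suggested.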
I had a play with this, @cooperj; I may need a bit more context and documentation. Slightly concerned about what you have in the readme:

Are you suggesting overwriting those existing images (e.g.

Otherwise, I like the approach. I have added some docker compose setups for testing in 107fd67.

One real concern I have though: I have tried using your VNC container (with GPU) and another ros container (starting rviz or simply

I still vote for a ROS-enabled, VGL, and GPU-enabled VNC ROS visualisation image. It doesn't need cuda, but ROS installed. Rather than having a "pure desktop" that other containers connect to (which works but loses OpenGL), could we have a base VNC-enabled ROS image (NOT cuda!), a version derived from it that has all the standard ROS visualisations installed (like rviz, rqt, ...), and another that has gazebo installed?
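For reference on the OpenGL concern: a sibling container drawing on the shared X socket speaks plain GLX with no direct GPU context, so rendering falls back to software. VirtualGL's approach is to do the 3D rendering on the local GPU and push only finished images to the X display, but that means the `vglrun` wrapper and GPU access have to live in the *application's* container, not the display container. A sketch of that layout (service layout and installed tools are assumptions):

```yaml
services:
  simulator:
    # The app container itself needs VirtualGL installed plus GPU access;
    # vglrun redirects GLX rendering to the local GPU and sends 2D images
    # to the shared X display served by the vnc container.
    devices:
      - /dev/dri:/dev/dri
    environment:
      - DISPLAY=:1.0
    command: vglrun rviz2   # assumes VirtualGL and rviz2 exist in this image
```

This is essentially the argument for a VNC-enabled ROS base image: it keeps VirtualGL and the GL applications in the same container.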
Updated repository name and improved container documentation.
this is the code I have been working on locally to create a display container that is separate and not ros enabled
the idea with it not being ros enabled and running on debian:trixie-slim is to keep the size of this display container down, and reduce complexity.
This code is to be merged into the PR #1