
VNC Container Implementation#2

Open
cooperj wants to merge 41 commits intomainfrom
display

Conversation

@cooperj
Member

@cooperj cooperj commented Feb 16, 2026

this is the code I have been working on locally to create a display container that is separate and not ROS-enabled.

the idea with it not being ROS-enabled and running on debian:trixie-slim is to keep the size of this display container down and reduce complexity.

This code is to be merged into the PR #1

@cooperj
Member Author

cooperj commented Feb 16, 2026

I do want to slim this down further, but not change so much that it becomes much harder to use.

The rough plan was to install Docker tooling inside this container and share docker.sock, allowing us to have desktop shortcuts that run docker exec into other containers; I propose we configure that with Docker labels.
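The shortcut idea above could be sketched like this. This is a hypothetical helper, not the agreed implementation: the `aoc.shortcut` label key/value and the generated command line are assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: compose the `docker exec` command line that a desktop
# shortcut's Exec= entry could use to attach to a container selected by a
# Docker label. The label key/value below is an assumption, not a convention
# agreed in this PR.
exec_cmd_for_label() {
  local label="$1"
  # At runtime the inner $(docker ps ...) would resolve the first container
  # carrying the label; here we only compose the command string, so this
  # sketch does not need a running Docker daemon.
  printf 'docker exec -it $(docker ps -q --filter "label=%s" | head -n1) bash' "$label"
}

exec_cmd_for_label "aoc.shortcut=simulator"
```

The generated string could then be dropped into a `.desktop` file's `Exec=` line inside the display container.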

@cooperj cooperj marked this pull request as ready for review February 16, 2026 14:34
@cooperj
Member Author

cooperj commented Feb 16, 2026

@marc-hanheide do we want to rebase this onto the main branch and merge this after #1?

@marc-hanheide
Member

> I do want to slim this down further, but not change so much that it becomes much harder to use.
>
> The rough plan was to install Docker tooling inside this container and share docker.sock, allowing us to have desktop shortcuts that run docker exec into other containers; I propose we configure that with Docker labels.

I'm not convinced by making an assumption of access to docker socket. That's a deployment question that shouldn't have an impact on the container.

we can use devcontainer labels to facilitate attaching to it

@marc-hanheide
Member

> @marc-hanheide do we want to rebase this onto the main branch and merge this after #1?

yes, feel free to merge #1 if happy with it and bring it all together

@cooperj
Member Author

cooperj commented Feb 25, 2026

Hey @marc-hanheide this is ready for some more oversight, I have a working demo for you to look at:

vnc-demo.mp4

If you want to replicate this, this is the structure you need:

File Structure

```
├── aoc_container_base
│   ├── base.dockerfile
│   ├── cuda_desktop.dockerfile
│   ├── cuda.dockerfile
│   ├── docker
│   ├── LICENSE
│   ├── README.md
│   └── vnc.dockerfile
├── aoc_hri
│   ├── build
│   ├── db
│   ├── docs
│   ├── install
│   ├── log
│   ├── README.md
│   └── src
├── aoc_robot_simulator
│   ├── contrib
│   ├── README.md
│   ├── robots
│   └── src
└── compose.yaml
```

compose.yaml

```yaml
services:
  vnc:
    build:
      context: ./aoc_container_base
      dockerfile: vnc.dockerfile
    user: "lcas"
    ports:
      - "5801:5801"
    volumes:
      - x11:/tmp/.X11-unix
      - /var/run/docker.sock:/var/run/docker.sock
    devices:
      - /dev/dri:/dev/dri  # GPU access for VirtualGL
    shm_size: '2gb'
    stdin_open: true
    tty: true
    networks:
      - aoc_ros
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu, compute, utility, graphics]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all

  simulator:
    build:
      context: ./aoc_robot_simulator
      dockerfile: .devcontainer/Dockerfile
    user: "ros"
    runtime: nvidia
    volumes:
      - x11:/tmp/.X11-unix
    depends_on:
      - vnc
    stdin_open: true
    tty: true
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu, compute, utility, graphics]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
      - DISPLAY=:1.0
      - ROS_DOMAIN_ID=1
      - ROBOT_MODEL=hunter
      - ROBOT_NAME=hunter_001
      - USE_SIM=true
      - WORLD=person_walking
    networks:
      - aoc_ros

  hri:
    build:
      context: ./aoc_hri
      dockerfile: .devcontainer/Dockerfile
    user: "ros"
    runtime: nvidia
    stdin_open: true
    tty: true
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu, compute, utility, graphics]
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
      - DISPLAY=:1.0
      - USE_SIM=true
    volumes:
      - x11:/tmp/.X11-unix
    depends_on:
      - vnc
      - simulator
    networks:
      - aoc_ros

volumes:
  x11:

networks:
  aoc_ros:
```

Questions to answer:

  • Do we want to have docker or a terminal available in the noVNC session?
    • Is this a value add?
  • How do we want to work out the GH workflows: should they be adapted to support the VNC container, or should that container be built separately?
    • How about the name lcas.lincoln.ac.uk/vnc:1 (where 1 is the tagged release number)?

I propose all ROS work should happen in a separate container, and, if required, all debugging tasks should be run in a separate container which is connected to both the network (so it can get ROS topics and data) and the X11 socket.

Would this debugging container take the role of being the terminals? Do we provide that with code-server?
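As a concrete sketch of that proposal, such a debugging service might look like the following. The volume and network names are taken from the compose file above; the image name and domain ID are assumptions, since any ROS-enabled image on the same DDS network would do.

```yaml
  debug:
    image: lcas.lincoln.ac.uk/ros:jazzy  # assumption: any ROS-enabled image works here
    user: "ros"
    environment:
      - DISPLAY=:1.0        # render onto the VNC container's X server
      - ROS_DOMAIN_ID=1     # same domain as the simulator, so topics are visible
    volumes:
      - x11:/tmp/.X11-unix  # shared X11 socket from the vnc service
    networks:
      - aoc_ros             # same network as simulator/hri for DDS discovery
    stdin_open: true
    tty: true
```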

@cooperj cooperj changed the title from "feat: add seperate display container" to "VNC Container Implementation" Feb 25, 2026
@cooperj cooperj mentioned this pull request Mar 13, 2026
cooperj and others added 4 commits March 13, 2026 10:05
Do this to allow extra flexibility, letting consumers pick and choose which tools they need.

This also allows us to use this exact same image to build other containers in the same format and chop and change bits easily
Copy link
Contributor

Copilot AI left a comment


Pull request overview

Adds a new lightweight, non-ROS VNC/noVNC “display” container (Debian trixie-slim based) plus a devtools variant, and wires both into the existing multi-image GitHub Actions build pipeline.

Changes:

  • Introduce vnc image (TurboVNC + XFCE + noVNC) with a runtime entrypoint and wallpapers.
  • Add vnc_devtools image layer with Docker CLI tooling for interacting with sibling containers.
  • Update CI workflows: split ROS builds into a dedicated reusable workflow and add build/push jobs for the VNC images.

Reviewed changes

Copilot reviewed 7 out of 11 changed files in this pull request and generated 9 comments.

| File | Description |
| --- | --- |
| vnc.dockerfile | Defines the Debian-based VNC/noVNC desktop image (TurboVNC, XFCE, noVNC). |
| vnc_devtools.dockerfile | Extends the VNC base image with Docker CLI and extra tooling. |
| docker/vnc-entrypoint.sh | Starts VNC server, XFCE session, and noVNC proxy; applies xhost and wallpaper. |
| docker/wallpapers/aoc.jpg | Adds wallpaper asset used by the VNC container. |
| docker/wallpapers/lcas.jpg | Adds wallpaper asset used by the VNC container. |
| .github/workflows/docker-build-and-push.yaml | Adds vnc/vnc-devtools build jobs; switches ROS jobs to the ROS-specific reusable workflow. |
| .github/workflows/_build-ros-image.yaml | New reusable workflow for ROS images with ROS-distro-specific tags/caching. |
| .github/workflows/_build-image.yaml | Simplifies non-ROS image workflow inputs/tags/caching. |
| README.md | Documents the new lcas.lincoln.ac.uk/vnc image. |


Comment on lines +9 to +10
```shell
screen -dmS turbovnc bash -c '/opt/TurboVNC/bin/vncserver :1 -depth 24 -noxstartup -securitytypes TLSNone,X509None,None 2>&1 | tee /tmp/vnc.log; read -p "Press any key to continue..."'
```

Member Author


See comment

Comment on lines +31 to +32
```shell
DISPLAY=:1 xhost +local: 2>/dev/null
echo "xhost +local: applied to :1"
echo "xfce4 up"

echo "starting novnc ${NOVNC_VERSION}"
screen -dmS novnc bash -c '/usr/local/novnc/noVNC-${NOVNC_VERSION}/utils/novnc_proxy --vnc localhost:5901 --listen 5801 2>&1 | tee /tmp/novnc.log'
```
Member Author


See comment

Comment on lines +17 to +18
```shell
groupadd -g $DOCKER_GID docker && \
usermod -aG docker ${username}
```

Copilot AI commented Mar 13, 2026

@cooperj I've opened a new pull request, #7, to work on those changes. Once the pull request is ready, I'll request review from you.

cooperj and others added 2 commits March 13, 2026 16:26
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
* Initial plan

* Remove unused VNC_PORT env var from vnc.dockerfile

Co-authored-by: cooperj <28831674+cooperj@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: cooperj <28831674+cooperj@users.noreply.github.com>
@cooperj
Member Author

cooperj commented Mar 13, 2026

Regarding authentication and listen addresses, I did spend some time previously trying to get this working.

The way I see it, Copilot is suggesting giving the dev the option of setting a password on the noVNC interface, disabled by default.

What I think we should offer is the following options:

  1. If we don't need auth (i.e. a student-facing robot) - we don't need it!
  2. If we are deploying with VPN access into the robot, bind the port in the compose file to the VPN interface (i.e. 10.8.0.123:5801:5801 where 10.8.0.123 is the IP on the VPN)
  3. Sit the robot interface(s) behind a reverse proxy and put auth on that; this can be anything we like, from nice OIDC to basic auth.

I feel like the order of preference should be 2 -> 1 -> 3, with options 1 and 2 being the easiest to implement.
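For option 2, the binding would be a one-line change in the compose file (10.8.0.123 is the example VPN IP from above, not a real deployment address):

```yaml
services:
  vnc:
    ports:
      # Bind noVNC only on the VPN interface, so it is unreachable from
      # other networks without needing any in-container auth.
      - "10.8.0.123:5801:5801"
```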

…inal

tidy: run formatter and linter on all the dockerfiles to make them nicer
@marc-hanheide
Member

> Regarding authentication and listen addresses, I did spend some time previously trying to get this working.
>
> The way I see it, Copilot is suggesting giving the dev the option of setting a password on the noVNC interface, disabled by default.
>
> What I think we should offer is the following options:
>
> 1. If we don't need auth (i.e. a student-facing robot) - we don't need it!
> 2. If we are deploying with VPN access into the robot, bind the port in the compose file to the VPN interface (i.e. 10.8.0.123:5801:5801 where 10.8.0.123 is the IP on the VPN)
> 3. Sit the robot interface(s) behind a reverse proxy and put auth on that; this can be anything we like, from nice OIDC to basic auth.
>
> I feel like the order of preference should be 2 -> 1 -> 3, with options 1 and 2 being the easiest to implement.

VNC auth is insecure anyway. So we should just disable it, and for applications where we need protection we do it by protecting the route (e.g. using Caddy auth with OIDC). Let's not put a VNC password on it.
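A minimal sketch of that route-protection approach, assuming Caddy v2. The hostname and credentials are placeholders, and OIDC would need an auth plugin (e.g. caddy-security), which is not shown here:

```
robot.example.org {
    # Require HTTP basic auth before anything reaches noVNC.
    basic_auth {
        # Hash generated with `caddy hash-password` (placeholder below).
        operator $2a$14$REPLACE_WITH_REAL_BCRYPT_HASH
    }
    # Forward to the vnc service's noVNC listener on the compose network.
    reverse_proxy vnc:5801
}
```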

@marc-hanheide
Member

I had a play with this, @cooperj

I may need a bit more context and documentation.

Slightly concerned about what you have in the readme:

> A repository of versatile ROS-enabled Docker containers, originally developed as part of the Agri-OpenCore (AOC) project.

| Container Name | Tags | Purpose |
| --- | --- | --- |
| lcas.lincoln.ac.uk/ros | { humble, jazzy } | Base ROS container, the minimal environment you need for ROS |
| lcas.lincoln.ac.uk/ros_cuda | { humble, jazzy } | ROS + Nvidia. When you need to use a GPU in your ROS environment for either better-quality simulation or AI workloads. |
| lcas.lincoln.ac.uk/ros_cuda_desktop | { humble, jazzy } | ROS + Nvidia + Packages. Installs the ros-{distro}-desktop variant so there is the full ROS stack available. |
| lcas.lincoln.ac.uk/vnc | { latest } | X11 destination accessible over a website using noVNC. |

Are you suggesting overwriting those existing images (e.g. ros)?

Otherwise, I like the approach.

I have added some docker compose setups for testing in 107fd67

One real concern I have though: I have tried using your VNC container (with GPU) and another ROS container (starting rviz, or simply /opt/VirtualGL/bin/glxspheres64 for testing). But it is not possible to utilise 3D OpenGL (at least I didn't make it work).

I still vote for a ROS-enabled, VGL- and GPU-enabled VNC ROS visualisation image. It doesn't need CUDA, but it does need ROS installed. Rather than having a "pure desktop" that other containers connect to (which works but loses OpenGL), we could have a base VNC-enabled ROS image (NOT cuda!), a version derived from it that has all the standard ROS visualisations installed (like rviz, rqt, ...), and another that has Gazebo installed?
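That layering could be sketched as follows. The image names, stage names, and package choices here are assumptions, and the VNC setup itself is elided:

```dockerfile
# Hedged sketch of the proposed hierarchy: a VNC-enabled ROS base (no CUDA),
# plus a derived visualisation image. Names and tags are placeholders.
FROM lcas.lincoln.ac.uk/ros:jazzy AS ros-vnc
# ... TurboVNC + noVNC + VirtualGL setup (as in vnc.dockerfile) would go here ...

FROM ros-vnc AS ros-vnc-viz
# Standard ROS visualisation tools layered on top of the VNC-enabled base.
RUN apt-get update && apt-get install -y --no-install-recommends \
        ros-jazzy-rviz2 \
        ros-jazzy-rqt \
    && rm -rf /var/lib/apt/lists/*
```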

Updated repository name and improved container documentation.

Labels

enhancement New feature or request

Projects

None yet


4 participants