Set up instructions for NVIDIA GPU container toolkits on a Linux host running Ubuntu 24.04, which can be used with docker and podman.
Revision: 20241118-0 (init: 20240224)
The NVIDIA GPU Container Runtime plugin enables container platforms to securely access and manage NVIDIA GPUs in a containerized application environment.
Docker is an open-source platform that automates applications' deployment, scaling, and management within lightweight, portable containers.
Podman is an open-source, daemonless container engine designed for developing, managing, and running OCI Containers. It functions as a drop-in replacement for Docker.
This guide provides instructions for a Linux host running Ubuntu 24.04 to install the NVIDIA runtime for docker and podman.
Note that NVIDIA’s Container Toolkit officially supports only Ubuntu LTS releases.
Preamble
The following are only required if you do not already have some of the tools installed.
Confirming the Nvidia driver is available
The rest of this guide expects an already functional NVIDIA driver.
To install it:
On Ubuntu Desktop, install from Software & Updates’s Additional Drivers and reboot.
On Ubuntu Server, confirm the device is available using ubuntu-drivers devices and install the recommended “server” driver: sudo apt install nvidia-driver-535-server, then reboot.
If you encounter an aplay error, you can sudo apt-get install alsa-utils.
To confirm the driver is functional after a reboot, run nvidia-smi from a terminal; if valid output appears, it will include the Driver Version and the maximum supported CUDA Version for future GPU-enabled containers.
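As a quick check, the driver version alone can be queried directly (the query flag comes from nvidia-smi's own --help-query-gpu; the guard lets the snippet run harmlessly on hosts without the tool):

```shell
# Check that the NVIDIA driver responds; fall back to a clear message
# on machines where nvidia-smi is not installed.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=driver_version --format=csv,noheader
else
  echo "nvidia-smi not found"
fi
```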
Using a more recent driver
When writing this section (late June 2024), Ubuntu 24.04 uses driver 535 as its recommended driver.
If sudo ubuntu-drivers list offers driver 550 in the provided list, you can perform a sudo ubuntu-drivers install nvidia:550 and ignore the rest of this section.
Otherwise, we will follow the method listed in the “Manual driver installation (using APT)” section.
Check which driver is currently in use:
apt list --installed | grep nvidia | grep modules
Among the options presented, the linux-modules-nvidia-535-generic-hwe-24.04 matches the expected linux-modules-nvidia-${DRIVER_BRANCH}${SERVER}-${LINUX_FLAVOUR} format.
Search for an available package that matches this format for 550:
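As a sketch, the package name can be assembled from the pattern's variables and used to search (the apt-cache call is guarded so the snippet also runs on non-APT systems; the variable values shown are for this guide's desktop use case):

```shell
# Build the package name following the
# linux-modules-nvidia-${DRIVER_BRANCH}${SERVER}-${LINUX_FLAVOUR} pattern
DRIVER_BRANCH=550
SERVER=""                      # would be "-server" for the server flavour
LINUX_FLAVOUR="generic-hwe-24.04"
PKG="linux-modules-nvidia-${DRIVER_BRANCH}${SERVER}-${LINUX_FLAVOUR}"
echo "$PKG"
# List candidate packages for that driver branch (guarded for non-APT systems)
command -v apt-cache >/dev/null 2>&1 \
  && apt-cache search --names-only "linux-modules-nvidia-${DRIVER_BRANCH}" \
  || true
```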
This gives us multiple options. The one that matches my use case (not a server, for example) is linux-modules-nvidia-550-generic-hwe-24.04, which we will install. After installation and a reboot, nvidia-smi confirms driver 550 is loaded, supporting up to CUDA 12.4 in Docker containers.
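The install and reboot steps could look like the following (the package name is the one found on this particular host; adjust it to your own search output):

```shell
# Install the matching 550 driver modules package, then reboot
sudo apt install -y linux-modules-nvidia-550-generic-hwe-24.04
sudo reboot
```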
Using an even more recent driver
In some cases, the latest official driver provided is not recent enough to support a more recent CUDA version. In such cases, it is possible to add a Personal Package Archive (PPA) from the “Graphics Drivers” team to the list of package sources. To add it:
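Adding that PPA could be done as follows (ppa:graphics-drivers/ppa is the “Graphics Drivers” team archive on Launchpad):

```shell
# Add the "Graphics Drivers" team PPA and refresh the package lists
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
```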
At this point, we can start the “Additional Drivers” application, select the driver to install (here we will select “Using NVIDIA driver metapackage from nvidia-driver-560 (proprietary)”), and start the installation process. After the installation is completed, a reboot is required. At the next login, we can confirm the driver and its capabilities using nvidia-smi.
To run docker without sudo, add your user to the docker group:
sudo usermod -aG docker $USER
You will need to log out entirely before the change takes effect. Once this is done, you should be able to run docker run hello-world without needing sudo.
Install podman
On Ubuntu 24.04, apt search podman returns versions above 4.1.0, the minimum required to use the Container Device Interface (CDI) for nvidia-container-toolkit.
It is therefore possible to install podman simply with:
sudo apt install podman
Now we can test podman:
podman run hello-world
podman runs similarly to docker; for example:
podman run --rm -it docker.io/ubuntu:24.04 /bin/bash
will download ubuntu:24.04, give you a bash shell prompt in an interactive session, and will delete the created container when you exit the shell.
It is recommended that you always use a fully qualified image name, including the registry server (full DNS name), namespace, image name, and tag, such as docker.io/ubuntu:24.04.
To add docker.io to the list of “unqualified search registries,” edit /etc/containers/registries.conf and modify the following line as follows: unqualified-search-registries = ["docker.io"]. More details on that topic at https://podman.io/docs/installation#registriesconf.
Contrary to docker, podman does not create iptables configurations or use br_netfilter, which allows for the use of bridged VMs. In such cases, install only podman, and also install podman-compose to get access to the podman-compose command. Use podman-compose if you want to use a tool like Dockge; we also recommend reviewing this PR.
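Since podman 4.1+ supports the Container Device Interface mentioned earlier, GPU access from podman can be sketched as follows (the nvidia-ctk commands come with the nvidia-container-toolkit; the device name and CUDA image tag are illustrative, adapt them to your generated CDI spec and driver's supported CUDA version):

```shell
# Generate the CDI specification for the installed NVIDIA driver
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
# List the device names the spec exposes (e.g. nvidia.com/gpu=all)
nvidia-ctk cdi list
# Run a container with all GPUs exposed through CDI
podman run --rm --device nvidia.com/gpu=all \
  docker.io/nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```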
Confirm that docker (no sudo needed if you performed the optional step in the last section) sees any GPU running on your system by having it run nvidia-smi. Note that docker needs both --runtime=nvidia and --gpus all to use the proper runtime and have access to all the GPUs:
docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi
Please be aware that the maximum CUDA version returned by the nvidia-smi command on your host (without docker) tells you the highest cuda:<version> image that you can use.
You can inspect your /etc/docker/daemon.json file to see that the nvidia-container-runtime is added:
To make this runtime the default, add "default-runtime": "nvidia", to the top of the file (after the first {) and sudo systemctl restart docker. You should no longer need to pass --runtime=nvidia on the CLI.
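Once edited, the file might look like the following sketch (the runtime path shown is the one the nvidia-container-toolkit typically configures; your file may contain additional keys):

```shell
# Sketch of /etc/docker/daemon.json after adding the default-runtime key
cat <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
EOF
```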