A guide detailing how to install OpenStack on an Ubuntu 22.04 server Linux operating system using a private network.
Revision: 20240512-0 (init: 20240220)
Kolla Ansible provides production-ready containers (here, Docker) and deployment tools for operating OpenStack clouds. This guide explains installing a single host (all-in-one) OpenStack Cloud on a Ubuntu 22.04 server Linux operating system using a private network. We specify values and variables that can easily be adapted to others’ networks. We do not address encryption for the different OpenStack services and will use an HTTPS reverse proxy to access the dashboard. Please note that this setup requires two physical NICs in the computer you will use.
Our installation host has a small / and a larger /data disk, so we prefer the VM disk images and volumes to be created on the Cinder-mounted NFS drive. If you have a large / that can accommodate all your VMs, you can ignore the “NFS for Cinder” section and the Cinder-related variables in globals.yml.
Note that the Cinder method, even with a large /, lets you see all the created volumes in the /data/nfs directory.
If you have a NAS where you want to store your VM disk images, use this method: Cinder can have more than one destination location in its /etc/kolla/config/nfs_shares.
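For illustration, a hypothetical /etc/kolla/config/nfs_shares with two destinations; the second host and export path are placeholders for a NAS and must be adapted:
# One NFS export per line; Cinder can use all of them
10.0.0.17:/data/nfs
10.0.0.40:/volume1/openstack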
OpenStack will use your host's various cores and memory as needed, so it is recommended that you dedicate the host to OpenStack.
How to use this guide
We recommend obtaining the source for this document.
Once you have obtained the source markdown file, open it in an editor and perform a find and replace for the different values you will need to customize for your setup. This will allow you to copy/paste directly from the source file.
Values to adjust (in no particular order):
eno1 is the host's primary NIC.
10.0.0.17 is the DHCP (or manually assigned) IP of that primary NIC.
enp1s0 is the secondary NIC of the host that should not have an IP and will be used for neutron.
kaosu is the user we use for the installation.
/data is the location where we prepare the installation (in a kaos directory) and store Cinder’s NFS disks.
10.0.0.1 is your network’s gateway.
10.0.0.75 is the start IP of the OpenStack Floating IPs range.
10.0.0.89 is the end IP for the OpenStack Floating IPs range.
10.0.0.254 is the OpenStack internal VIP address.
os.example.com is the URL for OpenStack behind our HTTPS upgrading reverse proxy.
We are not addressing user choices like Cinder or values for disk size/memory/number of cores/quotas in the my-init-runonce.sh script or later command lines.
Most steps in the “Post-installation” section require you to select your preferred user/project/IPs; adapt as needed in those steps.
Requirements
Hardware:
Make sure virtualization is enabled in your host’s BIOS.
Enough cores on the host to run your VMs (OpenStack is a cloud operating system; learn more about it and its different services at https://www.openstack.org/).
At least 8GB of RAM to run OpenStack; considerably more is recommended to run the VMs themselves.
We recommend at least 40GB of disk storage on the disk where the containers will be installed and a lot of extra storage for the VMs and disk images. We will use Cinder and NFS to store VM images.
2x physical NICs are needed. Here:
eno1 is the primary NIC, with IP 10.0.0.17
Make sure to have dhcp6: false in its netplan configuration (see below).
enp1s0 is the secondary NIC, which should not have an IP assigned.
If needed, disable its DHCP by editing /etc/netplan/00-installer-config.yaml, setting dhcp4: false and dhcp6: false for enp1s0, then running sudo netplan apply.
A Linux host, here, Ubuntu 22.04 server.
With ssh set up.
With system upgrades done, as needed.
With a kaosu user for our OpenStack Kolla Ansible installation.
A /data directory for the different components to install.
Networking with routing capabilities (i.e., a home router connecting to the internet). For our private network:
The router’s gateway is 10.0.0.1.
We will use a static IP for the primary NIC (here eno1 on 10.0.0.17).
We reserved a range of IPs on the subnet that are unused, consecutive, and not assigned to the router’s DHCP range. Here, we will use 10.0.0.75 - 10.0.0.89.
We reserved one unused IP for the OpenStack connection; here 10.0.0.254.
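As referenced above, a minimal netplan sketch matching this layout (a static IP on eno1, no addresses on enp1s0). The nameserver choice is an assumption; adapt addresses, subnet, and nameservers to your network before running sudo netplan apply:
# Hypothetical /etc/netplan/00-installer-config.yaml for this guide's network
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
      addresses: [10.0.0.17/24]
      routes:
        - to: default
          via: 10.0.0.1
      nameservers:
        addresses: [10.0.0.1]
    enp1s0:
      dhcp4: false
      dhcp6: false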
Pre-installation steps
Hardware Enablement (HWE, optional)
To enable the 6.x kernel:
sudo apt-get install -y linux-generic-hwe-22.04
sudo reboot
Docker installation
Install Docker from Docker's official repository:
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER
# logout from ssh and log back in, test that a sudo-less docker is available to your user
docker run hello-world
Passwordless sudo
To let our kaosu user run the sudo command without being prompted for a password:
sudo visudo -f /etc/sudoers.d/kaos-Overrides
# Add and adapt kaosu as needed
kaosu ALL=(ALL) NOPASSWD:ALL
# save the file and test in a new terminal or login
sudo echo works
NFS for Cinder
We want to use NFS on /data/nfs to store Cinder-created volumes:
# Install nfs server
sudo apt-get install -y nfs-kernel-server
# Create the destination directory and make it nfs-permissions ready
sudo mkdir -p /data/nfs
sudo chown nobody:nogroup /data/nfs
# edit the `exports` configuration file
sudo nano /etc/exports
# Within this file: add the directory and the access host (ourselves, i.e., our 10. IP) to the authorized list
/data/nfs 10.0.0.17(rw,sync,no_subtree_check)
# After saving, restart the nfs server
sudo systemctl restart nfs-kernel-server
# Prepare the cinder configuration to enable the NFS mount
sudo mkdir -p /etc/kolla/config
sudo nano /etc/kolla/config/nfs_shares
# Add the "remote" to mount in the file and save
10.0.0.17:/data/nfs
For this install, we will work from /data/kaos as the kaosu user.
Preparation
cd /data
# if the directory creation fails, create the directory as root: sudo mkdir kaos
# and chown it: sudo chown $USER:$USER kaos
mkdir kaos
cd kaos
sudo apt-get install -y git python3-dev libffi-dev gcc libssl-dev build-essential
sudo apt-get install -y python3-venv python3-pip
python3 -m venv venv
source venv/bin/activate
pip install -U pip
# Install a few things that might otherwise fail during ansible prechecks
sudo apt-get install -y build-essential libpython3-dev libdbus-1-dev cmake libglib2.0-dev
pip install docker pkgconfig dbus-python
pip install 'ansible-core>=2.14,<2.16'
pip install git+https://opendev.org/openstack/kolla-ansible@master
sudo mkdir -p /etc/kolla
sudo chown $USER:$USER /etc/kolla
cp -r venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
# we are going to do an all-in-one (single host) install
cp venv/share/kolla-ansible/ansible/inventory/all-in-one /etc/kolla/.
# Install Ansible Galaxy requirements
kolla-ansible install-deps
# generate passwords (into /etc/kolla/passwords.yml)
kolla-genpwd
Edit the /etc/kolla/globals.yml file (sudo nano /etc/kolla/globals.yml) and adapt it as follows (search for the matching keys):
kolla_base_distro: "ubuntu"
kolla_internal_vip_address: "10.0.0.254"
network_interface: "eno1"
neutron_external_interface: "enp1s0"
enable_cinder: "yes"
enable_cinder_backend_nfs: "yes"
Before we try the deployment, let’s ensure the Python interpreter is the venv one. At the top of the /etc/kolla/all-in-one file, add localhost ansible_python_interpreter=/data/kaos/venv/bin/python.
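With globals.yml and the inventory ready, run the deployment. This is the standard Kolla Ansible sequence (each step can take several minutes):
# Prepare the host (Docker configuration, dependencies)
kolla-ansible -i /etc/kolla/all-in-one bootstrap-servers
# Verify the host passes the pre-deployment checks
kolla-ansible -i /etc/kolla/all-in-one prechecks
# Deploy the OpenStack containers
kolla-ansible -i /etc/kolla/all-in-one deploy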
The Dashboard will be on our host's port 80, so at http://10.0.0.17/. The admin user password can be found using fgrep keystone_admin_password /etc/kolla/passwords.yml.
CLI
(still using the venv)
OpenStack command line
Install the python openstack command: pip install python-openstackclient -c https://releases.openstack.org/constraints/upper/master
OpenStack configuration file
kolla-ansible post-deploy -i /etc/kolla/all-in-one will create a clouds.yaml file that can be added to your default OpenStack client configuration:
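For example:
kolla-ansible post-deploy -i /etc/kolla/all-in-one
mkdir -p ~/.config/openstack
cp /etc/kolla/clouds.yaml ~/.config/openstack/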
Cloud Init: Run once
(requires the venv, the openstack command line, the clouds.yaml file, and the generated /etc/kolla/admin-openrc.sh script)
In /data/kaos, there is a venv/share/kolla-ansible/init-runonce script that creates some of the basic configuration for your cloud. Most people will be fine with modifying its EXT_NET_CIDR, EXT_NET_RANGE, and EXT_NET_GATEWAY variables.
The file below is adapted to our configuration as a my-init-runonce.sh executable script. It uses larger tiny images (5GB; an Ubuntu server image is over 2GB), has each larger instance use a base disk of only 20GB (you can specify your preferred disk image size during instance creation), names its instances following the m<number_of_cores> convention, and adds xxlarge and xxxlarge memory instances.
Please copy and adapt the USER CONF section based on your system and preferences.
#!/usr/bin/env bash
# Adapted from init-runonce script
set -o errexit
set -o pipefail
# Avoid some issues, only valid for the local shell
unset LANG
unset LANGUAGE
LC_ALL=C
export LC_ALL
### USER CONF ###
# Specific to our network config
EXT_NET_CIDR='10.0.0.0/24'
EXT_NET_RANGE='start=10.0.0.75,end=10.0.0.89'
EXT_NET_GATEWAY='10.0.0.1'
## Quotas
# Max number of instances per project
CONF_QUOTA_INSTANCE=10
# Max number of floating IPs per project
CONF_QUOTA_FLOATIP=10
# Max number of cores per instance
CONF_QUOTA_CORE=32
# Max amount of MB of memory per instance (1000=1GB)
CONF_QUOTA_MEM=96000
# file locations
KOLLA_CONFIG_PATH=/etc/kolla
export OS_CLIENT_CONFIG_FILE=${KOLLA_CONFIG_PATH}/clouds.yaml
source /etc/kolla/admin-openrc.sh
#####
echo "--"; echo "--"; echo "-- Attempt to add external-net (if not already present)"
if openstack network list | grep -q external; then
echo "external-net already specicied, skipping"
else
openstack network create --external --provider-physical-network physnet1 --provider-network-type flat external-net
openstack subnet create --no-dhcp --allocation-pool ${EXT_NET_RANGE} --network external-net --subnet-range ${EXT_NET_CIDR} --gateway ${EXT_NET_GATEWAY} external-subnet
fi
echo "-- network list"
openstack network list
#####
echo "--"; echo "--"; echo "-- Attempt to configure Security Groups: ssh and ICMP (ping)"
# Get admin user and tenant IDs
ADMIN_PROJECT_ID=$(openstack project list | awk '/ admin / {print $2}')
ADMIN_SEC_GROUP=$(openstack security group list --project ${ADMIN_PROJECT_ID} | awk '/ default / {print $2}')
# Default Security Group Configuration: ssh and ICMP (ping)
if openstack security group rule list ${ADMIN_SEC_GROUP} | grep -q "22:22"; then
echo "a ssh rule is already in place, skipping"
else
openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 22 ${ADMIN_SEC_GROUP}
fi
if openstack security group rule list ${ADMIN_SEC_GROUP} | grep -q icmp; then
echo "an ICMP rule is already in place, skipping"
else
openstack security group rule create --ingress --ethertype IPv4 --protocol icmp ${ADMIN_SEC_GROUP}
fi
#####
echo "--"; echo "--"; echo "-- Attempt to create and add a default id_ecdsa key to nova (if not already present)"
if [ ! -f ~/.ssh/id_ecdsa.pub ]; then
echo Generating ssh key.
ssh-keygen -t ecdsa -N '' -f ~/.ssh/id_ecdsa
fi
if [ -r ~/.ssh/id_ecdsa.pub ]; then
if ! openstack keypair list | grep -q mykey; then
echo Configuring nova public key.
openstack keypair create --public-key ~/.ssh/id_ecdsa.pub mykey
fi
fi
#####
echo "--"; echo "--"; echo "-- Setting quota defaults following user values"
openstack quota set --force --instances ${CONF_QUOTA_INSTANCE} ${ADMIN_PROJECT_ID}
openstack quota set --force --cores ${CONF_QUOTA_CORE} ${ADMIN_PROJECT_ID}
openstack quota set --force --ram ${CONF_QUOTA_MEM} ${ADMIN_PROJECT_ID}
openstack quota set --force --floating-ips ${CONF_QUOTA_FLOATIP} ${ADMIN_PROJECT_ID}
openstack quota show
#####
echo "--"; echo "--"; echo "-- Creating defaults flavors (instance type) (if not already present)"
if ! openstack flavor list | grep -q m1.tiny; then
openstack flavor create --id 1 --ram 512 --disk 5 --vcpus 1 m1.tiny
openstack flavor create --id 2 --ram 512 --disk 5 --vcpus 2 m2.tiny
openstack flavor create --id 3 --ram 2048 --disk 20 --vcpus 2 m2.small
openstack flavor create --id 4 --ram 4096 --disk 20 --vcpus 2 m2.medium
openstack flavor create --id 5 --ram 8192 --disk 20 --vcpus 4 m4.large
openstack flavor create --id 6 --ram 16384 --disk 20 --vcpus 8 m8.xlarge
openstack flavor create --id 7 --ram 32768 --disk 20 --vcpus 16 m16.xxlarge
openstack flavor create --id 8 --ram 65536 --disk 20 --vcpus 32 m32.xxxlarge
fi
echo ""; echo ""; echo "Done"
Once run, you will have:
an external-net added; this is the pool from which your floating IPs will be obtained.
added ssh and ICMP to the admin project’s default security group.
created a default ssh key (mykey) and added it to the admin user.
set the admin’s project default quotas (these will not propagate to other projects, but the CLI logic can be reused with the right project_id).
FYSA: from the UI, it is possible to add new flavors from Admin -> Compute -> Flavors.
Post-Installation
Note: future kolla-ansible or openstack CLI usage requires the venv to be activated and source /etc/kolla/admin-openrc.sh to be performed for the commands to have the right configuration information.
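For example, in a new terminal:
cd /data/kaos
source venv/bin/activate
source /etc/kolla/admin-openrc.sh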
New admin user (UI)
Login to your OpenStack instance by going to the web dashboard (horizon, available on port 80) at http://10.0.0.17
The admin user’s password can be obtained using fgrep keystone_admin_password /etc/kolla/passwords.yml
Using Project -> Compute -> Overview gives you a list of used and available resources.
Create a new project and another admin user for your account:
As admin in the Identity -> Projects (left column), Create Project. Give it a name.
The project does not inherit the “admin” project’s default values, so we will want to update the quotas as needed (next section).
As admin in the Identity -> Users (left column), Create User. Provide its User Name and Password (Confirm), assign that user the Primary Project created above, and give it the Admin Role. Enable the account.
New User + Project: ssh + security groups + quotas (CLI)
The following steps use the CLI to re-add our ssh key, security groups, and quotas to the new user and its project.
Add the ssh key to your new user (adapting newadminuser):
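A sketch of one way to do it: after sourcing admin-openrc.sh, override the relevant OS_* environment variables to act as the new user (the username, password, and project name below are placeholders), then create the keypair:
# Hypothetical values -- adapt to the user and project created above
export OS_USERNAME=newadminuser
export OS_PASSWORD='thenewuserpassword'
export OS_PROJECT_NAME=newprojectname
openstack keypair create --public-key ~/.ssh/id_ecdsa.pub mykey
# Switch back to the admin credentials when done
source /etc/kolla/admin-openrc.sh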
Add the security groups and quotas to your new project (adapting newprojectname):
# Adapt newprojectname
MY_PROJECT_ID=$(openstack project list | awk '/ newprojectname / {print $2}')
MY_SEC_GROUP=$(openstack security group list --project ${MY_PROJECT_ID} | awk '/ default / {print $2}')
openstack security group rule create --ingress --ethertype IPv4 --protocol icmp ${MY_SEC_GROUP}
openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 22 ${MY_SEC_GROUP}
openstack quota set --force --instances 10 ${MY_PROJECT_ID}
openstack quota set --force --cores 32 ${MY_PROJECT_ID}
openstack quota set --force --ram 96000 ${MY_PROJECT_ID}
openstack quota set --force --floating-ips 10 ${MY_PROJECT_ID}
Add an Ubuntu image to Glance
Go to https://cloud-images.ubuntu.com/ and select the distro you want (here, jammy jellyfish, i.e., Ubuntu 22.04’s most current image), then copy the URL of the QCow2 UEFI/GPT Bootable disk image of choice.
In /data/cloudimg (create if needed), wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
We recommend naming based on the image date so you know which base image you use, for example, mv jammy-server-cloudimg-amd64.img ubuntu2204-20240223.img
Then, use the openstack command line to add the image to the list of available images for all users of your cloud OS:
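A sketch of that command, assuming the renamed file from the previous step:
# Adapt the file name and image name to the image you downloaded
openstack image create --disk-format qcow2 --container-format bare --public --file ubuntu2204-20240223.img ubuntu2204-20240223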
Once completed, a table with details of the new image added to your OpenStack installation will appear.
From your new user’s UI, select Project -> Compute -> Images, and you will see your new image listed.
Network and Router setup
From your new admin user’s UI (which should start in your new project), select Project -> Network -> Network Topology
You will have a graph showing only the external-net we have configured.
We need a network and a router added for VMs to communicate.
Network
Select Create Network
Network tab:
Give it a name. We recommend “project-net”, replacing “project” with your project’s name.
Check Enable Admin State to make sure it is active.
Uncheck Shared; this network is only for this project.
Check Create Subnet; we need to configure the IP details for this subnet.
There is no need to modify Availability Zone Hints or MTU
Click Next.
Subnet tab:
Give it a name such as “project-subnet”.
For the Network Address, use a private IP range not currently in use in your network, such as 10.56.78.0/24. Each new subnet you create must be independent and not currently in use.
Select IPv4.
Use 10.56.78.1 for the Gateway IP; it must be in the same IP range as your subnet.
Uncheck Disable Gateway.
Click Next.
Subnet Details tab:
Check Enable DHCP. We want our VM instances to get IPs automatically when they start.
For Allocation Pool use something unused within the subnet range, for example, 10.56.78.100,10.56.78.200.
For DNS Name Servers, use Google (8.8.8.8, 8.8.4.4) or Cloudflare (1.1.1.1).
No need to add any Host Routes.
Click Create.
You now have a new network ready to be used with VMs. Those still need a router.
Router
Select Create Router:
Give it a name such as “project-router”.
Check Enable Admin State to make sure it will be active.
Select external-net as the External Network.
Check Enable SNAT since we do have an external network.
Leave Availability Zone Hints as is.
You now have a router connected to the external network.
The IP for the router on external-net gets selected from the pool automatically.
The router is not yet connected to the “project network.” Hover over the router icon, select Add Interface, select the “project-subnet”, and leave the IP Address unspecified; it will use the configured gateway.
When you return to the Network Topology page, you will see an external-net connected to your “project-net” by your “project-router”.
Starting and accessing our first VM
Launch Instance
From your new admin user’s UI, select Project -> Compute -> Instances and choose Launch Instance.
On the Details tab:
Give your instance a name, for example, u22test.
Click Next.
On the Source tab:
Use an Image as the Select Boot Source.
Select Yes for Create New Volume; this will force the creation of the VM disk image onto the Cinder location.
If the Volume Size is less than the flavor’s disk size, the larger one of the two will be selected.
Delete Volume on Instance Delete is a user choice. We often select Yes.
Click on the up arrow next to our ubuntu server image to have it become the Allocated image.
Click Next.
On the Flavor tab:
Click on the up arrow next to our m2.tiny flavor (2x VCPUs, 512MB RAM, 5GB disk).
If you kept the Volume Size at 1GB in the Source tab and go back, you will see it now shows 5GB, the size of our flavor.
Click Next.
On the Networks tab:
The “project-net” and “project-subnet” should be automatically allocated.
Click Next.
We are not adding any Network Ports, so click Next.
The Allocated Security Groups’ default will have ssh and icmp listed (feel free to verify by clicking the toggle arrow), so click Next.
Our mykey will show in Key Pair.
You can continue to look at the other tabs available. We will Launch Instance.
After a few seconds, the instance should appear to be Running.
If you look into /data/nfs you will see the file for your newly created disk volume.
From your instance row’s Actions submenu (right), you can View Logs or view the interactive Console (you cannot log in from the displayed terminal; the ubuntu user does not have a known password set).
Floating IPs
With our instance Running, its IP Address is within our project’s subnet range.
For us to be able to access the instance via ssh, we need to obtain a Floating IP for it.
In the Actions (right) submenu for our instance row, select Associate Floating IP:
None are listed; click the + and Allocate IP from our external-net pool.
An IP will now show in the IP Address dropdown. Make sure its Port to be associated is our u22test instance and Associate them together.
The IP Address column will now show two IPs: one from the “project-subnet” DHCP range and one from the external-net pool.
From your kaosu user, we can ssh into the instance’s floating IP using the authorized ssh key and the cloud image’s default ubuntu user.
For example:
# Adapt the 10. IP to match your floating IP
ssh -i ~/.ssh/id_ecdsa [email protected]
[...]
Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-97-generic x86_64)
[...]
ubuntu@u22test:~$
From there, you can confirm that your instance can reach out by running sudo apt update.
Securely accessing Horizon using a reverse proxy
If you have a reverse proxy setup on another host and want to benefit from https on horizon (the dashboard):
In your reverse proxy, configure the Proxy Host as you would normally; here, we will use os.example.com
Run sudo nano /etc/kolla/horizon/_9999-custom-settings.py and add to it:
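Given the CSRF error described below, the setting to add is Django's CSRF_TRUSTED_ORIGINS; a minimal sketch, adapting the hostname to your proxy host:
# Trust the reverse proxy's origin so Django's CSRF check passes
CSRF_TRUSTED_ORIGINS = ['https://os.example.com']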
Restart horizon using docker kill horizon. Wait a few seconds, and your access via https://os.example.com should be functional,
i.e., it should not present you with a csrf_failure=Origin checking failed - https%3A//os.example.com does not match any trusted origin error (in the address bar).
FYSA: the installer names all containers after the service they provide, so horizon is one of them.
Troubleshooting
“reconfigure” if you need to modify globals.yml
If you modify a globals.yml configuration option, from /data/kaos run kolla-ansible -i /etc/kolla/all-in-one reconfigure.
Docker: “IPv4 forwarding is disabled” warning
After deployment, Kolla Ansible modifies /etc/docker/daemon.json to disable the default bridge, IP forwarding, and iptables management, so running a plain container (e.g., docker run --rm -it ubuntu:22.04) may warn that IPv4 forwarding is disabled and lack network access.
A solution that does not require alterations to your configuration is to add --network=host to your docker command line. For more details about the network=host option, see https://docs.docker.com/network/drivers/host/.
WARNING: The following is not recommended as per the launchpad.net post, as it “may pose a security risk”, so use it at your own risk.
That solution is to remove three lines from /etc/docker/daemon.json: "bridge": "none", "ip-forward": false, and "iptables": false.
Then restart docker using sudo systemctl restart docker.
The next time you run docker run --rm -it ubuntu:22.04, you should not see the IPv4 forwarding is disabled warning.
Note: adding docker_disable_ip_forward: "no" (present in /data/kaos/venv/share/kolla-ansible/ansible/group_vars/all.yml) to the globals.yml file, followed by a reconfigure, does not appear to solve this.
Broken after a Reboot?
This has happened to me in a previous installation. Luckily, it is just a matter of doing a reconfigure (see above) to get it functional again.
Cleanup (start over)
If you instead want to completely remove the installation and start over:
# Take the running instance down
kolla-ansible -i /etc/kolla/all-in-one --yes-i-really-really-mean-it stop
# You might need to reboot if this step fails due to a stuck container/qemu process and re-run it
/data/kaos/venv/share/kolla-ansible/tools/cleanup-containers
# delete most of the host configuration
sudo ./venv/share/kolla-ansible/tools/cleanup-host
# Delete any possible remaining content
kolla-ansible destroy -i /etc/kolla/all-in-one --yes-i-really-really-mean-it
# Check that the neutron iptables content has been removed
sudo iptables --list
Revision History
20240512-0: Migration of content to Notion.
20240415-0: Added a section on cleanup + working from /etc/kolla directory
20240302-0: Added links to the introduction section.
20240225-0: Added content about using the host for more than OpenStack (with caveats)
20240224-2: Fix some sentences for clarity.
20240224-1: Fix display of “revision history”.
20240224-0: Content clarification and changed default username. Added a “How to use this guide” and a “Contribute” section.
20240223-0: Added missing step (ansible galaxy) and used a uniform apt-get.