QuickEmu with subnet-accessible Desktop VMs (Ubuntu 24.04, 20240822)
Installing quickemu on a Linux Ubuntu 24.04 server to run desktop VMs on the same subnet IPs (on a network bridge) that are remotely accessible using SPICE.
Revision: 20240822-0 (init: 20240404)
QEMU is a "generic and open source machine emulator and virtualizer" that provides a robust and flexible virtualization (and emulation) backbone. It caters to a wide range of needs, from development to testing across different architectures.
QuickEmu focuses on simplifying the QEMU experience, making it easier for users to create and manage virtual machines (VMs) without needing to manage most of the complexity of setup.
This guide details how to install quickemu on a Linux Ubuntu 24.04 server to run desktop VMs (here, we will install Ubuntu 24.04 Desktop) on the same subnet IP range as the host's primary subnet using a network bridge. Those VMs are remotely accessible on the subnet using a SPICE client.
This setup will provide VMs directly accessible on the subnet where the Linux server is running. Because we will specify the MAC addresses of those VMs, we can apply reserved DHCP IPs from our router and allow adding and configuring extra services within hardware firewalls, such as the Firewalla (for example, DNS-over-HTTPS).
We recommend obtaining the source for this document. Once you have the source file, open it in an editor and perform a find-and-replace for the different values that you will need to customize for your setup. This will allow you to copy/paste directly from the source file.
Values to adjust (in no particular order):
eno1, the network interface we will use to set up the bridge network.
br0, the device handler for the bridge.
10.0.0.17, the static IP (DHCP reservation) for the Linux host (Ubuntu 24.04 server).
d8:9d:67:f4:4a:51, the MAC address of the eno1 device (that will be cloned to the bridge).
10.0.0.1, the router's gateway address.
hostuser, the Linux host user.
vmuser, the Linux VM user.
QEMU
QEMU (Quick Emulator) is a free, open-source emulator and virtualizer that performs hardware virtualization.
QEMU is a highly versatile and widely used tool in the virtualization space.
Compatibility: QEMU can run on a variety of host operating systems, including Linux, Windows, and macOS. It can emulate several CPU architectures, including x86, x86_64 (AMD64/Intel 64), ARM, PowerPC, and more.
Modes of Operation: User-mode emulation allows individual programs to run on a foreign CPU architecture. Full system emulation simulates an entire hardware system, including processor and peripherals, which can run a different operating system.
Performance: When used on a system with the same CPU architecture as the guest OS, QEMU can utilize KVM (Kernel-based Virtual Machine) to achieve near-native performance by executing guest code directly on the host CPU via hardware virtualization extensions.
QuickEmu
QuickEmu serves as a frontend to simplify QEMU’s usage.
It's designed to make it quicker and easier to create and run QEMU virtual machines.
QuickEmu automates many configuration and setup processes in creating a VM.
Simplified Configuration: QuickEmu uses simple configuration files to define the virtual machine's specifications, making it easier for users to set up a new VM without exploring the many QEMU command-line options.
OS Detection and Setup: This feature automatically downloads and configures operating system images (Linux, Windows, and macOS).
Confirm that your host has hardware virtualization enabled using lscpu | grep Virtualization
You should see VT-x or AMD-V, depending on your processor.
If not, you must search your host’s BIOS for the switch to enable its virtualization capabilities.
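For example, on an Intel host, the check and its output look like this (the output line is illustrative; yours will reflect your CPU):

lscpu | grep Virtualization
# Virtualization:                  VT-x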
A note on the use of Docker
Because we will create a network bridge, we recommend using podman instead of docker. docker uses "bridge netfilter" (br_netfilter), which prevents our VMs from getting functional networking and acquiring a DHCP IP. See serverfault.com for more details.
Podman is also able to access the local GPU with the proper Container Device Interface (CDI) configured: podman run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi
If you want docker, install it within a VM. In our setup, each VM will exist directly on the bridged network, and exposed ports will be directly reachable on your subnet.
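To check whether br_netfilter is currently loaded on your host (no output means it is not loaded):

lsmod | grep br_netfilter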
Bridge setup
Install the required tools
sudo apt-get install bridge-utils
Our installation was a server installation. We used DHCP on one network interface to obtain a static reserved IP from our router.
As such, our /etc/netplan/50-cloud-init.yaml is currently:
network:
  ethernets:
    eno1:
      dhcp4: true
  version: 2
eno1 is our primary interface; it gets its reserved IP (here 10.0.0.17) from DHCP.
We want to create a bridge on the same subnet so that we can specify the MAC addresses of the VMs we create and continue to use static reservations on our router for them.
We also want our bridge interface to use the same MAC address as our eno1 interface, so that any hardware firewall does not classify it as a new device.
Our eno1 device has MAC d8:9d:67:f4:4a:51.
We will create a br0 bridge using the IP details of eno1.
First, rename the .yaml file to .yaml.old to avoid netplan using it: sudo mv /etc/netplan/50-cloud-init.yaml{,.old}
Now, let's create a new /etc/netplan/50-cloud-init.yaml file with the following content:
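A minimal sketch, assuming a /24 subnet (adapt the interface name, MAC, and addresses to your setup):

network:
  version: 2
  ethernets:
    eno1:
      match:
        macaddress: d8:9d:67:f4:4a:51
      dhcp4: false
  bridges:
    br0:
      interfaces: [eno1]
      macaddress: d8:9d:67:f4:4a:51
      dhcp4: false
      addresses: [10.0.0.17/24]
      routes:
        - to: default
          via: 10.0.0.1
      nameservers:
        addresses: [8.8.8.8]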
With this configuration, we list the MAC address of eno1 and match it in br0, bridge br0 to eno1, and use the interface's known IP for that bridge.
We also tell the system that our gateway is 10.0.0.1 and to use Google's DNS (8.8.8.8) for name resolution.
To prevent netplan from printing a WARNING message when apply-ing the configuration, make sure the file's permissions are 600: sudo chmod 600 /etc/netplan/50-cloud-init.yaml
Before apply-ing it, it is recommended that you have physical access to the host if you are working over SSH, as the connection might drop while the settings are updated.
Apply the configuration using
sudo netplan generate
sudo netplan --debug apply
After a few seconds, your prompt should be back, and you can use the ip a command to see that your br0 interface was created and uses the expected IP and MAC.
You can confirm the bridge can connect to the outside using ping -I br0 1.1.1.1
To get more details about the bridge, run sudo networkctl status br0
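Put together, a quick verification pass looks like this (the IP shown is from this guide's setup):

ip -br addr show br0
# br0    UP    10.0.0.17/24 ...
ping -I br0 -c 3 1.1.1.1
sudo networkctl status br0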
Although unlikely to fail, a sudo reboot is recommended to confirm that the system comes back up with a clean networking stack.
quickemu needs our non-admin user to be allowed to use the QEMU bridge helper (the extra steps shown below), and our user needs to be added to the kvm group.
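Those extra steps boil down to allowing br0 in the bridge helper's ACL file (assuming the default /etc/qemu/bridge.conf location), then making the helper setuid:

sudo mkdir -p /etc/qemu
echo "allow br0" | sudo tee /etc/qemu/bridge.conf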
sudo chmod u+s /usr/lib/qemu/qemu-bridge-helper
sudo usermod -a -G kvm $USER
Logout and log back in for the group change to take effect.
Ubuntu 24.04 Desktop on an Ubuntu 24.04 server
We will use quickget to have the tool download the needed installer and create a default configuration file.
We will install all VM files in ~/qemu (adapt as needed), with our user directory as /home/hostuser.
cd
mkdir qemu
cd qemu
quickget ubuntu 24.04
This provides us with an ubuntu-24.04 directory where the different disk images (.iso and .qcow2) will end up. After you run the quickemu tool, you will have a .sh file with the full qemu-system-x86_64 command.
This also provided us with a pre-filled ubuntu-24.04.conf file, which currently contains:
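For Ubuntu 24.04, the generated file looks approximately like this (your quickget version may produce slightly different contents):

#!/usr/bin/quickemu --vm
guest_os="linux"
disk_img="ubuntu-24.04/disk.qcow2"
iso="ubuntu-24.04/ubuntu-24.04-desktop-amd64.iso"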
Let's extend it to specify the number of cores (cpu_cores), RAM, and disk size, have it use our br0 network, and select the VM's MAC address (qemu MAC addresses must start with 52:54:00).
Because we will be using the bridge, the new VM will ask our router for an IP from its DHCP range, and we can perform a static IP reservation later (based on the MAC address we specified).
We are reserving 8 CPU cores, 16GB of RAM, and a 128GB disk (the qcow2 disk will increase as data is added) using the bridge network with a specific MAC address.
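Our extended ubuntu-24.04.conf then looks like this (cpu_cores, ram, disk_size, network, and macaddr are standard quickemu configuration variables; the MAC value shown is the one reused later in this guide):

#!/usr/bin/quickemu --vm
guest_os="linux"
disk_img="ubuntu-24.04/disk.qcow2"
iso="ubuntu-24.04/ubuntu-24.04-desktop-amd64.iso"
cpu_cores="8"
ram="16G"
disk_size="128G"
network="br0"
macaddr="52:54:00:00:01:01"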
Because we are starting the VM from a server host, we must access the guest VM remotely using a Spice remote desktop client.
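An invocation consistent with the output below (the flag names are from recent quickemu releases; confirm with quickemu --help on your version) is:

quickemu --vm ubuntu-24.04.conf --display spice --access remote --spice-port 3001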
The output of the command in our case was as follows (some of the values are hardware-specific; your output will reflect your system):
Quickemu 4.9.4 using /usr/bin/qemu-system-x86_64 v8.2.2
- Host: Ubuntu 24.04 LTS running Linux 6.8 (host)
- CPU: Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz
- CPU VM: 2 Socket(s), 4 Core(s), 2 Thread(s), 16G RAM
- BOOT: EFI (Linux), OVMF (/usr/share/OVMF/OVMF_CODE_4M.fd), SecureBoot (off).
- Disk: ubuntu-24.04/disk.qcow2 (128G)
Just created, booting from ubuntu-24.04/ubuntu-24.04-desktop-amd64.iso
- Boot ISO: ubuntu-24.04/ubuntu-24.04-desktop-amd64.iso
- Display: SPICE, virtio-gpu, GL (on), VirGL (off)
- Sound: intel-hda
- ssh: On host: ssh user@localhost -p 22220
- SPICE: On host: spicy --title "ubuntu-24.04" --port 3001 --spice-shared-dir /home/hostuser
- WebDAV: On guest: dav://localhost:9843/
- 9P: On guest: sudo mount -t 9p -o trans=virtio,version=9p2000.L,msize=104857600 Public-hostuser ~/hostuser
- Network: Bridged (br0)
- Monitor: On host: nc -U "ubuntu-24.04/ubuntu-24.04-monitor.socket"
or : socat -,echo=0,icanon=0 unix-connect:ubuntu-24.04/ubuntu-24.04-monitor.socket
- Serial: On host: nc -U "ubuntu-24.04/ubuntu-24.04-serial.socket"
or : socat -,echo=0,icanon=0 unix-connect:ubuntu-24.04/ubuntu-24.04-serial.socket
- Process: Starting ubuntu-24.04.conf as ubuntu-24.04 ()
- Viewer: spicy --title "ubuntu-24.04" --port "3001" --spice-shared-dir "/home/hostuser" "" >/dev/null 2>&1 &
If you see something akin to cat: ubuntu-24.04/ubuntu-24.04.pid: No such file or directory, something went wrong, and you will want to check ubuntu-24.04/ubuntu-24.04.log for details.
The VM is now started. If you were to do a ps auxww, you would see the long command line defining all the parameters passed to qemu.
Ours starts with /usr/bin/qemu-system-x86_64 -name ubuntu-24.04,process=ubuntu-24.04 -pidfile ubuntu-24.04/ubuntu-24.04.pid -enable-kvm and is multiple lines long.
The Ubuntu VM got its IP, 10.0.0.214, from our router's available DHCP range. It is on the same subnet as our other hosts and can be reached directly at this IP.
Because we know the MAC address (which we specified in the configuration file), we can use the router to specify a reserved IP for future use and extra hardware-firewall-specific features.
The SPICE access is on the VM host, i.e., 10.0.0.17, using port 3001 as manually specified on the quickemu command line.
Once you can see the Ubuntu Desktop (VM guest) installer running on your Ubuntu server (VM host), perform the installation (you will be able to change the resolution) and create your vmuser account.
spice-gtk client
brew provides us with the spice-gtk client (which includes the spicy command) for installation: brew install spice-gtk
Once installed, you can use spicy -h IP -p port.
The spice port above was 3001, and spice is running on the VM host (not the VM itself, also called the VM guest). To access it, use spicy -h 10.0.0.17 -p 3001
aSPICE Pro (alternate macOS SPICE client)
An alternate (paid) SPICE client, named "aSPICE Pro", is available in the Mac App Store.
For the mouse pointer to work with it, per their documentation, you will need to modify the startup command line slightly.
After creating a host configuration (VM guest name, VM host's IP, SPICE port, and SSH access to the VM host if that is your preferred method), you can access your VM guest's Desktop.
If you make a mistake or want to edit the host, you can do so by left-mouse long-pressing on the host configuration you wish to edit.
Post-installation steps & ssh access
Once the Ubuntu Desktop installation is completed, reboot and perform updates on the VM (sudo apt update && sudo apt upgrade), then run sudo apt install openssh-server to install the SSH server.
Because the VM exists on the router's subnet, we can access the VM guest at its IP address directly: ssh [email protected].
Quickemu recommends a few post-installation steps: install the SPICE and WebDAV agents in the guest VM to enable copy/paste, USB redirection, and file sharing.
sudo apt install spice-vdagent spice-webdavd
Docker setup (within the VM)
As discussed in the preamble, this VM can run docker when you prefer not to install it on the VM host.
To do so, please follow the steps in the Ubuntu 24.04 version of the “Setting up NVIDIA docker & podman” guide, limiting yourself to the “Docker setup (from docker.io)” section.
Troubleshooting
If, after an update, you see an error about the bridge helper failing (qemu-system-x86_64: -nic bridge,br=br0,model=virtio-net-pci,mac=52:54:00:00:01:01: bridge helper failed), we need to make the helper setuid again so non-root users can use it.
sudo chmod u+s /usr/lib/qemu/qemu-bridge-helper
Revision History
20240822-0: Added a troubleshooting entry after a recent reboot
20240625-0: Added link to another post + QuickEmu 4.9.5 announcement