This document provides a guide to installing and configuring Unraid, a network-attached storage (NAS) and applications ecosystem. It covers topics ranging from the initial hardware setup and OS installation to array and pool management, share configuration, Docker integration, and “Community Applications” (CA). The guide is structured in a step-by-step format, offering practical advice and best practices to help users leverage the full potential of Unraid.
Revision: 20241022-0 (init: 20240731)
The following are best practices from close to three years of using Unraid. Although much more complex installations (with ZFS pools or elaborate alternate-OS VM setups) are possible, this post is written to support people interested in learning about the tool. I have written about Unraid in the past, for example in “Things I wished I had READ before my first Unraid Install... and more”. This post is different: I recently installed a new workstation and used the experience to document various aspects of the process, such as installation, array management, share configuration, and community applications.
I hope it provides valuable insights for those interested in exploring Unraid further, especially with the 7.0 release coming soon. Note that we will use external links to Unraid’s documentation for additional content on the topics discussed.
Unraid is an operating system that manages storage, computing, and network resources. It provides the features of a software-based network-attached storage (NAS) system but also offers a comprehensive IT infrastructure solution.
As a NAS, Unraid’s “Array” allows mixing hard drives of different sizes and speeds, maximizing storage capacity, and using parity drive(s). The parity drive provides a means to recover data from a failed disk and emulate the content of a damaged drive until it can be replaced. Adding drives is possible without rebuilding the array (as long as the added drives are zeroed out first and are no larger than the array’s parity drive). The array is usually made of spinning drives for mass storage.
“Pools” are generally made of faster storage (SSDs or NVMes) and can be utilized as a cache to enhance performance, allowing frequently accessed data to be stored on faster drives. Pools are the recommended storage space for Docker containers, the applications’ data, and Virtual Machines’ disk images.
“Shares” are the method for sharing data across the network; they are folders created on the server’s disks, on the array, or on a pool. Shares can be set with specific permissions for different users or groups. When shared over the network, they can be public, secure, or private, each providing a different level of access control. Shares can be configured to utilize cache pools, improving performance for frequently accessed data by temporarily storing it on faster drives.
As an IT infrastructure solution, built on a Linux kernel, Unraid’s Docker integration and ability to run Virtual Machines provide a solid base. It can be installed on many x86-64 hardware platforms. Unraid’s built-in hypervisor allows users to run applications and even entire operating systems in isolation.
Unraid runs on a large subset of hardware; the minimal setup requires:
A 64-bit capable processor that runs at 1 GHz or higher.
A minimum of 4 GB of RAM for basic NAS functionality.
Linux hardware driver support for storage, Ethernet, and USB controllers.
Two hard disk drives to ensure data protection with a parity disk.
The more powerful the hardware, the more it is possible to run on Unraid (NAS, VMs, gaming, …), so it is recommended to think ahead about your server’s goals; Unraid’s “Use Cases” page gives a great idea of the possibilities. We will use ours to run Docker services.
Unraid is popular among DIY enthusiasts and fellow self-hosters because of its flexibility, ease of use, and ability to repurpose older hardware. It is a commercial product developed by Lime Technology, Inc., requiring a license. The license is available in different tiers based on the number of storage drives you need to support.
Unraid has many community contributors providing Docker applications and a wide range of plugins, which makes extending its functionality relatively straightforward. The “Community Apps” plugin is Unraid’s “App Store” and the first plugin to install after setup to get access to a wide range of applications. Most of those applications are Docker-based, and their containers can be managed through Unraid’s web-based user interface.
Running applications in Docker containers keeps them isolated from each other and the host system. This isolation helps prevent application conflicts and increases security by limiting what each application can access. It is also possible to control the amount of CPU and memory resources allocated to each container, ensuring that the server remains responsive and stable, even when running multiple services. Among the proposed applications are media servers, data-sharing applications, or various game servers, which can be installed with just a few clicks.
OS Installation
Initial installation
The Unraid OS runs entirely from a USB stick (leaving all disk drives as units of storage) and should run on a system with an Ethernet connection (avoid Wi-Fi) and a static IP for the server host.
To set up the hardware, obtain the “USB creator” and run it. The latest version of Unraid at the time of this installation was 6.12.11; we will, therefore, install this version on a recommended USB stick.
Unraid is a server application; we want to find it at the same IP on our subnet after each reboot. When creating the USB stick, decide on a static IP, or use the server’s MAC address to configure a static DHCP reservation on your router. We will use the 192.168.22.99 IP. During this stage, it is also possible to name the server (the default is tower); we will name ours unraid99.
Once the USB stick is ready, we boot our hardware using it. The Unraid OS will attempt to recognize and configure existing hardware devices. After some time, a Linux login prompt will appear. When the login prompt is displayed, we can configure our unraid99 OS instance. To do so, we must go to the IP of our host using a web browser (at http://192.168.22.99/). We will be presented with the OS’s dashboard and a reminder to register or purchase a license. Testing the OS for up to 30 days before buying a license is possible.
After creating the root password, we can set our Array and Pool(s).
Array, Pools, and Shares
After licensing our copy of Unraid (or using the 30-day trial), we are ready to configure our array and pools. In general, arrays are the primary storage space in Unraid, optimized for capacity, while pools are additional high-performance storage areas that can be configured for specific needs, like caching.
Arrays are often used as the main storage space and are composed of hard drives. An array should have one (up to two) parity drive(s). Depending on the license, it is possible to have more or fewer total drives in the system. The parity drive(s) contain the XOR-ed content of all the other drives constituting the array and must always be of equal or larger size than the largest data disk. For example, if we have 4TB, 5TB, and 2x 6TB drives to constitute our array, the 4TB, 5TB, and one 6TB drives can be our data drives, while the second 6TB will be our parity drive, creating a total array size of 4+5+6 = 15TB. Please see Unraid’s Storage Management page for additional details.
Drives in Unraid’s array do not use a traditional Redundant Array of Independent Disks (RAID) layout. Each drive contains an independent filesystem that can be read individually on any Linux system. The array is optimized for capacity rather than performance. Files are placed on the drives as part of “Shares” and distributed across physical drives following the user-configurable share’s “Allocation method”. This means that although an independent disk can be read from another Linux system, a directory for a given “share” on a single disk might not contain all the files for that share, as some may have been allocated (placed) on another physical disk.
Pools are additional storage spaces using SSDs or other high-performance drives. They do not use the parity redundancy method, relying on standard RAID configurations like RAID 0, 1, 5, etc. Pools are typically used for caching (their former name), providing higher performance than the main array “[and] does this by redirecting write operations to a dedicated disk […] and moves that data to the array on a schedule”. Fast drive-based pools should be where applications and virtual machines’ data are stored while in use for high throughput. Multiple pools can be configured for different purposes, such as one optimized for read performance and another for write performance.
👉
Although this is the usual way to configure Unraid pools and arrays, it is also possible to directly create ZFS pools using hard drives. Please see https://unraid.net/blog/zfs-guide
This writeup will use SSDs for pools and HDDs for array drives.
There are many options for disk format, the primary ones being btrfs, xfs and zfs:
BTRFS (B-Tree File System) is a modern, copy-on-write file system (when a file is modified, the file system does not overwrite the existing data on the drive with that newer information) that provides features like snapshots, checksums, and subvolumes.
XFS is a well-proven journaling file system supporting large files with good parallel I/O performance.
ZFS (Zettabyte File System) goes beyond a file system with its use of pools of disks. ZFS requires a dedicated amount of memory to cache data, so it might not be the best fit for systems that cannot dedicate up to 16GB for its management.
For our setup, we will use btrfs for the pool (unencrypted) and xfs for the data disks (encrypted).
In general, with any drive, to perform changes such as “Erase” (which might be needed before a “File system type” can be assigned), we need to go into the submenu accessed by clicking on the “Device” name, such as Cache, Disk 1, Disk 2… For example, when selecting “Disk 1”, we enter a “Disk 1 Settings” tab. The “Erase” option is available there, and we can delete any existing partition on the drive (we will need to confirm by typing disk1); we can also set the “File system type” at this stage.
Let’s first add our pool, then we will add disks to our array and encrypt those.
Adding a pool
In “Pool Devices,” we select “Add Pool,” give it a name, and set the number of slots to match the physical SSD/NVMe drives to use as pool data disks (these should not be disks from our array of spinning drives). We will use this location for the Docker data, application data, and potential VMs to run on our system.
Depending on the number of drives added to each pool, different RAID options will become available under the pool disks’ selection.
If we only have one pool composed of one disk, we can name it cache. We will format it using btrfs (not encrypted), turning off “compression” but enabling “autotrim” (as our drive has trim capability). After selecting “Apply”, the drive is ready for our pool (it might be required to “Erase” it first).
Adding encrypted disks to the array
Unraid recommends filling hard disks with zeroes before adding them to the array. This speeds up array and parity creation because 0 XOR x = x, i.e., a zeroed disk has no influence on parity.
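Most users rely on the “Unassigned Devices Preclear” plugin (covered later) for this, but for reference, a disk can also be zeroed manually from a terminal. A minimal sketch, assuming the target disk is /dev/sdX and is not assigned to the array or any pool (this is destructive; triple-check the device name):

```bash
# Identify the target disk first (name, size, model, serial)
lsblk -o NAME,SIZE,MODEL,SERIAL

# Overwrite the whole device with zeroes (destroys all data on it!)
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
```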
It is also possible to create the parity drive by adding it last and for it to be filled with the XOR of all data disks. Because we will encrypt disks for our array, we will use this method, therefore we must make a note of which disk we intend to use as the parity drive but not add it just yet.
By using encryption on the data drives, should a drive die, the data on it is an encrypted blob. That disk is still readable on another Linux host, as long as 1) that system can read the filesystem type (here xfs) and 2) LUKS can decrypt the disk content using the data encryption passphrase. As discussed earlier, because of the share’s allocation method, not all files for a given share might be present on that disk.
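To illustrate what that recovery looks like, here is a rough sketch of mounting such a disk read-only on another Linux host; the device name is an assumption, and the cryptsetup package must be installed:

```bash
# Open the LUKS container (prompts for the array's encryption passphrase);
# /dev/sdX1 is assumed to be the Unraid data partition on the moved disk
cryptsetup luksOpen /dev/sdX1 unraid_disk

# Mount the decrypted xfs filesystem read-only and inspect it
mkdir -p /mnt/recovery
mount -o ro /dev/mapper/unraid_disk /mnt/recovery
ls /mnt/recovery   # shows only the share files allocated to THIS disk
```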
Add each drive to the array in the “Disk” slot that you prefer (depending on the license, you will be able to use more or fewer total drives). Usually, the first disk will be written to first; then, following the allocation method, another disk will be used, attempting to spread the data load across the array drives.
For each drive added, we can prepare the disk to our desired settings from the “Disk ID” selection. Under the “Disk ID Settings”, we will have a “File system type” dropdown and an “Erase” button. Use those to prepare the disk for use. This is particularly important if the drives were used before, to clear the previous partition and ready the drive to be formatted. First, we erase all data disks.
After erasing all disks, the “Start the array” button is now available. Before using it, we will first encrypt the data disks.
Selecting our first disk, from the “Settings” submenu we can now change the “File system type” to “xfs - encrypted” and “Apply”.
At the bottom of our “Main” page is a new entry to enter an encryption “passphrase”; ignore it for now.
Repeat the modification of “File system type” for all disks.
The “Start” button changes from greyed out to active. Confirm no “Parity” disk is added just yet and “start the array”.
After the start, all drives will show up with an “Unmountable” error. The “Array Operations” section will show “Unmountable disks present” with an option to “Format” them. Let’s use it.
After formatting the array disks and pool(s), a green lock is present next to each encrypted disk.
Go to “Settings → Disk Settings” and enable “Enable auto start”, then “Apply”.
Reboot (from the “Dashboard” tab) to confirm that the /root/keyfile (our encryption passphrase) is automatically applied.
After the reboot, and confirming that the encrypted array is automatically starting, we will now enable the Parity disk.
From the “Main” tab, “Stop” the array.
Add the remaining disk, reserved for parity (of equal or larger size than the largest individual data disk), in the first “Parity” slot. This drive does not need to be formatted, as its entire content will be the XOR of the content of every data disk present.
Clicking “Start” will “start Parity-Sync and/or Data-Rebuild”, which is what we want: the parity drive will be built as we use the system.
Creating the parity disk means XOR-ing each sector of each array disk onto the parity one, which will take as long as needed for all sectors to be processed. When using disks of different sizes, processing for a smaller disk completes once its last sector is reached. An estimated finish time will appear in the “Array Operations” section.
💡
We recommend making labels for each drive in the system, with the type of drive and the last few digits of the drive’s serial for easy visual review. As you can see from the “Main” tab, knowing which drive is your parity drive and if possible the order in which the other drives are added might prove useful with future modifications. Taking a screenshot is a good way to have this information available at a later time.
Users
To better use Unraid and its shares, it is useful to add “Users” beyond root to our system.
Adding users allows us to control who can access specific shares on the Unraid server. By creating user accounts, we can assign different levels of access to various shares. For example, we might want certain users to have read-only access while others have read-write permissions.
The users menu can be accessed from the “Users” tab or from the “Dashboard” tab when selecting the gears icon in the user’s section.
Once in the interface, select “Add User”, and enter a “User name” for the new user (it is recommended to use lowercase letters and keep the name under 30 characters to ensure compatibility across different operating systems). Optionally, provide a “Description” and a “Custom image” for the user. Set a “Password” and confirm it. Make sure to click “Add” to create the user.
Shares
Shares represent folders or drives on your Unraid server that can be accessed over a network.
Shares allow the organization of data logically, with separate shares for media, documents, backups, etc. They are created in /mnt/user as folders existing at this root. Each can have a primary storage and an optional secondary storage. The primary storage is the location where new files are initially written for a selected share, while the optional secondary storage determines where files can be moved after they have been initially stored in the primary location. The mover process transfers files between storage locations (typically from cache to array) at scheduled times, and its behavior is influenced by the share settings, including the allocation method and split level. Unraid offers different allocation methods for distributing files across disks:
“High-Water”: fills up one disk at a time until it reaches a certain threshold (the “high-water mark”), then moves on to the next disk.
“Fill-up”: uses the lowest-numbered disk that still has free space above a threshold.
“Most-free”: uses the disk with the most free space at the time of writing.
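A quick way to see the allocation method in action is from a terminal: each share exists as a merged view under /mnt/user, while the per-device paths show only the files physically placed on that device. A sketch, assuming a hypothetical media share, a pool named cache, and two data disks:

```bash
ls /mnt/user/media    # merged view of the share across all devices
ls /mnt/cache/media   # files still on the cache pool (not yet moved)
ls /mnt/disk1/media   # only the files allocated to data disk 1
ls /mnt/disk2/media   # only the files allocated to data disk 2
```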
Shares can also, optionally, be shared over the network (using SMB), either as “visible” or “hidden” (one must know the share’s name), and access control to those shares can be selected per added user.
Docker, by default, uses a vDisk image of 20GB, which might not be enough to run some of the applications we will obtain from the “Apps” tab. We will therefore use a cache-allocated share (not network reachable) to store as much data as we need.
From the “Shares” tab, “Add Share”
name it docker
use Cache for the “Primary Storage”
use None for “Secondary Storage”
click “Add Share”
This will create a “Share” that only exists on the “Cache” (i.e., it will not be copied to our disk array unless some backup is enabled for this location).
New options will appear after the share is created, for SMB sharing for example. We will not share that directory.
From “Settings → System Settings → Docker”, disable Docker
select “Delete vDisk file” for the “Docker vDisk location” entry and “Delete”
Change the “Docker data-root” to a “directory” pointing to /mnt/user/docker
keep the “Default appdata storage location” the same
click “Apply”
Change “Enable Docker” to “Yes” then “Apply” again. The “Docker” tab in the main WebUI that had disappeared has now reappeared.
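To confirm the change took effect, the Docker data root can be checked from a terminal:

```bash
# Should print: Docker Root Dir: /mnt/user/docker
docker info | grep 'Docker Root Dir'
```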
💡
It is recommended to create a new bridge network so that Docker containers can communicate with one another. In a terminal on the Unraid system, run docker network create docker, which will create a new Docker network.
Docker assigns each running container the “Name” set in its template; long names such as “AUTOMATIC1111-Stable-Diffusion-Web-UI” can be renamed, for example, to “a1111”.
The advantage of this is that containers attached to the same dedicated bridge network can refer to one another by name. For example, we can link “Open-WebUI” to “Ollama” by using the container’s name (“ollama”); see the sketch below.
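Putting it together, here is a sketch of the equivalent terminal commands; the images and the OLLAMA_BASE_URL variable come from the respective projects’ documentation, and in practice the Unraid template UI generates the docker run invocation for you:

```bash
# Create the dedicated bridge network (run once)
docker network create docker

# Containers attached to the same user-defined bridge resolve each
# other by container name ("ollama" below)
docker run -d --name ollama --network docker ollama/ollama
docker run -d --name open-webui --network docker \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://ollama:11434 \
  ghcr.io/open-webui/open-webui:main
```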
Community Applications
Unraid becomes a true hub of resources once applications are installed. To do so, we must first install the “Community Applications” plugin. This is done by selecting “Apps” from the main WebUI, which will ask us to install the plugin. Use “Install” and acknowledge the various disclaimers. Community Applications (CA) is an essential plugin for almost any Unraid user: it acts as an app store for Unraid, making it easy to find, install, and manage other plugins and Docker applications.
💡
Recommendation: On the CA page, select Settings and enable:
“Allow CA to check for updates to applications”
“Allow CA to send any emergency notifications”
Plugins
The “Plugins” tab lists the different tools you have installed to support the system.
Plugins are loaded when the system starts and are available before the array is started. This is the main difference from Docker, as containers can only be started after the array is started.
To explain this difference, let’s discuss Tailscale (a secure, zero-config VPN service that enables easy and seamless private network connectivity between devices, servers, and services across different infrastructures), which is available both as a Docker container and as a plugin (both are in CA). The Tailscale plugin is started when the Unraid system starts and before the array is started: Tailscale 100.x IPs will be available even if the array is not started, which is useful when attempting to use Tailscale to connect to the Unraid system in case a system check is needed. In contrast, the Docker container requires the array to be started; as such, the system will not be on the tailnet (Tailscale’s private space, inaccessible from the public internet; as more devices are added, they are automatically included in the same tailnet) before the Unraid system has successfully started its array. This was done on purpose: the Tailscale Docker container is designed “for use as a sidecar for Docker containers connected to [docker-specific] networks”, i.e., to share the Docker containers’ ports over Tailscale, while the plugin is here to share the Unraid system itself.
Recommended Plugins
Plugins to install first are those that will enable features in the core system:
“Fix Common Problems”: “find and suggest solutions to common unRaid configuration errors, common problems, etc.” (support page)
“Nvidia drivers” (if you have an NVIDIA GPU in your system): “install all necessary modules and dependencies for your Nvidia Graphics Card so that you can make use of it in your Docker containers” (support page)
Please make sure to disable and enable Docker if you installed the Nvidia driver for the first time! Settings -> Docker -> Enable Docker 'No' -> Apply -> Enable Docker 'Yes' -> Apply
Note that “The server needs to be restarted in order to install the new driver”.
“Unassigned Devices”: “used to mount and share non-array disks, remote SMB or NFS shares, and iso files” and “Unassigned Devices Plus (Addon)”: “[enables support] for HFS+, exFAT, and apfs disk formats, and to enable destructive mode.” (same support page)
Additional Plugins
Those plugins provide welcome features to enhance the capabilities of Unraid:
“Unassigned Devices Preclear”: “used to exercise and clear disks and prepare them for adding to the array” (support page). This additional step when adding new disks can help ensure they are free of defects before being added to the array.
“Auto Update Applications”: automatically keeps plugins and containers up to date as updates become available. Once installed, access its Settings. The choice of plugins and Docker applications is left to the end-user. It is possible to change the update frequency as well as their schedule, and also delay updates (support page)
“Appdata Backup” provides backup of the Docker appdata (and can create a flash drive backup as well). When using it, it is recommended to create a “Share” on the array and use it as the destination for the backup files (shares are placed in /mnt/user). In its settings, it is also possible to schedule and use this tool to update containers after backup; be conscious of choices made with “Auto Update Applications”. (support page)
“Tailscale” is a commercial product, although its “Personal Plan” is likely perfect for most self-hosters. It is “a VPN service that makes the devices and applications you own accessible anywhere in the world, securely and effortlessly. The service handles complex network configuration on your behalf so that you don't have to. Network connections between devices pierce through firewalls and routers as if they weren't there, allowing for direct connections without the need to manually configure port forwarding”. The tool is extremely useful for accessing your infrastructure remotely; it uses IPs in the Carrier-Grade NAT (CGNAT) 100.x range and WireGuard connections. It is possible to “Disable key expiry” for servers, and to “Edit machine IPv4” to create matching IPs between our Tailscale IPs and local infra reserved IPs. After installing, we will be challenged by a login page, which allows us to add the machine to our tailnet. Additional options can then be found in the plugin configuration page, such as Subnet router, Tailscale SSH server, or Taildrop.
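Once the plugin is logged in, the node can be verified from a terminal, assuming the plugin exposes the standard tailscale CLI:

```bash
tailscale status   # list the devices on our tailnet
tailscale ip -4    # show this server's 100.x tailnet IP
```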
There are thousands of applications installable directly from CA; these are all Docker-based. Because we have moved from a vDisk image to a directory, the risk related to space limitations for large containers should be mitigated.
When installing those applications, we will be presented with an installation template that will provide us access to Path, Port, Variable, and Label for the container to be installed. Most templates will come with descriptions of what is needed for the installation. It is recommended to install configurations to the application’s /mnt/user/appdata location (preferably with a primary storage location on a cache), and store data on other shares.
For example, when installing Traefik (onto our custom: docker network), we are presented with multiple options and can follow the provided details to make decisions related to the different Path, Port, environment Variable and Docker Label to select.
The choice of applications to install is left to the end user; many categories are available to select from. To see the up-to-date list of currently available applications, browse the “Apps” tab.
If the application you want isn’t available through Community Applications, you can manually add it by providing the necessary information, such as the Docker image name, container name, and any required volume mappings or port forwarding settings. To do so, from the Docker tab, select Add Container and enter the required information as detailed by the container you are attempting to deploy. Please see https://forums.unraid.net/topic/162164-guide-how-to-install-docker-images-that-are-not-avaliable-on-the-community-applications-page/ for a guide on how to proceed.
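For orientation, the fields of the “Add Container” form map onto a standard docker run invocation. A purely hypothetical sketch (image, port, and paths must come from the documentation of the container you are deploying):

```bash
docker run -d --name myapp \
  --network docker \
  -p 8080:8080 \
  -v /mnt/user/appdata/myapp:/config \
  -v /mnt/user/media:/data \
  example/myapp:latest
```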
Depending on your use of Unraid, there are many different types of applications that can be investigated in the diverse categories of CA, from AI, Backup, Cloud, Game Servers, Home Automation, Productivity, Security, Tools, and many others. We let the reader search r/Unraid or r/selfhosted for lists of recommendations and will only list a few, inviting the reader to check those and more in CA: CloudflaredTunnel, code-server, DockerRegistry, Dozzle, FileBrowser, Gitea, homepage, paperless-ngx, syncthing, traefik, vaultwarden.
Misc: Settings Tab
The following are recommendations for some settings, use at your own discretion.
For some entries in the “Settings” tab, when hovering over the description text, a question mark might pop up; clicking on the text will then display help information for the selected setting.
💡
Remember to “Apply” or “Save” when modifying entries in the different sections as you change settings.
Management Access (System Settings)
Enable SSH (on port 22), this will allow you to connect to the system as root.
If you intend to host a reverse proxy on the Unraid server (Traefik or Nginx Proxy Manager, for example), it is recommended to change the HTTP and HTTPS ports from 80 and 443 to alternate ports. For example, when changing the HTTP port to 888, after clicking “Apply”, remember that to access the Unraid WebUI, you must go to http://192.168.22.99:888/ until your reverse proxy URL (for example: https://unraid99.example.com/) is configured.
UPS Settings (System Settings)
If your setup includes an Uninterruptible Power Supply (UPS, i.e., a battery backup) connected to your Unraid server, start the daemon from “Settings → UPS”.
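Unraid’s built-in UPS support is based on apcupsd; once the daemon is running, its state can be queried from a terminal:

```bash
apcaccess status   # battery charge, load, estimated time left, etc.
```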
Notification Settings (User Preferences)
Having the system tell us when it encounters an issue is very helpful. Unraid proposes many notification methods. Each section is independent of the others, and settings need to be applied with “Apply” for each to be saved.
In the “Notification Settings,” we can decide on the frequency, format, and methods (email or agents) used to publish those notifications. Some enabled features (e.g., “Docker update notification”) will add entries to the notification method section.
“SMTP settings” will allow us to send emails for the notifications for which we selected “Email” as the notification method.
“Notification Agents” lists many possible other agents such as Discord, Slack, Pushover and others.