# Dockge (init: 20240706)

Linux host set-up instructions for Dockge, a self-hosted Docker Compose stacks management tool with a feature-rich interface for self-hosting and home lab setups. It provides an all-in-one view of logs, a YAML editor, a web terminal, container controls, and monitoring.

Follow-up post: “Reverse Proxy: Nginx Proxy Manager (20240730)”, a Dockge deployment of the “Nginx Proxy Manager” reverse proxy to create private-network routable URLs with a Let’s Encrypt wildcard certificate and Cloudflare as our DNS provider.

In this post, we install the Dockge docker compose manager on a host with Docker and an optional Nvidia GPU. We will deploy a few stacks to demonstrate the tool's use. Please note that although we will mention HTTPS reverse proxies (that would upgrade http://127.0.0.1:3001/ to https://dockge.example.com, for example), their setup is not covered in this post.
Each stack is defined by the directory in which the compose.yaml of that stack is placed. Dockge can also take an existing docker run command and propose a matching compose.yaml file.

After installing Dockge as a docker container, we will use it with a few stacks, among which watchtower, dashdot, and CTPO. This setup is done on an Ubuntu 24.04 host but should be adaptable to other Linux distributions with minor alterations. GPU setups require the Nvidia runtime installed on the host system; see “Setting up NVIDIA docker & podman (Ubuntu 24.04)” for details.
## Dockge installation

Following the “basic” installation process, the default stacks directory will be /opt/stacks, and the default port 5001. We use sudo to create the directories and curl the compose.yaml file (feel free to confirm that its content matches the one from the official Dockge page before starting the service):

```bash
# Create directories that store the stacks and Dockge's own stack
sudo mkdir -p /opt/stacks /opt/dockge
cd /opt/dockge
# Download the compose.yaml
sudo curl https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml --output compose.yaml
# Start the server
# add a sudo before if the user is not in the docker group
docker compose up -d
```
The Dockge WebUI is now reachable at http://localhost:5001.
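Before opening the WebUI, it can be worth confirming from the host that the stack came up as expected; a quick sketch, run from /opt/dockge:

```bash
# List the stack's containers and their state
docker compose ps
# The WebUI should answer on port 5001
curl -sI http://localhost:5001 | head -n 1
```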
## watchtower

watchtower’s documentation proposes a docker run command:

```bash
docker run --detach \
    --name watchtower \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower
```

Pasting this docker run command into Dockge and using the Convert to Compose button, we get an already populated UI with an automatically converted compose.yaml content:

```yaml
# ignored options for 'watchtower'
# --detach
version: "3.3"
services:
  watchtower:
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    image: containrrr/watchtower
networks: {}
```
We set General -> Stack Name to watchtower. This will create a directory in /opt/stacks named watchtower, and all content relative to this “stack” will be placed within, such as .env, if any.
We then edit the compose.yaml file to:

- add a restart: unless-stopped policy,
- add command: --cleanup --interval 86400 --include-stopped,
- mount /etc/timezone and /etc/localtime into the container (as ro),
- add labels (more on this shortly).

The resulting compose.yaml is:

```yaml
services:
  # Watchtower - Auto update containers
  watchtower:
    container_name: watchtower
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command: --cleanup --interval 86400 --include-stopped --label-enable
    labels:
      com.centurylinklabs.watchtower.enable: true
```
We save it using the Save button. The stack initially shows as Inactive; using the Start button will show it as active. Container logs can be seen in the Terminal section to investigate if a problem has occurred within the newly run container. There is also a >_ bash button to get a running shell within the terminal, which might be useful for some containers; watchtower does not have either bash or sh available, so the button will not function for this container.

From watchtower’s documentation:

> By default, watchtower will monitor all containers running within the Docker daemon to which it is pointed […] you can restrict watchtower to monitoring a subset of the running containers by specifying the container names as arguments when launching watchtower.
To do so, we can either use --label-enable (and its disable mode) and label all such containers, or --disable-containers followed by a list of the containers to skip. For example, to exclude a locally built container from updates with a label:

```yaml
services:
  builtcontainer:
    image: localbuild:local
    labels:
      com.centurylinklabs.watchtower.enable: false
```
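As a sketch of the --disable-containers alternative (container1 and container2 are placeholder names; the exact list separator is worth confirming against watchtower's documentation):

```bash
# Hypothetical: run watchtower but never update "container1" or "container2"
docker run --detach \
    --name watchtower \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower \
    --disable-containers container1,container2
```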
Conversely, since we run with --label-enable, we set the label to true on the containers we want watchtower to update, so that only those containers are updated.

We also replace --interval with the schedule flag (using a format similar to cron, with a leading seconds field) to request daily updates at 1:30am local time, and enable watchtower’s HTTP API. The API listens on container port 8080; in the compose.yaml, we map it to host port 28080.

```yaml
services:
  watchtower:
    container_name: watchtower
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command: --cleanup --schedule "0 30 1 * * *" --include-stopped --label-enable --http-api-update --http-api-metrics --http-api-periodic-polls
    environment:
      - WATCHTOWER_HTTP_API_TOKEN=secret-token
    ports:
      - 28080:8080
```
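With the API enabled, updates can be triggered and metrics read over HTTP using the configured token; a minimal sketch, assuming the host port mapping above:

```bash
# Trigger an immediate update check via watchtower's HTTP API
curl -s -H "Authorization: Bearer secret-token" http://localhost:28080/v1/update
# Read the metrics endpoint
curl -s -H "Authorization: Bearer secret-token" http://localhost:28080/v1/metrics
```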
## Dozzle

Dozzle is a log viewer designed to simplify the process of monitoring and debugging containers. It is a lightweight, web-based application that provides real-time log streaming, filtering, and searching capabilities through an intuitive user interface. Our compose.yaml:

```yaml
services:
  dozzle:
    container_name: dozzle
    image: amir20/dozzle:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/data
    ports:
      - 8008:8080
    environment:
      DOZZLE_AUTH_PROVIDER: simple
      DOZZLE_ENABLE_ACTIONS: true
    labels:
      com.centurylinklabs.watchtower.enable: true
```
For the simple authentication provider, we create a data directory in the stack location (/opt/stacks/dozzle; you will likely need to sudo to do this), then create and edit a data/users.yml file containing content adapted from Dozzle’s “File Based User Management” page.
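As an illustration only (the authoritative schema is on that page), the simple provider expects the password field to be a SHA-256 hash of the cleartext password; a sketch with placeholder values:

```bash
# Generate the SHA-256 hash to paste into data/users.yml
echo -n 'my-password' | sha256sum

# Sketch of data/users.yml ("admin" and the email are placeholders;
# verify the fields against Dozzle's "File Based User Management" page)
sudo tee data/users.yml > /dev/null <<'EOF'
users:
  admin:
    name: Admin
    email: admin@example.com
    password: <sha-256 hash from above>
EOF
```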
## dashdot

dashdot’s official compose.yaml file (also at this link) is:

```yaml
services:
  dash:
    image: mauricenino/dashdot:nvidia
    restart: unless-stopped
    privileged: true
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
    ports:
      - '80:3001'
    volumes:
      - /:/mnt/host:ro
    environment:
      DASHDOT_WIDGET_LIST: 'os,cpu,storage,ram,network,gpu'
```
To understand why privileged and the / mount are present, please see https://getdashdot.com/docs/installation. The deploy: section defines the device access, here to a GPU.
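Before deploying a GPU-enabled stack, it can be worth confirming the Nvidia container runtime works at all; a minimal check, assuming the driver and container toolkit are installed per the post referenced earlier:

```bash
# Should print the same GPU table as running nvidia-smi directly on the host
docker run --rm --gpus all ubuntu nvidia-smi
```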
The dashdot stack’s compose.yaml in use for this setup uses port 3001 and adds a few environment variables:

```yaml
services:
  dash:
    image: mauricenino/dashdot:nvidia
    container_name: dashdot-nvidia
    restart: unless-stopped
    privileged: true
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
    ports:
      - '3001:3001'
    volumes:
      - /:/mnt/host:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      DASHDOT_WIDGET_LIST: 'os,cpu,storage,ram,network,gpu'
      DASHDOT_SHOW_HOST: true
      DASHDOT_CUSTOM_HOST: hostname
      DASHDOT_OVERRIDE_OS: 'Ubuntu 24.04'
    labels:
      com.centurylinklabs.watchtower.enable: true
```
Among those:

- DASHDOT_CUSTOM_HOST is used to allow control of the hostname displayed.
- DASHDOT_OVERRIDE_OS is in use to avoid the tool giving details about the running container (rather than the docker host, here running Ubuntu 24.04).

## CTPO

CTPO’s latest Docker image is available as infotrend/ctpo-jupyter-tensorflow_pytorch_opencv:latest. The README.md’s docker compose section gives us the following:

```yaml
services:
  jupyter_ctpo:
    container_name: jupyter_ctpo
    image: infotrend/ctpo-jupyter-cuda_tensorflow_pytorch_opencv:latest
    restart: unless-stopped
    ports:
      - 8888:8888
    volumes:
      - ./iti:/iti
      - ./home:/home/jupyter
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    labels:
      com.centurylinklabs.watchtower.enable: true
```
In this compose.yaml:

- the ports entry follows the local_port:container_port convention.
- volumes (mount points) are created for the /iti directory, where the Jupyter interface starts from, and /home/jupyter, where user configurations are stored. Because Docker will create those local to the directory in /opt/stacks where the service is started, it is convenient to access their content.
- with the environment: settings, we ensure that the container has full access to the NVIDIA device(s), and point the resources (in deploy:) to the first GPU available on the host (adapt as needed).

In Dockge, we use + Compose and paste the above compose.yaml content, naming the stack jupyter_ctpo, and Deploy it. We await the docker pull to complete before being able to go to http://127.0.0.1:8888/, enter the Jupyter access token (here set as iti), and confirm access to the GPU by running a new terminal and typing nvidia-smi.
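Beyond nvidia-smi, the bundled frameworks can be checked from that same Jupyter terminal; a sketch (exact outputs depend on the image version):

```bash
# Both commands should report at least one CUDA-capable device
python3 -c 'import torch; print(torch.cuda.is_available())'
python3 -c 'import tensorflow as tf; print(tf.config.list_physical_devices("GPU"))'
```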
A CPU-only variant can be deployed as a jupyter_tpo stack, using a compose.yaml as:

```yaml
services:
  jupyter_tpo:
    container_name: jupyter_tpo
    image: infotrend/ctpo-jupyter-tensorflow_pytorch_opencv:latest
    restart: unless-stopped
    ports:
      - 8889:8888
    volumes:
      - ./iti:/iti
      - ./home:/home/jupyter
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      - NVIDIA_VISIBLE_DEVICES=void
    labels:
      com.centurylinklabs.watchtower.enable: true
```
Compared to the CTPO stack, we have changed the container_name and are using a different local port. NVIDIA_VISIBLE_DEVICES=void is here for the case where the default docker runtime is set to nvidia-docker; it is only used in that case, so keeping it is benign.
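To check whether that situation applies, the default runtime can be queried on the host; a quick sketch:

```bash
# Prints "runc" on a stock install, "nvidia" if it was made the default runtime
docker info --format '{{.DefaultRuntime}}'
```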
## SyncThing

The syncthing Dockge stack matches this second case of a Send Only configuration to the NAS. For this use, we prefer being able to run as root so that the tool can read every single file it encounters (which is not the recommended way, per the “Please consider using a normal user account” message that we will encounter; see the additions to the environment section). We mount two directories, /opt (where Dockge’s stacks are located, which might include models for AI tools) and /home (where the different user directories are present), to be shared with SyncThing peers (in our case, the local NAS). The compose.yaml with those settings is as follows (please adapt hostname as preferred):

```yaml
services:
  syncthing:
    image: syncthing/syncthing
    container_name: syncthing
    hostname: hostname
    environment:
      - PUID=0
      - PGID=0
    volumes:
      - ./st-sync:/var/syncthing
      - /opt:/data1
      - /home:/data2
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 8384:8384 # WebUI
      - 22000:22000/tcp # sync protocol (TCP)
      - 22000:22000/udp # sync protocol (QUIC)
      - 21027:21027/udp # local discovery broadcasts
    restart: unless-stopped
    labels:
      com.centurylinklabs.watchtower.enable: true
```
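If the host runs a firewall such as ufw (common on Ubuntu), the sync and discovery ports need to be reachable by peers; a sketch of the corresponding rules (the 8384 WebUI port is best left LAN-only):

```bash
# Sync protocol (TCP and QUIC) and local discovery broadcasts
sudo ufw allow 22000/tcp
sudo ufw allow 22000/udp
sudo ufw allow 21027/udp
```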
Revisions:

- Added container_name for existing containers.
- Refined the watchtower container update rule with --label-enable.
- Added labels to watchtower and using those for our stacks + passing timezone details to running containers.
- Removed version: from the compose.yaml files as those are not needed + Added a SyncThing section.