Dockge (init: 20240706)

Linux host set-up instructions for Dockge, a self-hosted Docker Compose stacks management tool with a feature-rich interface for self-hosting and home lab setups. It provides access to an all-in-one view of logs, a YAML editor, a web terminal, container controls, and monitoring.

In this post, we install Dockge, a `docker compose` manager, on a host with Docker and an optional Nvidia GPU. We will deploy a few stacks to demonstrate the tool's use. Please note that although we will mention HTTPS reverse proxies (which would upgrade http://127.0.0.1:3001/ to https://dockge.example.com, for example), their setup is not covered in this post.

Dockge creates one directory per stack, in which the `compose.yaml` of that stack is placed. It can also convert a `docker run` command and proposes a matching `compose.yaml` file.

After installing `docker`, we will use Dockge with a few stacks, among which watchtower, dashdot, and CTPO. This setup is done on an Ubuntu 24.04 host but should be adaptable to other Linux distributions with minor alterations. GPU setups require the Nvidia runtime installed on the host system; see “Setting up NVIDIA docker & podman (Ubuntu 24.04)” for details.
Following the “basic” installation process, the default stacks directory will be in `/opt/stacks`, and the default port `5001`. We use `sudo` to create the directories and `curl` the `compose.yaml` file (feel free to confirm that its content matches the one from the official Dockge page before starting the service):

```bash
# Create the directories that store the stacks and Dockge's compose.yaml
sudo mkdir -p /opt/stacks /opt/dockge
cd /opt/dockge

# Download the compose.yaml
sudo curl https://raw.githubusercontent.com/louislam/dockge/master/compose.yaml --output compose.yaml

# Start the server
# (add a sudo before the command if the user is not in the docker group)
docker compose up -d
```
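Once `docker compose up -d` returns, an optional check confirms the service is running:

```bash
# Confirm the Dockge container is up before opening the UI
cd /opt/dockge
docker compose ps
```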
The Dockge web UI is then available at http://localhost:5001.
As a first stack, we use watchtower. Its documentation provides this `docker run` command:

```bash
docker run --detach \
  --name watchtower \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```

Pasting this `docker run` into Dockge and using the `Convert to Compose` button, we get an already populated UI with an automatically converted `compose.yaml` content:

```yaml
# ignored options for 'watchtower'
# --detach
version: "3.3"
services:
  watchtower:
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    image: containrrr/watchtower
networks: {}
```
We set `General -> Stack Name` to `watchtower`. This will create a directory named `watchtower` in `/opt/stacks`, and all content relative to this “stack” will be placed within it, such as an `.env` file, if any.

We alter the proposed `compose.yaml` file to:
- add `restart: unless-stopped`
- add `command: --cleanup --interval 86400 --include-stopped`
- mount `/etc/timezone` and `/etc/localtime` into the container (as `ro`)
- add `labels` (more on this shortly)

The resulting `compose.yaml` is:

```yaml
services:
  # Watchtower - Auto update containers
  watchtower:
    container_name: watchtower
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command: --cleanup --interval 86400 --include-stopped --label-enable
    labels:
      com.centurylinklabs.watchtower.enable: true
```
We deploy this stack using the `Save` button. A non-running stack is listed as `Inactive`; using the `Start` button will show it as `active`. Container logs can be seen in the `Terminal` section, to investigate if a problem has occurred within the newly run container. There is also a `>_ bash` button to get a running `shell` within the terminal, which might be helpful for some containers; `watchtower` does not have either `bash` or `sh` available, so the button will not function for this container.

From watchtower's documentation:

> By default, watchtower will monitor all containers running within the Docker daemon to which it is pointed […] you can restrict watchtower to monitoring a subset of the running containers by specifying the container names as arguments when launching watchtower.
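For example, based on the quoted documentation, watchtower can be limited to a couple of containers by naming them (a sketch; `nginx` and `redis` are placeholder container names):

```bash
# Only monitor the containers named "nginx" and "redis"
docker run --detach \
  --name watchtower \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  nginx redis
```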
Alternatives are to use `--label-enable` (and its disable mode) and label all such containers, or `--disable-containers` followed by a list of the containers to skip. For example, to exclude a locally built container from updates:

```yaml
services:
  builtcontainer:
    image: localbuild:local
    labels:
      com.centurylinklabs.watchtower.enable: false
```

Since we run with `--label-enable`, we instead set the label to `true` to update only those containers.

We also switch to the `schedule` flag (using a format similar to cron, with a leading seconds field) to request daily updates at 1:30 a.m. local time, and we enable watchtower's HTTP API (`--http-api-periodic-polls` keeps the scheduled updates active alongside it). The API listens on container port 8080; in the `compose.yaml`, we map it to host port 28080:

```yaml
services:
  watchtower:
    container_name: watchtower
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command: --cleanup --schedule "0 30 1 * * *" --include-stopped --label-enable --http-api-update --http-api-metrics --http-api-periodic-polls
    environment:
      - WATCHTOWER_HTTP_API_TOKEN=secret-token
    ports:
      - 28080:8080
```
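With the HTTP API enabled, an update run can be triggered (and metrics read) from the host; a minimal check, assuming the `secret-token` and port mapping above:

```bash
# Trigger an immediate update run (requires --http-api-update)
curl -H "Authorization: Bearer secret-token" http://localhost:28080/v1/update

# Read the metrics endpoint (requires --http-api-metrics)
curl -H "Authorization: Bearer secret-token" http://localhost:28080/v1/metrics
```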
The next stack is Dozzle, a log viewer designed to simplify the process of monitoring and debugging containers. It is a lightweight, web-based application that provides real-time log streaming, filtering, and searching capabilities through an intuitive user interface.
Our `compose.yaml` for this stack is:

```yaml
services:
  dozzle:
    container_name: dozzle
    image: amir20/dozzle:latest
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/data
    ports:
      - 8008:8080
    environment:
      DOZZLE_AUTH_PROVIDER: simple
      DOZZLE_ENABLE_ACTIONS: true
    labels:
      com.centurylinklabs.watchtower.enable: true
```

Before starting the stack, we create a `data` directory in the stack location (in `/opt/stacks/dozzle`; we will likely need `sudo` to do this). Then, we create and edit a `data/users.yml` file containing content adapted from Dozzle's “File Based User Management” page.
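As a sketch of those steps (the exact `users.yml` schema is on that page; Dozzle's `simple` provider expects a SHA-256 hash of the password, and `changeme` is a placeholder):

```bash
# Create the directory backing the ./data volume
sudo mkdir -p /opt/stacks/dozzle/data

# Generate the SHA-256 password hash to paste into data/users.yml
echo -n "changeme" | sha256sum
```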
Next is dashdot, a modern server dashboard. The GPU-enabled `compose.yaml` suggested by its documentation follows (the file is also at this link):

```yaml
services:
  dash:
    image: mauricenino/dashdot:nvidia
    restart: unless-stopped
    privileged: true
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
    ports:
      - '80:3001'
    volumes:
      - /:/mnt/host:ro
    environment:
      DASHDOT_WIDGET_LIST: 'os,cpu,storage,ram,network,gpu'
```
To understand why `privileged` and the `/` mount are present, please see https://getdashdot.com/docs/installation. The `deploy:` section defines the device access, here to a GPU.

The dashdot stack's `compose.yaml` in use for this setup uses port 3001 and adds a few environment variables:

```yaml
services:
  dash:
    image: mauricenino/dashdot:nvidia
    container_name: dashdot-nvidia
    restart: unless-stopped
    privileged: true
    deploy:
      resources:
        reservations:
          devices:
            - capabilities:
                - gpu
    ports:
      - '3001:3001'
    volumes:
      - /:/mnt/host:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      DASHDOT_WIDGET_LIST: 'os,cpu,storage,ram,network,gpu'
      DASHDOT_SHOW_HOST: true
      DASHDOT_CUSTOM_HOST: hostname
      DASHDOT_OVERRIDE_OS: 'Ubuntu 24.04'
    labels:
      com.centurylinklabs.watchtower.enable: true
```
`DASHDOT_CUSTOM_HOST` allows control of the hostname displayed, and `DASHDOT_OVERRIDE_OS` prevents the tool from reporting details about the running container rather than the Docker host (here, running Ubuntu 24.04).
The next stack is CTPO, Infotrend's CUDA + TensorFlow + PyTorch + OpenCV container, which includes a Jupyter Lab setup. The latest CPU Docker image is available as `infotrend/ctpo-jupyter-tensorflow_pytorch_opencv:latest`, with a CUDA-enabled variant as `infotrend/ctpo-jupyter-cuda_tensorflow_pytorch_opencv:latest`. The `README.md`'s `docker compose` section gives us the following:

```yaml
services:
  jupyter_ctpo:
    container_name: jupyter_ctpo
    image: infotrend/ctpo-jupyter-cuda_tensorflow_pytorch_opencv:latest
    restart: unless-stopped
    ports:
      - 8888:8888
    volumes:
      - ./iti:/iti
      - ./home:/home/jupyter
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    labels:
      com.centurylinklabs.watchtower.enable: true
```
The exposed port follows the `local_port:container_port` convention. There are `volumes` (mount points) created for the `/iti` directory, where the Jupyter interface starts from, and `/home/jupyter`, where user configurations are stored. Because Dockge makes those local to the stack's directory in `/opt/stacks` where the service is started, it is convenient to access their content. With the `environment:` settings, we ensure that the container has full access to the NVIDIA device(s), and we point the `resources` (in `deploy:`) to the first GPU available on the host (adapt as needed).

We use `+ Compose`, paste the above `compose.yaml` content, name the stack `jupyter_ctpo`, and `Deploy` it. We await the `docker pull` to complete before being able to go to http://127.0.0.1:8888/, enter the Jupyter access token (here set as `iti`), and confirm access to the GPU by running a new terminal and typing `nvidia-smi`.
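The same verification can be scripted from the host (assuming the `jupyter_ctpo` container name above, and that the container's `python3` bundles PyTorch, as the image name suggests):

```bash
# GPU visible to the container?
docker exec jupyter_ctpo nvidia-smi

# GPU visible to PyTorch?
docker exec jupyter_ctpo python3 -c "import torch; print(torch.cuda.is_available())"
```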
A CPU-only variant can be created as a `jupyter_tpo` stack, using a `compose.yaml` such as:

```yaml
services:
  jupyter_tpo:
    container_name: jupyter_tpo
    image: infotrend/ctpo-jupyter-tensorflow_pytorch_opencv:latest
    restart: unless-stopped
    ports:
      - 8889:8888
    volumes:
      - ./iti:/iti
      - ./home:/home/jupyter
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      - NVIDIA_VISIBLE_DEVICES=void
    labels:
      com.centurylinklabs.watchtower.enable: true
```
Note that we changed the `container_name` and are using a local port different from the CTPO version's. `NVIDIA_VISIBLE_DEVICES=void` matters only if the default Docker runtime is set to `nvidia-docker`; it is unused otherwise, so keeping it is benign.
SyncThing is a continuous file synchronization program; our `syncthing` Dockge stack uses a `Send Only` configuration to the NAS. For this use, we prefer being able to run as `root` so that the tool can read every single file it encounters (which is not the recommended way, per the “Please consider using a normal user account” message that we will encounter; see the additions to the `environment` section). We will mount two directories, `/opt` (where Dockge's stacks are located, which might include models for AI tools) and `/home` (where the different user directories are present), to be shared with SyncThing peers (in our case, the local NAS). The `compose.yaml` with those settings is as follows (please adapt `hostname` as preferred):

```yaml
services:
  syncthing:
    image: syncthing/syncthing
    container_name: syncthing
    hostname: hostname
    environment:
      - PUID=0
      - PGID=0
    volumes:
      - ./st-sync:/var/syncthing
      - /opt:/data1
      - /home:/data2
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 8384:8384
      - 22000:22000
      - 21027:21027
    restart: unless-stopped
    labels:
      com.centurylinklabs.watchtower.enable: true
```
Relaying can be toggled from the `Settings -> Connections -> Enable Relay` menu.
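Once deployed, two quick checks from the host (using the port mapping and `container_name` from the compose above):

```bash
# The SyncThing GUI should answer on the mapped port
curl -I http://localhost:8384

# Follow the logs while the initial scan of /data1 and /data2 runs
docker logs -f syncthing
```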
Our next stack is ComfyUI, deployed using the ComfyUI-Nvidia-Docker container. It runs ComfyUI as a `comfy` user, whose container-side user and group ID can be set at runtime using arguments. This allows end users to have local directory structures for all the side data (`input`, `output`, `temp`, `user`) and the entire `models` folder structure separate from the container and owned by the starting user. Please see GitHub at https://github.com/mmartial/ComfyUI-Nvidia-Docker for complete details on how to use the container and additional details on using it with `docker compose`.

We create a `comfyui-nvidia` stack with the following `compose.yaml`:

```yaml
services:
  comfyui-nvidia:
    image: mmartial/comfyui-nvidia-docker:latest
    container_name: comfyui-nvidia
    ports:
      - 7188:8188
    volumes:
      - ./run:/comfy/mnt
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped
    environment:
      - WANTED_UID=1000
      - WANTED_GID=1000
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
      - COMFY_CMDLINE_XTRA=
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
                - compute
                - utility
    labels:
      - com.centurylinklabs.watchtower.enable=true
```
We set `WANTED_UID` and `WANTED_GID` to match the desired user and group ID (which can be obtained using the `id` command). Save the stack, and within `/opt/stacks/comfyui-nvidia` create a `run` directory, then change it to be owned by the selected ID: `sudo chown 1000:1000 run`.
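Collected as commands (the `1000:1000` values reflect this setup; substitute the values reported by `id`):

```bash
id    # note the uid/gid to use for WANTED_UID/WANTED_GID
cd /opt/stacks/comfyui-nvidia
sudo mkdir run                 # mounted as /comfy/mnt in the container
sudo chown 1000:1000 run       # must match WANTED_UID/WANTED_GID
```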
The container's first start performs its installation within the `run` folder. After successful installation, the container must be restarted to access the WebUI on port `7188` (it runs within the container on port `8188`).

So far, everything is kept within the `/opt/stacks`
directory. If we want to move the `./run` folder to another disk (for example, a `comfyui-nvidia-run` folder within the `/data` mounted disk), we can modify the `compose.yaml` to reflect that location (the following file contains additional components, such as HomePage —with multiple instances— and Traefik, their uses being described here and here):

```yaml
services:
  comfyui-nvidia:
    image: mmartial/comfyui-nvidia-docker:latest
    container_name: comfyui-nvidia
    ports:
      - 7188:8188
    networks:
      - traefik_default
    volumes:
      - /data/comfyui-nvidia-run:/comfy/mnt
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped
    environment:
      - WANTED_UID=1000
      - WANTED_GID=1000
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all
      - COMFY_CMDLINE_XTRA=
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
                - compute
                - utility
    labels:
      - com.centurylinklabs.watchtower.enable=true
      - traefik.enable=true
      - traefik.http.routers.entrypoints=https
      - traefik.http.routers.rule=Host(`comfyui.example.com`)
      - homepage.name=ComfyUI
      - homepage.group=TAB_apps
      - homepage.icon=SIGN.png
      - homepage.instance.INSTANCE1.href=https://comfyui.example.com/
      - homepage.instance.INSTANCE2.href=https://comfyui.example.net/
      - homepage.description=ComfyUI (TAB)
networks:
  traefik_default:
    external: true
```
An alternative is to keep everything under `/opt/stacks`: we move the `/opt/stacks/comfyui-nvidia` folder to `/data/dockge-stacks/comfyui-nvidia`, then bind mount it back to its original `/opt` location.

To do so, we first stop the `comfyui-nvidia`
container from within Dockge, then Dockge itself, to avoid potential issues (`cd /opt/dockge; docker compose down`). We move the folder (`cd /opt/stacks; mv comfyui-nvidia /data/dockge-stacks/.`) and create a directory within the `/opt/stacks` location to act as the mount point: `sudo mkdir /opt/stacks/comfyui-nvidia`. We then edit the
`/etc/fstab` file to add a new line (modifying the file will allow the mount to occur following a system reboot):

```
# added bind mount
/data/dockge-stacks/comfyui-nvidia /opt/stacks/comfyui-nvidia none defaults,bind 0 0
```
We reload `fstab` using `sudo systemctl daemon-reload`, then `mount /opt/stacks/comfyui-nvidia`. The content of `/data/dockge-stacks/comfyui-nvidia` will appear within `/opt/stacks/comfyui-nvidia`; any change at one location will be reflected in the other.
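An optional check with `findmnt` (part of util-linux) confirms the bind mount is active:

```bash
# The SOURCE column should show the /data device with the
# dockge-stacks/comfyui-nvidia subpath
findmnt /opt/stacks/comfyui-nvidia
```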
We can then restart Dockge (`cd /opt/dockge; docker compose up -d`) and start the `comfyui-nvidia` container.

Within SyncThing, we can `Edit` a given folder and then add an `Ignore Pattern` for the mounted directory, to avoid duplicate synchronization of what points to the same content.
Revision notes:
- Added `container_name` for existing containers.
- Updated the `watchtower` container update rule with `--label-enable`.
- Added `watchtower` labels and used those for our stacks + passing timezone details to running containers.
- Removed `version:` from the `compose.yaml` files as those are not needed + Added a SyncThing section.