cloudflared on Docker Compose (Rev: 07/30)

Dockge deployment of the cloudflared Zero Trust tunnel to create private-network public URLs with some access control.

Jul 27, 2024
Revision: 20240730-0 (init: 20240720)
 
This post discusses using cloudflared within Docker Compose (as a Dockge stack), exposing web applications, and making those available as HTTPS URLs. By using Dockge, we will retain finer control of the ports the Zero Trust tunnel service can access and have the ability to enable and disable stacks as required (including the cloudflared one) from Dockge’s WebUI.
 

Recommended pre-reads

The following content will refer to some instructions and concepts described in previous posts.
  • Dockge, where we describe installing Dockge onto a system and adding “stacks” (Docker compose services) to it
  • VPS: Cloudflare Zero Trust access to Web Applications in which we discuss exposing web applications on a VPS with no ports exposed (save for SSH on an alternate port) and making those available as HTTPS URLs. In particular, in this post, we discuss the initial setup of the domain and the steps needed to create a Zero Trust tunnel and add applications to it.

Preamble

Docker Compose provides a powerful networking solution for managing container communication in a multi-container application. By default, Docker Compose sets up a single network for the applications defined in the same compose.yaml; each container can reach the others using the service’s name (the entry at the next indentation level under services: — in the below example, this means web). In addition, it is possible to specify network-related configuration, such as external networks (networks created outside the current Compose environment) or network drivers (such as bridge, overlay, or host).
The external flag connects a service to a network external to the current compose.yaml file.
For example, if we were to create a web service (in directory www):
```yaml
# www/compose.yaml
services:
  web:
    image: nginx
# networks are named after the directory where "compose.yaml" is located
# here the directory is "www": "docker compose up" will create a "www_default" network
```
and then added an api service in a separate directory, that service would need to declare the external network and join it in its own compose.yaml:
```yaml
# actions/compose.yaml
services:
  api:
    image: notexisting/apihandler
    networks:
      - www_default
networks:
  www_default:
    external: true
```
Because of this, when deploying cloudflared from within a Docker Compose stack (one that does not use the host network driver or the host.docker.internal method), we can get finer control over which services are accessible to the Zero Trust tunnel (compared to the default installation method, which grants the tunnel access to any port exposed on the host).
This proves useful, for example, when using Dockge to control the deployment of services on a system with multiple Zero Trust access policies and wanting finer control over which ports the Cloudflare tunnel can see. With the network method in compose.yaml, only services that explicitly join the tunnel’s network can be accessed.

cloudflared setup using Docker Compose

Using Dockge, we will set up a new stack named cloudflared. For this setup, obtain the tunnel token (see the “tunnel’s token” entry in the “cloudflared: Cloudflare tunnels” section of VPS: Cloudflare Zero Trust access to Web Applications) and store it as the TUNNEL_TOKEN variable in the .env section of the Dockge UI.
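For reference, the stack’s .env then holds a single entry (the token value itself is elided here; paste the one shown in the Zero Trust dashboard):

```
TUNNEL_TOKEN=<paste the tunnel token here>
```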
The service’s compose.yaml is as follows:
```yaml
services:
  cloudflared:
    image: cloudflare/cloudflared
    container_name: cloudflare-tunnel
    restart: unless-stopped
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
```
When started, Docker Compose creates a network for this stack; since Dockge uses the stack name for the directory name, the network’s name is cloudflared_default.
Adding new services that will be reachable by cloudflared means making sure they can see and use the cloudflared_default network. For this, we will add to each such container’s definition:
```yaml
# within the service's definition:
    networks:
      - cloudflared_default
# at the top level of the compose.yaml:
networks:
  cloudflared_default:
    external: true
```
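As a complete (hypothetical) illustration, a minimal stack exposing the traefik/whoami test image to the tunnel could look like this — the service name and image are placeholders for illustration, not part of the setup described here:

```yaml
services:
  whoami:
    image: traefik/whoami      # hypothetical test service, listens on port 80
    container_name: whoami_test
    restart: unless-stopped
    networks:
      - cloudflared_default    # join the tunnel's network
networks:
  cloudflared_default:
    external: true             # created by the cloudflared stack
```

Note that no ports: entry is needed: the tunnel reaches the container’s internal port directly over the shared network.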
After starting the service, you will see that the tunnel is connected in the Zero Trust dashboard.

Example setup: Two cloudflared-protected OpenAI WebUIs

Let us demonstrate exposing two similar services on a system with no internet-reachable ports, such that the running services are exposed using cloudflared. If other services are running in Dockge on that host, even if they are added to the Cloudflare Zero Trust dashboard, they cannot be contacted by the Cloudflare Tunnel unless they are also added to the cloudflared_default network.
Here we will deploy two sets of services, both based on OpenAI WebUI:
  1. The first one is available publicly but protected by a WebUI password (using Streamlit’s secrets.toml). Those end users get access to gpt-4o-mini in the WebUI.
  2. The second one is only accessible by users authorized by a One-time PIN sent to their email. Those users get access to more GPTs and the DallE component of the WebUI.
Although access to those services is granted through the Zero Trust tunnel, when a service is down in Dockge, attempts to reach it will fail.

oaiwui_gptonly

This is the first service; we will name it oaiwui_gptonly. Its compose.yaml file contains (“Save” but do not “Deploy” yet):
```yaml
services:
  oaiwui_gptonly:
    image: infotrend/openai_webui:latest
    container_name: oaiwui_gptonly_container
    restart: unless-stopped
    volumes:
      - ./savedir:/iti
      - ./secrets.toml:/app/.streamlit/secrets.toml:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 18501:8501
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OAIWUI_SAVEDIR=/iti
      - OAIWUI_GPT_ONLY=True
      - OAIWUI_GPT_MODELS=gpt-4o-mini
      - OAIWUI_GPT_VISION=False
      - OAIWUI_DALLE_MODELS=dall-e-3
    networks:
      - cloudflared_default
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
networks:
  cloudflared_default:
    external: true
```
Add OPENAI_API_KEY in the .env section of the Dockge UI. When creating the API key, follow OpenAI’s instructions at https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key and create keys with limited access; for the purpose of the WebUI, only “read” access to the “Models” endpoints and “write” access to the “Model capabilities” are needed.
After saving the service and before starting it, we need to create the savedir directory and the secrets.toml file (which contains the shared password):
```shell
# you might require sudo permission for these commands
cd /opt/stacks/oaiwui_gptonly
mkdir savedir
nano secrets.toml
# add a line with: password = "SET_YOUR_PASSWORD_HERE"
```
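The manual steps can also be scripted. This sketch assumes Dockge’s default /opt/stacks layout (override STACK_DIR if yours differs) and writes a placeholder password that you should replace before deploying:

```shell
#!/bin/sh
# STACK_DIR is an assumption based on Dockge's default stacks directory.
STACK_DIR="${STACK_DIR:-/opt/stacks/oaiwui_gptonly}"
mkdir -p "$STACK_DIR/savedir"
# Streamlit expects a TOML file with a top-level "password" entry.
printf 'password = "%s"\n' "SET_YOUR_PASSWORD_HERE" > "$STACK_DIR/secrets.toml"
# Sanity check: the entry must be present and quoted.
grep -q '^password = ".*"$' "$STACK_DIR/secrets.toml" && echo "secrets.toml OK"
```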
In this Docker Compose file, we have added the oaiwui_gptonly service to the cloudflared_default network, making it accessible to the tunnel. The internal container port is 8501, while the host-exposed port is 18501. If this service is running on our VPS, ufw prevents access to any port other than our alternate SSH port.
From the Cloudflare dashboard, in “Zero Trust → Networks → Tunnels”, we select the tunnel and add a new “Public Hostname”. Here, we will set it to oai.example.com and point the service to http and oaiwui_gptonly:8501. Compared to the host setup of the Zero Trust tunnel, where services point to 127.0.0.1 and the port exposed on the host, because we are using Docker Compose we can use the name of the service (it is on the same Docker network, so the name can be resolved) and must use the internal port of the service (not the host-exposed port).
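For reference, with a locally-managed tunnel the same routing would be expressed as ingress rules in cloudflared’s config.yml. Our tunnel is remotely managed, so this is configured in the dashboard instead; the fragment below is shown only to make the service-name/internal-port mapping explicit:

```yaml
ingress:
  - hostname: oai.example.com
    service: http://oaiwui_gptonly:8501   # service name + internal container port
  - service: http_status:404              # catch-all rule (required last entry)
```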
After starting the service, it will be accessible at https://oai.example.com/, with access limited by the Zero Trust configuration and the shared password.
When the service is taken down in Dockge, it will not be accessible at that URL.
Similarly, if the cloudflared service is turned off in Dockge, all access to any authorized service will be disabled.

oaiwui_full

Here, we will follow similar steps as above with the following compose.yaml file (here too, “Save” but do not “Deploy”):
```yaml
services:
  oaiwui_full:
    image: infotrend/openai_webui:latest
    container_name: oaiwui_full_container
    restart: unless-stopped
    volumes:
      - ./savedir:/iti
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 28501:8501
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OAIWUI_SAVEDIR=/iti
      - OAIWUI_GPT_ONLY=False
      - OAIWUI_GPT_MODELS=gpt-4o-mini,gpt-4o
      - OAIWUI_GPT_VISION=True
      - OAIWUI_DALLE_MODELS=dall-e-3
    networks:
      - cloudflared_default
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
networks:
  cloudflared_default:
    external: true
```
This time, we will set the public hostname to oaiwui.example.com and have it point to http and oaiwui_full:8501. This uses the host:port representation: oaiwui_full and oaiwui_gptonly are independent hosts (each with its own IP on the Docker network), so each can run its own service on port 8501.
This is not the case for the ports exposed on the host system; we therefore used ports 18501 and 28501 to prevent a service from failing to start because its port is already in use.

Revision History

  • 20240730-0: Passed timezone and watchtower label to docker compose
  • 20240727-0: initial release