VPS: Cloudflare Zero Trust access to Web Applications (20240715)
Linux host setup for cloudflared to allow Zero Trust access to a running web application, using one-time OTP to email, and alternative rules such as country blocking.
Dockge deployment of the cloudflared Zero Trust tunnel to create private-network public URLs with some access control.
Dockge: cloudflared Zero Trust tunnel to create private-network public URLs with some access control (init: 20240720)

We will run cloudflared within Docker Compose (as a Dockge stack), exposing web applications and making those available as HTTPS URLs. By using Dockge, we retain finer control of the ports the Zero Trust tunnel service can access, and we can enable and disable stacks as required (including the cloudflared one) from Dockge's WebUI.

Docker Compose creates a default network for each compose.yaml; each container can communicate with the others using the service's name (the entry at the next indentation level from services:, which in the example below means web). It is also possible to specify network-related configurations, such as external networks (services external to the Compose environment) or network drivers (such as bridge, overlay, or host).

For example, a web service (in directory www):

```yaml
# www/compose.yaml
services:
  web:
    image: nginx
# networks are created based on the directory name where "compose.yaml" is located:
# "docker compose up" in directory "www" will create a "www_default" network
```
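To illustrate the drivers mentioned above, a compose.yaml can also declare its own named network instead of relying on the default one. The following is a sketch; the frontend network name is illustrative and not part of this setup:

```yaml
services:
  web:
    image: nginx
    networks:
      - frontend            # join the named network below instead of the default
networks:
  frontend:
    driver: bridge          # "overlay" and "host" are other possible drivers
```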
For an api service in a separate stack to communicate with the web service, it needs to be on the same network; we bring that external network into the second compose.yaml:

```yaml
# actions/compose.yaml
services:
  api:
    image: notexisting/apihandler
    networks:
      - www_default
networks:
  www_default:
    external: true
```
By running cloudflared from within a Docker Compose stack (one that is not using the host network driver or the host.docker.internal method), it is possible to get finer control over which services are accessible by the Zero Trust tunnel, compared to the default installation method that grants the tunnel access to any port exposed on the host. With this networks method in compose.yaml, only services explicitly attached to the tunnel's network can be accessed.

We name this Dockge stack cloudflared. For this setup, obtain the tunnel token (see the "tunnel's token" entry in the "cloudflared: Cloudflare tunnels" section of VPS: Cloudflare Zero Trust access to Web Applications) and store it as the TUNNEL_TOKEN variable in the .env section of the Dockge UI. The compose.yaml is as follows:

```yaml
services:
  cloudflared:
    image: cloudflare/cloudflared
    container_name: cloudflare-tunnel
    restart: unless-stopped
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
```
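The .env managed from Dockge's UI then needs a single entry, whose value is the token copied from the Cloudflare dashboard (a placeholder is shown here):

```
TUNNEL_TOKEN=PASTE_YOUR_TUNNEL_TOKEN_HERE
```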
This stack creates a cloudflared_default network. Granting other containers access to cloudflared means making sure they can see and use the cloudflared_default network. For this, we add a networks: entry to the service definition of such containers and declare the network as external at the top level of their compose.yaml:

```yaml
    networks:
      - cloudflared_default
networks:
  cloudflared_default:
    external: true
```
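Put together, a minimal stack that joins the tunnel's network could look as follows (the whoami image is used purely as an illustrative test service; it is not part of this setup):

```yaml
services:
  whoami:
    image: traefik/whoami     # illustrative test service
    networks:
      - cloudflared_default   # join the tunnel's network
networks:
  cloudflared_default:
    external: true            # created by the cloudflared stack
```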
Only containers on the cloudflared_default network are accessible by cloudflared. Even if other services running in Dockge on that host are added to the Cloudflare Zero Trust dashboard, they cannot be contacted by the Cloudflare Tunnel unless they are also added to the cloudflared_default network.

As an example, we deploy the OpenAI WebUI, protected by a shared password (stored in secrets.toml). Those end users get access to gpt-4o-mini in the WebUI. The stack is named oaiwui_gptonly and its compose.yaml file contains ("Save" but do not "Deploy"):

```yaml
services:
  oaiwui_gptonly:
    image: infotrend/openai_webui:latest
    container_name: oaiwui_gptonly_container
    restart: unless-stopped
    volumes:
      - ./savedir:/iti
      - ./secrets.toml:/app/.streamlit/secrets.toml:ro
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 18501:8501
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OAIWUI_SAVEDIR=/iti
      - OAIWUI_GPT_ONLY=True
      - OAIWUI_GPT_MODELS=gpt-4o-mini
      - OAIWUI_GPT_VISION=False
      - OAIWUI_DALLE_MODELS=dall-e-3
    networks:
      - cloudflared_default
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
networks:
  cloudflared_default:
    external: true
```
Store the OPENAI_API_KEY in the .env section of the Dockge UI. When creating the API key, follow OpenAI's instructions at https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key and create a key with limited access; for the purpose of the WebUI, only "read" access to the "Models" endpoint and "write" access to the "Model capabilities" are needed.

Before deploying, create the savedir directory and the secrets.toml file (which contains the shared password):

```bash
# you might require sudo permission for those commands
cd /opt/stacks/oaiwui_gptonly
mkdir savedir
nano secrets.toml
# add a line with: password = "SET_YOUR_PASSWORD_HERE"
```
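If you prefer a non-interactive alternative to the nano step, the same two artifacts can be created from within the stack directory as follows (a sketch; the password value is a placeholder):

```shell
# run from the stack directory (e.g. /opt/stacks/oaiwui_gptonly)
mkdir -p savedir
# write the shared password expected by the WebUI
printf 'password = "SET_YOUR_PASSWORD_HERE"\n' > secrets.toml
cat secrets.toml
```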
The compose file adds the oaiwui_gptonly service to the cloudflared_default network, making it accessible by the tunnel. The internal container port is 8501, while the host-exposed port is 18501; since this service is running on our VPS, ufw prevents access to any port other than our alternate SSH port.

In the Zero Trust dashboard, add the public hostname oai.example.com and point the service to http and oaiwui_gptonly:8501. Compared to the host setup for the Zero Trust tunnel, where services link to 127.0.0.1 and the exposed port on the host, because we are using Docker Compose we can use the name of the service (it is on the same Docker Compose network, so it can be resolved) and must use the internal port of the service (not the localhost-exposed port).

If the cloudflared service is turned off in Dockge, all access to any authorized service is disabled.

A second stack, oaiwui_full, extends access to additional models. Its compose.yaml file (here too, "Save" but do not "Deploy"):

```yaml
services:
  oaiwui_full:
    image: infotrend/openai_webui:latest
    container_name: oaiwui_full_container
    restart: unless-stopped
    volumes:
      - ./savedir:/iti
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 28501:8501
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OAIWUI_SAVEDIR=/iti
      - OAIWUI_GPT_ONLY=False
      - OAIWUI_GPT_MODELS=gpt-4o-mini,gpt-4o
      - OAIWUI_GPT_VISION=True
      - OAIWUI_DALLE_MODELS=dall-e-3
    networks:
      - cloudflared_default
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
networks:
  cloudflared_default:
    external: true
```
In the Zero Trust dashboard, add oaiwui.example.com and have it refer to http and oaiwui_full:8501. This uses the host:port representation: both oaiwui_full and oaiwui_gptonly are independent hosts, so each can have its own version of a service running on port 8501.
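For reference, the two dashboard mappings correspond to a cloudflared ingress configuration along these lines (illustrative only: with a token-based tunnel this routing is managed remotely from the Zero Trust dashboard, not from a local config file):

```yaml
ingress:
  - hostname: oai.example.com
    service: http://oaiwui_gptonly:8501
  - hostname: oaiwui.example.com
    service: http://oaiwui_full:8501
  - service: http_status:404   # catch-all rule required by cloudflared
```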