Exposing your raw Docker socket to the public internet is a ticking time bomb. We get it. Setting up reverse proxies usually means drowning in Nginx configs or fighting Traefik’s endless YAML formatting. It is frustrating when you just want a secure connection without spending hours reading documentation. In this guide, we’ll fix insecure deployments by building a segmented Caddy Docker proxy that handles automatic SSL without risking your entire host machine.
Disclaimer: The information and configurations provided in this article are for educational and informational purposes only. Modifying Docker socket permissions, network segmentations, and reverse proxy settings carry inherent system and security risks. The author and publisher strictly disclaim any liability for potential security breaches, data loss, server downtime, or other damages resulting from the use of these configurations. Always audit code and test setups in a secure, isolated development environment before deploying to production.
Introduction: Why Caddy is the Definitive Ingress for 2026
Let’s look at the reality of modern networking. According to the Mozilla State of HTTPS Adoption on the Web, over 90.4% of all top-level document loads now start securely. The web is encrypted by default. You cannot rely on unencrypted HTTP anymore, not even for your internal tools.
Simply put, Caddy is a memory-safe, zero-dependency ingress controller that automatically provisions and renews SSL certificates natively. No Certbot sidecars. No messy cron jobs.
I’ve noticed that most tutorials still push Traefik or legacy Nginx. But if you look at the 2025 Docker State of Application Development Report, 35% of developers are now managing complex microservices, and 64% operate in non-local environments. You don’t have time to manually write static routing files every time a container spins up. Caddy running on Go 1.24 solves this with its automated TLS stack and dynamic label inference.
Here is the main difference between Caddy-Docker-Proxy and the alternatives:
| Proxy Controller | Configuration Style | SSL/TLS Automation | External Dependencies |
|---|---|---|---|
| Caddy-Docker-Proxy | Minimal Docker labels (Zero static files) | Native Let’s Encrypt & ZeroSSL failover | None (Memory-safe Go binary) |
| Traefik | Complex YAML routers & middlewares | Requires manual certificate resolvers | Heavy ecosystem lock-in |
| Nginx | Static configuration files | Requires external Certbot container | OpenSSL (Memory vulnerabilities) |
The Architecture: Controller vs. Server (Fixing the Docker Socket Vulnerability)
Here’s the thing. Almost every basic setup guide on the internet tells you to blindly mount /var/run/docker.sock right into your web-facing container. Do that, and you might as well hand over the keys to your server. If an attacker breaches the proxy, they get root execution on your host.
I remember auditing a compromised server a few years ago. The harsh, fluorescent server room light gave me a headache while I traced the breach back to a single exposed Docker daemon socket. That cold sweat of realizing a tiny misconfiguration brought down an entire network? You want to avoid that.
We fix this by using network segmentation. The architecture splits the proxy into two distinct modes:
- The Controller: Has access to the Docker daemon socket to read labels, but exposes absolutely zero external ports.
- The Server: Binds to ports 80 and 443 to handle public web traffic, but has no access to the Docker socket.
Before writing the compose file, you need an external bridge network. And you must enable IPv6. Without the --ipv6 flag, connections from IPv6 clients get NATed through Docker's userland proxy, so Caddy logs the internal Docker gateway IP instead of the real client address. That blinds your analytics.
Run this command right now:
```shell
docker network create caddy_ingress --ipv6
```
Caddy Reverse Proxy Docker Compose Example: The Ultimate Blueprint
Most guides direct you to an outdated v2.3 image. We are going to use the rolling ci-alpine tag for maximum security. Think of standard reverse proxies like a physical switchboard operator, manually plugging cables every time a new service boots up. The lucaslorentz/caddy-docker-proxy image is more like an automated air traffic controller; it constantly monitors the Docker socket and dynamically routes traffic with zero-downtime graceful reloads.
Here is a production-ready compose.yaml file that reflects modern stacks (Python backend, React frontend, MongoDB). Note the addition of the caddy_controller_net to allow the Controller to push configs to the Server securely.
```yaml
networks:
  caddy_ingress:
    external: true
  caddy_controller_net:
    ipam:
      config:
        - subnet: 10.200.200.0/24

services:
  caddy_controller:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    environment:
      - CADDY_DOCKER_MODE=controller
      - CADDY_INGRESS_NETWORKS=caddy_ingress
      - CADDY_CONTROLLER_NETWORK=10.200.200.0/24
    networks:
      - caddy_controller_net
      - caddy_ingress
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped

  caddy_server:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    environment:
      - CADDY_DOCKER_MODE=server
      - CADDY_CONTROLLER_NETWORK=10.200.200.0/24
    ports:
      - "80:80"
      - "443:443/tcp"
      - "443:443/udp" # Required for HTTP/3
    networks:
      - caddy_controller_net
      - caddy_ingress
    volumes:
      - caddy_data:/data
    restart: unless-stopped

  python_api:
    image: my-python-fastapi:latest
    networks:
      - caddy_ingress
    labels:
      caddy: api.yourdomain.com
      caddy.reverse_proxy: "{{upstreams 8000}}"

volumes:
  caddy_data:
```
Notice the {{upstreams 8000}} label? That is an upstream template helper. Instead of hardcoding internal IPs or fighting with Docker DNS aliases, this Go template dynamically resolves the internal container IP address inside the bridge network. It just works.
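To make the label magic concrete, the python_api labels above expand into an ordinary Caddyfile site block at runtime. This is a rough sketch of the generated config; the container IP is illustrative, resolved dynamically by caddy-docker-proxy from the caddy_ingress bridge network:

```
api.yourdomain.com {
	# 172.20.0.5 is a placeholder for the IP the template resolves at runtime
	reverse_proxy 172.20.0.5:8000
}
```

Because the controller regenerates this block whenever the container restarts and gets a new IP, you never touch it by hand.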
Routing Edge Cases: Conquering the “Subfolder Problem”
So, you try to host an app at domain.com/app instead of using a clean subdomain. Suddenly, the screen goes completely blank white, and your browser console is bleeding red with cascading 404 errors.
The app is looking for its static CSS and JS assets at the root domain, completely unaware it lives in a subfolder. Routing without prefix stripping is like giving a delivery driver the right street name but refusing to tell them the house number. They arrive, but the package gets dumped in the street.
You solve this with the URI path routing directive handle_path. It mathematically strips the prefix before passing the traffic downstream. You also need strict directive ordering (using index labels like 0_redir) so Caddy doesn’t generate the config unpredictably.
Here are the exact labels to route subdirectories cleanly:
```yaml
labels:
  caddy: domain.com
  caddy.0_handle_path: /myapp/*
  caddy.0_handle_path.reverse_proxy: "{{upstreams 3000}}"
```
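For reference, labels like these should render to roughly the following Caddyfile (a sketch; domain.com and the port are the placeholder values from above). Note how handle_path strips /myapp before the request reaches the upstream:

```
domain.com {
	handle_path /myapp/* {
		# A request for /myapp/assets/main.css arrives upstream as /assets/main.css
		reverse_proxy 172.20.0.6:3000
	}
}
```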
Enterprise-Grade Security Headers (Passing the Nextcloud Test)
I tested this default config against enterprise self-hosted apps like Nextcloud. The default setups fail spectacularly, throwing warnings about missing Strict-Transport-Security (HSTS) headers.
Mozilla data shows HSTS drives 62.4% of all successful secure protocol upgrades. Browsers expect a max-age of at least one year. Injecting security headers via Docker labels takes two seconds but blocks massive attack vectors like MIME sniffing and cross-site scripting.
Here are 3 steps for injecting Nextcloud-compliant security headers:
- Target your main domain label.
- Use the caddy.header directive.
- Apply the strict 31536000-second (one year) max-age with the preload requirement.
```yaml
labels:
  caddy: domain.com
  caddy.header.Strict-Transport-Security: "max-age=31536000; includeSubDomains; preload"
  caddy.header.X-Content-Type-Options: "nosniff"
```
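Those header labels should translate into plain header directives in the generated site block, roughly like this (a sketch for the placeholder domain.com):

```
domain.com {
	header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
	header X-Content-Type-Options "nosniff"
}
```

You can verify the headers landed with a quick `curl -I https://domain.com` once the stack is up.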
Solving Local Network SSL (The DNS-01 Challenge)
What if you don’t want to expose your dashboard to the public internet? Actually, roughly 5.5% of top-level document loads connect to local HTTP addresses. Let’s Encrypt cannot run a standard HTTP-01 challenge on a private .local domain because their validation servers cannot reach inside your firewall.
Most guides tell you to use complex IP matchers. That breaks your SSL. To get automatic HTTPS on internal networks, you have to build a custom Caddy binary injected with the Cloudflare DNS plugin to run a DNS-01 challenge.
Create a file named Dockerfile right next to your compose file:
```dockerfile
FROM caddy:builder AS builder
RUN xcaddy build \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
    --with github.com/caddy-dns/cloudflare

FROM caddy:alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
CMD ["caddy", "docker-proxy"]
```
Then, update your compose file to build this image locally for both your server and controller. You must also pass your Cloudflare API token securely to the server container, as it handles the TLS negotiations:
```yaml
services:
  caddy_controller:
    build: .
    # ... (keep other controller configs) ...

  caddy_server:
    build: .
    environment:
      - CLOUDFLARE_API_TOKEN=${CLOUDFLARE_TOKEN}
    # ... (keep other server configs) ...
    labels:
      caddy.email: [email protected]
      caddy.tls.dns: "cloudflare {env.CLOUDFLARE_API_TOKEN}"
```
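Under the hood, the tls.dns label should render to a per-site tls block along these lines (a sketch; internal.yourdomain.com is a placeholder for whatever internal hostname you route):

```
internal.yourdomain.com {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
	reverse_proxy 172.20.0.7:8080
}
```

The {env.CLOUDFLARE_API_TOKEN} placeholder is resolved by Caddy itself at runtime, so the token never appears in the generated Caddyfile on disk.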
Now, Caddy talks directly to Cloudflare’s API to prove you own the domain, issues a valid Let’s Encrypt wildcard certificate, and secures your internal IP resolution without opening port 80 to the world. Brilliant.
Frequently Asked Questions (FAQ)
How do I view Caddy JSON logs in Docker Compose?
Standard logs are messy. Add the label caddy.log.format: json to your container. This structures the output perfectly for ingestion into enterprise telemetry dashboards like Grafana.
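A minimal sketch of how that looks on a proxied service, assuming the api.yourdomain.com host from the blueprint above:

```yaml
services:
  python_api:
    labels:
      caddy: api.yourdomain.com
      # Emits per-site access logs as structured JSON instead of console text
      caddy.log.format: json
```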
Can Caddy proxy Layer 4 TCP/UDP traffic?
Yes. While mostly known for web traffic, many IoT protocols (like MQTT) run on TCP. You will need to compile the custom binary (just like the Cloudflare step above) and add the mholt/caddy-l4 plugin to handle non-HTTP database or telemetry routing.
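A sketch of that custom build, mirroring the Cloudflare Dockerfile from earlier but swapping in the Layer 4 plugin:

```dockerfile
FROM caddy:builder AS builder
RUN xcaddy build \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
    --with github.com/mholt/caddy-l4

FROM caddy:alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
CMD ["caddy", "docker-proxy"]
```

Remember to publish the raw TCP/UDP ports (for example, 1883 for MQTT) on the caddy_server service, since only 80 and 443 are exposed in the base blueprint.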
How do I pass environment variables into Caddy labels?
Docker Compose handles standard interpolation natively. Keep your secrets in a .env file and call them in your labels using the standard syntax: caddy.respond: "${MY_SECRET_MESSAGE}".
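A minimal sketch, using a hypothetical demo_app service and a MY_SECRET_MESSAGE variable defined in the adjacent .env file:

```yaml
# .env (same directory as compose.yaml):
#   MY_SECRET_MESSAGE=hello-from-dotenv

services:
  demo_app:
    image: my-demo:latest # hypothetical image
    networks:
      - caddy_ingress
    labels:
      caddy: demo.yourdomain.com
      # Compose interpolates ${MY_SECRET_MESSAGE} before the label reaches Caddy
      caddy.respond: "${MY_SECRET_MESSAGE}"
```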
Take Control of Your Ingress
Ditch the outdated static files and dangerous root socket mounts. Moving to a segmented Caddy Docker proxy setup cuts your configuration time in half and guarantees you never wake up to an expired SSL certificate again.
Your small step for today? Run that docker network create caddy_ingress --ipv6 command, copy the ultimate blueprint above, and test it locally. Have you tried combining Caddy with other DNS providers like Route53? Let me know how it handles your specific homelab environment below.