Secure Deployment in Docker
This is the deployment shape I would use if I wanted OpenClaw inside Docker, inside a Proxmox VM, and kept away from the rest of the LAN.
Scope note:
- This profile is for IPv4 isolation in a Docker-on-Proxmox VM.
- The outcomes below are only true if each layer is actually applied and then verified.
- The gateway is only LAN-invisible if the host-side port publishing is loopback-only as well.
Target outcome:
- OpenClaw can access the internet for required APIs.
- OpenClaw must not reach the local LAN.
- OpenClaw must not be accessible from the LAN except via an SSH tunnel to the gateway UI.
- OpenClaw must not receive LAN noise (for example, mDNS or broadcast packets) that could leak information or cause instability.
Baseline Decisions
- Proxmox host runs one VM dedicated to OpenClaw.
- OpenClaw runs inside Docker in that VM.
- LAN CIDR example: 192.168.1.0/24.
- Telegram is the only public interaction channel.
- IPv6 is disabled for this deployment model.
- OPENCLAW_SANDBOX: disabled. The short version is in Sandbox vs Network Isolation.
- VLAN segmentation is not available in this environment. LAN isolation is enforced exclusively through Proxmox VM firewall rules, VM-level UFW, and Docker DOCKER-USER iptables filtering. All three layers are mandatory; the absence of a VLAN makes no single layer optional.
Network Topology
Internet
|
v
[Proxmox VM Firewall] <-- Layer 1: hypervisor-level egress/ingress policy
| Blocks RFC1918 outbound, allows 443/53/123/SSH only
v
[OpenClaw VM]
|- UFW <-- Layer 2: host-level firewall, default-deny outbound
|- DOCKER-USER iptables <-- Layer 3: Docker-aware LAN egress block
|- Docker daemon hardening
'- openclaw-gateway (bind loopback only)
Layered Defense Model
I am deliberately stacking three separate controls here because Docker networking has enough sharp edges that I do not want to trust a single layer.
- Proxmox VM firewall: egress/ingress control at the virtual NIC.
- VM host UFW: default-deny outbound with an explicit service allowlist.
- Docker-aware egress filtering in DOCKER-USER: prevents Docker from bypassing UFW.
- Docker daemon hardening: disables inter-container communication and the userland proxy.
- OpenClaw runtime and channel policy hardening.
Step 0: Secure VM Access (SSH Keys Only)
Do this first so you are not hardening the box while password SSH is still hanging around.
From your local machine:
ssh-keygen -t ed25519 -C "openclaw-vm" -f ~/.ssh/openclaw_vm
ssh-copy-id -i ~/.ssh/openclaw_vm.pub <user>@<vm-ip>
ssh -i ~/.ssh/openclaw_vm <user>@<vm-ip>
From the VM after key auth works:
sudo sed -i 's/^#*PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#*PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
grep -E 'PasswordAuthentication|PermitRootLogin' /etc/ssh/sshd_config
sudo systemctl restart sshd
Verification:
- SSH with key succeeds.
- Password-only SSH auth fails.
- PasswordAuthentication no and PermitRootLogin no are active.
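Those last two bullets are worth checking against the daemon's effective configuration rather than the config file text, since includes and defaults can override what the file shows. A sketch, assuming a hypothetical helper name `sshd_effective_ok`; on the VM you would feed it the output of `sudo sshd -T`:

```shell
# Check the *effective* sshd settings (sshd -T resolves includes and defaults).
# Hypothetical helper; on the VM: sshd_effective_ok "$(sudo sshd -T)"
sshd_effective_ok() {
  printf '%s\n' "$1" | grep -qi '^passwordauthentication no$' &&
    printf '%s\n' "$1" | grep -qi '^permitrootlogin no$'
}

# Direct negative test from another machine: force password auth and
# expect the server to refuse it before any password prompt succeeds.
#   ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password <user>@<vm-ip>
```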
Step 1: Baseline Audit
Before changing anything, capture the current state so you can tell later whether the hardening actually changed what you think it changed:
uname -a && cat /etc/os-release
docker version && docker info
ip addr && ip route
ss -tlnp
ufw status verbose
iptables -S
iptables -L -n -v --line-numbers
systemctl list-units --type=service --state=running
Keep this output. It is your before snapshot.
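One way to keep that snapshot organized is to write each command's output into its own file in a "before" directory. This is a sketch under my assumptions (the helper name `capture_baseline` and the file layout are hypothetical); tools that are not installed yet are skipped rather than aborting the audit:

```shell
# Sketch: capture the baseline audit into a directory, one file per command.
capture_baseline() {
  snap_dir="${1:-$HOME/openclaw-baseline-$(date +%Y%m%d-%H%M%S)}"
  mkdir -p "$snap_dir"
  uname -a > "$snap_dir/uname.txt"
  # Skip tools that are not installed rather than aborting the audit.
  for cmd in "ip addr" "ip route" "ss -tlnp" "ufw status verbose" "iptables -S" "docker info"; do
    if command -v "${cmd%% *}" >/dev/null 2>&1; then
      outfile=$(printf '%s' "$cmd" | tr ' ' '_').txt
      $cmd > "$snap_dir/$outfile" 2>&1 || true
    fi
  done
  echo "$snap_dir"
}

# Usage: capture_baseline ~/audit-before
```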
Step 2: Host OS Hardening (VM)
This is where I make the VM boring on purpose: updated, quiet, IPv6 off for this profile, and outbound traffic constrained.
sudo apt update && sudo apt full-upgrade -y
sudo systemctl enable --now unattended-upgrades
mkdir -p ~/.openclaw && chmod 700 ~/.openclaw
printf 'net.ipv6.conf.all.disable_ipv6 = 1\nnet.ipv6.conf.default.disable_ipv6 = 1\n' \
| sudo tee /etc/sysctl.d/99-openclaw-disable-ipv6.conf
sudo sysctl --system | grep disable_ipv6
export ADMIN_IP="<your-admin-ip-or-cidr>"
sudo ufw --force reset
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw allow in proto tcp from "$ADMIN_IP" to any port 22
sudo ufw allow out 443/tcp
sudo ufw allow out to 1.1.1.1 port 53 proto udp
sudo ufw allow out to 8.8.8.8 port 53 proto udp
sudo ufw allow out 123/udp
sudo ufw enable
sudo ufw status verbose
Expected result:
- Security updates are active.
- Host IPv6 is disabled for this profile.
- UFW is active with deny-by-default inbound and outbound policy.
- Only SSH from the admin source and required public egress remain allowed.
- No unnecessary listening services.
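The deny-by-default expectation can be asserted mechanically from the `ufw status verbose` output instead of eyeballing it. A sketch, with `ufw_defaults_ok` as a hypothetical helper; on the VM you would pass it `"$(sudo ufw status verbose)"`:

```shell
# Assert UFW is active with deny-by-default in both directions (sketch).
ufw_defaults_ok() {
  printf '%s\n' "$1" | grep -q '^Status: active$' &&
    printf '%s\n' "$1" | grep -q 'Default: deny (incoming), deny (outgoing)'
}
```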
Step 3: Proxmox Firewall and VM Egress Policy
Enable firewall at Datacenter, VM, and VM NIC levels in the Proxmox UI.
This layer matters because I do not want the VM itself to have a clean path to the LAN, even if something lower down is misconfigured.
Create rules in this order and keep the SSH source restriction explicit:
Outbound policy at hypervisor level:
- Allow TCP 443 (HTTPS for APIs).
- Allow TCP 80 temporarily only if package updates require it, then remove it.
- Allow UDP 53 (DNS).
- Allow UDP 123 (NTP).
- Allow TCP 22 from the trusted admin source only (SSH management).
- Drop traffic destined for 192.168.1.0/24 (the local LAN).
- Drop all other private and special-use destinations: RFC1918 10.0.0.0/8 and 172.16.0.0/12, plus CGNAT 100.64.0.0/10 and link-local 169.254.0.0/16.
- Drop multicast 224.0.0.0/4 and limited broadcast 255.255.255.255/32.
- Drop all other outbound traffic by default.
Inbound policy at hypervisor level:
- Allow TCP 22 from trusted admin source.
- Drop all other inbound traffic.
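If you prefer to keep this layer reviewable as text rather than clicking it together, the same policy can be expressed in the VM's firewall file, which Proxmox stores at /etc/pve/firewall/<vmid>.fw. This is a hedged sketch of that format under the assumptions above (the admin source is a placeholder, and rule order matters because Proxmox evaluates top to bottom); verify the effective rules in the UI after editing:

```
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: DROP

[RULES]
IN ACCEPT -p tcp -dport 22 -source <admin-ip-or-cidr>
# LAN and special-use drops must precede the port-based accepts below.
OUT DROP -dest 192.168.1.0/24
OUT DROP -dest 10.0.0.0/8
OUT DROP -dest 172.16.0.0/12
OUT DROP -dest 100.64.0.0/10
OUT DROP -dest 169.254.0.0/16
OUT DROP -dest 224.0.0.0/4
OUT DROP -dest 255.255.255.255/32
OUT ACCEPT -p tcp -dport 443
OUT ACCEPT -p udp -dport 53
OUT ACCEPT -p udp -dport 123
```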
Validation:
- Confirm the effective rule order in the Proxmox firewall UI before moving on.
- From a different LAN host, verify that ssh <user>@<vm-ip> works only from the admin source and that all non-SSH ports time out or are rejected.
- If you temporarily allowed TCP 80, remove it after package/bootstrap work completes.
Step 4: Docker-Aware LAN Isolation (DOCKER-USER)
This is the part people often skip, and it is exactly where Docker can surprise you.
These rules cover IPv4 container egress. They complement the host UFW policy; they do not replace it.
Append rules to /etc/ufw/after.rules with the correct order:
# ---BEGIN DOCKER-USER LAN ISOLATION---
*filter
:DOCKER-USER - [0:0]
-A DOCKER-USER -m conntrack --ctstate ESTABLISHED,RELATED -j RETURN
-A DOCKER-USER -s 172.20.0.0/24 -d 172.20.0.0/24 -j RETURN
-A DOCKER-USER -d 192.168.1.0/24 -j DROP
-A DOCKER-USER -d 10.0.0.0/8 -j DROP
-A DOCKER-USER -d 172.16.0.0/12 -j DROP
-A DOCKER-USER -d 169.254.0.0/16 -j DROP
-A DOCKER-USER -d 100.64.0.0/10 -j DROP
-A DOCKER-USER -j RETURN
COMMIT
# ---END DOCKER-USER LAN ISOLATION---
Apply and verify:
sudo ufw reload
sudo iptables -S DOCKER-USER
sudo iptables -C FORWARD -j DOCKER-USER
Important ordering note:
The Docker internal subnet allow rule (the 172.20.0.0/24 RETURN) must come before the 172.16.0.0/12 drop rule, because 172.20.0.0/24 sits inside 172.16.0.0/12; if the drop matches first, containers lose traffic to their own subnet.
Re-run the verification after Docker restarts and after a host reboot. I would not trust this setup until I have seen the chain survive both.
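That ordering check can be scripted so the post-restart and post-reboot re-runs are mechanical. A sketch; `docker_user_order_ok` is a hypothetical helper that you would feed the output of `sudo iptables -S DOCKER-USER` on the VM:

```shell
# Verify the 172.20.0.0/24 RETURN precedes the 172.16.0.0/12 DROP (sketch).
docker_user_order_ok() {
  rules="$1"   # output of: sudo iptables -S DOCKER-USER
  allow=$(printf '%s\n' "$rules" | grep -nF -- '-s 172.20.0.0/24' | cut -d: -f1 | head -n1)
  drop=$(printf '%s\n' "$rules" | grep -nF -- '-d 172.16.0.0/12 -j DROP' | cut -d: -f1 | head -n1)
  [ -n "$allow" ] && [ -n "$drop" ] && [ "$allow" -lt "$drop" ]
}
```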
Step 5: Docker Daemon Hardening
These daemon settings are not magic, but they remove a few defaults that I do not want in this setup.
Configure /etc/docker/daemon.json:
{
"icc": false,
"userland-proxy": false,
"no-new-privileges": true,
"ipv6": false,
"log-driver": "json-file",
"default-network-opts": {
"bridge": {
"com.docker.network.bridge.host_binding_ipv4": "127.0.0.1"
}
}
}
Then apply:
sudo dockerd --validate --config-file=/etc/docker/daemon.json
sudo systemctl restart docker
docker info | grep -Ei "icc|userland|ipv6"
docker network inspect bridge --format '{{json .Options}}'
The default host binding above is just a defensive default for newly created bridge networks. I still keep explicit loopback port publishing in the deployment itself.
Step 6: OpenClaw Deployment Constraints
At deployment time, the main thing I care about is making the safe path the obvious path.
export OPENCLAW_IMAGE="ghcr.io/openclaw/openclaw:latest"
export OPENCLAW_GATEWAY_BIND=loopback
Minimum compose pattern:
services:
openclaw:
image: ghcr.io/openclaw/openclaw:latest
environment:
OPENCLAW_GATEWAY_BIND: loopback
OPENCLAW_SANDBOX: "false"
ports:
- "127.0.0.1:18789:18789"
dns:
- 1.1.1.1
- 8.8.8.8
Security requirements:
- Do not mount /var/run/docker.sock.
- Keep OPENCLAW_SANDBOX disabled for this profile.
- Use explicit DNS resolvers in compose when needed (1.1.1.1, 8.8.8.8).
- Publish the gateway port on 127.0.0.1 only; do not bind it to 0.0.0.0 or a LAN interface.
- Restrict access to the control UI via SSH tunnel only.
SSH tunnel pattern:
ssh -L 18789:127.0.0.1:18789 <user>@<vm-ip>
Step 7: OpenClaw Security Policy
By this point the network path should already be constrained. These application-level settings are there to avoid undoing that work.
In OpenClaw config:
- Keep gateway bind local.
- Disable mDNS discovery.
- Keep browser private-network SSRF allowance disabled.
- Disable elevated tools.
- Deny runtime and filesystem tool groups for Telegram agent profile.
- Use strict pairing policy for direct-message channels.
Do not treat the OpenClaw mDNS setting as the only multicast control. The firewall rules from Step 3 are still doing the real work there.
Run audit:
openclaw security audit
openclaw security audit --deep
Step 8: Verification Matrix
This is the part that matters most. If I do not test from the container, the VM, and another LAN host, I do not know whether the guide worked.
From inside the container, these should fail:
curl -m 5 http://192.168.1.1
curl -m 5 http://192.168.1.2
curl -m 5 http://10.0.0.1
curl -m 5 http://169.254.169.254
From inside the container, these should work:
curl -m 10 https://api.telegram.org
nslookup google.com 1.1.1.1
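The negative checks are worth scripting so they can be re-run after every firewall change. A sketch; `check_blocked` is a hypothetical helper meant to run inside the container, and PASS here means the request failed, which is the desired outcome:

```shell
# PASS = target unreachable (the desired state); FAIL = isolation leak.
check_blocked() {
  rc=0
  for url in "$@"; do
    if curl -m 5 -s -o /dev/null "$url"; then
      echo "FAIL: $url reachable"
      rc=1
    else
      echo "PASS: $url blocked"
    fi
  done
  return $rc
}

# Inside the container:
# check_blocked http://192.168.1.1 http://10.0.0.1 http://169.254.169.254
```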
From the VM host, confirm the local exposure and policy state:
sudo ufw status verbose
sudo iptables -S DOCKER-USER
sudo ss -tlnp | grep 18789
ip -6 addr
Expected host results:
- UFW shows default deny for incoming and outgoing, with only the expected exceptions.
- DOCKER-USER contains the private-range drops in the expected order.
- The gateway is bound only to 127.0.0.1:18789 on the host.
- No global IPv6 address is active for this deployment profile.
From a different LAN host, only SSH should appear in scan results:
nmap -Pn -p 22,18789 <vm-ip>
curl -m 5 http://<vm-ip>:18789
Expected LAN results:
- TCP 22 is reachable only from the admin source.
- Direct access to
http://<vm-ip>:18789fails without the SSH tunnel. - No multicast or broadcast dependent feature is required for normal operation.
Optional multicast sanity check on the VM while OpenClaw is running:
sudo tcpdump -ni any 'udp port 5353 or multicast or broadcast'
Expected result: OpenClaw should not depend on mDNS or broadcast traffic, and you should not need to relax the firewall rules for those packets.