Podman in-container DHCP networking
TL;DR
Don’t do this. Bind ports to the host and port-forward instead.
Extended version
This was a dead end, but I’m documenting it here in case it helps someone.
I have a multi-homed/multi-VLAN host running podman kube services for the network. So far I’ve essentially been plugging containers directly into specific VLANs on the network.
Networking has always been a bit flaky for me with these containers. Today, I decided to fix things.
After much research, I ended up finding a pattern that worked better:
- Separate podman network with bridge networking and external DHCP server
- In-container DHCP client
The in-container DHCP client turns the container into a full network host, on a different VLAN from the host as far as the router is concerned, and this has nice side effects like automatic DNS hostnames if you have that configured.
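My router registers DHCP hostnames in local DNS, so once a container holds its own lease I can reach it by name from other machines. A rough check from another box, where both the hostname and the search domain are placeholders for whatever your router hands out:
nslookup mosquitto.lan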
Separate podman network
I manage my podman networks with ansible. I was able to define a new network bound to my separately configured Linux network bridge br110 like this:
- name: podman bridge vlan br110/infrastructure
  containers.podman.podman_network:
    name: podman-vlan-infrastructure
    driver: bridge
    opt:
      bridge_name: br110
    ipam_driver: none
    force: true
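For a one-off test without ansible, a roughly equivalent podman command should look something like this. This is a sketch, not what I actually ran, and it assumes a podman 4.x install where --interface-name and --ipam-driver are available:
podman network create \
  --driver bridge \
  --interface-name br110 \
  --ipam-driver none \
  podman-vlan-infrastructure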
Bridge networking is way more reliable than the MACVLAN setup I was previously using.
The network is then referenced in a .kube file like this:
PodmanArgs=--mac-address de:3b:ee:01:02:03
PublishPort=1883:1883
PublishPort=8883:8883
PublishPort=9001:9001
Network=podman-vlan-infrastructure
The fixed MAC address is so I can issue a static DHCP lease on the router.
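A quick way to confirm the pinned MAC actually made it into the container (the container name here is my guess at what podman kube play would create for this pod, and eth0 is assumed):
podman exec mosquitto-app cat /sys/class/net/eth0/address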
In-container DHCP client
There are a few moving parts to this:
Container image must provide a DHCP client
Alpine-based images provide udhcpc; if there is no DHCP client already in the image, you will need to build a custom one.
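A quick way to check whether an image already ships one (the image name below is just an example):
podman run --rm --entrypoint sh docker.io/library/eclipse-mosquitto -c 'command -v udhcpc || echo "no DHCP client in this image"'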
Writable /etc/resolv.conf
Add a volume to the pod:
spec:
  volumes:
  - name: resolv-override
    emptyDir: {}
Mount it in the container(s) at the right place:
volumeMounts:
- name: resolv-override
  mountPath: /etc/resolv.conf
Security context
Boost privileges as needed in the container(s) to allow DHCP broadcasts and file updates:
securityContext:
  # if the container runs as non-root, boost the UID to root to let it
  # write /etc/resolv.conf
  # runAsUser: 0
  # runAsGroup: 0
  capabilities:
    add:
    - NET_ADMIN
    - NET_RAW
Container startup
Container(s) need their command and args tweaked to run the DHCP client first and then delegate to the normal docker entrypoint with exec. This needs to be customized for each image you run this way, e.g. this one is for mosquitto:
command:
- /bin/sh
args:
- -c
- |
  udhcpc
  exec sh /docker-entrypoint.sh /usr/sbin/mosquitto -c /mosquitto/config/mosquitto.conf
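If the bare udhcpc call ever proves unreliable, busybox udhcpc has a few flags worth knowing. A variant I’d try first; the interface name and retry counts are assumptions, not something from my working setup:
# -i: interface to configure; -t/-T: number of discover packets and pause between them;
# -b: drop to the background if no lease turns up, so the entrypoint doesn't block forever
udhcpc -i eth0 -t 10 -T 3 -b
exec sh /docker-entrypoint.sh /usr/sbin/mosquitto -c /mosquitto/config/mosquitto.conf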
Complete example
My completed definition, this one for Nexus, looks like this:
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-4.3.1
apiVersion: v1
kind: Pod
metadata:
  annotations:
    io.kubernetes.cri-o.ContainerType/app: container
    io.kubernetes.cri-o.SandboxID/app: e026b5b478c4489f78b85e875af39f4d439467a7b7abdf469629a9daa312cdf
    io.kubernetes.cri-o.TTY/app: "false"
    io.podman.annotations.autoremove/app: "FALSE"
    io.podman.annotations.init/app: "FALSE"
    io.podman.annotations.privileged/app: "FALSE"
    io.podman.annotations.publish-all/app: "FALSE"
  creationTimestamp: "2025-01-27T10:57:38Z"
  labels:
    app: nexus
  name: nexus
spec:
  automountServiceAccountToken: false
  containers:
  - image: docker.io/sonatype/nexus3:3.89.1-alpine
    name: app
    command:
    - /bin/sh
    args:
    - -c
    - |
      udhcpc
      exec sh /opt/sonatype/nexus/bin/nexus run
    ports:
    - containerPort: 1234
      hostPort: 1234
    - containerPort: 8081
      hostPort: 8081
    - containerPort: 8443
      hostPort: 8443
    resources: {}
    securityContext:
      runAsUser: 0
      runAsGroup: 0
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        drop:
        - CAP_MKNOD
        - CAP_AUDIT_WRITE
    volumeMounts:
    - name: resolv-override
      mountPath: /etc/resolv.conf
    - mountPath: /nexus-data
      name: nexus-data-vol
    - mountPath: /nexus-data/etc
      name: nexus-config-vol
    - mountPath: /nexus-data/etc/tls
      name: nexus-tls-vol
  enableServiceLinks: false
  hostname: nexus
  restartPolicy: Never
  volumes:
  - hostPath:
      path: /data/containers/nexus/data
      type: Directory
    name: nexus-data-vol
  - hostPath:
      path: /data/containers/nexus/config
      type: Directory
    name: nexus-config-vol
  - hostPath:
      path: /data/containers/nexus/tls
      type: Directory
    name: nexus-tls-vol
  - name: resolv-override
    emptyDir: {}
status: {}
Reboot and test
The container should appear on the network as its own host, and DHCP should now be rock solid.
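A few checks I’d run at this point. The names are assumptions: quadlet should generate nexus.service from a nexus.kube file, and podman kube play names containers pod-container, here nexus-app:
systemctl restart nexus.service             # generated unit name depends on your .kube file name
podman exec nexus-app ip addr show eth0     # expect a DHCP-assigned address on the VLAN (busybox ip on Alpine-based images)
podman exec nexus-app cat /etc/resolv.conf  # expect the DNS servers handed out with the lease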
Why not do this?
While this works, running DHCP clients in containers reduces security and increases complexity; in particular, the entrypoint has to be customized for every container image.
In the end it turns out to be much simpler to just forward ports from the host and register a DNS alias on the router.
In the past, I’ve held off doing this because having multiple VLANs with active IP addresses on my host was creating asymmetric routing chaos.
I’m confident I can solve these problems now and install a simple reverse proxy instead, which should be way simpler to manage - stay tuned.