
Container Networking on FreeBSD: What Works, What Doesn't, and the Real Risk


This is post 3 in the FreeBSD jail orchestration series. Post 2 got a container running with ZFS storage, pf NAT, and VNET. Now: can containers talk to each other?

TL;DR

IP connectivity between containers works out of the box: same bridge, ping by IP, sub-millisecond latency. Port forwarding via pf works from external hosts but NOT from the FreeBSD host itself (a net.pf.filter_local / conmon socket interaction). Pods work and share localhost (after installing catatonit). DNS-based service discovery does NOT work: the dnsname CNI plugin is not included in FreeBSD’s containernetworking-plugins. The workaround is --add-host for static name resolution, or pods for shared-localhost communication.

The gap is real but smaller than I expected for single-node setups. The real problem isn’t what’s missing today: it’s that upstream Podman is migrating from CNI to Netavark, and Netavark doesn’t support FreeBSD.

The Default Network

Podman on FreeBSD creates a single default network using CNI (Container Network Interface):

$ sudo podman network ls
NETWORK ID    NAME        DRIVER
2f259bab93aa  podman      bridge

$ sudo podman network inspect podman
[
     {
          "name": "podman",
          "driver": "bridge",
          "network_interface": "cni-podman0",
          "subnets": [
               {
                    "subnet": "10.88.0.0/16",
                    "gateway": "10.88.0.1"
               }
          ],
          "dns_enabled": false
     }
]

dns_enabled: false. That’s the first hint. FreeBSD ships 7 CNI plugins:

$ ls /usr/local/libexec/cni/
bridge    firewall    host-local    loopback    portmap    static    tuning

No dnsname, and consequently no per-network dnsmasq. Custom networks you create also get dns_enabled: false. This is the key difference from Linux, where Podman's default network includes DNS-based container name resolution via the dnsname plugin.
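
A quick way to confirm the gap on any network you create (a sketch: the helper name and the PODMAN variable are my own conventions; `{{.DNSEnabled}}` is the Go-template field behind `dns_enabled` in `podman network inspect`):

```shell
# Sketch, assuming the stock FreeBSD plugin set: report whether a given
# Podman network has container-name DNS. PODMAN and the function name
# are mine, not Podman's.
PODMAN=${PODMAN:-"sudo podman"}

net_dns_enabled() {
    # $1 = network name; prints "true" or "false"
    $PODMAN network inspect "$1" --format '{{.DNSEnabled}}'
}

# Usage:
#   sudo podman network create testnet
#   net_dns_enabled testnet    # prints "false" on FreeBSD
```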

IP Connectivity: It Just Works

Two containers on the default bridge can ping each other by IP immediately:

$ sudo podman run -d --name c1 freebsd/freebsd-notoolchain:15.0 sleep 600
$ sudo podman run -d --name c2 freebsd/freebsd-notoolchain:15.0 sleep 600

$ C1_IP=$(sudo podman inspect c1 --format '{{.NetworkSettings.IPAddress}}')
$ C2_IP=$(sudo podman inspect c2 --format '{{.NetworkSettings.IPAddress}}')
$ echo "c1: $C1_IP  c2: $C2_IP"
c1: 10.88.0.9  c2: 10.88.0.10

$ sudo podman exec c1 ping -c 2 $C2_IP
PING 10.88.0.10 (10.88.0.10): 56 data bytes
64 bytes from 10.88.0.10: icmp_seq=0 ttl=64 time=0.085 ms
64 bytes from 10.88.0.10: icmp_seq=1 ttl=64 time=0.178 ms

0.085 ms round-trip. Each container gets its own VNET network stack with an epair interface. The cni-podman0 bridge connects them. pf handles NAT for outbound traffic. This part is solid.

But try by name:

$ sudo podman exec c1 ping -c 2 c2
ping: cannot resolve c2: Name does not resolve

DNS lookup goes to the host’s resolver (192.168.122.1 in my case), which obviously doesn’t know about container names. NXDOMAIN, as expected.
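You can see why from inside the container: it simply inherits the host's /etc/resolv.conf, so nothing in the resolution chain knows jail names. A small helper sketch (function name and PODMAN variable are mine):

```shell
# Sketch: list the nameservers a container actually uses. The container
# inherits the host's resolv.conf, so no entry is container-aware.
PODMAN=${PODMAN:-"sudo podman"}

resolvers_of() {
    # $1 = container name; prints its nameserver addresses
    $PODMAN exec "$1" cat /etc/resolv.conf | awk '/^nameserver/ {print $2}'
}

# Usage:
#   resolvers_of c1    # in my setup: 192.168.122.1, the host's resolver
```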

Port Forwarding: Works, With a Gotcha

Port forwarding uses the CNI portmap plugin, which creates pf rdr (redirect) rules:

$ sudo podman run -d --name nginx-pf -p 8080:80 \
  freebsd/freebsd-notoolchain:15.0 \
  sh -c 'ASSUME_ALWAYS_YES=yes pkg install -y nginx > /dev/null 2>&1 && \
    echo "Port forwarding works from FreeBSD jail" \
    > /usr/local/www/nginx/index.html && \
    nginx -g "daemon off;"'

The pf anchor shows the redirect rules:

$ sudo pfctl -a 'cni-rdr/20e8960e94a6...' -sn
rdr pass inet proto tcp from any to 192.168.122.149 port = http-alt -> 10.88.0.11 port 80
rdr pass inet proto tcp from any to 127.0.0.1 port = http-alt -> 10.88.0.11 port 80
rdr pass inet proto tcp from any to 10.88.0.1 port = http-alt -> 10.88.0.11 port 80

Testing from the Linux host (external access):

# From the Linux host:
$ curl http://192.168.122.149:8080
Port forwarding works from FreeBSD jail

Works. But from the FreeBSD host itself:

# From the FreeBSD VM:
$ fetch -qo- http://127.0.0.1:8080
fetch: http://127.0.0.1:8080: Connection refused

$ fetch -qo- http://192.168.122.149:8080
fetch: http://192.168.122.149:8080: Operation timed out

Connection refused on localhost, timeout on the VM’s own IP. The conmon process IS listening on port 8080 (sockstat confirms it), but the connection doesn’t reach the container.

What happens: conmon binds the port to reserve it, but the actual traffic routing relies on pf rdr rules. Locally-generated packets hit the conmon socket instead of going through pf’s redirect. Setting net.pf.filter_local=1 doesn’t fix it.
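Both halves of that diagnosis can be scripted. A sketch (the helper name is mine; the awk fields follow FreeBSD sockstat's USER/COMMAND/PID/FD/PROTO/LOCAL/FOREIGN column layout):

```shell
# Sketch: show which process owns a published port on the host side.
# For a podman -p mapping you'd expect conmon here, while the actual
# redirect lives in pf's cni-rdr anchor, not in this socket.
who_owns_port() {
    # $1 = port; prints "COMMAND LOCAL_ADDRESS" for matching listeners
    sockstat -4 -l | awk -v p=":$1$" '$6 ~ p {print $2, $6}'
}

# Usage on the FreeBSD host:
#   who_owns_port 8080     # e.g. "conmon *:8080"
#   sudo pfctl -sA         # list anchors, then: pfctl -a <anchor> -sn
```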

From another container on the same network, port forwarding works fine:

$ sudo podman run --rm freebsd/freebsd-notoolchain:15.0 \
  fetch -qo- http://192.168.122.149:8080
Port forwarding works from FreeBSD jail

So: port forwarding works for external clients and for container-to-container via the host IP. It fails from the host to itself. For a real deployment where external clients hit the host, this is fine. For local testing, use the container’s IP directly.
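For that local-testing case, a tiny helper (mine) that builds the container-IP URL instead of the forwarded port:

```shell
# Sketch: bypass the broken host-to-itself forward by targeting the
# container's bridge IP on its internal port directly.
PODMAN=${PODMAN:-"sudo podman"}

container_url() {
    # $1 = container name, $2 = internal port
    ip=$($PODMAN inspect "$1" --format '{{.NetworkSettings.IPAddress}}') || return 1
    printf 'http://%s:%s\n' "$ip" "$2"
}

# Usage on the FreeBSD host:
#   fetch -qo- "$(container_url nginx-pf 80)"
```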

DNS Service Discovery: The Gap

This is the big one. On Linux, when you create a Podman network, the dnsname CNI plugin sets up a dnsmasq instance per network. Containers on the same network can find each other by name. On FreeBSD, that plugin doesn’t exist.

dnsmasq IS available as a FreeBSD package. The dnsname plugin could theoretically be built from source (github.com/containers/dnsname). I didn’t go down that path because the dnsname plugin is being deprecated upstream along with the rest of CNI in favor of Netavark/Aardvark-dns, and Netavark doesn’t support FreeBSD. Building a dependency on a deprecated plugin seemed like the wrong investment.

Workaround: --add-host

The --add-host flag injects entries into a container’s /etc/hosts:

$ sudo podman run -d --name web freebsd/freebsd-notoolchain:15.0 sleep 300
$ WEB_IP=$(sudo podman inspect web --format '{{.NetworkSettings.IPAddress}}')

$ sudo podman run -d --name app --add-host web:$WEB_IP \
  freebsd/freebsd-notoolchain:15.0 sleep 300

$ sudo podman exec app ping -c 2 web
PING web (10.88.0.14): 56 data bytes
64 bytes from 10.88.0.14: icmp_seq=0 ttl=64 time=0.085 ms
64 bytes from 10.88.0.14: icmp_seq=1 ttl=64 time=0.115 ms

$ sudo podman exec app cat /etc/hosts
10.88.0.14	web
::1	localhost localhost.my.domain
127.0.0.1	localhost localhost.my.domain
10.88.0.1	host.containers.internal host.docker.internal
10.88.0.15	4306e46506b4 app

This works, but it’s static: you need to know the target container’s IP when launching the client container. If the target restarts and gets a new IP, the hosts entry is stale. For containers you start together in a known order, it’s good enough.
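One way to avoid hardcoding the IP: look it up at launch time. A sketch (the function name and PODMAN variable are mine) that emits the --add-host flag from the target's current address:

```shell
# Sketch: build an --add-host flag from a running container's current
# IP, so the entry is at least fresh at launch time. It does NOT help
# if the target restarts later; the client must be relaunched too.
PODMAN=${PODMAN:-"sudo podman"}

add_host_flag() {
    # $1 = running container whose name should resolve
    ip=$($PODMAN inspect "$1" --format '{{.NetworkSettings.IPAddress}}') || return 1
    printf '%s\n' "--add-host=$1:$ip"
}

# Usage:
#   sudo podman run -d --name app $(add_host_flag web) \
#       freebsd/freebsd-notoolchain:15.0 sleep 300
```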

Better Workaround: Pods

Pods solve the DNS problem by making it irrelevant. Containers in a pod share a network namespace: they talk to each other on localhost.

Pods: Shared Network via Jails

Before pods work, you need one extra package:

$ sudo podman pod create --name mypod
Error: finding catatonit binary: exec: "catatonit": executable file not found in $PATH

$ sudo pkg install -y catatonit

catatonit is a minimal init process for containers. The infra container in a pod needs it. Not mentioned in the Podman install messages on FreeBSD, not pulled in as a dependency. You hit it when you try.

After installing catatonit:

$ sudo podman pod create --name mypod -p 7070:80
12b156b84429...

$ sudo podman run -d --pod mypod --name pod-nginx \
  freebsd/freebsd-notoolchain:15.0 \
  sh -c 'ASSUME_ALWAYS_YES=yes pkg install -y nginx > /dev/null 2>&1 && \
    echo "Hello from pod nginx" > /usr/local/www/nginx/index.html && \
    nginx -g "daemon off;"'

$ sudo podman run -d --pod mypod --name pod-sidecar \
  freebsd/freebsd-notoolchain:15.0 sleep 600

The sidecar can reach nginx on localhost:

$ sudo podman exec pod-sidecar fetch -qo- http://localhost:80
Hello from pod nginx

The jail structure shows what’s happening:

$ sudo jls
   JID  IP Address  Hostname  Path
    15              mypod     /var/run/libpod/infra-container
    16              mypod     /var/db/containers/storage/zfs/graph/54a7ae...
    17              mypod     /var/db/containers/storage/zfs/graph/9cd68a...

Three jails, all with hostname mypod. JID 15 is the infra container (owns the network namespace). JIDs 16 and 17 are the app containers sharing that namespace. Both containers see the same eth0 with the same IP (10.88.0.16), the same MAC address, the same network stack.

For containers that need to talk to each other on FreeBSD right now, pods are the simplest approach: everything goes through localhost, so DNS and IP management are irrelevant.
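You can verify the shared stack from inside the jails themselves. A sketch (helper name is mine; the interface name eth0 matches what the containers report above):

```shell
# Sketch: print eth0's address as seen from inside a container; two
# members of the same pod should report the same one.
PODMAN=${PODMAN:-"sudo podman"}

pod_ip() {
    # $1 = container name
    $PODMAN exec "$1" ifconfig eth0 | awk '/inet /{print $2; exit}'
}

# Usage:
#   [ "$(pod_ip pod-nginx)" = "$(pod_ip pod-sidecar)" ] && echo "shared stack"
```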

Multi-Container App: The “So What”

To see if this actually holds up, I put together a frontend + backend in a pod:

$ sudo podman pod create --name webapp -p 8080:80

# Backend: JSON API on port 8081
$ sudo podman run -d --pod webapp --name backend \
  freebsd/freebsd-notoolchain:15.0 \
  sh -c 'mkdir -p /tmp/www && chmod 755 /tmp/www && \
    echo "{\"status\":\"ok\",\"source\":\"freebsd-jail\"}" > /tmp/www/api.json && \
    chmod 644 /tmp/www/api.json && \
    ASSUME_ALWAYS_YES=yes pkg install -y nginx > /dev/null 2>&1 && \
    cat > /usr/local/etc/nginx/nginx.conf <<EOF
worker_processes 1;
events { worker_connections 64; }
http {
    server {
        listen 8081;
        location / { root /tmp/www; default_type application/json; }
    }
}
EOF
    nginx -g "daemon off;"'

# Frontend: reverse proxy on port 80
$ sudo podman run -d --pod webapp --name frontend \
  freebsd/freebsd-notoolchain:15.0 \
  sh -c 'ASSUME_ALWAYS_YES=yes pkg install -y nginx > /dev/null 2>&1 && \
    echo "<h1>FreeBSD Jail Webapp</h1>" > /usr/local/www/nginx/index.html && \
    cat > /usr/local/etc/nginx/nginx.conf <<EOF
worker_processes 1;
events { worker_connections 64; }
http {
    server {
        listen 80;
        location / { root /usr/local/www/nginx; }
        location /api/ { proxy_pass http://localhost:8081/; }
    }
}
EOF
    nginx -g "daemon off;"'

From the Linux host:

$ curl http://192.168.122.149:8080
<h1>FreeBSD Jail Webapp</h1>

$ curl http://192.168.122.149:8080/api/api.json
{"status":"ok","source":"freebsd-jail"}

Frontend serves HTML on port 80, reverse proxies /api/ to the backend on port 8081. Both run in separate jails sharing the same network namespace via the pod. Port 8080 on the host forwards to port 80 in the pod. The whole stack works.

The architecture:

[Linux Host] ---> [FreeBSD VM :8080] --pf rdr--> [Pod: webapp]
                                                   ├── infra (catatonit, owns network)
                                                   ├── frontend (nginx :80)
                                                   │   └── proxy_pass /api/ → localhost:8081
                                                   └── backend (nginx :8081)
                                                       └── serves JSON API

Three jails, one pod, one external port. From the outside it looks like a single service.
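A smoke test for the whole stack, runnable from any external machine (the helper and the BASE default are mine; the IP is my VM's, so override BASE for yours):

```shell
# Sketch: two-endpoint smoke test for the webapp pod. Override BASE to
# match your FreeBSD VM's address.
BASE=${BASE:-http://192.168.122.149:8080}

smoke() {
    curl -fsS "$BASE/" > /dev/null &&
    curl -fsS "$BASE/api/api.json" | grep -q '"status":"ok"'
}

# Usage:
#   smoke && echo "webapp pod healthy"
```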

What Doesn’t Work (Yet)

Tested and documented:

No DNS service discovery. The dnsname CNI plugin is not included. Custom networks have dns_enabled: false. You need --add-host (static) or pods (shared localhost) to work around it.

Port forwarding fails from the host to itself. External clients and other containers can reach published ports. The FreeBSD host can’t reach its own forwarded ports via localhost or its own IP. Use the container IP directly for local testing.

No network isolation between custom networks. I created an isolated-net custom network and launched a container on it. That container could ping pods on the default podman network (different subnet, but routed through the host at ttl=63). CNI on FreeBSD doesn’t enforce cross-network isolation.
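The check itself is easy to reproduce (a sketch; the container and network names here are mine):

```shell
# Sketch: ping from a container on one Podman network to a container on
# another. On Linux with Netavark this fails; on FreeBSD/CNI it works.
PODMAN=${PODMAN:-"sudo podman"}

cross_net_ping() {
    # $1 = source container, $2 = target container on another network
    target=$($PODMAN inspect "$2" --format '{{.NetworkSettings.IPAddress}}') || return 1
    $PODMAN exec "$1" ping -c 2 "$target"
}

# Usage:
#   sudo podman network create isolated-net
#   sudo podman run -d --name probe --network isolated-net \
#       freebsd/freebsd-notoolchain:15.0 sleep 300
#   cross_net_ping probe c1    # succeeds, ttl=63 (routed via the host)
```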

IP addresses change on restart. A pod restarted from 10.88.0.17 to 10.88.0.18. No IP persistence, no static assignment by default. If you’re using --add-host, a restart breaks name resolution.

No rootless. podman run without sudo gives a clear error: “rootless mode is not supported on FreeBSD - run podman as root”. Not a surprise, but confirmed.

Pods need catatonit. Not installed by pkg install podman. Not mentioned in the post-install messages. You find out when podman pod create fails with “catatonit binary not found”.

The bigger problem: CNI is going away

Everything tested here uses CNI (Container Network Interface). On Linux, Podman 4.0+ switched to Netavark as the default network backend. Netavark is faster, has built-in DNS (Aardvark-dns), supports proper network isolation, and is actively developed.

FreeBSD Podman still uses CNI because Netavark doesn’t support FreeBSD. pkg info netavark returns nothing. The FreeBSD port of Podman is pinned to the CNI backend.

This matters because:

  1. CNI is in maintenance mode upstream. New features go to Netavark.
  2. The dnsname plugin (which would fix the DNS gap) is part of the CNI ecosystem that’s being deprecated.
  3. As Linux Podman moves further ahead with Netavark features, the gap between Linux and FreeBSD Podman will grow.

Porting Netavark (and Aardvark-dns) to FreeBSD is the real path forward, not building dnsname on a deprecated stack. That’s a non-trivial effort: Netavark is written in Rust and makes heavy use of Linux-specific netlink APIs for network configuration. Someone would need to replace those with FreeBSD’s ifconfig/route/VNET equivalents.

Until that happens, FreeBSD container networking works for single-node setups where you control the topology, but it’s missing the automatic service discovery and network isolation that Linux users take for granted.

Watch Out

Four new gotchas on top of the 10 from previous posts:

  1. catatonit is not installed with Podman. Pods fail with a cryptic “finding catatonit binary” error. pkg install catatonit fixes it. Should be a dependency of the Podman package, but isn’t.

  2. Port forwarding doesn’t work from the host to itself. The conmon socket and pf rdr rules interact badly for locally-generated traffic. Test port-forwarded services from an external machine or from another container, not from the FreeBSD host.

  3. No DNS-based container name resolution. The dnsname CNI plugin is not included in containernetworking-plugins on FreeBSD. Use --add-host for static entries or pods for shared localhost.

  4. Custom networks don’t provide isolation. Containers on different Podman networks can reach each other. The CNI firewall plugin is present but doesn’t enforce cross-network isolation the way Netavark does on Linux.

What’s Next

Networking works well enough for single-node setups. Next up: storage. ZFS datasets as container volumes, snapshots for rollback, and persistent data across container restarts. This is where FreeBSD should be ahead of Linux, not behind it.



Antenore Gatta


A proud and busy Hacker, Father and Kyndrol
