
Container Networking on FreeBSD with Podman and CNI

- 12 mins

Post 0 called networking the biggest risk for FreeBSD container orchestration. After getting a container running with ZFS and pf, the next step was checking how far the default Podman networking stack goes on FreeBSD. Container-to-container IP traffic worked immediately, published ports were mostly usable, and pods filled part of the gap. DNS service discovery was where the limits started to show.

The Default Network

Podman on FreeBSD creates a single default network using CNI (Container Network Interface):

$ sudo podman network ls
NETWORK ID    NAME        DRIVER
2f259bab93aa  podman      bridge

$ sudo podman network inspect podman
[
     {
          "name": "podman",
          "driver": "bridge",
          "network_interface": "cni-podman0",
          "subnets": [
               {
                    "subnet": "10.88.0.0/16",
                    "gateway": "10.88.0.1"
               }
          ],
          "dns_enabled": false
     }
]

That dns_enabled: false is the first clue. FreeBSD ships seven CNI plugins:

$ ls /usr/local/libexec/cni/
bridge    firewall    host-local    loopback    portmap    static    tuning

There is no dnsname plugin and no bundled dnsmasq integration. Custom networks also get dns_enabled: false. That is one of the main differences from Linux, where Podman’s default network includes DNS-based container name resolution through dnsname.
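For comparison, a Linux Podman network with DNS enabled carries an extra plugin entry in its CNI conflist. A sketch of that entry, from the dnsname plugin's documented config format (this file does not exist on FreeBSD):

```json
{
   "type": "dnsname",
   "domainName": "dns.podman",
   "capabilities": {
      "aliases": true
   }
}
```

The FreeBSD conflists stop after bridge, portmap, firewall, and tuning, which is why dns_enabled stays false.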

IP Connectivity

Two containers on the default bridge can ping each other by IP immediately:

$ sudo podman run -d --name c1 freebsd/freebsd-notoolchain:15.0 sleep 600
$ sudo podman run -d --name c2 freebsd/freebsd-notoolchain:15.0 sleep 600

$ C1_IP=$(sudo podman inspect c1 --format '{{.NetworkSettings.IPAddress}}')
$ C2_IP=$(sudo podman inspect c2 --format '{{.NetworkSettings.IPAddress}}')
$ echo "c1: $C1_IP  c2: $C2_IP"
c1: 10.88.0.9  c2: 10.88.0.10

$ sudo podman exec c1 ping -c 2 $C2_IP
PING 10.88.0.10 (10.88.0.10): 56 data bytes
64 bytes from 10.88.0.10: icmp_seq=0 ttl=64 time=0.085 ms
64 bytes from 10.88.0.10: icmp_seq=1 ttl=64 time=0.178 ms

Round-trip times are well under a millisecond. Each container gets its own VNET network stack with an epair interface, cni-podman0 bridges them together, and pf handles outbound NAT.

But try by name:

$ sudo podman exec c1 ping -c 2 c2
ping: cannot resolve c2: Name does not resolve

DNS lookup goes to the host’s resolver (192.168.122.1 in my case), which does not know about container names, so the result is NXDOMAIN.

Port Forwarding

Port forwarding uses the CNI portmap plugin, which creates pf rdr (redirect) rules:

$ sudo podman run -d --name nginx-pf -p 8080:80 \
  freebsd/freebsd-notoolchain:15.0 \
  sh -c 'ASSUME_ALWAYS_YES=yes pkg install -y nginx > /dev/null 2>&1 && \
    echo "Port forwarding works from FreeBSD jail" \
    > /usr/local/www/nginx/index.html && \
    nginx -g "daemon off;"'

The pf anchor shows the redirect rules:

$ sudo pfctl -a 'cni-rdr/20e8960e94a6...' -sn
rdr pass inet proto tcp from any to 192.168.122.149 port = http-alt -> 10.88.0.11 port 80
rdr pass inet proto tcp from any to 127.0.0.1 port = http-alt -> 10.88.0.11 port 80
rdr pass inet proto tcp from any to 10.88.0.1 port = http-alt -> 10.88.0.11 port 80

Testing from the Linux host (external access):

# From the Linux host:
$ curl http://192.168.122.149:8080
Port forwarding works from FreeBSD jail

From the FreeBSD host itself:

# From the FreeBSD VM:
$ fetch -qo- http://127.0.0.1:8080
fetch: http://127.0.0.1:8080: Connection refused

$ fetch -qo- http://192.168.122.149:8080
fetch: http://192.168.122.149:8080: Operation timed out

Localhost returned connection refused, and the VM’s own IP timed out. The conmon process is listening on port 8080 (sockstat confirms it), but the connection still does not reach the container.

My working theory is that conmon binds the port to reserve it, while the actual traffic routing depends on pf rdr rules. Locally generated packets hit the conmon socket instead of taking the redirect path. Setting net.pf.filter_local=1 did not change that behavior.

From another container on the same network, port forwarding works fine:

$ sudo podman run --rm freebsd/freebsd-notoolchain:15.0 \
  fetch -qo- http://192.168.122.149:8080
Port forwarding works from FreeBSD jail

Port forwarding works for external clients and for container-to-container traffic through the host IP. It fails from the host to itself, so for local testing the container IP is simpler.
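Until the host-to-self path works, local tests should always target the container IP. A tiny POSIX-shell helper keeps that habit scriptable; the function name container_url is my own, and the commented lines show how the IP would come from podman inspect (container name nginx-pf from the example above):

```shell
# container_url: build a URL from a container IP and port, for testing a
# service directly instead of through the (broken) host-to-self forward.
container_url() {
  printf 'http://%s:%s\n' "$1" "$2"
}

# In practice:
#   ip=$(sudo podman inspect nginx-pf --format '{{.NetworkSettings.IPAddress}}')
#   fetch -qo- "$(container_url "$ip" 80)"
container_url 10.88.0.11 80   # prints http://10.88.0.11:80
```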

DNS Service Discovery

On Linux, creating a Podman network with CNI also gives you the dnsname plugin and per-network DNS through dnsmasq. On FreeBSD, that plugin is not there.

dnsmasq is available as a FreeBSD package, and the dnsname plugin could theoretically be built from source (github.com/containers/dnsname). I did not go down that path because the whole CNI stack is being deprecated upstream in favor of Netavark and Aardvark-dns, and Netavark does not support FreeBSD.

Workaround: --add-host

The --add-host flag injects entries into a container’s /etc/hosts:

$ sudo podman run -d --name web freebsd/freebsd-notoolchain:15.0 sleep 300
$ WEB_IP=$(sudo podman inspect web --format '{{.NetworkSettings.IPAddress}}')

$ sudo podman run -d --name app --add-host web:$WEB_IP \
  freebsd/freebsd-notoolchain:15.0 sleep 300

$ sudo podman exec app ping -c 2 web
PING web (10.88.0.14): 56 data bytes
64 bytes from 10.88.0.14: icmp_seq=0 ttl=64 time=0.085 ms
64 bytes from 10.88.0.14: icmp_seq=1 ttl=64 time=0.115 ms

$ sudo podman exec app cat /etc/hosts
10.88.0.14	web
::1	localhost localhost.my.domain
127.0.0.1	localhost localhost.my.domain
10.88.0.1	host.containers.internal host.docker.internal
10.88.0.15	4306e46506b4 app

This works, but it’s static: you need to know the target container’s IP when launching the client container. If the target restarts and gets a new IP, the hosts entry is stale. For containers you start together in a known order, it’s good enough.
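If you script this pattern, it pays to fail fast when the inspect call returns an empty string (a stopped target is the usual cause of a silently bad hosts entry). A minimal POSIX-shell sketch; the function name add_host_arg is my own:

```shell
# add_host_arg: turn "name ip" into a --add-host flag, rejecting anything
# that is not a dotted quad so a missing or stopped target fails loudly
# instead of producing a bogus /etc/hosts entry.
add_host_arg() {
  name=$1 ip=$2
  echo "$ip" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' || return 1
  printf '%s=%s:%s\n' --add-host "$name" "$ip"
}

add_host_arg web 10.88.0.14              # prints --add-host=web:10.88.0.14
add_host_arg web "" || echo "refusing empty IP"
```

Feed it straight into the run command, e.g. `sudo podman run --name app $(add_host_arg web "$WEB_IP") ...`; word-splitting is safe here because neither the name nor the IP can contain spaces.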

Better Workaround: Pods

Pods avoid the DNS problem because containers in the same pod share a network namespace and can talk to each other on localhost.

Pods: Shared Network via Jails

Before pods work, you need one extra package:

$ sudo podman pod create --name mypod
Error: finding catatonit binary: exec: "catatonit": executable file not found in $PATH

$ sudo pkg install -y catatonit

catatonit is a minimal init process for containers; the pod's infra container runs it as PID 1. The FreeBSD Podman package does not pull it in as a dependency.

After installing catatonit:

$ sudo podman pod create --name mypod -p 7070:80
12b156b84429...

$ sudo podman run -d --pod mypod --name pod-nginx \
  freebsd/freebsd-notoolchain:15.0 \
  sh -c 'ASSUME_ALWAYS_YES=yes pkg install -y nginx > /dev/null 2>&1 && \
    echo "Hello from pod nginx" > /usr/local/www/nginx/index.html && \
    nginx -g "daemon off;"'

$ sudo podman run -d --pod mypod --name pod-sidecar \
  freebsd/freebsd-notoolchain:15.0 sleep 600

The sidecar can reach nginx on localhost:

$ sudo podman exec pod-sidecar fetch -qo- http://localhost:80
Hello from pod nginx

The jail structure shows what’s happening:

$ sudo jls
   JID  IP Address  Hostname  Path
    15              mypod     /var/run/libpod/infra-container
    16              mypod     /var/db/containers/storage/zfs/graph/54a7ae...
    17              mypod     /var/db/containers/storage/zfs/graph/9cd68a...

Three jails show up with the hostname mypod. JID 15 is the infra container that owns the network namespace. JIDs 16 and 17 are the app containers sharing it. Inside the pod, both containers see the same eth0, IP 10.88.0.16, and MAC address.

For containers that need to talk to each other on FreeBSD right now, pods are the most practical approach because everything goes through localhost.

Multi-Container App: Frontend + Backend in a Pod

To see if this actually holds up, I put together a frontend + backend in a pod:

$ sudo podman pod create --name webapp -p 8080:80

# Backend: JSON API on port 8081
$ sudo podman run -d --pod webapp --name backend \
  freebsd/freebsd-notoolchain:15.0 \
  sh -c 'mkdir -p /tmp/www && chmod 755 /tmp/www && \
    echo "{\"status\":\"ok\",\"source\":\"freebsd-jail\"}" > /tmp/www/api.json && \
    chmod 644 /tmp/www/api.json && \
    ASSUME_ALWAYS_YES=yes pkg install -y nginx > /dev/null 2>&1 && \
    cat > /usr/local/etc/nginx/nginx.conf <<EOF
worker_processes 1;
events { worker_connections 64; }
http {
    server {
        listen 8081;
        location / { root /tmp/www; default_type application/json; }
    }
}
EOF
    nginx -g "daemon off;"'

# Frontend: reverse proxy on port 80
$ sudo podman run -d --pod webapp --name frontend \
  freebsd/freebsd-notoolchain:15.0 \
  sh -c 'ASSUME_ALWAYS_YES=yes pkg install -y nginx > /dev/null 2>&1 && \
    echo "<h1>FreeBSD Jail Webapp</h1>" > /usr/local/www/nginx/index.html && \
    cat > /usr/local/etc/nginx/nginx.conf <<EOF
worker_processes 1;
events { worker_connections 64; }
http {
    server {
        listen 80;
        location / { root /usr/local/www/nginx; }
        location /api/ { proxy_pass http://localhost:8081/; }
    }
}
EOF
    nginx -g "daemon off;"'

From the Linux host:

$ curl http://192.168.122.149:8080
<h1>FreeBSD Jail Webapp</h1>

$ curl http://192.168.122.149:8080/api/api.json
{"status":"ok","source":"freebsd-jail"}

The frontend serves HTML on port 80 and reverse proxies /api/ to the backend on port 8081. Both run in separate jails that share the same network namespace through the pod. Port 8080 on the host forwards to port 80 in the pod.

The architecture:

[Linux Host] ---> [FreeBSD VM :8080] --pf rdr--> [Pod: webapp]
                                                   ├── infra (catatonit, owns network)
                                                   ├── frontend (nginx :80)
                                                   │   └── proxy_pass /api/ → localhost:8081
                                                   └── backend (nginx :8081)
                                                       └── serves JSON API

From the outside, that pod looks like a single service behind one published port.

CNI and Netavark

Everything tested here uses CNI (Container Network Interface). On Linux, Podman 4.0 and later switched to Netavark as the default network backend. Netavark brings DNS through Aardvark-dns, handles network isolation differently, and is where active development is happening.

FreeBSD Podman still uses CNI because Netavark does not support FreeBSD. pkg info netavark returns nothing, and the FreeBSD port of Podman stays on the CNI backend.

This matters because:

  1. CNI is in maintenance mode upstream. New features go to Netavark.
  2. The dnsname plugin (which would fix the DNS gap) is part of the CNI ecosystem that’s being deprecated.
  3. As Linux Podman moves further ahead with Netavark features, the gap between Linux and FreeBSD Podman will grow.

Porting Netavark and Aardvark-dns to FreeBSD looks like a more durable path than extending dnsname on a deprecated stack. That is not a small porting job: Netavark is written in Rust and relies heavily on Linux netlink APIs for network configuration. Those pieces would need FreeBSD equivalents built around ifconfig, route, and VNET.

Until then, FreeBSD container networking works for single-node setups where you control the topology, but the current Podman stack offers no automatic service discovery and no real cross-network isolation.

Watch Out

More gotchas, on top of the previous ones:

  1. catatonit is not installed with Podman. Pods fail with a cryptic “finding catatonit binary” error. pkg install catatonit fixes it. Should be a dependency of the Podman package, but isn’t.

  2. Port forwarding doesn’t work from the host to itself. The conmon socket and pf rdr rules interact badly for locally-generated traffic. Test port-forwarded services from an external machine or from another container, not from the FreeBSD host.

  3. No DNS-based container name resolution. The dnsname CNI plugin is not included in containernetworking-plugins on FreeBSD. Use --add-host for static entries or pods for shared localhost.

  4. Custom networks don’t provide isolation. Containers on different Podman networks can reach each other. The CNI firewall plugin is present but doesn’t enforce cross-network isolation the way Netavark does on Linux.

What’s Next

These gaps sit in the Podman-on-FreeBSD stack more than in jails themselves. FreeBSD also has native jail tools that handle networking and service layout in different ways. The next post covers Bastille, Pot, and the Nomad stack.



Antenore Gatta
