Your First OCI Container on FreeBSD Is a Jail (And That's the Point)
This is post 2 in the FreeBSD jail orchestration series. If you don’t have a FreeBSD box yet, start with the headless VM setup guide.
TL;DR
On a fresh FreeBSD 15.0 with ZFS, installing Podman and ocijail takes one pkg install command and 4 configuration steps. After that, podman run pulls OCI images from Docker Hub and runs them as native FreeBSD jails. Run jls while a container is up: it shows a jail with a JID, a ZFS-backed rootfs, and a VNET network interface. Nginx serves pages from inside a jail-based container. The whole point of this series is proving that the pieces fit together, and they do.
Install the Tooling
Three packages, 34 dependencies, one command:
sudo pkg install -y ocijail podman buildah
What you get:
- ocijail 0.4.0: the OCI-compatible runtime that creates jails. Think of it as the FreeBSD equivalent of runc/crun.
- Podman 5.7.1: container lifecycle management. Same CLI as Docker, no daemon.
- Buildah 1.42.2: OCI image builder. Same role as docker build, but standalone.
Plus: conmon (container monitor), containernetworking-plugins (CNI for FreeBSD, uses pf), containers-common (shared config including storage.conf and registries.conf).
Both Podman and Buildah are marked “experimental, for evaluation and testing purposes” on FreeBSD. I respect the honesty. The FreeBSD Foundation doesn’t ship things with a “production ready” label until they mean it.
Four Things to Configure Before Your First Container
The pkg install output tells you everything, but you have to read it. Here are the 4 steps, in order.
1. ZFS Dataset for Container Storage
sudo zfs create -o mountpoint=/var/db/containers zroot/containers
Podman’s storage driver on FreeBSD defaults to ZFS (configured in /usr/local/etc/containers/storage.conf). Each image layer becomes a separate ZFS dataset. When you pull an image, Podman creates ZFS clones for each layer: copy-on-write, instant, checksummed. This is where FreeBSD’s container story gets interesting: on Linux, you’d use OverlayFS or devicemapper. Here, ZFS is native and better at the job.
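You can see the copy-on-write relationship yourself after pulling an image. A sketch, assuming the `zroot/containers` dataset from step 1; the exact dataset names under it will differ on your system:

```shell
# List the container datasets with their clone origin.
# Each image layer appears as its own dataset; layers created by
# `podman pull` show an `origin` pointing at the parent layer's
# snapshot, which is exactly what a ZFS clone is.
zfs list -r -o name,origin,used zroot/containers
```

A non-dash `origin` column is the tell: the layer was cloned instantly rather than copied.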
2. fdescfs for conmon
conmon (the container monitor process) needs /dev/fd to properly support restart policies:
sudo mount -t fdescfs fdesc /dev/fd
echo "fdesc /dev/fd fdescfs rw 0 0" | sudo tee -a /etc/fstab
Without this, containers work but --restart=always won’t.
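A quick way to check that restart policies actually work once fdescfs is mounted. This is a sketch; the container name `restart-test` and the short-lived `sleep` are just for the demonstration:

```shell
# Start a container that exits after 5 seconds but is set to restart.
sudo podman run -d --restart=always --name restart-test \
    docker.io/freebsd/freebsd-runtime:15.0 sleep 5

# After the sleep exits, conmon should relaunch it. With fdescfs
# mounted, the restart count climbs; without it, it stays at 0.
sleep 10
sudo podman inspect restart-test --format '{{.RestartCount}}'

# Clean up (--force stops the still-running container first).
sudo podman rm --force restart-test
```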
3. pf Firewall for Container NAT
Container networking on FreeBSD uses pf for NAT. The containernetworking-plugins package ships a sample config:
sudo cp /usr/local/etc/containers/pf.conf.sample /etc/pf.conf
Edit /etc/pf.conf and change the interface name from ix0 to your actual interface (on a VM, probably vtnet0):
v4egress_if = "vtnet0"
v6egress_if = "vtnet0"
Enable and start pf:
sudo sysrc pf_enable=YES
sudo service pf start
When a container starts, its IP gets added to the <cni-nat> pf table automatically. The NAT rules translate container traffic through the host’s egress interface. You don’t need to configure individual rules per container.
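You can watch pf doing this. With a container running, dump the table and the NAT rules (the table name `cni-nat` comes from the sample config shipped with containernetworking-plugins):

```shell
# Show the container addresses pf is currently NATing.
sudo pfctl -t cni-nat -T show

# Show the NAT rules the sample pf.conf installed.
sudo pfctl -s nat
```

Start and stop a container while watching the table: its IP appears and disappears without any rule edits.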
4. IP Forwarding
The FreeBSD cloud image already has this enabled. If you’re on a manual install, check:
sysctl net.inet.ip.forwarding
If it says 0:
sudo sysctl net.inet.ip.forwarding=1
sudo sysrc gateway_enable=YES
Hello World
sudo podman run --rm quay.io/dougrabson/hello
!... Hello Podman World ...!
.--"--.
/ - - \
/ (O) (O) \
~~~| -=(,Y,)=- |
.---. /` \ |~~
~/ o o \~~~~.----. ~~
| =(X)= |~ / (O (O) \
~~~~~~~ ~| =(Y_)=- |
~~~~ ~~~| U |~~
Project: https://github.com/containers/podman
Website: https://podman.io
That image comes from Doug Rabson’s registry (he’s the ocijail author). It pulled, ran, and exited. Under the hood: Podman asked ocijail to create a jail, ran the hello binary inside it, and destroyed the jail on exit.
Everything runs as root. Rootless Podman is not available on FreeBSD yet: it’s a known gap that the Foundation has documented.
FreeBSD OCI Images
FreeBSD ships official OCI images on Docker Hub. The tag naming is NOT what you’d expect:
| Image | Tag | Size | What’s in it |
|---|---|---|---|
| freebsd/freebsd-static | 15.0 | ~5 MB | Statically linked binaries only |
| freebsd/freebsd-dynamic | 15.0 | ~16 MB | Dynamic libraries |
| freebsd/freebsd-runtime | 15.0 | 34 MB | Minimal runtime |
| freebsd/freebsd-notoolchain | 15.0 | ~280 MB | Full userland minus compiler |
| freebsd/freebsd-toolchain | 15.0 | ~800 MB | Full userland + compiler |
The tag is 15.0, not 15.0-RELEASE. If you use 15.0-RELEASE, you get a cryptic “manifest unknown” error.
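To see the difference for yourself, try both tags. The first pull fails with the error described above; the second succeeds:

```shell
# Wrong: the -RELEASE suffix is not part of the tag.
sudo podman pull docker.io/freebsd/freebsd-runtime:15.0-RELEASE
# ... fails with "manifest unknown"

# Right: the tag is just the release number.
sudo podman pull docker.io/freebsd/freebsd-runtime:15.0
```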
Let’s run a real FreeBSD container:
$ sudo podman run --rm docker.io/freebsd/freebsd-runtime:15.0 freebsd-version
15.0-RELEASE
And check the kernel from inside:
$ sudo podman run --rm docker.io/freebsd/freebsd-runtime:15.0 uname -a
FreeBSD 8c79045701db 15.0-RELEASE FreeBSD 15.0-RELEASE releng/15.0-n280995-7aedc8de6446 GENERIC amd64
FreeBSD 15.0-RELEASE running inside a jail, managed by Podman, pulled from Docker Hub as an OCI image.
The Proof: Container = Jail
This is the money shot. Run a container in the background:
$ sudo podman run -d --name test-jail docker.io/freebsd/freebsd-runtime:15.0 sleep 300
Now look at it from both sides:
$ sudo podman ps
CONTAINER ID IMAGE COMMAND NAMES
498077d3948c docker.io/freebsd/freebsd-runtime:15.0 sleep 300 test-jail
$ sudo jls
JID IP Address Hostname Path
5 498077d3948c /var/db/containers/storage/zfs/graph/e005dd...
The Podman container IS a FreeBSD jail. JID 5. The hostname matches the container ID. The path points to a ZFS dataset. Run zfs list -r zroot/containers and you’ll see datasets for each image layer.
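Because it really is a jail, the native jail tooling works on it too, bypassing Podman entirely. A sketch, using JID 5 from the example above; substitute the JID that `jls` printed on your machine:

```shell
# Query specific jail parameters straight from the kernel.
sudo jls -j 5 host.hostname path

# Run a command inside the jail with jexec instead of podman exec
# (assumes the image's userland includes ps, as base FreeBSD does).
sudo jexec 5 ps aux
```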
The networking:
$ sudo podman exec test-jail ifconfig eth0
eth0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP>
ether 58:9c:fc:10:df:14
inet 10.88.0.6 netmask 0xffff0000 broadcast 10.88.255.255
groups: epair
Each container gets its own VNET network stack with an epair interface. eth0 inside the container maps to an epair on the host side. pf handles NAT from the container subnet (10.88.0.0/16) to the outside.
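The host-side halves of these epairs are visible from outside the containers, since FreeBSD puts them in the `epair` interface group:

```shell
# List all interfaces in the epair group; each running container
# contributes the host-side half of one epair.
ifconfig -g epair
```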
Running a Real Service: Nginx in a Jail
The freebsd-runtime image is too minimal for real testing: no pkg, no DNS tools. Use freebsd-notoolchain for anything practical:
sudo podman run -d --name nginx-jail \
docker.io/freebsd/freebsd-notoolchain:15.0 \
sh -c 'ASSUME_ALWAYS_YES=yes pkg install -y nginx && \
echo "FreeBSD jail-based container serving via OCI" \
> /usr/local/www/nginx/index.html && \
nginx -g "daemon off;"'
Wait about 15 seconds for pkg to install nginx, then:
$ sudo jls
JID IP Address Hostname Path
7 f9a6ce316139 /var/db/containers/storage/zfs/graph/6058e5...
$ NGINX_IP=$(sudo podman inspect nginx-jail --format '{{.NetworkSettings.IPAddress}}')
$ fetch -qo- http://$NGINX_IP
FreeBSD jail-based container serving via OCI
Nginx, running inside a FreeBSD jail, managed by Podman, pulled as an OCI image from Docker Hub, stored on ZFS, with pf-based networking. Every layer of this stack is native FreeBSD.
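When you’re done experimenting, tear the demo containers down; their jails and epair interfaces disappear with them:

```shell
# Stop and remove the containers created in this post.
sudo podman rm --force nginx-jail test-jail

# Verify: no containers left, and jls shows no container jails.
sudo podman ps -a
sudo jls
```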
The Full Stack
Here’s what the architecture looks like:
Podman CLI
└── ocijail (OCI runtime)
└── jail(2) system call
├── Isolation: jail with separate root filesystem
├── Storage: ZFS dataset (zroot/containers/...)
├── Network: VNET + epair interface
│ └── pf NAT (10.88.0.0/16 → vtnet0)
└── Monitor: conmon (restart policy, logging)
Podman talks OCI. ocijail translates OCI operations into jail operations. The jail gets a ZFS-backed filesystem, a VNET network stack, and pf-managed connectivity. conmon watches the process and handles restarts.
The entire stack is Podman, ocijail, and the FreeBSD kernel. There’s no Docker daemon, no containerd, no shim processes in between.
Watch Out
Three new gotchas on top of the 7 from the VM setup post:
1. The `freebsd-runtime` image is too minimal for real work. No `pkg`, no `drill`, no `host`, no `getent`. DNS lookups fail silently because the resolver infrastructure is incomplete. Use `freebsd-notoolchain` for testing: it’s 280 MB but has a full userland.
2. Image tags are `15.0`, not `15.0-RELEASE`. Every FreeBSD user will try `15.0-RELEASE` first (because that’s what `freebsd-version` prints). The error message (“manifest unknown”) doesn’t tell you it’s a tag problem. Save yourself 5 minutes: the tag matches the release number without the `-RELEASE` suffix.
3. Both Podman and Buildah are experimental. The pkg install messages say it explicitly: “should be used for evaluation and testing purposes only.” This is honest engineering from the FreeBSD team, not a disclaimer to ignore. Expect rough edges.
What’s Next
The container runs. It has network access. It can serve traffic. But everything is single-node and single-container. The next posts tackle the hard parts:
- Networking: Container networking on FreeBSD - IP connectivity, port forwarding, pods, and why DNS service discovery doesn’t work out of the box.
- Storage: ZFS datasets as persistent volumes. Snapshots for rollback. This is where FreeBSD should shine.
- Scheduling: A minimal scheduler that reads a YAML manifest and creates jails. The beginnings of an orchestration layer.
Next up: can these containers actually talk to each other?
Sources and references:
- ocijail on GitHub
- Podman on FreeBSD
- FreeBSD OCI images
- containernetworking-plugins for FreeBSD
- Podman testing on FreeBSD
- containers-common ZFS storage
Keep the Lab Running
This series runs on real hardware and real hours of debugging. If it saved you from trial-and-error, consider keeping the test nodes running.
Most readers scroll past. Fewer than 3% of readers contribute to keeping independent technical content free and accessible.