Testing FreeBSD Jails for Container Orchestration
Every time I searched for "containers on FreeBSD", I ended up reading about Linux VMs on top of FreeBSD via bhyve. That may be practical, but it was not what I wanted to test. I wanted to see how far FreeBSD could go using jails with ZFS, VNET, and pf. This series follows that attempt.
What changed: OCI spec v1.3
On November 4, 2025, FreeBSD was officially added to the OCI Runtime Specification v1.3. The spec now defines a `freebsd` platform object that describes how to implement containers using jails. The FreeBSD Foundation called it a watershed moment. That was enough to make this worth testing as current infrastructure rather than as a historical curiosity.
On the implementation side, ocijail by Doug Rabson is the OCI runtime that makes this work: it uses jails as the isolation mechanism, and it is required by the Podman and Buildah ports on FreeBSD. Think of it as the FreeBSD equivalent of crun or runc. FreeBSD has also shipped official OCI images since 14.2-RELEASE, for both amd64 and arm64.
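To make this concrete, here is a minimal sketch of what that stack looks like in practice. The package and image names are assumptions on my part (check the port names and the registry path published for your release); the shape of the workflow is the point:

```shell
# Install the container stack from packages -- the Podman port pulls in
# ocijail as its runtime (package names may differ per release)
pkg install -y podman buildah

# Pull and run an official FreeBSD OCI image.
# The registry path and tag here are assumptions -- verify against the
# release notes for the images actually published for your version.
podman pull docker.io/freebsd/freebsd-runtime:14.2
podman run --rm docker.io/freebsd/freebsd-runtime:14.2 freebsd-version
```

If that last command prints a release string from inside the jail, the foundation layer described above is working end to end: Podman front end, ocijail runtime, jail isolation underneath.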
The foundation layer exists. The next question is what happens when you try to build on top of it.
Where I expect trouble
Networking is the biggest unknown. VNET gives each jail its own network stack, and pf gives you filtering and NAT, but there is no equivalent to the Linux CNI plugin ecosystem. If I want inter-jail service discovery or load balancing, I am wiring that together myself. The 2024 Developer Summit kept coming back to networking, which matches what I have seen so far.
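To show what "wiring it together myself" means, here is a minimal pf.conf sketch for VNET jails behind NAT. Interface names, the jail subnet, and the jail address are all assumptions; the manual `rdr` line is exactly the per-service bookkeeping that a CNI plugin would automate on Linux:

```
# /etc/pf.conf -- hand-wired NAT and port forwarding for VNET jails
# (interface name, subnet, and jail address are assumptions)
ext_if   = "em0"
jail_net = "10.0.0.0/24"

# Outbound NAT so jails can reach the outside world
nat on $ext_if from $jail_net to any -> ($ext_if)

# Forward host port 8080 to a web jail. Every exposed service needs
# a rule like this, maintained by hand -- there is no CNI to do it.
rdr on $ext_if proto tcp to port 8080 -> 10.0.0.10 port 80

pass all
```

Nothing here is hard, but nothing here is discovered or reconciled automatically either, which is the gap the rest of this series keeps running into.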
Everything also runs as root. Rootless containers are not there yet on FreeBSD. The Foundation’s own Podman testing report calls that out directly. For a test series I can live with it. For something people would want to adopt widely, it matters much more.
The OCI image catalog is still small on the FreeBSD side. You can run FreeBSD OCI images, and Linux images through Linuxulator, but the pool of ready-made FreeBSD images is limited. The moment you need something specific, you are probably building it yourself.
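"Building it yourself" is at least conventional: Buildah and Podman accept an ordinary Containerfile. A hedged sketch, assuming a base image reference and that the chosen base variant ships pkg (the official images come in several variants, and not all of them do):

```dockerfile
# Containerfile -- rolling your own FreeBSD image when the catalog
# falls short. Base image reference is an assumption; verify the
# variant you pick includes pkg before relying on RUN pkg install.
FROM docker.io/freebsd/freebsd-runtime:14.2
RUN pkg install -y nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Built with `podman build` (or `buildah bud`), this produces a standard OCI image, so at least the artifact format is portable even if the catalog is not.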
The same testing report describes Podman on FreeBSD as suitable for evaluation and non-critical production. That seems fair given the networking stability issues and orchestration gaps it lists.
Resource limits are another difference. rctl works, but it is not cgroups v2, and it does not expose the same hierarchy or level of control. That is less important for a proof of concept than it would be for a production scheduler, but it is still part of the picture.
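The rctl model is flat rules of the form `subject:subject-id:resource:action=amount` rather than a cgroups-style hierarchy. A small sketch, assuming a jail named `web1` and that RACCT/RCTL accounting is enabled (`kern.racct.enable=1` in /boot/loader.conf):

```shell
# Cap a jail's memory and CPU with rctl -- flat rules, no hierarchy
rctl -a jail:web1:memoryuse:deny=512m   # hard memory cap
rctl -a jail:web1:pcpu:deny=50          # roughly 50% of one CPU
rctl -u jail:web1                       # show current resource usage

# Persistent rules go in /etc/rctl.conf, same syntax minus "rctl -a"
```

That covers the common cases a scheduler needs, but there is no nesting and no delegation, which is where the comparison with cgroups v2 breaks down.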
Looking past the usual verdict
The usual search loop is familiar by now: someone tries to reproduce the Linux container experience on FreeBSD, runs into the gaps, and decides the platform is not ready. After enough forum threads and blog posts, the pattern is hard to miss.
The f3s project tried “Kubernetes with FreeBSD” and ended up running Linux VMs via bhyve because the networking was not there. The Register pointed to the rootless gap. FreeBSD forums in 2025 still have people asking about Docker Compose and getting pointed toward Linuxulator workarounds.
What gets lost in that conclusion is that FreeBSD already has most of the underlying mechanisms. Jails have been in the kernel since 2000, with process, filesystem, and network isolation exposed through jail(2), years before Docker existed. Linux container infrastructure grew by layering namespaces, cgroups, and image plumbing on top of one another; FreeBSD's model started from a more unified base. ZFS also gives you copy-on-write snapshots and instant cloning without an extra storage layer to fake it.
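The ZFS point is worth a concrete sketch. Dataset names here are assumptions, but this is the whole "image layer plus instant container filesystem" story, natively:

```shell
# Copy-on-write jail roots straight from ZFS -- no overlay driver needed
zfs create -p zroot/jails/base            # then populate with a FreeBSD userland
zfs snapshot zroot/jails/base@release     # immutable "image layer"

# Each clone is instant and initially consumes no extra space
zfs clone zroot/jails/base@release zroot/jails/web1
zfs clone zroot/jails/base@release zroot/jails/db1
```

Where Linux needed overlayfs (or devicemapper, or btrfs backends) bolted onto the container runtime, here the snapshot/clone semantics are first-class filesystem operations.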
There are already two serious jail management tools. Bastille is a jail manager with Bastillefiles and a deliberately simple shell-based design. Pot builds around ZFS and already has a Nomad driver for HashiCorp Nomad orchestration.
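For a sense of what the Bastille workflow looks like, here is a hedged sketch using its documented subcommands; the release number and jail address are assumptions:

```shell
# Bastille workflow sketch (release and IP address are assumptions)
bastille bootstrap 14.2-RELEASE                # fetch the base userland once
bastille create web1 14.2-RELEASE 10.0.0.10    # thin jail cloned from that base
bastille pkg web1 install -y nginx             # run pkg inside the jail
bastille sysrc web1 nginx_enable=YES
bastille service web1 nginx start
```

It reads much like a container CLI, except the unit of deployment is a jail and the "image" is a bootstrapped FreeBSD release.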
There is even a working Kubernetes demo. Doug Rabson got a FreeBSD native K8s node running with CRI-O, with `kubectl describe node freebsd0` reporting `os: freebsd`. It exists, but it sits far enough off the usual path that most people never see it.
What is still missing looks more like glue code and operational tooling than missing kernel primitives.
Why now
The EU Cyber Resilience Act (in force since December 2024, full requirements by December 2027) is pushing the software supply chain toward SBOM compliance and auditable security practices. Organizations embedding FreeBSD need container workflows that work. Right now, the mature tooling still lives on the Linux side, which matters if you are running FreeBSD in production and need to show how your build and deployment path is controlled.
There is also a visibility problem. People like Doug Rabson, Dave Cottlehuber, Ed Maste, and the Bastille and Pot teams have already done a lot of the groundwork, but much of it stays inside BSD circles. Good material exists, including long conference tutorials, but it does not surface easily if you come at the topic from the usual cloud-native search path. Documentation and discoverability look like part of the problem, not just missing code.
What I’m going to try
For this series I am sticking to:
- FreeBSD only, using jails, ZFS, VNET, pf, and rctl, without falling back to bhyve or Linuxulator.
- OCI-compatible, with ocijail as the runtime and standard OCI images where possible.
- Single-node first, then multi-node, with real services instead of toy examples.
The goal is not to build a Kubernetes replacement. I want to see whether FreeBSD’s own primitives are enough to support orchestration-shaped workflows, and to document what actually happens when you try.
What’s next
- Environment setup: FreeBSD 15.0, ZFS pool, ocijail, Podman. Get a container running. (Start with the headless VM setup guide, then follow the first container walkthrough.)
- Networking: Container networking on FreeBSD - IP connectivity, port forwarding, pods, the DNS service discovery gap, and the CNI deprecation risk.
- Native tools: Bastille, Pot, and the Nomad stack - FreeBSD-native jail tools that solve the CNI gaps without needing Linux container infrastructure.
- Deep dives: Bastille with VNET jails for single-node deployments, Pot+Nomad+Consul for orchestration with service discovery.
- Real workload: Deploy a multi-service application and stress test it.
I am not approaching this as a FreeBSD committer. What interests me is whether the whole thing can be made repeatable, documented, and boring enough to run on purpose. I also think FreeBSD is better served by tooling that uses its own primitives instead of defaulting to Linux in a VM. I may be overestimating how close that is, and I would rather test it than argue about it in the abstract.
If you have already worked in this area, or if I am missing something obvious, send it my way.
Sources and references:
- OCI Runtime Spec v1.3 release
- ocijail on GitHub
- FreeBSD OCI Working Group
- FreeBSD Foundation OCI project
- Podman testing on FreeBSD
- Bastille and Pot
- FreeBSD Foundation CRA readiness
- Dave Cottlehuber’s OCI intro
- vermaden: Are FreeBSD Jails Containers?