
FreeBSD Has Everything It Needs for Container Orchestration. So Why Doesn't It Exist Yet?


This post starts from a frustration. Every time I searched for “containers on FreeBSD”, I ended up reading about running Linux VMs on top of FreeBSD via bhyve. That may be practical, but it was not the question I was interested in. I wanted to know whether FreeBSD’s own primitives - jails, ZFS, VNET, pf - are enough to run something orchestration-shaped without pretending to be Linux.

I don’t have a definitive answer yet. This series is me finding out.

What changed: OCI spec v1.3

The reason I’m starting now and not two years ago: on November 4, 2025, FreeBSD was officially added to the OCI Runtime Specification v1.3. Unanimous vote, 9-0. The spec now has a freebsd object that defines how to implement containers using jails. The Foundation called it a watershed moment. For me, that’s the point where this stopped feeling purely experimental. FreeBSD containers are in the spec now - not a side project someone could drop.

On the implementation side, ocijail by Doug Rabson is the OCI runtime that makes this work: it uses jails as the isolation mechanism, and it’s required by the Podman and Buildah ports on FreeBSD. Think of it as the FreeBSD equivalent of crun/runc. FreeBSD has also shipped official OCI images since 14.2-RELEASE, for both amd64 and arm64.

So the foundation layer exists. The question is what happens when you try to build on top of it.

Where I expect this to fall apart

I want to be upfront about the parts that concern me, because they’re going to shape the entire series.

Networking is the biggest unknown. VNET gives each jail its own network stack, pf gives you filtering and NAT, and that sounds like it should be enough. But there’s no equivalent to Linux CNI plugins. No Calico, no Cilium, no Flannel. If I need inter-jail service discovery and load balancing, I build it myself. The 2024 Developer Summit flagged networking repeatedly. This is where I expect to lose the most time.
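To make that concrete: without CNI plugins, "port publishing" and outbound NAT for VNET jails come down to hand-written pf rules. Here is a minimal sketch of what that layer looks like - the interface name, addresses, and ports are assumptions for illustration, not a tested config:

```conf
# /etc/pf.conf - sketch only; em0 and 192.168.10.0/24 are
# assumed values, not a working configuration
ext_if = "em0"
jail_net = "192.168.10.0/24"

# outbound NAT for all jail traffic
nat on $ext_if inet from $jail_net to any -> ($ext_if)

# "port publishing" by hand: host port 8080 -> a web jail
rdr on $ext_if inet proto tcp to port 8080 -> 192.168.10.5 port 80

pass all
```

Everything a CNI plugin gives you declaratively - address management, cross-host routing, policy, service VIPs - would have to be layered on top of rules like these, per jail, by hand.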

Everything runs as root. Rootless containers are not a thing on FreeBSD yet. The Foundation’s own Podman testing report calls it a known gap. For a test series this is acceptable, but it’s the kind of thing that blocks real adoption.

The OCI image world is tiny. You can run FreeBSD OCI images, and Linux images via Linuxulator, but the catalog of pre-built FreeBSD container images is small. If you need something specific, you’re building it yourself.

Podman on FreeBSD is “evaluation and non-critical production.” That’s a direct quote from the same testing report. They found networking stability issues and orchestration integration gaps. Honest assessment. I respect that more than marketing.

rctl is not cgroups. Resource limits work, but the granularity is different. cgroups v2 gives you hierarchical CPU/memory/IO control with pressure stall information. rctl is flatter. For a proof of concept this probably doesn’t matter. I think for production orchestration, it will.
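For contrast, here is roughly what per-jail limits look like in /etc/rctl.conf - the jail name and numbers are made-up examples, but the rule syntax is rctl's real subject:id:resource:action form:

```conf
# /etc/rctl.conf - flat per-jail caps (names and values are examples)
jail:web1:memoryuse:deny=512m      # hard memory cap
jail:web1:maxproc:deny=100         # process count limit
jail:web1:pcpu:deny=50             # roughly 50% of one CPU
jail:web1:readbps:throttle=10m     # disk read throttle
```

What’s missing relative to cgroups v2 is the hierarchy: these are flat caps per jail, with no nested groups to subdivide a budget and no pressure stall information to react to.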

These are not reasons to stop. They’re reasons to pay attention.

What search results won’t tell you

The usual search loop goes like this: someone tries to replicate the Linux container experience on FreeBSD 1:1, hits the gaps listed above, and concludes FreeBSD isn’t ready. I’ve read enough forum threads and blog posts to recognize the pattern.

The f3s project tried “Kubernetes with FreeBSD” and ended up running Linux VMs via bhyve because the networking wasn’t there. The Register flagged the rootless gap. Both fair assessments. FreeBSD forums in 2025 still have people asking about Docker Compose and getting pointed to Linuxulator workarounds.

But I think this misses something. Jails have been in the FreeBSD kernel since 2000 - that is, process, filesystem, and network isolation through a single jail(2) system call, years before Linux had cgroups or Docker existed. Where Linux bolted on namespaces piece by piece, FreeBSD had a coherent isolation model from the start. The subtractive approach (start with a full system, restrict) is, at least on paper, simpler to audit than the additive one (start with nothing, add capabilities). ZFS gives you copy-on-write snapshots and instant cloning natively - the primitives you’d want for container layers, without the OverlayFS complexity.
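The ZFS point is easy to show. A layered-image workflow is a handful of commands - the dataset names here are assumptions, and this is a sketch of the idea rather than how any particular tool does it:

```shell
# snapshot a base jail dataset as an immutable "layer"
zfs snapshot zroot/jails/base@2025-12-01

# clone it into a new jail root: copy-on-write, near-instant,
# initially consuming almost no extra space
zfs clone zroot/jails/base@2025-12-01 zroot/jails/web1

# the clone can itself be snapshotted and re-cloned, giving
# Docker-style layer chains without OverlayFS
zfs snapshot zroot/jails/web1@configured
```

That is the storage side of a container engine in three commands, which is why I expect storage to be the least of the problems.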

And there are already two serious jail management tools. Bastille is a jail manager with Bastillefiles (think Dockerfile), written in pure shell with zero dependencies. Pot integrates with ZFS and ships a driver for orchestration with HashiCorp Nomad.
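To give a flavor of the Dockerfile comparison, here is a sketch of a Bastille template using its documented directives - treat the package, paths, and ports as assumptions rather than a tested template:

```conf
# Bastillefile - sketch only; specifics are illustrative assumptions
PKG nginx
SYSRC nginx_enable=YES
CP usr/local/etc/nginx/nginx.conf /usr/local/etc/nginx/nginx.conf
SERVICE nginx start
RDR tcp 8080 80
```

Applied to a jail with bastille template, it reads a lot like a Dockerfile, minus the image layering.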

There’s even a working Kubernetes demo. Doug Rabson got a FreeBSD native K8s node running with CRI-O - kubectl describe node freebsd0 showing os: freebsd. That demo has 4 stars on GitHub. Almost nobody knows it exists.

My reading is that the primitives are mostly there, but they don’t line up cleanly yet. Storage looks good. Isolation is solid. Networking is where I expect to get stuck.

Why now

Two things beyond the OCI spec pushed me to start.

The EU Cyber Resilience Act (in force since December 2024, full requirements by December 2027) is pushing the software supply chain toward SBOM compliance and auditable security practices. Organizations embedding FreeBSD need container workflows that work. Right now, the only option with mature tooling is Linux. That’s a problem if you’re running FreeBSD in production and need to demonstrate compliance.

And visibility. The people doing this work - Doug Rabson, Dave Cottlehuber, Ed Maste, the Bastille and Pot teams - are building solid infrastructure, but it’s invisible outside the BSD community. A 6-hour tutorial at EuroBSDCon 2024 that most people will never watch. I think the lack of documentation is holding things back as much as any missing code - maybe more.

What I’m going to try

The constraints for this series:

  • FreeBSD only. No bhyve, no Linuxulator. Jails, ZFS, VNET, pf, rctl.
  • OCI-compatible. ocijail as the runtime, standard OCI images.
  • Simple first. Single-node before multi-node. Real services, not just echo hello.

I’m not building a Kubernetes replacement. K8s is a 4-million-line codebase with a decade of production hardening. I want to find out whether FreeBSD’s primitives are enough for orchestration, and document what actually happens along the way.

What’s next

  1. Environment setup: FreeBSD 15.0, ZFS pool, ocijail, Podman. Get a container running. (Start with the headless VM setup guide, then follow the first container walkthrough.)
  2. Networking: Container networking on FreeBSD - IP connectivity, port forwarding, pods, DNS service discovery gap, and the CNI deprecation risk.
  3. Native tools: Bastille, Pot, and the Nomad stack - FreeBSD-native jail tools that solve the CNI gaps without needing Linux container infrastructure.
  4. Deep dives: Bastille with VNET jails for single-node deployments, Pot+Nomad+Consul for orchestration with service discovery.
  5. Real workload: Deploy a multi-service application and stress test it.
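For step 1, the bootstrap I expect to follow looks roughly like this - paraphrased from the Podman-on-FreeBSD instructions, run as root, with the caveat that paths and package details may drift between releases:

```shell
# install Podman (pulls in ocijail as the runtime)
pkg install -y podman

# containers need fdescfs mounted
mount -t fdescfs fdesc /dev/fd
echo 'fdesc /dev/fd fdescfs rw 0 0' >> /etc/fstab

# Podman ships a sample pf.conf for container NAT; copy and
# adjust the egress interface before enabling pf
cp /usr/local/etc/containers/pf.conf.sample /etc/pf.conf
sysrc pf_enable=YES
service pf start

# smoke test with a FreeBSD-native image
podman run --rm docker.io/dougrabson/hello
```

If that last command prints a greeting from inside a jail, the foundation layer works and the interesting part - networking - begins.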

I’m an infrastructure engineer, not a FreeBSD committer. I maintained Remmina for 9 years and spent 16 years in IT compliance and cloud governance. FreeBSD deserves better cloud tooling than “just run Linux in a VM” - that is, tooling that actually uses its own primitives. I could be wrong about how close it is. That’s what this series is for.

If you’ve done work in this space, or you think I’m wrong about something, I want to hear about it.

