FreeBSD Native Jail Tools: Bastille, Pot, and the Nomad Stack
This is post 4 in the FreeBSD jail orchestration series. Post 3 tested container networking with Podman and CNI and found the limits: no DNS service discovery, no network isolation, and CNI deprecated upstream. This post goes native.
TL;DR
FreeBSD doesn’t need a Netavark port to get container orchestration working. It has native tools that are already there. Bastille gives you VNET jails with a Dockerfile-like config language and ZFS integration in under 5 minutes. Pot gives you exportable jail images and a Nomad driver for real scheduling. Pot + Nomad + Consul gives you the full stack: scheduled jails, automatic service registration, health checks, and a path to multi-node. Both tested on FreeBSD 15.0-RELEASE, both work, and neither needs CNI.
The environment
All tests run on FreeBSD 15.0-RELEASE in a QEMU/KVM VM (4 GB RAM, 2 vCPUs, 30 GB ZFS) on a Manjaro host. Same VM from the headless setup guide.
$ uname -r
15.0-RELEASE
$ zpool list
NAME SIZE ALLOC FREE HEALTH
zroot 28.5G 3.58G 24.9G ONLINE
Bastille: single-node jail management done right
Bastille is a jail automation framework. One package, one config file, and you’re creating VNET jails with ZFS-backed storage.
Setup
# Install
pkg install -y bastille
Edit /usr/local/etc/bastille/bastille.conf to enable ZFS:
bastille_zfs_enable="YES"
bastille_zfs_zpool="zroot"
Bootstrap a release (downloads base.txz, ~157 MB, validates checksum):
$ bastille bootstrap 15.0-RELEASE
Fetching MANIFEST...
Fetching distfile: base.txz 157 MB 14 MBps 11s
Checksum validated.
Extracting archive: base.txz
Bootstrap successful.
That’s it. You’re ready to create jails.
Creating VNET jails
The -V flag creates a VNET jail with its own network stack. You assign an IP on your existing subnet and specify the host interface:
$ bastille create -V jail1 15.0-RELEASE 192.168.122.50/24 vtnet0
$ bastille create -V jail2 15.0-RELEASE 192.168.122.51/24 vtnet0
Bastille creates epair interfaces, bridges them to vtnet0bridge, sets the default route, copies the host’s resolv.conf, and applies default templates. The jails are up and running:
$ bastille list
JID Name State Type IP Address Release
2 jail1 Up thin 192.168.122.50 15.0-RELEASE
4 jail2 Up thin 192.168.122.51 15.0-RELEASE
Networking: it just works
Jail-to-jail and jail-to-host traffic is sub-millisecond on the bridge; jail-to-internet works too. No configuration beyond what Bastille already did.
$ bastille cmd jail1 ping -c 2 192.168.122.51
64 bytes from 192.168.122.51: icmp_seq=0 ttl=64 time=0.063 ms
64 bytes from 192.168.122.51: icmp_seq=1 ttl=64 time=0.111 ms
$ bastille cmd jail1 ping -c 2 8.8.8.8
64 bytes from 8.8.8.8: icmp_seq=0 ttl=116 time=11.877 ms
External DNS works out of the box (resolv.conf points to the libvirt gateway’s dnsmasq). Jail-to-jail by name does NOT work: Bastille has no built-in service discovery. That’s the same gap as CNI, but here it’s the only gap. Everything else works without touching pf rules, without NAT configuration, without CNI plugins.
VNET jails sit directly on the bridge as first-class network citizens. From the Linux host:
$ curl http://192.168.122.50/
Hello from jail1
No port forwarding needed. The jails are on the same subnet as the VM.
Bastillefiles
Bastillefiles are a declarative config language for jail provisioning:
PKG nginx
SYSRC nginx_enable=YES
CMD echo 'Hello from Bastillefile jail' > /usr/local/www/nginx/index.html
SERVICE nginx start
Save it as a template, create a jail, apply it:
$ bastille create -V jail3 15.0-RELEASE 192.168.122.52/24 vtnet0
$ bastille template jail3 local/nginx
[jail3]: Installing nginx-1.28.0...
[jail3]: nginx_enable: -> YES
[jail3]: Starting nginx.
Template applied: local/nginx
$ fetch -qo - http://192.168.122.52/
Hello from Bastillefile jail
From zero to a running nginx jail in two commands. Bastillefile supports PKG, SYSRC, CMD, SERVICE, CP, MOUNT, and more. It’s not trying to be Docker: it’s jail-native and it fits the FreeBSD model.
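The same verbs compose into larger provisioning jobs. A hypothetical template sketch (package names, paths, and file contents here are illustrative, not from the tested setup):

```
PKG nginx
PKG curl
SYSRC nginx_enable=YES
CMD mkdir -p /usr/local/www/nginx
CMD echo 'status ok' > /usr/local/www/nginx/health.html
SERVICE nginx start
```

Each line runs in order inside the jail, so later CMD and SERVICE lines can rely on packages installed by earlier PKG lines.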
ZFS integration
Bastille creates a ZFS dataset per jail with compression enabled:
$ zfs list -r zroot/bastille/jails
NAME USED AVAIL MOUNTPOINT
zroot/bastille/jails 159M 24.0G /usr/local/bastille/jails
zroot/bastille/jails/jail1 76.2M /usr/local/bastille/jails/jail1
zroot/bastille/jails/jail1/root 76.1M /usr/local/bastille/jails/jail1/root
Default jails are “thin” (shared base via nullfs mounts). Snapshots, clones, rollback: all the ZFS primitives are there.
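Those primitives can be driven directly with plain ZFS commands. A minimal sketch assuming the dataset layout shown above (Bastille also has its own snapshot handling, not shown here):

```shell
# Snapshot jail1's dataset before risky changes.
zfs snapshot zroot/bastille/jails/jail1/root@pre-upgrade

# ...make changes inside the jail...

# Roll back if it goes wrong; stop the jail first.
bastille stop jail1
zfs rollback zroot/bastille/jails/jail1/root@pre-upgrade
bastille start jail1
```

Because the jails are thin, the snapshot covers only the jail's own data, not the shared base.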
Pot: built for orchestration
Pot is a different animal. Where Bastille optimizes for single-node simplicity, Pot is designed from the start to work with Nomad and Consul. It manages its own internal network, supports image export/import, and has an official Nomad driver.
Setup
pkg install -y pot potnet nomad-pot-driver
Edit /usr/local/etc/pot/pot.conf:
POT_ZFS_ROOT=zroot/pot
POT_FS_ROOT=/opt/pot
POT_NETWORK=10.192.0.0/10
POT_NETMASK=255.192.0.0
POT_GATEWAY=10.192.0.1
POT_EXTIF=vtnet0
POT_DNS_NAME=dns
POT_DNS_IP=10.192.0.2
Enable PF (Pot needs it for NAT) and initialize:
kldload pf
sysrc pf_enable=YES
pfctl -e
pot init
Pot creates a bridge (bridge1), configures PF NAT anchors, and reserves IPs for the gateway (10.192.0.1) and a DNS pot (10.192.0.2).
Create the base:
pot create-base -r 15.0
Creating pots
$ pot create -p pot1 -b 15.0 -N public-bridge -i auto
===> pot name : pot1
===> network-type : public-bridge
===> ip : 10.192.0.3
$ pot create -p pot2 -b 15.0 -N public-bridge -i auto
===> ip : 10.192.0.4
The -i auto flag assigns IPs automatically from the 10.192.0.0/10 pool. potnet show gives you the full topology:
$ potnet show
Network topology:
network : 10.192.0.0/10
Addresses already taken:
10.192.0.1 default gateway
10.192.0.2 dns
10.192.0.3 pot1
10.192.0.4 pot2
This is already more structured than anything CNI gives you on FreeBSD.
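The -i auto behaviour is easy to picture: first free address in the pool. A simplified Python sketch under that assumption (an illustration of the idea, not potnet's actual algorithm):

```python
import ipaddress

def next_free(pool, taken):
    """First host address in `pool` not in `taken` -- a simplified sketch
    of the kind of allocation `-i auto` performs (not potnet's real code)."""
    taken_ips = {ipaddress.ip_address(a) for a in taken}
    for host in ipaddress.ip_network(pool).hosts():
        if host not in taken_ips:
            return str(host)
    raise RuntimeError("address pool exhausted")

# Gateway, DNS, pot1 and pot2 are taken, matching the `potnet show` output.
taken = ["10.192.0.1", "10.192.0.2", "10.192.0.3", "10.192.0.4"]
print(next_free("10.192.0.0/10", taken))  # prints 10.192.0.5
```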
Networking
Same result as Bastille: full connectivity between pots, to the gateway, to the internet. The difference is the network model. Pot uses an isolated 10.192.0.0/10 subnet with PF NAT for internet access, while Bastille puts jails directly on the host bridge.
$ jexec pot1 ping -c 2 10.192.0.4
64 bytes from 10.192.0.4: icmp_seq=0 ttl=64 time=0.120 ms
$ jexec pot1 ping -c 2 8.8.8.8
64 bytes from 8.8.8.8: icmp_seq=0 ttl=115 time=13.211 ms
$ jexec pot1 host google.com
google.com has address 74.125.29.102
Image export
Pot can export jails as compressed images, which is the key for Nomad integration:
$ pot create -p nginx-img -b 15.0 -t single -N public-bridge -i auto
$ pot start nginx-img
$ jexec nginx-img pkg install -y nginx
$ jexec nginx-img sysrc nginx_enable=YES
$ jexec nginx-img sh -c 'echo "Hello from pot image" > /usr/local/www/nginx/index.html'
$ pot stop nginx-img
$ pot snapshot -p nginx-img
$ pot export -p nginx-img -l 0 -t 1.0
===> exporting nginx-img @ 1773693319 to ./nginx-img_1.0.xz
The exported image is 239 MB compressed. It includes a .skein checksum for verification.
One gotcha: only single-type pots can be exported. The default multi type uses nullfs mounts to share the base, which can’t be packed into a portable image. Create with -t single if you plan to export.
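The export's counterpart is import, which is also what the Nomad driver does under the hood. A hedged sketch (the flag names match my reading of pot's import subcommand, but verify them against your pot version):

```shell
# On another host: copy nginx-img_1.0.xz and its .skein file into place,
# then import it as a local pot. Paths here are examples.
pot import -p nginx-img -t 1.0 -U file:///var/cache/pot
```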
Pot + Nomad + Consul: the full stack
Nomad is HashiCorp’s workload scheduler. Consul is their service discovery and health check system. The nomad-pot-driver connects them to Pot. Together, they give FreeBSD something it never had: a real orchestration stack.
Setup
pkg install -y nomad consul
Consul config (/usr/local/etc/consul.d/consul.hcl):
datacenter = "dc1"
data_dir = "/var/db/consul"
bind_addr = "192.168.122.20"
client_addr = "0.0.0.0"
server = true
bootstrap_expect = 1
ui_config { enabled = true }
ports { dns = 8600 }
Nomad config (/usr/local/etc/nomad.d/nomad.hcl):
datacenter = "dc1"
data_dir = "/var/db/nomad"
bind_addr = "0.0.0.0"
addresses { http = "0.0.0.0" }
server {
  enabled = true
  bootstrap_expect = 1
}
client {
  enabled = true
  network_interface = "vtnet0"
}
plugin_dir = "/usr/local/libexec/nomad/plugins"
plugin "nomad-pot-driver" {}
consul { address = "192.168.122.20:8500" }
Start both:
$ service consul start
Starting consul.
$ service nomad start
Starting nomad.
One gotcha: the Nomad rc.d script returns before Nomad is fully ready. Give it about 10 seconds, or run nomad agent -config=/usr/local/etc/nomad.d/ in the foreground for debugging.
Verification
$ consul members
Node Address Status Type Build
freebsd-oci.local 192.168.122.20:8301 alive server 1.22.2
$ nomad server members
Name Address Port Status Leader Build
freebsd-oci.local.global 192.168.122.20 4648 alive true 1.9.6
$ nomad node status -self | grep pot
Driver Status = mock_driver,pot
driver.pot = 1
driver.pot.version = v0.10.0
Nomad sees the pot driver, Consul is running. The stack is up.
Scheduling a pot via Nomad
Move the exported image to Pot’s cache:
cp nginx-img_1.0.xz* /var/cache/pot/
The Nomad job file:
job "web" {
datacenters = ["dc1"]
type = "service"
group "web" {
count = 1
service {
name = "web-nginx"
provider = "consul"
}
task "nginx" {
driver = "pot"
config {
image = "file:///var/cache/pot"
pot = "nginx-img"
tag = "1.0"
network_mode = "public-bridge"
command = "/usr/local/sbin/nginx"
args = ["-g", "'daemon off;'"]
}
resources {
cpu = 200
memory = 256
}
}
}
}
Three things to know about this job file:
- The pot driver requires image, pot, and tag. It doesn't work with just a local pot name.
- command needs the full path (/usr/local/sbin/nginx), not just nginx.
- The process must run in the foreground. Nomad monitors the process: if it daemonizes and the parent exits, Nomad thinks the task died. daemon off; keeps nginx in the foreground.
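The service block above registers with Consul but defines no health check. A hedged sketch of what one could look like (the check attributes are standard Nomad HCL, but reaching nginx from Consul requires a reachable address or port mapping that this job does not set up):

```hcl
service {
  name     = "web-nginx"
  provider = "consul"

  # Hypothetical HTTP check; interval and timeout values are examples.
  check {
    type     = "http"
    name     = "web-nginx-http"
    path     = "/"
    interval = "10s"
    timeout  = "2s"
  }
}
```

With a check in place, Consul marks the service unhealthy instead of merely registered-but-dead.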
$ nomad job run web.nomad
==> Monitoring deployment "b1371352"
Deployment completed successfully
Deployed
Task Group Desired Placed Healthy Unhealthy
web 1 1 1 0
Service discovery via Consul
The moment the pot started, Consul registered it:
$ consul catalog services
consul
nomad
nomad-client
web-nginx
The Consul API returns the full service record:
$ fetch -qo - http://127.0.0.1:8500/v1/catalog/service/web-nginx | python3 -m json.tool
[
{
"ServiceName": "web-nginx",
"ServiceMeta": {
"external-source": "nomad"
},
"Node": "freebsd-oci.local",
"Datacenter": "dc1"
}
]
external-source: nomad confirms it: Nomad scheduled the pot, Consul registered the service, and the whole thing is queryable via the API. For DNS-based discovery, Consul listens on port 8600, but you’ll need to configure PF to allow UDP on loopback or set up dnsmasq forwarding. The API path works immediately.
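Scripting against that endpoint is plain JSON filtering. A small Python sketch (the payload is a trimmed sample mirroring the response above; the HTTP fetch itself is left out):

```python
import json

# Sample payload shaped like the /v1/catalog/service/web-nginx response.
raw = """
[
  {"ServiceName": "web-nginx",
   "ServiceMeta": {"external-source": "nomad"},
   "Node": "freebsd-oci.local",
   "Datacenter": "dc1"}
]
"""

def nomad_registered(services):
    # Services Nomad registered carry external-source=nomad in their meta.
    return [s["ServiceName"] for s in services
            if s.get("ServiceMeta", {}).get("external-source") == "nomad"]

print(nomad_registered(json.loads(raw)))  # prints ['web-nginx']
```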
And the pot serves its content:
$ fetch -qo - http://10.192.0.7/
Hello from Nomad-scheduled pot
Comparison
| Feature | Podman/ocijail/CNI | Bastille VNET | Pot + Nomad + Consul |
|---|---|---|---|
| IP connectivity | works | works | works |
| External DNS | works | works | works |
| Service discovery | no | no | Consul API + DNS |
| Network isolation | no | configurable | PF + NAT |
| Declarative config | no | Bastillefile | Nomad HCL |
| Image export | no | no | pot export/import |
| ZFS integration | manual | built-in | built-in |
| Scheduler | no | no | Nomad |
| Health checks | no | no | Consul |
| Multi-node | no | no | yes (designed for it) |
| Setup complexity | high | low | medium-high |
| OCI compatibility | yes | no | no |
Watch Out
- Bastille ZFS destroy can leave orphaned datasets. If bastille destroy fails with "pool or dataset is busy", you'll need to manually zfs destroy -r the orphaned datasets. Stop the jail first, wait a moment, then destroy.
- Pot snapshots require a stopped pot. Unlike Bastille (which can snapshot live jails), Pot refuses to snapshot a running pot. Plan your snapshot workflow around maintenance windows.
- Pot export only works with single-type pots. The default multi type shares the base via nullfs and can't be packed into an image. Create with -t single if you intend to export.
- The Nomad rc.d script doesn't wait for readiness. service nomad start returns before Nomad is ready to accept jobs. Either wait about 10 seconds or check with nomad server members.
- nginx must run in the foreground under Nomad. The pot driver monitors the process. Use command = "/usr/local/sbin/nginx" with args = ["-g", "'daemon off;'"]. If the process daemonizes, Nomad marks the task as dead.
- Consul DNS needs PF rules. Consul binds DNS on port 8600, but PF may block UDP on loopback. The API (port 8500) works without extra config.
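For the Consul DNS case, a minimal pf.conf sketch, assuming Consul binds 127.0.0.1:8600 (the rule syntax is standard pf, but where it belongs relative to Pot's NAT anchors depends on your ruleset):

```
# /etc/pf.conf fragment: allow Consul DNS queries on loopback.
pass in quick on lo0 inet proto { tcp, udp } from any to 127.0.0.1 port 8600
```

Reload with pfctl -f /etc/pf.conf and retest with a dig or drill query against port 8600.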
What’s next
Bastille gets a dedicated post with a full Bastillefile walkthrough for a multi-service setup. Pot + Nomad + Consul gets a deep dive on building images, writing job files, and wiring up service discovery end-to-end. Both with reproducible sessions you can follow along.
Sources and references:
- Bastille - jail automation framework
- Pot - container framework for FreeBSD
- potnet - Pot network management utility
- nomad-pot-driver - Nomad driver for Pot
- Nomad by HashiCorp - workload scheduler
- Consul by HashiCorp - service discovery and health checks
- FreeBSD Jails Handbook
- pf on FreeBSD
Keep the Lab Running
Three tools tested, dozens of jails created and destroyed, one full orchestration stack stood up. All on a single VM running real FreeBSD 15.0. If these results save you a weekend of testing, consider keeping the lab running.
Most readers scroll past. Less than 3% of readers contribute to keeping independent technical content free and accessible.