
FreeBSD Native Jail Tools: Bastille, Pot, and the Nomad Stack


After a previous post on the limits of Podman’s CNI networking on FreeBSD, I wanted to compare it with the native jail tools. This post covers Bastille first, then Pot, and finally a Nomad plus Consul stack running on top of Pot.

The environment

All tests run on FreeBSD 15.0-RELEASE in a QEMU/KVM VM (4 GB RAM, 2 vCPUs, 30 GB ZFS) on a Manjaro host. Same VM from the headless setup guide.

$ uname -r
15.0-RELEASE

$ zpool list
NAME    SIZE  ALLOC   FREE  HEALTH
zroot  28.5G  3.58G  24.9G  ONLINE

Bastille

Bastille is a jail automation framework. The setup is short and it gets you to VNET jails with ZFS-backed storage quickly.

Setup

# Install
pkg install -y bastille

Edit /usr/local/etc/bastille/bastille.conf to enable ZFS:

bastille_zfs_enable="YES"
bastille_zfs_zpool="zroot"

Bootstrap a release (downloads base.txz, ~157 MB, validates checksum):

$ bastille bootstrap 15.0-RELEASE
Fetching MANIFEST...
Fetching distfile: base.txz  157 MB  14 MBps  11s
Checksum validated.
Extracting archive: base.txz
Bootstrap successful.

After the bootstrap, you can start creating jails.

Creating VNET jails

The -V flag creates a VNET jail with its own network stack. You assign an IP on your existing subnet and specify the host interface:

$ bastille create -V jail1 15.0-RELEASE 192.168.122.50/24 vtnet0
$ bastille create -V jail2 15.0-RELEASE 192.168.122.51/24 vtnet0

Bastille creates the epair interfaces, attaches them to vtnet0bridge, sets the default route, copies the host’s resolv.conf, and applies its default templates. The jails come up like this:

$ bastille list
 JID  Name   State  Type   IP Address      Release
 2    jail1  Up     thin   192.168.122.50  15.0-RELEASE
 4    jail2  Up     thin   192.168.122.51  15.0-RELEASE

Networking

Jail traffic worked to other jails, to the host, and out to the internet without extra network configuration beyond what Bastille had already done.

$ bastille cmd jail1 ping -c 2 192.168.122.51
64 bytes from 192.168.122.51: icmp_seq=0 ttl=64 time=0.063 ms
64 bytes from 192.168.122.51: icmp_seq=1 ttl=64 time=0.111 ms

$ bastille cmd jail1 ping -c 2 8.8.8.8
64 bytes from 8.8.8.8: icmp_seq=0 ttl=116 time=11.877 ms

External DNS works out of the box because resolv.conf points to the libvirt gateway’s dnsmasq. Jail-to-jail name resolution does not work because Bastille does not provide built-in service discovery. The rest of the network setup worked without manual pf rules, NAT configuration, or CNI plugins.
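Since there is no built-in discovery, the usual stopgap is static /etc/hosts entries inside each jail. A minimal sketch; hosts_lines is my own helper (not a Bastille command) and the name=ip argument format is an assumption:

```sh
# hosts_lines: format "name=ip" pairs as hosts(5) lines. Hypothetical
# helper -- it only prints the entries; appending them inside each
# jail (for example with bastille cmd) is a separate, manual step.
hosts_lines() {
  for pair in "$@"; do
    printf '%s %s\n' "${pair#*=}" "${pair%%=*}"
  done
}

hosts_lines jail1=192.168.122.50 jail2=192.168.122.51
# → 192.168.122.50 jail1
#   192.168.122.51 jail2
```

This gives jails consistent names for each other, but it is bookkeeping, not discovery: every IP change means touching every jail, which is exactly the gap Consul fills later in this post.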

VNET jails sit directly on the bridge as first-class network citizens. From the Linux host:

$ curl http://192.168.122.50/
Hello from jail1

No port forwarding was needed because the jails sit on the same subnet as the VM.

Bastillefiles

Bastillefiles are a declarative config language for jail provisioning:

PKG nginx
SYSRC nginx_enable=YES
CMD echo 'Hello from Bastillefile jail' > /usr/local/www/nginx/index.html
SERVICE nginx start

Save it as a template, create a jail, apply it:

$ bastille create -V jail3 15.0-RELEASE 192.168.122.52/24 vtnet0
$ bastille template jail3 local/nginx
[jail3]: Installing nginx-1.28.0...
[jail3]: nginx_enable:  -> YES
[jail3]: Starting nginx.
Template applied: local/nginx
$ fetch -qo - http://192.168.122.52/
Hello from Bastillefile jail

That was enough to get nginx running in a new jail with the template applied. Bastillefile supports PKG, SYSRC, CMD, SERVICE, CP, MOUNT, and more. It is a jail provisioning format built around the FreeBSD model rather than around OCI containers.
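To show the file-oriented verbs too, here is a hypothetical template using CP and MOUNT alongside the earlier directives. The paths are illustrative and the exact argument order for CP and MOUNT should be checked against the Bastille documentation:

```
PKG nginx
SYSRC nginx_enable=YES
CP etc/nginx.conf /usr/local/etc/nginx/nginx.conf
MOUNT /var/db/sitedata usr/local/www/data nullfs ro 0 0
SERVICE nginx restart
```

CP copies files shipped alongside the template into the jail, and MOUNT adds an fstab-style entry, so a template can carry its own config files instead of generating them with CMD.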

ZFS integration

Bastille creates a ZFS dataset per jail with compression enabled:

$ zfs list -r zroot/bastille/jails
NAME                                  USED  AVAIL  MOUNTPOINT
zroot/bastille/jails                  159M  24.0G  /usr/local/bastille/jails
zroot/bastille/jails/jail1           76.2M         /usr/local/bastille/jails/jail1
zroot/bastille/jails/jail1/root      76.1M         /usr/local/bastille/jails/jail1/root

Default jails are “thin” and share the base through nullfs mounts. Snapshots, clones, and rollback are all available on top of that.
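Those ZFS primitives map directly onto the per-jail datasets. A sketch using a tiny wrapper of my own (not a Bastille command) whose first argument is an executor, so you can dry-run with echo before pointing it at zfs:

```sh
# jail_snap: compose a `zfs snapshot` invocation for a Bastille
# jail's dataset (layout from the listing above). Hypothetical
# helper; $1 is the executor, $2 the jail name, $3 the label.
jail_snap() {
  "$1" snapshot "zroot/bastille/jails/$2/root@$3"
}

jail_snap echo jail1 pre-upgrade   # dry run: prints the zfs arguments
# jail_snap zfs jail1 pre-upgrade  # actually take the snapshot
```

The same pattern extends to zfs rollback and zfs clone; the point is that each jail is an ordinary dataset, so nothing Bastille-specific is needed to manage its history.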

Pot

Pot takes a different approach from Bastille. It comes with its own network model, supports image export and import, and has an official Nomad driver.

Setup

pkg install -y pot potnet nomad-pot-driver

Edit /usr/local/etc/pot/pot.conf:

POT_ZFS_ROOT=zroot/pot
POT_FS_ROOT=/opt/pot
POT_NETWORK=10.192.0.0/10
POT_NETMASK=255.192.0.0
POT_GATEWAY=10.192.0.1
POT_EXTIF=vtnet0
POT_DNS_NAME=dns
POT_DNS_IP=10.192.0.2

Enable PF (Pot needs it for NAT) and initialize:

kldload pf
sysrc pf_enable=YES
pfctl -e
pot init

Pot creates a bridge (bridge1), configures PF NAT anchors, and reserves IPs for the gateway (10.192.0.1) and a DNS pot (10.192.0.2).

Create the base:

pot create-base -r 15.0

Creating pots

$ pot create -p pot1 -b 15.0 -N public-bridge -i auto
===>  pot name     : pot1
===>  network-type : public-bridge
===>  ip           : 10.192.0.3

$ pot create -p pot2 -b 15.0 -N public-bridge -i auto
===>  ip           : 10.192.0.4

The -i auto flag assigns IPs automatically from the 10.192.0.0/10 pool. potnet show gives you the full topology:

$ potnet show
Network topology:
    network : 10.192.0.0/10
Addresses already taken:
    10.192.0.1  default gateway
    10.192.0.2  dns
    10.192.0.3  pot1
    10.192.0.4  pot2

Pot makes the addressing and topology more explicit than the default Podman setup on FreeBSD.
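The /10 mask is unusual enough to be worth unpacking: 255.192.0.0 fixes the first octet (10) plus the top two bits of the second (192 = 0b11000000). A pure-shell sketch of the membership test; in_pot_net is my own helper, not part of potnet:

```sh
# in_pot_net: true if an IPv4 address falls inside POT_NETWORK
# (10.192.0.0/10), i.e. first octet is 10 and the top two bits
# of the second octet are set. Hypothetical helper, POSIX sh.
in_pot_net() {
  oldifs=$IFS; IFS=.
  set -- $1
  IFS=$oldifs
  [ "$1" -eq 10 ] && [ $(($2 & 192)) -eq 192 ]
}

in_pot_net 10.192.0.3 && echo inside    # pot1's address
in_pot_net 10.128.0.1 || echo outside   # below the /10 boundary
```

So the pool runs from 10.192.0.0 to 10.255.255.255, which is why potnet can hand out addresses for a very large number of pots without ever colliding with the 192.168.122.0/24 host network.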

Networking

The pots had full connectivity to each other, to the gateway, and to the internet. The main difference is the network model: Pot uses an isolated 10.192.0.0/10 subnet with PF NAT for outbound access, while Bastille puts jails directly on the host bridge.

$ jexec pot1 ping -c 2 10.192.0.4
64 bytes from 10.192.0.4: icmp_seq=0 ttl=64 time=0.120 ms

$ jexec pot1 ping -c 2 8.8.8.8
64 bytes from 8.8.8.8: icmp_seq=0 ttl=115 time=13.211 ms

$ jexec pot1 host google.com
google.com has address 74.125.29.102

Image export

Pot can export jails as compressed images, which is key to the Nomad integration:

$ pot create -p nginx-img -b 15.0 -t single -N public-bridge -i auto
$ pot start nginx-img
$ jexec nginx-img pkg install -y nginx
$ jexec nginx-img sysrc nginx_enable=YES
$ jexec nginx-img sh -c 'echo "Hello from pot image" > /usr/local/www/nginx/index.html'
$ pot stop nginx-img
$ pot snapshot -p nginx-img
$ pot export -p nginx-img -l 0 -t 1.0
===>  exporting nginx-img @ 1773693319 to ./nginx-img_1.0.xz

The exported image is 239 MB compressed. It includes a .skein checksum for verification.

One gotcha: only single-type pots can be exported. The default multi type uses nullfs mounts to share the base, which can’t be packed into a portable image. Create with -t single if you plan to export.

Pot + Nomad + Consul

Nomad is HashiCorp’s workload scheduler. Consul provides service discovery and health checks. The nomad-pot-driver connects both of them to Pot.

Setup

pkg install -y nomad consul

Consul config (/usr/local/etc/consul.d/consul.hcl):

datacenter     = "dc1"
data_dir       = "/var/db/consul"
bind_addr      = "192.168.122.20"
client_addr    = "0.0.0.0"
server         = true
bootstrap_expect = 1
ui_config { enabled = true }
ports { dns = 8600 }

Nomad config (/usr/local/etc/nomad.d/nomad.hcl):

datacenter = "dc1"
data_dir   = "/var/db/nomad"
bind_addr  = "0.0.0.0"

addresses {
  http = "0.0.0.0"
}

server {
  enabled          = true
  bootstrap_expect = 1
}

client {
  enabled           = true
  network_interface = "vtnet0"
}

plugin_dir = "/usr/local/libexec/nomad/plugins"

plugin "nomad-pot-driver" {}

consul {
  address = "192.168.122.20:8500"
}

Start both:

$ service consul start
Starting consul.

$ service nomad start
Starting nomad.

One gotcha: the Nomad rc.d script returns before Nomad is fully ready. Give it about 10 seconds, or run nomad agent -config=/usr/local/etc/nomad.d/ in the foreground for debugging.
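A small readiness loop avoids the guesswork. wait_for is my own helper, wrapped around whatever check you prefer, such as nomad server members:

```sh
# wait_for: run a check command once per second until it succeeds
# or the timeout (seconds) expires. Hypothetical helper; after
# `service nomad start` you might call:
#   wait_for "nomad server members" 30
wait_for() {
  _cmd=$1 _timeout=${2:-30} _i=0
  until $_cmd >/dev/null 2>&1; do
    _i=$((_i + 1))
    [ "$_i" -ge "$_timeout" ] && return 1
    sleep 1
  done
}
```

Returning nonzero on timeout makes it usable in scripts: `wait_for "nomad server members" 30 || exit 1` fails fast instead of submitting jobs to an agent that is not up yet.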

Verification

$ consul members
Node               Address              Status  Type    Build
freebsd-oci.local  192.168.122.20:8301  alive   server  1.22.2

$ nomad server members
Name                      Address         Port  Status  Leader  Build
freebsd-oci.local.global  192.168.122.20  4648  alive   true    1.9.6

$ nomad node status -self | grep pot
Driver Status   = mock_driver,pot
driver.pot                = 1
driver.pot.version        = v0.10.0

Nomad sees the pot driver, and Consul is running.

Scheduling a pot via Nomad

Move the exported image to Pot’s cache:

cp nginx-img_1.0.xz* /var/cache/pot/

The Nomad job file:

job "web" {
  datacenters = ["dc1"]
  type = "service"

  group "web" {
    count = 1

    service {
      name     = "web-nginx"
      provider = "consul"
    }

    task "nginx" {
      driver = "pot"

      config {
        image        = "file:///var/cache/pot"
        pot          = "nginx-img"
        tag          = "1.0"
        network_mode = "public-bridge"
        command      = "/usr/local/sbin/nginx"
        args         = ["-g", "'daemon off;'"]
      }

      resources {
        cpu    = 200
        memory = 256
      }
    }
  }
}

Three things to know about this job file:

  1. The pot driver requires image, pot, and tag. It doesn’t work with just a local pot name.
  2. command needs the full path (/usr/local/sbin/nginx), not just nginx.
  3. The process must run in the foreground. Nomad monitors the process: if it daemonizes and the parent exits, Nomad thinks the task died. daemon off; keeps nginx in the foreground.
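Point 3 is worth a demonstration. Here is a toy supervisor of my own; it has nothing to do with the real pot driver internals, but it shows the same wait-on-one-PID failure mode:

```sh
# A daemonizing task looks dead to a supervisor that only watches
# the PID it spawned: the parent exits at once, even though a
# background worker lives on. Toy sketch of the failure mode.
supervise() {
  "$@" &
  pid=$!
  wait "$pid"
  echo "task exited with status $?"
}

supervise sleep 1                                  # foreground: exits when the work is done
supervise sh -c 'sleep 3 >/dev/null 2>&1 & exit 0' # "daemonized": exits instantly
```

Both calls print `task exited with status 0`, but the second returns immediately while its background sleep keeps running unsupervised, which is exactly the situation Nomad flags as a dead task. `daemon off;` keeps nginx on the supervised PID.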

$ nomad job run web.nomad
==> Monitoring deployment "b1371352"
    Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy
web         1        1       1        0

Service discovery via Consul

The moment the pot started, Consul registered it:

$ consul catalog services
consul
nomad
nomad-client
web-nginx

The Consul API returns the full service record:

$ fetch -qo - http://127.0.0.1:8500/v1/catalog/service/web-nginx | python3 -m json.tool
[
    {
        "ServiceName": "web-nginx",
        "ServiceMeta": {
            "external-source": "nomad"
        },
        "Node": "freebsd-oci.local",
        "Datacenter": "dc1"
    }
]

external-source: nomad shows that Consul registered the service through Nomad. The pot was scheduled, the service appeared in Consul, and the record was available through the API. For DNS-based discovery, Consul listens on port 8600, but you’ll still need PF rules for UDP on loopback or some form of dnsmasq forwarding. The API path works immediately.
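For the DNS path, the missing piece is a pf rule permitting loopback traffic to port 8600. A hypothetical pf.conf fragment; placement depends on your existing ruleset, and a ruleset with `set skip on lo0` would not need it at all:

```
# Allow local clients to reach Consul's DNS endpoint on loopback
pass in quick on lo0 proto { tcp, udp } from any to 127.0.0.1 port 8600
```

With that in place, a query such as `drill -p 8600 web-nginx.service.consul @127.0.0.1` should return the record Nomad registered.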

And the pot serves its content:

$ fetch -qo - http://10.192.0.7/
Hello from Nomad-scheduled pot

Comparison

| Feature            | Podman/ocijail/CNI | Bastille VNET | Pot + Nomad + Consul  |
|--------------------|--------------------|---------------|-----------------------|
| IP connectivity    | works              | works         | works                 |
| External DNS       | works              | works         | works                 |
| Service discovery  | no                 | no            | Consul API + DNS      |
| Network isolation  | no                 | configurable  | PF + NAT              |
| Declarative config | no                 | Bastillefile  | Nomad HCL             |
| Image export       | no                 | no            | pot export/import     |
| ZFS integration    | manual             | built-in      | built-in              |
| Scheduler          | no                 | no            | Nomad                 |
| Health checks      | no                 | no            | Consul                |
| Multi-node         | no                 | no            | yes (designed for it) |
| Setup complexity   | high               | low           | medium-high           |
| OCI compatibility  | yes                | no            | no                    |

For most single-node use cases, Bastille’s simpler model still counts for a lot.

Watch Out

  1. Bastille ZFS destroy can leave orphaned datasets. If bastille destroy fails with “pool or dataset is busy”, you’ll need to manually zfs destroy -r the orphaned datasets. Stop the jail first, wait a moment, then destroy.

  2. Pot snapshots require a stopped pot. Unlike Bastille (which can snapshot live jails), Pot refuses to snapshot a running pot. Plan your snapshot workflow around maintenance windows.

  3. Pot export only works with single-type pots. The default multi type shares the base via nullfs and can’t be packed into an image. Create with -t single if you intend to export.

  4. Nomad rc.d script doesn’t wait for readiness. service nomad start returns before Nomad is ready to accept jobs. Either wait 10 seconds or check with nomad server members.

  5. nginx must run in foreground under Nomad. The pot driver monitors the process. Use command = "/usr/local/sbin/nginx" with args = ["-g", "'daemon off;'"]. If the process daemonizes, Nomad marks the task as dead.

  6. Consul DNS needs PF rules. Consul binds DNS on port 8600 but PF may block UDP on loopback. The API (port 8500) works without extra config.

What’s next

The Bastille deep dive covers the full multi-service setup: two VNET jails on a bridge, Bastillefiles with CP directives, ZFS snapshots, live clone, and all the networking that the quick test above didn’t need. Pot + Nomad + Consul gets a deep dive on building images, writing job files, and wiring up service discovery end-to-end.



Antenore Gatta
