
Running FreeBSD 15.0 on a Headless Linux Host: Cloud-Init, nftables, and 7 Gotchas


This is a companion post to the FreeBSD jail orchestration series. Before you can build anything on FreeBSD, you need a FreeBSD box. If your only server is a headless Linux machine accessed over SSH, this is how you get there.

TL;DR

FreeBSD 15.0 provides pre-built cloud images with ZFS and cloud-init support. The BASIC-CLOUDINIT-zfs variant, combined with a NoCloud seed ISO, gives you SSH access on first boot: no VGA, no installer, no graphical console needed. The real fight is not FreeBSD: it’s your Linux host’s firewall. If you run nftables with policy drop (and especially if Docker is also installed), you’ll need to punch holes for the libvirt bridge or the VM will boot into a network black hole.

Total setup time once you know the steps: about 15 minutes plus download time.

The Wrong Image Will Waste Your Afternoon

FreeBSD 15.0-RELEASE ships four qcow2 VM images. They look similar, but they are NOT interchangeable:

Image                               Filesystem  Cloud-init  Headless-friendly
amd64-ufs.qcow2.xz                  UFS         No          No
amd64-zfs.qcow2.xz                  ZFS         No          No
amd64-BASIC-CLOUDINIT-ufs.qcow2.xz  UFS         nuageinit   Yes
amd64-BASIC-CLOUDINIT-zfs.qcow2.xz  ZFS         nuageinit   Yes

The non-CLOUDINIT images ship with no root password, no SSH keys, no DHCP client, and no serial console. If you boot one on a headless host, your VM is running but you have zero way to reach it. I learned this the hard way.

If you need ZFS (and you do, if you’re planning to work with jails or bhyve), pick BASIC-CLOUDINIT-zfs. You cannot convert UFS to ZFS in-place after the fact.

Download from the FreeBSD VM images directory:

wget -O ~/Downloads/FreeBSD-15.0-RELEASE-amd64-BASIC-CLOUDINIT-zfs.qcow2.xz \
  "https://download.freebsd.org/releases/VM-IMAGES/15.0-RELEASE/amd64/Latest/FreeBSD-15.0-RELEASE-amd64-BASIC-CLOUDINIT-zfs.qcow2.xz"

626 MB compressed, 2.7 GB decompressed.
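
Before decompressing, it's worth verifying the download. FreeBSD publishes a CHECKSUM.SHA256 file alongside the images, but in BSD format (`SHA256 (file) = hash`), which GNU `sha256sum -c` can't read directly. A small helper of my own (not part of the original steps) converts it:

```shell
# Hypothetical helper, not from the original setup: rewrite FreeBSD's
# BSD-style checksum lines ("SHA256 (file) = hash") into the
# "hash  file" format that GNU coreutils' sha256sum -c expects.
bsd_to_coreutils() {
  sed -E 's/^SHA256 \((.+)\) = ([0-9a-f]{64})$/\2  \1/'
}

# Intended use, after fetching CHECKSUM.SHA256 from the same VM-IMAGES directory:
#   grep BASIC-CLOUDINIT-zfs CHECKSUM.SHA256 | bsd_to_coreutils | sha256sum -c -

# Demo on a sample line (dummy hash):
echo 'SHA256 (image.qcow2.xz) = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef' \
  | bsd_to_coreutils
# → 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef  image.qcow2.xz
```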

What You Need on the Linux Host

I’m running Manjaro (Arch-based), kernel 6.12. The packages:

Package          What it does
qemu-system-x86  The hypervisor
libvirt          VM lifecycle management
virt-install     VM creation from the command line
edk2-ovmf        UEFI firmware (FreeBSD cloud images are UEFI-only)
dnsmasq          DHCP and DNS for the libvirt NAT network
cdrtools         mkisofs for building the cloud-init ISO

On Arch/Manjaro:

sudo pacman -S qemu-full libvirt virt-install dnsmasq edk2-ovmf cdrtools
sudo systemctl enable --now libvirtd

You also need an SSH key. If you don’t have one:

ssh-keygen -t ed25519

Cloud-Init: nuageinit, Not the Python One

FreeBSD doesn’t use the Python cloud-init you know from Ubuntu or RHEL. It has nuageinit: a native Lua implementation that reads a CD-ROM labeled cidata (the NoCloud datasource). It supports the basics: hostname, users, SSH keys, write_files, runcmd, and packages.

Create two files in /tmp/cidata/:

meta-data:

instance-id: freebsd-oci
local-hostname: freebsd-oci

user-data:

#cloud-config
hostname: freebsd-oci
fqdn: freebsd-oci.local

ssh_pwauth: true

users:
  - name: freebsd
    shell: /bin/sh
    groups: wheel
    sudo: ALL=(ALL) NOPASSWD:ALL
    lock_passwd: false
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... your-key-here

packages:
  - sudo

network:
  ethernets:
    vtnet0:
      dhcp4: true

write_files:
  - path: /boot/loader.conf.d/serial.conf
    content: |
      boot_multicons="YES"
      boot_serial="YES"
      comconsole_speed="115200"
      console="comconsole,vidconsole"
  - path: /etc/rc.conf.d/sshd
    content: |
      sshd_enable="YES"

runcmd:
  - echo '-S115200 -Dh' > /boot.config
  - service sshd enable
  - service sshd start

A few notes on the user-data:

  • Serial console goes in /boot/loader.conf.d/serial.conf, not the main loader.conf. Cleaner, and you won’t accidentally overwrite existing settings.
  • Update: the network: block handles DHCP natively via nuageinit. My original version used sysrc ifconfig_vtnet0="DHCP" in runcmd as a workaround because I didn’t know about the network: directive. Turns out nuageinit’s code (/usr/libexec/nuageinit, line 403) checks for dhcp4: true and writes ifconfig_vtnet0="DHCP" to the network config. The man page even has an example with network: ethernets:. Thanks again to the r/freebsd thread for pushing me to look harder.
  • Put your SSH key on both the freebsd user and root (the example above only shows the freebsd user; add a second users: entry with name: root for the latter). Belt and suspenders.

Build the ISO:

mkisofs -output /var/lib/libvirt/images/freebsd-cidata.iso \
  -volid cidata -joliet -rock \
  /tmp/cidata/user-data /tmp/cidata/meta-data

Prepare the Disk and Create the VM

# Decompress (keep the original)
xz -dk ~/Downloads/FreeBSD-15.0-RELEASE-amd64-BASIC-CLOUDINIT-zfs.qcow2.xz

# Copy to libvirt's image directory and resize
sudo cp ~/Downloads/FreeBSD-15.0-RELEASE-amd64-BASIC-CLOUDINIT-zfs.qcow2 \
  /var/lib/libvirt/images/freebsd-oci.qcow2
sudo qemu-img resize /var/lib/libvirt/images/freebsd-oci.qcow2 30G

The qcow2 is thin-provisioned: 30 GB virtual, 2.5 GB actual on disk.

Create the VM:

sudo virt-install \
  --name freebsd-oci \
  --memory 4096 \
  --vcpus 2 \
  --os-variant freebsd15.0 \
  --import \
  --disk path=/var/lib/libvirt/images/freebsd-oci.qcow2,format=qcow2 \
  --disk path=/var/lib/libvirt/images/freebsd-cidata.iso,device=cdrom \
  --network network=default \
  --graphics vnc,listen=127.0.0.1,port=5900 \
  --serial pty \
  --boot uefi \
  --noautoconsole

The flags that matter:

  • --os-variant freebsd15.0: on up-to-date Arch, osinfo-db already includes FreeBSD 15.0. Many guides suggest freebsd14.0 as a fallback: check with osinfo-query os | grep freebsd before defaulting to that.
  • --boot uefi: FreeBSD cloud images require UEFI. libvirt automatically uses OVMF from /usr/share/edk2/x64/.
  • --graphics vnc,listen=127.0.0.1: VNC bound to localhost only. You can tunnel it with ssh -L 5900:127.0.0.1:5900 if you need visual access. SPICE does NOT work with FreeBSD on QEMU.
  • --noautoconsole: critical for headless operation. Without this, virt-install tries to open an interactive console and hangs.
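
Once the VM is defined, you can poll for its lease instead of re-running virsh net-dhcp-leases by hand. A sketch of mine (the parsing assumes virsh's usual table layout, with the IP printed in column 5 as addr/prefix):

```shell
# Pull the bare IPv4 address for a given hostname out of
# `virsh net-dhcp-leases` output (column 5 is "IP address", as addr/prefix).
lease_ip() {
  awk -v vm="$1" '$0 ~ vm { print $5 }' | cut -d/ -f1
}

# Live usage (assumes the VM and network names used above):
#   until ip=$(sudo virsh net-dhcp-leases default | lease_ip freebsd-oci); \
#     [ -n "$ip" ]; do sleep 5; done; echo "VM is up at $ip"

# Demo on a captured-style leases line (sample data, not real output):
echo '2025-11-30 10:15:01  52:54:00:aa:bb:cc  ipv4  192.168.122.149/24  freebsd-oci  -' \
  | lease_ip freebsd-oci
# → 192.168.122.149
```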

The Firewall Problem (Where the Real Debugging Starts)

The VM booted. I waited. And waited. No DHCP lease. virsh net-dhcp-leases default returned nothing for over 3 minutes.

The VM was running (I could confirm via virsh qemu-monitor-command freebsd-oci "info status" --hmp), the network interface was attached, dnsmasq was listening on virbr0. But no DHCP traffic was getting through.

Problem 1: nftables Blocking DHCP on the Bridge

My host has a strict nftables firewall:

chain input {
    type filter hook input priority filter; policy drop;
    ct state established,related accept
    iif lo accept
    ip saddr 192.168.1.0/24 accept
    # ... blocklists, GeoIP, etc.
}

The VM sends a DHCPDISCOVER from 0.0.0.0 on virbr0. That’s not the loopback interface, and 0.0.0.0 is not in 192.168.1.0/24. nftables drops it. dnsmasq never sees the request.

libvirt creates its own nftables table (ip libvirt_network) with proper rules, but it can’t touch your custom inet filter table. The two tables are independent: both are evaluated, and if either one drops the packet, it’s gone.

Fix: accept all traffic on the bridge interface.

# Runtime (immediate)
sudo nft add rule inet filter input position 10 iif "virbr0" accept

# Persistent: add to /etc/nftables.conf, after "iif lo accept"
iif "virbr0" accept

After adding the rule and rebooting the VM: DHCP lease within 10 seconds.

Problem 2: No Internet from the VM

SSH worked. The VM had an IP. But ping 8.8.8.8 showed 100% packet loss. DNS resolution worked (dnsmasq handles that locally), but routed traffic couldn’t leave the host.

Two firewalls were blocking forward traffic:

nftables forward chain:

chain forward {
    type filter hook forward priority filter; policy drop;
    # zero rules
}

iptables-legacy (Docker):

Chain FORWARD (policy DROP)
    DOCKER-USER -> DOCKER-FORWARD -> DROP

Docker installs iptables-legacy rules alongside your nftables. Both have a FORWARD chain, both default to DROP, and a packet must survive BOTH to be forwarded. This is the single most confusing networking setup on modern Linux.

Fix for nftables:

sudo nft add rule inet filter forward iif "virbr0" accept
sudo nft add rule inet filter forward oif "virbr0" ct state established,related accept
sudo nft add rule inet filter forward oif "virbr0" ip daddr 192.168.122.0/24 accept

Fix for iptables-legacy (Docker):

sudo iptables-legacy -I DOCKER-USER -i virbr0 -j ACCEPT
sudo iptables-legacy -I DOCKER-USER -o virbr0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

After both: full internet connectivity from the VM.

Making the Rules Persistent

The nftables rules go in /etc/nftables.conf. Add iif "virbr0" accept after the loopback rule in the input chain, and add the three forward rules inside the forward chain.
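
Pulling the pieces together, the relevant chains in /etc/nftables.conf end up looking roughly like this (a sketch assembled from the rules above; your blocklists and other rules stay where they were):

```nft
table inet filter {
    chain input {
        type filter hook input priority filter; policy drop;
        ct state established,related accept
        iif lo accept
        iif "virbr0" accept    # libvirt bridge: lets the VM's DHCP/DNS reach dnsmasq
        ip saddr 192.168.1.0/24 accept
        # ... blocklists, GeoIP, etc.
    }
    chain forward {
        type filter hook forward priority filter; policy drop;
        iif "virbr0" accept
        oif "virbr0" ct state established,related accept
        oif "virbr0" ip daddr 192.168.122.0/24 accept
    }
}
```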

The iptables-legacy rules (for Docker) require either a systemd service or removal of Docker entirely. If you don’t need Docker on this host, uninstalling it simplifies your firewall considerably.
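
If you keep Docker, a minimal oneshot unit can reapply the two rules after Docker has (re)created its chains on boot. A sketch; the unit name is my own invention:

```ini
# /etc/systemd/system/virbr0-docker-allow.service (hypothetical name):
# re-inserts the iptables-legacy rules after docker.service has set up
# its DOCKER-USER chain, so the fix survives reboots.
[Unit]
Description=Allow libvirt bridge traffic through Docker's DOCKER-USER chain
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/iptables-legacy -I DOCKER-USER -i virbr0 -j ACCEPT
ExecStart=/usr/bin/iptables-legacy -I DOCKER-USER -o virbr0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now virbr0-docker-allow.service`.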

Post-Boot: sudo and the packages Directive

Update: my original version of this post claimed nuageinit doesn’t install packages. Wrong. Reddit user EinalButtocks pointed out that the packages: directive works fine. I had a typo in my user-data and jumped to the wrong conclusion. The user-data example above now includes packages: [sudo], which installs sudo on first boot automatically.

If you’re working from an older version of this guide without the packages: directive, you can bootstrap manually:

ssh freebsd@192.168.122.149
su -l root -c 'pkg install -y sudo'
su -l root -c 'echo "%wheel ALL=(ALL:ALL) NOPASSWD: ALL" > /usr/local/etc/sudoers.d/wheel'

Either way, the sudo: directive in the user config only creates sudoers entries: FreeBSD base doesn’t ship sudo, so you need to install the binary separately via packages: or pkg.

Final State

FreeBSD 15.0-RELEASE amd64 (GENERIC)
Hostname:  freebsd-oci.local
IP:        192.168.122.149 (DHCP via libvirt NAT)
ZFS pool:  zroot, 28.5 GB, ONLINE, healthy
User:      freebsd (wheel, sudo NOPASSWD)
SSH:       key-based auth
Internet:  full connectivity
Serial:    configured (works after first reboot)

SSH in, and you’re on FreeBSD:

$ ssh freebsd@192.168.122.149
$ uname -a
FreeBSD freebsd-oci.local 15.0-RELEASE FreeBSD 15.0-RELEASE releng/15.0-n280995-7aedc8de6446 GENERIC amd64
$ zpool status -x
all pools are healthy

Watch Out

Seven things that will bite you if you don’t see them coming:

  1. Wrong image, no way in. The non-CLOUDINIT images (without BASIC-CLOUDINIT in the name) have no root password, no SSH keys, no DHCP, no serial console. On a headless host, you’re locked out. Use the CLOUDINIT variant.

  2. virsh needs sudo for network commands. On Arch/Manjaro, virsh net-list --all returns empty without sudo. The networks exist but aren’t visible to your user. Either use sudo consistently or set up polkit rules for the libvirt group.

  3. UFS to ZFS is a one-way street. You cannot convert an existing UFS root to ZFS in-place. Choose the right image from the start.

  4. Serial console needs a reboot. Cloud-init writes the serial config on first boot, but FreeBSD’s bootloader reads /boot/loader.conf at boot time: the settings only apply on the NEXT boot. Your first boot has no serial output. Access via SSH or VNC tunnel.

  5. nftables policy drop blocks libvirt DHCP. libvirt creates its own nftables table, but your custom inet filter table is evaluated independently. If your input chain drops traffic from virbr0, dnsmasq never sees the VM’s DHCP requests. Add iif "virbr0" accept to your input chain.

  6. Docker and libvirt: double firewall. Docker uses iptables-legacy with FORWARD policy DROP. nftables also has a forward chain. Your VM traffic must survive both. This is the most confusing part: a packet traverses nftables, then iptables-legacy, and if either drops it, it’s gone. Explicitly allow virbr0 in both.

  7. FreeBSD base doesn’t include sudo. The cloud-init sudo: directive only creates sudoers entries, it doesn’t install the binary. Add packages: [sudo] to your user-data (nuageinit supports it), or install manually via su -l root -c 'pkg install -y sudo'. Thanks to EinalButtocks on Reddit for the correction on packages: support.

What’s Next

This VM is the lab for the FreeBSD jail orchestration series. Next up: installing ocijail and Podman, pulling FreeBSD OCI images, and running our first jail-based container. The ZFS pool is ready, the network works, and we have a clean FreeBSD 15.0 to build on.


Antenore Gatta
