So, I had a 500GB hard disk drive left over from an old Lenovo laptop I upgraded for my son, and I decided to put it to use in this RPI-based home lab. My first plan was to boot the RPI into FreeBSD from an external USB drive, so I bought an inexpensive enclosure and set about searching for instructions on how to do this. My ultimate goal was to have a ZFS drive booting into FreeBSD on the RPI. Unfortunately, I was unable to find instructions for booting FreeBSD from the external drive. Therefore, I settled for installing and setting up an RPI3B+ with FreeBSD on the SD card and an external ZFS disk drive for use with BastilleBSD. FreeBSD resides on the SD card, and my jails will reside on the ZFS drive.
Verify ZFS installation.
It is always dangerous to assume that you have everything you need, so we are going to start by verifying that ZFS is installed and running. As a heads up: on an out-of-the-box install of FreeBSD on the RPI, it will not be. The FreeBSD image file is based on UFS (the Unix File System), and that is what the SD card uses when it is flashed.
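You can confirm for yourself that the SD card's root filesystem is UFS before touching anything. A quick, low-risk check (the -T flag asks df to include the filesystem type):

```shell
# Show the filesystem type backing the root mount; on a stock
# RPI image this should report "ufs" for the SD card slice.
df -T /
```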
I am only using a 500GB hard drive, so I won't be making this a RAID; this will just be a single-volume installation.
When you execute the following line, you should receive a blank response in the terminal. This means that the module is not loaded. Note: I am doing this as the superuser.
# kldstat | grep zfs
However, if you do have it loaded, you will get a response like the following, indicating that the ZFS kernel module is loaded.
freebsd@RPI3B ~ % kldstat | grep zfs
4 1 0xffff000001385000 415fa8 zfs.ko
If you don’t have ZFS loaded you can load it, and ensure it is loaded next time you reboot by executing the following:
# kldload zfs
# echo 'zfs_load="YES"' >> /boot/loader.conf
You can verify that the change has been made to loader.conf by using the cat command, which prints the file's contents to the screen.
# cat /boot/loader.conf
# Configure USB OTG; see usb_template(4).
hw.usb.template=3
umodem_load="YES"
# Multiple console (serial+efi gop) enabled.
boot_multicons="YES"
boot_serial="YES"
# Disable the beastie menu and color
beastie_disable="YES"
loader_color="NO"
zfs_load="YES"
ZFS is part of the FreeBSD base system, so it should not normally need to be installed separately. If you want a newer OpenZFS than the version shipped in base, it is also available as a package (sysutils/openzfs):
# pkg install openzfs
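If you want to confirm which ZFS you are actually running, recent FreeBSD releases (13.0 and later, where base ZFS is OpenZFS) include a version subcommand. The exact version strings will of course differ on your system:

```shell
# Print the OpenZFS userland and kernel module versions.
zfs version
```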
Formatting the Drive
I am using an external hard drive connected to my FreeBSD system over USB 3. The hard drive enclosure supports UASP, which stands for USB Attached SCSI Protocol. This matters because FreeBSD will see the drive as a SCSI drive. In other cases it may be seen as a SATA drive, especially if it is an internal drive connected over SATA. This is important because, to format, mount, or add a disk to a zpool, you need to know how the system has identified the drive. In FreeBSD this is done using either the camcontrol or the gpart command.
CAM is the Common Access Method storage subsystem in FreeBSD and is used to implement drivers for storage media. The camcontrol command is the CAM control program.
To list the devices connected and recognized by your system you enter the following:
# camcontrol devlist
<WDC WD50 00LPCX-24VHAT0 1.02> at scbus0 target 0 lun 0 (da0,pass0)
The da0 in the output above is the device designation; my Western Digital hard drive is connected and recognized on this system. The da prefix indicates that this is a SCSI-attached external drive, while ATA drives start with ada.
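A quick way to see which naming scheme your system used is to list both device-node families; one of the globs will usually come up empty, and the 2>/dev/null just silences that miss. This is an optional sanity check, not a required step:

```shell
# List SCSI/USB-attached (da*) and ATA-attached (ada*) disk nodes.
ls /dev/da* /dev/ada* 2>/dev/null
```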
You can also use the gpart command to determine the drive designation. This command is used to manage partitions on storage devices. To get a list of your devices, enter gpart show. Note: I entered these commands as the superuser (su).
# gpart show
=> 63 250347457 mmcsd0 MBR (119G)
63 1985 - free - (993K)
2048 102400 1 fat32lba [active] (50M)
104448 250243072 2 freebsd (119G)
=> 0 250243072 mmcsd0s2 BSD (119G)
0 128 - free - (64K)
128 246450048 1 freebsd-ufs (118G)
246450176 3792896 2 freebsd-swap (1.8G)
=> 40 976773088 da0 GPT (466G)
40 976773088 1 freebsd-zfs (466G)
=> 40 976773088 diskid/DISK-012345678999 GPT (466G)
40 976773088 1 freebsd-zfs (466G)
#
Here we see all the partitions on each device. Note that once a partition is actively in use by a zpool, some of its aliases (such as the diskid/ entry) may no longer be listed. The device name is indeed da0.
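Before destroying anything, it is worth double-checking that da0 really is the disk you think it is. FreeBSD's diskinfo utility can report the device's size and identity; this assumes the device is da0 as above:

```shell
# -v prints verbose details: sector size, media size, and,
# where available, the disk's model and serial number.
diskinfo -v /dev/da0
```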
To format the drive, you need to start by destroying the existing partition table. You can type either da0 or /dev/da0:
# gpart destroy -F da0
Next, create a new GPT partition scheme on the drive:
# gpart create -s gpt /dev/da0
da0 created
Next, we need to add a FreeBSD ZFS partition. (There are two lineages of ZFS: Oracle's proprietary version and the open-source OpenZFS that FreeBSD uses.) To create the partition, with the label zfs0, enter the following command.
# gpart add -t freebsd-zfs -l zfs0 /dev/da0
You can confirm the partition was created by running gpart show again.
Setup a Zpool
The next step is to set up a zpool. A zpool is a collection of physical storage drives that are pooled together into a single storage entity. In this use case I am creating a single-disk pool, as described in the FreeBSD Handbook. If you need to create a RAID configuration, I refer you to that handbook, or to the ZFS Handbook.
Using the zpool command, I am creating a pool named xenodata. Here xenodata is the name of the pool you wish to create, and /dev/gpt/zfs0 is the path to the partition, where zfs0 is the label we assigned when creating the partition.
# zpool create xenodata /dev/gpt/zfs0
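One optional refinement, if your drive uses 4K physical sectors: OpenZFS lets you pin the pool's sector alignment with the ashift property at creation time. This is a sketch of an alternative create command, not something the setup above requires:

```shell
# ashift=12 forces 4K (2^12 byte) alignment; it can only be
# set when the pool is created, not changed afterwards.
zpool create -o ashift=12 xenodata /dev/gpt/zfs0
```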
To check the status of the zpool you just created, use the status subcommand. Hopefully there are no errors to address.
# zpool status
pool: xenodata
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
xenodata ONLINE 0 0 0
gpt/zfs0 ONLINE 0 0 0
errors: No known data errors
ZFS automatically mounts the pool; however, you can check that it is mounted, and what its mount point is, in FreeBSD. To do so, use zfs list.
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
xenodata 372K 450G 96K /xenodata
To check for any issues with your zpool, you can use the following command. It checks the pool's integrity and will repair any corruption it can.
# zpool scrub xenodata
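Rather than remembering to scrub by hand, you can let FreeBSD's periodic(8) framework schedule it. The variables below are defined in /etc/defaults/periodic.conf and overridden in /etc/periodic.conf; the 35-day threshold is shown only as an illustration:

```shell
# Enable the daily ZFS scrub check; it only actually scrubs a pool
# once the threshold (days since the last scrub) has passed.
sysrc -f /etc/periodic.conf daily_scrub_zfs_enable="YES"
sysrc -f /etc/periodic.conf daily_scrub_zfs_default_threshold="35"
```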
Verify Zpool Auto-Mount.
Check the mount information for your pool. Remember, xenodata is the name of my zpool; you should use the name of the pool you created.
# zfs get mountpoint,canmount xenodata
NAME PROPERTY VALUE SOURCE
xenodata mountpoint /xenodata default
xenodata canmount on default
Check that ZFS is enabled at startup. You do so by reviewing the contents of your /etc/rc.conf file, either in an editor or by using cat to view it.
# cat /etc/rc.conf
You should search through the file and verify that you have an entry that states: zfs_enable="YES"
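If the entry is missing, you don't have to edit the file by hand; FreeBSD's sysrc utility adds or updates rc.conf variables safely:

```shell
# Append (or update) zfs_enable in /etc/rc.conf idempotently.
sysrc zfs_enable="YES"
```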
The final test is to reboot. After the reboot, check zpool status and zfs list to confirm that everything loaded as before.
Additionally, you want to verify that the zpool automatically imports at boot time. One note here: the autoexpand property checked below does not control boot-time importing (importing is handled by the ZFS rc script and the pool's cache file); it controls whether the pool automatically grows if the underlying device gets larger. If it returns the value off and you want that behavior, you must enable it.
# zpool get autoexpand xenodata
NAME PROPERTY VALUE SOURCE
xenodata autoexpand off default
To enable the autoexpand property, set it to on.
# zpool set autoexpand=on xenodata
You can get the property again to verify it is set. After rebooting, run zpool status; if the pool imported correctly, you will get a proper status display as above, instead of the following.
# zpool status
no pools available
If for some reason, like me, you can't seem to get your zpool to import at boot time, then run the following commands. The first adds the import to the rc.local script, which runs after all the other boot scripts, ensuring everything else is loaded first. The second command, chmod, makes the script executable.
echo 'zpool import xenodata' >> /etc/rc.local
chmod +x /etc/rc.local
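Before resorting to rc.local, it may be worth making sure the pool is recorded in the ZFS cache file, which is what the boot-time import normally reads. A hedged alternative to try first; /etc/zfs/zpool.cache is the OpenZFS default path on recent FreeBSD, but verify it matches your system:

```shell
# Point the pool at the standard cache file so the zfs rc script
# can find and import it at boot.
zpool set cachefile=/etc/zfs/zpool.cache xenodata
```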
You can confirm it auto-imported by running zpool status after a reboot.
# zpool status
pool: xenodata
state: ONLINE
scan: scrub repaired 0B in 00:00:00 with 0 errors on Sun Jan 19 09:26:01 2025
config:
NAME STATE READ WRITE CKSUM
xenodata ONLINE 0 0 0
da0p1 ONLINE 0 0 0
errors: No known data errors
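Since the whole point of this pool is to host BastilleBSD jails, a natural last step is carving out a dataset for Bastille and pointing it at the pool. The bastille.conf variable names below come from Bastille's ZFS support; treat the exact names and path as assumptions to verify against your installed version:

```shell
# Create a dataset on the external pool to hold the jails.
zfs create xenodata/bastille

# Tell Bastille to use ZFS and which pool to manage
# (set in /usr/local/etc/bastille/bastille.conf).
sysrc -f /usr/local/etc/bastille/bastille.conf bastille_zfs_enable="YES"
sysrc -f /usr/local/etc/bastille/bastille.conf bastille_zfs_zpool="xenodata"
```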