Rebuilding the NAS, Part I

Posted by Rafael Fonseca

If you follow this blog (it's ok if you don't), you may be aware that a couple of years back I decided to build my own NAS. I won't go into the details of what drove me to build it myself (you can read more about it here), but I will tell you what motivated me to rebuild it recently.

After my NAS was built, I had roughly 1.5TB of storage space available, using a mixture of 500GB and 1TB disks arranged in a RAID10 array. That was fast and reliable, but costly to upgrade. A few months later, I changed my mind and reshaped the array as a RAID5, which gave me roughly 2TB of disk space. Over time, I upgraded all drives to 1TB, which, in a RAID5 configuration, left me with a little over 2.3TB available.

Needless to say, with internet data caps growing constantly, I found myself constrained by disk space once again. But with HDD prices still higher than they were before the Thailand floods, replacing disks was not an option.

Enter ZFS.

UPDATED 16/07/2012: After pretty much destroying my USB stick on the weekend, I'm rebuilding my NAS yet again, and will be seeing if I can recover my ZFS pool from the new system. I have opted to install the whole system to USB (yes, including logs) for now, and will move busy folders to the ZFS array once it's all back to normal.

I'll be deleting the existing RAID-Z array and replacing it with either: a) a RAID-Z2 config using all 6 drives; or b) a pool of 3 x 2-disk mirrors.

UPDATED 13/07/2012: Added note about RAID-Z drive-adding limitations.

ZF-what?

My original NAS build ran Ubuntu Server 10.04 LTS, which I use on a daily basis at work and am quite happy with. Being a popular Linux distribution, it has a great support community as well as plenty of software packages available. It was a no-brainer.

What it didn't have, though, was support for ZFS. Since a full explanation of ZFS is beyond the scope of this post, I'll give you the short version: ZFS is a combined filesystem and logical volume manager created by Sun Microsystems (heard of them?) for the Solaris OS. It eventually got ported to FreeBSD (and even Ubuntu, although not as stable). Great, right? But what does this mean, exactly?

In essence, you add a bunch of physical HDDs to your system, tell ZFS to combine them into one big container for your files, and it takes care of managing the disks and the integrity and redundancy of your data for you. You see, ZFS was designed from the ground up with transparent data integrity in mind: it will do its best to keep your data safe at all times, and if it can't, it will let you know.
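
As a quick, purely illustrative example (hypothetical pool and device names, not the layout I build below), combining two disks into a mirrored pool and carving a dataset out of it is as simple as:

zpool create tank mirror da0 da1    # one pool made of two mirrored disks
zfs create tank/media               # a filesystem ("dataset") inside that pool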

But, as I mentioned before, my NAS was running Ubuntu, which doesn't have a good enough (or up-to-date) implementation of ZFS. Seeing as Ubuntu 12.04 was out, I thought it was about time I rebuilt the NAS with just the services I really need and shaved the unnecessary fat off the system. So why not take the time to install FreeBSD instead, which is known for its stability and set-it-and-forget-it approach?

The Result (a teaser)

Just so you know what you'll end up with after following the instructions below, here's the current status of my NAS:

  • FreeBSD 9.0 installed, booting off a USB stick and running the whole system off a ZFS pool, making it FAST
  • SABnzbd+, Sick-Beard and CouchPotato installed and doing all my broadcatching, just as before (see here for more)
  • AFP and Samba installed, serving files to both Windows and Mac devices
  • A custom script that emails me the NAS' current IP address on boot, useful when taking the box to a friend's place or a LAN party (a rough sketch follows this list)
  • Avahi daemon advertising AFP and Samba services on the local network for easier discovery (think Apple's Bonjour)
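
For the curious, that IP-notification script is little more than a few lines of shell, dropped into something like /etc/rc.local. Here's a rough sketch of the idea, assuming sendmail is working and the NIC is vge0 (swap in your own interface and email address):

#!/bin/sh
# Rough sketch: grab the NAS' current IPv4 address and email it out.
# Assumes sendmail is enabled (see rc.conf later on) and the NIC is vge0.
IP=$(ifconfig vge0 | awk '/inet /{print $2}')
echo "The NAS is up and reachable at ${IP}" | mail -s "NAS IP address" you@example.com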

Mumbo-jumbo

Before we dig into details, here's what you need to know:

  • All the shell commands were run in Terminal on a Mac. I don't know/care what the Windows equivalents are.
  • During the process, I made my system (the NAS) unbootable more than once. It could happen to you, so make sure you have a backup safe ELSEWHERE!
  • I'm not responsible. Even my mom says so. No reason for you to think otherwise, mmmkay? 😉

Down and Dirty

So with the aid of a spare FreeNAS box I had lying around in the office, I transferred all my media off the NAS and started the process of installing FreeBSD. The first executive decision was to unplug the old 60GB laptop HDD that was used as a boot drive and boot from a USB stick instead. Not only would I save some power, it would also be one less drive generating noise and heat in the enclosure. Take that, Greenpeace!

Although there are plenty of tutorials on the web for putting the FreeBSD root on ZFS, I have not found a comprehensive one for having root-on-ZFS with the boot code on an external device. Having the boot code on an external device guarantees that the machine will still be bootable even in the unlikely event that your ZFS pool is destroyed.

Ready for the terminal hackery?

For this process, I picked two empty USB sticks. One for the install media (my NAS does not have an optical drive) and one to write the boot code to. I downloaded the FreeBSD 9.0 memstick image from ftp.freebsd.org for a 64-bit processor, and wrote it to the install media USB stick with:

dd if=~/Downloads/FreeBSD-9.0-RELEASE-amd64-memstick.img of=/dev/disk2 bs=64k

Please note that /dev/disk2 was the path to the install media USB stick on my system, and it will most likely be different on yours. To find out, open a terminal window and type:

diskutil list

Which in turn would give you a listing similar to this:

/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *500.1 GB   disk0
   1:                        EFI                         209.7 MB   disk0s1
   2:          Apple_CoreStorage                         398.3 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
   4:       Microsoft Basic Data Untitled                100.9 GB   disk0s4
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                  Apple_HFS Macintosh HD           *398.0 GB   disk1
/dev/disk2
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   *4.0 GB     disk2

That last one was my 4GB USB stick which I used to boot the FreeBSD installation from.

Once the image is written to disk, eject it (diskutil eject /dev/disk2), plug it into the NAS and boot off it (your NAS may need a key press or settings changed to boot from USB sticks). Select Shell at the dialog prompt after the initial boot. You'll be dropped to a root shell.

Take a dive

At the root shell, it's time to create your ZFS pool. First, let's set up 4K block alignment. For a complete technical breakdown of what 4K sectors are, check this primer from Seagate. In short: it's good.

The ada* devices you see here are my 1TB disks (ada0 to ada3). Depending on how many disks you have, you may have to go with a different RAID arrangement. RAID-Z requires at least 3 drives.

IMPORTANT: You cannot add drives to an existing RAID-Z. You can only replace existing drives with bigger ones. So when planning your array, ensure you fill all the available slots in your system with drives, even if they are not the ones you plan on using later on. This got me after completing this guide, since I switched to a bigger case and got 2 spare HDDs that I cannot add to pool0 without rebuilding the pool.
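
If you do run out of space down the line, the escape hatch (short of destroying and re-creating the pool) is to replace each drive with a bigger one and let ZFS resilver. As a rough sketch, with made-up device names:

zpool replace pool0 ada1 ada4    # swap an old disk for the new, larger one
zpool status pool0               # keep an eye on the resilver progress
zpool online -e pool0 ada4       # grow the pool once every disk has been replaced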

To see which devices you have available, type:

ls /dev/ada*

Let's prepare ZFS for 4K sectors:

gnop create -S 4096 ada0

Create a folder to store the ZFS config during the install:

mkdir /boot/zfs

And create the actual ZFS pool:

zpool create -O mountpoint=none pool0 raidz ada0.nop ada1 ada2 ada3

Note how the first device has .nop appended to its name. That .nop device, created by gnop above, reports 4K sectors, so zpool create aligns the whole array for 4K sectors. In my example, pool0 is the name of the ZFS pool. We'll create datasets (ZFS's take on partitions) on it, so they'll be named pool0/root and pool0/media, among others.
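
To sanity-check the result, zpool status shows the pool layout, and zdb should report an ashift of 12 for the 4K-aligned array (this assumes the pool's cache file is in the default /boot/zfs location, as it is here):

zpool status pool0
zdb -C pool0 | grep ashift    # should print ashift: 12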

Now let's create the root partition where all system files will reside:

zfs create -p pool0/root

Tell ZFS not to mount the partition automatically, or it will screw things up for the installer shell:

zfs set canmount=noauto pool0/root
zfs set mountpoint=/ pool0/root

And mount this partition on a temporary path so we can copy the system files to it:

mount -t zfs pool0/root /mnt

All your root are belong to us

Time to decompress the files from the install media onto the root of our new system:

cd /mnt
tar Jvxpf /usr/freebsd-dist/base.txz
tar Jvxpf /usr/freebsd-dist/kernel.txz
tar Jvxpf /usr/freebsd-dist/ports.txz
tar Jvxpf /usr/freebsd-dist/doc.txz

The last two lines are optional. However, you'll need to get the ports from somewhere, and it's always good to have documentation available for the commands you don't know by heart. :)
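
If you decide to skip ports.txz to save a few minutes, you can always grab a fresh ports tree later, once the new system is up and on the network:

portsnap fetch extract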

Create a new file under /mnt/boot called loader.conf, using the dreaded vi editor:

vi /mnt/boot/loader.conf

Add the following lines, changing the last one to match the root dataset you created on your ZFS pool:

padlock_load="YES"
zfs_load="YES"
vm.kmem_size_max="512M"
vm.kmem_size="512M"
vfs.root.mountfrom="zfs:pool0/root"

Press ESC, then :wq to save and quit the editor.

Create a blank fstab so that the bootloader doesn't complain about it:

touch /mnt/etc/fstab

Set the boot filesystem and copy the ZFS pool configuration to the new root:

zpool set bootfs=pool0/root pool0
cp /boot/zfs/zpool.cache /mnt/boot/zfs/

Put the following lines into /mnt/etc/rc.conf:

hostname="nas.lan"
ifconfig_DEFAULT="DHCP"

keymap="us.iso.kbd"
keyrate="fast"

sshd_enable="YES"
sendmail_enable="YES"
zfs_enable="YES"

Change nas.lan to your NAS' hostname. If you want to set a static IP address instead, replace the ifconfig_DEFAULT line with:

ipv4_addrs_vge0="192.168.0.4/24"
defaultrouter="192.168.0.1"

Make the appropriate changes to match your local network configuration, and change vge0 to the adapter that shows up when you run ifconfig.
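
If the full ifconfig output is too noisy, you can list just the interface names instead:

ifconfig -l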

There's the door, now kick it

Time to plug that second USB stick into the NAS so the boot code can be written to it. When you plug it in, the console will display some diagnostic messages. Pay attention to the device name (da1 or similar), as that's what we'll use in the next few commands.
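
If the messages scroll by too fast, the tail of the kernel message buffer will show which da device the stick was assigned:

dmesg | tail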

Let's partition the disk and write the bootcode to it:

gpart create -s GPT da1
gpart add -t freebsd-boot -b 64k -s 64k da1
gpart add -t freebsd-ufs da1
gpart bootcode -b /mnt/boot/pmbr -p /mnt/boot/gptboot -i 1 da1
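
Before putting a filesystem on it, it doesn't hurt to double-check the partition layout:

gpart show da1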

Create and mount the filesystem:

newfs -L usbboot /dev/da1p2
mkdir /mnt/usbboot
mount /dev/ufs/usbboot /mnt/usbboot

And finally copy the boot config over:

cp -Rpv /mnt/boot /mnt/usbboot

Brace yourselves, boot is coming

With the newly-formatted USB stick ready to go, it's time to turn the NAS off, remove the install media and let the NAS boot off the new USB stick. With luck, you'll get a nice login prompt after all is done.

By default, the root user won't have a password set. Let's fix that:

passwd

Set up your local timezone (yours may differ from mine):

cp /usr/share/zoneinfo/Pacific/Auckland /etc/localtime
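
If you'd rather pick from a menu than hunt through /usr/share/zoneinfo, the tzsetup utility does the same thing interactively:

tzsetup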

Create sendmail aliases database:

newaliases

Create a non-root user:

adduser
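
One thing worth doing here: since sshd won't let root log in by default, add this user to the wheel group when adduser asks about additional groups, so you can su to root after logging in remotely. If you forget, something along these lines fixes it later:

pw groupmod wheel -m yourusername    # replace yourusername with the account you just created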

And voilà! A new FreeBSD install ready to go.

Aftermath

In my scenario, ZFS was used to combine all four 1TB HDDs into a single volume, on top of which I store all my files. By arranging the disks into a RAID-Z array, I not only added redundancy to the NAS but also increased capacity by 400GB compared to my old RAID5 arrangement. How's that for a surprise?

Wait a minute! Where's the rest?

Since a NAS without any sharing or downloading software is useless, we need to add apps to make this a truly useful box. But to avoid making this long article TOO LONG, it will be split into two parts.

Stay tuned for part 2, where we'll set up SABnzbd+, Sick-Beard, CouchPotato, AFP/Netatalk, Samba and some extras.
