Some notes on Nexenta/ZFS

So far I’ve been pretty satisfied with my Nexenta setup. Dedup is a great feature: I’ve rsynced all the other computers to it without a second thought about which files end up stored more than once (all the Windows and program files, multiple copies of pictures, multiple copies of Dropbox dirs, etc.). However, the following three things drove me nuts; here’s a word on how I’ve resolved each of them.
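For reference, dedup is just a dataset property; a minimal sketch, assuming the rsync copies land on a dataset like mypool/data (names are illustrative):

# turn on deduplication for the dataset that receives the rsync copies
zfs set dedup=on mypool/data

# check how much space dedup is actually saving, pool-wide
zpool get dedupratio mypool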


Smbshare vs. Samba

Yes, native ZFS smbshare is great; it even exposes snapshots as “Previous versions” to Windows boxes, and it can be managed straight from napp-it. However, smbshare won’t expose a file system’s children through the parent’s share 🙁
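For context, sharing a dataset natively is a one-liner (the dataset and share names here are just illustrative):

# share the dataset over the kernel CIFS server under the name "data"
zfs set sharesmb=name=data mypool/data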

Here’s how this works: let’s say you have 3 nested file systems:

  • mypool/data
  • mypool/data/pictures
  • mypool/data/pictures/vacation

When you share mypool/data and navigate to it, you won’t see the pictures dir. When you navigate to pictures, you won’t see the vacation dir.

It drove me crazy, and it seems it won’t be supported anytime in the near future. That’s why I disabled smbshare completely and installed plain old Samba. Because Samba isn’t ZFS-aware (it’s just an ordinary app sitting on top of the file system), it shares everything as you’d expect. Problem solved.
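For the record, the switch boiled down to something like this (paths and the share name are illustrative; how you install and start Samba depends on your Nexenta flavor):

# turn the native share off on the parent; the children inherit the setting
zfs set sharesmb=off mypool/data

and a plain share in smb.conf:

[data]
    path = /mypool/data
    read only = no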


Backing up an entire zpool

I wanted the following:

  • to back up the entire data pool to an external USB drive of the same capacity (2TB)
  • a solution that would be smart enough to recognize matching snapshots and only copy the diffs
  • it should also delete the destination snaps that no longer exist in the source pool
  • it wouldn’t hurt if it supported remote replication in case I wanted that later

Much has been written about the awesomeness of zfs send | zfs receive, but I was immediately disappointed by how much manual work is still left to do. Sure, it supports recursive send/receive and it can be used over ssh, but it only operates on snapshots: it can incrementally copy the latest delta (if you tell it exactly which two snapshots to diff), but if you prune your old snapshots to save space, it knows nothing about that. Your backup just grows indefinitely, constantly appending more data.
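To illustrate the bookkeeping involved, this is roughly what the manual workflow looks like (pool and snapshot names are illustrative):

# initial full, recursive copy of a snapshot
zfs send -R sourcezpool@snap1 | zfs receive -F backuppool

# later: you have to name both ends of the increment yourself
zfs send -R -i snap1 sourcezpool@snap2 | zfs receive -F backuppool

If snap1 has since been destroyed on either side, the incremental transfer simply fails, and nothing ever removes the snapshots you’ve pruned from the source out of backuppool.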

What I wanted was a simple 1:1 copy of my pool to an external drive. I even considered adding the drive to the mirror to create a 3-way mirrored pool; once resilvering completed, I could split the mirror and disconnect the USB drive. However, resilvering is not that smart and takes days: all the data gets copied every time, and taking a snapshot interrupts and restarts the whole process.
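For the record, that attach/split variant would have looked something like this (the existing mirror member c1t0d0 and the usb-backup pool name are made up for the example):

# attach the USB disk as a third side of the existing mirror
zpool attach sourcezpool c1t0d0 c3t0d0

# once resilvering finishes, split the new side off into its own pool
zpool split sourcezpool usb-backup c3t0d0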

Then I found the excellent zxfer. It works on Nexenta and lets you mirror an entire pool, and the procedure is pretty straightforward. First, determine the device path of your external USB drive with rmformat:

# rmformat

1. …
2. Logical Node: /dev/rdsk/c3t0d0p0
   Physical Node: /pci@0,0/pci103c,1609@12,2/storage@3/disk@0,0
   Connected Device: WD 20EADS External 1.75
   Device Type: Removable
   Bus: USB
   Size: 1907.7 GB
   Label: <Unknown>
   Access permissions: Medium is not write protected.

Then create your USB drive backup pool:

zpool create Mybook-USB-backupzpool c3t0d0

Finally, recursively back up your entire data zpool (here we set the target to be compressed with gzip-5 and deduped with sha256,verify):

zxfer -dFkPv -o compression=gzip-5,dedup=verify -R sourcezpool Mybook-USB-backupzpool

On subsequent runs it identifies the last common snapshot and copies only the diffs; the -d switch deletes pruned snapshots from the target pool. For more, read the zxfer man page.
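That makes it easy to automate; a sketch of a nightly root crontab entry (the path to zxfer is an assumption; point it at wherever the script actually lives on your box):

# mirror the pool to the USB drive every night at 03:00
0 3 * * * /usr/sbin/zxfer -dFkPv -o compression=gzip-5,dedup=verify -R sourcezpool Mybook-USB-backupzpool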


VirtualBox

This has been pretty disappointing; it’s a pain to set up and it performs badly (compared to VMware on similar hardware, that is). It burns around 15% of the CPU running an idle, freshly installed Ubuntu box. Command-line configuration is a pain, and virtualizing (P2V) certain Windows boxes spits out errors that, according to Google, no one has ever seen before. The same image Just Works™ under VMware.
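To give a flavor of that command-line setup, creating and starting a headless VM goes roughly like this (VM name, memory, disk size/path and the bridged NIC are all illustrative):

# create and register the VM
VBoxManage createvm --name ubuntu --ostype Ubuntu_64 --register
VBoxManage modifyvm ubuntu --memory 1024 --nic1 bridged --bridgeadapter1 e1000g0

# create a disk image and attach it to a SATA controller
VBoxManage createhd --filename /mypool/vm/ubuntu.vdi --size 20000
VBoxManage storagectl ubuntu --name SATA --add sata
VBoxManage storageattach ubuntu --storagectl SATA --port 0 --device 0 --type hdd --medium /mypool/vm/ubuntu.vdi

# boot it without a GUI
VBoxHeadless --startvm ubuntu &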

Nonetheless, it’ll have to do for now. For more info on how to set it up, consult the following: