Replicating ZFS to FreeNAS

There are plenty of articles that discuss FreeNAS-to-FreeNAS replication; it's even covered in the FreeNAS documentation (thank you, iX Systems) for both automatic and manual setups.

What's not covered is replicating from something that isn't FreeNAS to a FreeNAS. I wanted something simple that would manage the snapshots by cleaning up after itself. I found Zap in the ports tree. I liked that it was a shell script, which makes it really simple to run on FreeNAS; I don't need to do any hacking.


Start by installing the port:

pkg install sysutils/zap

The setup is pretty straightforward. Set the zap:snap property to 'on' for the filesystems you want Zap to manage, and to 'off' for those you want it to ignore. I don't need to back up /var/crash, /var/tmp, or /var/mail on this system, but I do want to back up /usr, /var, and /iocage.

# zfs set zap:snap=on zroot/usr zroot/var zroot/iocage
# zfs set zap:snap=off zroot/var/crash zroot/var/tmp zroot/var/mail
# zap snap -v 2d

The last command actually takes the snapshots. You can view them with:

[louisk@jailer louisk 103 ]$ zfs list -t snapshot
NAME                                                                          USED  AVAIL  REFER  MOUNTPOINT
zroot@ZAP_jailer_2020-07-18T00:00:00p0000--2d                                    0      -    88K  -
zroot/ROOT@ZAP_jailer_2020-07-18T00:00:00p0000--2d                               0      -    88K  -
zroot/ROOT/default@ZAP_jailer_2020-07-18T00:00:00p0000--2d                   1.20M      -  1.62G  -
zroot/tmp@ZAP_jailer_2020-07-18T00:00:00p0000--2d                              64K      -   136K  -
zroot/usr@ZAP_jailer_2020-07-18T00:00:00p0000--2d                                0      -    88K  -
zroot/usr/home@ZAP_jailer_2020-07-18T00:00:00p0000--2d                           0      -  1.17G  -
zroot/usr/ports@ZAP_jailer_2020-07-18T00:00:00p0000--2d                          0      -   724M  -
zroot/usr/src@ZAP_jailer_2020-07-18T00:00:00p0000--2d                            0      -   704M  -
zroot/var@ZAP_jailer_2020-07-18T00:00:00p0000--2d                                0      -    88K  -
zroot/var/audit@ZAP_jailer_2020-07-18T00:00:00p0000--2d                          0      -    88K  -
zroot/var/crash@ZAP_jailer_2020-07-18T00:00:00p0000--2d                          0      -    88K  -
zroot/var/log@ZAP_jailer_2020-07-18T00:00:00p0000--2d                          292K      -  1020K  -
zroot/var/mail@ZAP_jailer_2020-07-18T00:00:00p0000--2d                          84K      -   288K  -
zroot/var/tmp@ZAP_jailer_2020-07-18T00:00:00p0000--2d                            0      -    88K  -
[louisk@jailer louisk 104 ]$

You can see the date/timestamp when each snapshot was taken and, at the end of the name, how long it should be kept. You should see snapshots for each of the filesystems where you set zap:snap=on. I picked 2 days because it's a relatively short duration, but long enough that I can leave it overnight and see that it's still doing the right thing the next day. I don't want to get too far into this without verifying that it's working as expected.
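The naming scheme can be pulled apart with plain shell parameter expansion; a quick sketch, using one snapshot name from the listing above:

```shell
# Dissect a zap snapshot name into its timestamp and TTL
snap='zroot/usr@ZAP_jailer_2020-07-18T00:00:00p0000--2d'
name=${snap#*@}             # everything after the '@'
ttl=${name##*--}            # '2d': how long zap will keep this snapshot
stamp=${name%--*}           # strip the TTL suffix
stamp=${stamp#ZAP_jailer_}  # the creation time ('p' stands in for '+')
echo "taken: $stamp  keep: $ttl"
```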

The crontab entry looks like this:

#minute	hour	mday	month	wday	user	command
# make snapshots, replicate, clean up the remote end
0	0	*	*	*	root	/usr/local/bin/zap snap 2d | logger
30	0	*	*	*	root	/usr/local/bin/zap rep -v | logger
0	6	*	*	*	root	/usr/local/bin/zap destroy | logger

The '| logger' sends the output to syslog, in case I need to see what happened. Now I check it the next day and make sure it's creating and deleting snapshots properly.
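Two quick checks I use for that, assuming FreeBSD's default syslog.conf (which sends user-level messages to /var/log/messages):

```shell
# See what zap reported on its last few runs
grep zap /var/log/messages | tail
# Count the ZAP snapshots; the number should hold steady once
# 'zap destroy' starts expiring the old ones
zfs list -H -t snapshot -o name | grep -c '@ZAP_'
```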


If you've gotten this far, it's time to set up replication.

Create a passwordless ssh key for the root user. You will need to put the public key on your FreeNAS box.
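A sketch of those two steps; the hostname 'freenas' and the key path are assumptions, and any key type will do:

```shell
# On the FreeBSD source machine, as root: generate a key with no passphrase
ssh-keygen -t ed25519 -N '' -f /root/.ssh/id_ed25519
# Append the public key to the receiving user's authorized_keys on the NAS
# (or paste it into the user's "SSH Public Key" field in the web UI)
ssh zap@freenas 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys' \
    < /root/.ssh/id_ed25519.pub
```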

Create a user that will accept the snapshots. I'm using the zap user, for consistency. Make sure you define a home directory, as you will need to add an SSH public key and it needs to be stored there.

Create a dataset in your pool where you will store your snapshots.
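On the NAS that's a single command; the pool and dataset names here (cess, zproxy1) are the ones that appear in my listings below:

```shell
# Run on the NAS: create the dataset that will receive the snapshots
zfs create cess/zproxy1
```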

Set the ownership to the user you created above, and delegate the ZFS permissions that user will need to receive snapshots:

root@freenas[~]# zfs allow zap aclinherit,aclmode,atime,canmount,compression,create,destroy,diff,exec,mount,mountpoint,readonly,receive,release,send,setuid,userprop cess/zproxy1
root@freenas[~]# zfs allow cess/jailer
---- Permissions on cess/jailer --------------------------------------
Local+Descendent permissions:
	user zap aclinherit,aclmode,atime,canmount,compression,create,destroy,diff,exec,mount,mountpoint,readonly,receive,release,send,setuid,userprop

The unprivileged zap user also needs to be allowed to mount the filesystems it receives. In the web interface, under System > Sysctls > Add Sysctl:

Variable: vfs.usermount
Value: 1
Enabled: yes
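One more piece ties the two ends together: zap has to be told where to send each dataset. As I read the zap manual, that is done with the zap:rep user property on the source datasets, set to user@host:dataset; the names below are the ones assumed elsewhere in this post:

```shell
# On the FreeBSD source machine; destination format is user@host:dataset
zfs set zap:rep='zap@freenas:cess/zproxy1' zroot/usr zroot/var zroot/iocage
```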

On the NAS, we can add a cron job under "Tasks" in the web interface to prune replicated snapshots. The command looks like this; the rest of the fields are up to you, depending on how often you want the task to run.

/mnt/cess/bin/zap destroy -v jailer > /var/log/zap.log

Looking at the NAS, we should be able to see snapshots showing up in the correct place:

root@freenas[~]# zfs list -t snapshot | grep ZAP
NAME                                                                                  USED  AVAIL  REFER  MOUNTPOINT
cess/zproxy1@ZAP_jailer_2020-07-19T00:00:00p0000--2d                                     0      -   188K  -
cess/zproxy1/ROOT@ZAP_jailer_2020-07-19T00:00:00p0000--2d                                0      -   188K  -
cess/zproxy1/ROOT/default@ZAP_jailer_2020-07-19T00:00:00p0000--2d                        0      -  1.86G  -
cess/zproxy1/tmp@ZAP_jailer_2020-07-19T00:00:00p0000--2d                                 0      -   324K  -
cess/zproxy1/usr@ZAP_jailer_2020-07-19T00:00:00p0000--2d                                 0      -   188K  -
cess/zproxy1/usr/home@ZAP_jailer_2020-07-19T00:00:00p0000--2d                            0      -  1.17G  -
cess/zproxy1/usr/ports@ZAP_jailer_2020-07-19T00:00:00p0000--2d                           0      -  1.46G  -
cess/zproxy1/usr/src@ZAP_jailer_2020-07-19T00:00:00p0000--2d                             0      -  1.12G  -
cess/zproxy1/var@ZAP_jailer_2020-07-19T00:00:00p0000--2d                                 0      -   188K  -
cess/zproxy1/var/audit@ZAP_jailer_2020-07-19T00:00:00p0000--2d                           0      -   188K  -
cess/zproxy1/var/crash@ZAP_jailer_2020-07-19T00:00:00p0000--2d                           0      -   188K  -
cess/zproxy1/var/log@ZAP_jailer_2020-07-19T00:00:00p0000--2d                             0      -  1.46M  -
cess/zproxy1/var/mail@ZAP_jailer_2020-07-19T00:00:00p0000--2d                            0      -   469K  -
cess/zproxy1/var/tmp@ZAP_jailer_2020-07-19T00:00:00p0000--2d                             0      -   188K  -
