systemd PrivateDevices=on breaks ZFS snapshot post-xfer exec script
Just a quick note, as I recently spent a lot of time searching for the root cause of this issue. Since I found nothing, I'd like to document it here.
We are migrating our ZFS storage servers, which are mainly used with rsync, from Solaris to Linux (Ubuntu 22.04 Server). On Solaris we have a script which (among other things like reporting, alerting...) automatically takes a ZFS snapshot after a transfer has finished, keeping a maximum count and deleting the oldest ones (rotating snapshots).
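For illustration, a minimal sketch of such a rotating-snapshot routine could look like the following. This is not the actual script; the dataset name, snapshot prefix, and keep count are placeholders, and the expiry selection relies on GNU head's negative line count.

```shell
#!/bin/sh
# Hypothetical sketch of a post-xfer rotating-snapshot script.
# Dataset, prefix, and keep count below are made-up examples.

# select_expired KEEP: read snapshot names oldest-first on stdin and
# print every name except the newest KEEP (GNU head negative count).
select_expired() {
    head -n -"$1"
}

# rotate DATASET PREFIX KEEP: take a timestamped snapshot, then
# destroy the oldest matching snapshots beyond the keep count.
rotate() {
    dataset=$1
    prefix=$2
    keep=$3
    zfs snapshot "${dataset}@${prefix}-$(date +%Y%m%d-%H%M%S)"
    zfs list -H -t snapshot -o name -s creation "$dataset" \
        | grep "@${prefix}-" \
        | select_expired "$keep" \
        | xargs -r -n1 zfs destroy
}

# Example invocation (e.g. from rsyncd's "post-xfer exec"):
#   rotate tank/backup rsync 30
```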
When we were using that script on the Linux box, the snapshots were not created. In the logs I found messages like:

```
Jul 09 08:00:28 ubnas1 rsync[1524567]: /dev/zfs and /proc/self/mounts are required.
Jul 09 08:00:28 ubnas1 rsync[1524567]: Try running 'udevadm trigger' and 'mount -t proc proc /proc' as root.
```
For some reason I thought this was due to "use chroot = on" in rsyncd.conf. Maybe pure coincidence, but with "use chroot = off" it appeared to work once.
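For context, the relevant rsyncd.conf settings look roughly like this. The module name, path, and script location are made-up examples; "use chroot" and "post-xfer exec" are the actual rsyncd.conf parameters.

```
[backup]
    path = /tank/backup
    use chroot = on
    post-xfer exec = /usr/local/sbin/zfs-rotate-snapshots
```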
Anyhow...
Eventually I found the root cause: the option "PrivateDevices=on" in the systemd rsync.service file. It was introduced with rsync 3.2.0 (19 Jun 2020).
My current workaround is a systemd drop-in file containing:

```
[Service]
PrivateDevices=off
```
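One way to set this up (run as root; the path and file name follow systemd's drop-in conventions, "override.conf" being what "systemctl edit" would create):

```shell
# Create a drop-in that overrides PrivateDevices for rsync.service.
mkdir -p /etc/systemd/system/rsync.service.d
cat > /etc/systemd/system/rsync.service.d/override.conf <<'EOF'
[Service]
PrivateDevices=off
EOF

# Make systemd pick up the drop-in and restart the daemon.
systemctl daemon-reload
systemctl restart rsync.service
```

Alternatively, `systemctl edit rsync.service` opens an editor and creates the same drop-in for you.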
From a security standpoint this is not great, but functionality is restored.
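One idea I have not tested: instead of dropping the restriction entirely, the drop-in could combine "PrivateDevices=off" with the standard systemd device directives "DevicePolicy=" and "DeviceAllow=" to permit only /dev/zfs (plus the standard pseudo devices that "DevicePolicy=closed" allows). Whether this is sufficient for the ZFS tooling would need testing.

```
[Service]
PrivateDevices=off
DevicePolicy=closed
DeviceAllow=/dev/zfs rw
```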
Comments, hints, etc. on how to improve the situation would be very welcome.
Maybe the documentation could be enhanced with a note in the "post-xfer exec" section describing possible issues with "PrivateDevices=on" when file system operations, in particular snapshots, are performed.
This might not only be the case for ZFS but also for Btrfs or any other copy-on-write (COW) filesystem supporting snapshots.
Older Ubuntu LTS releases, like the very common 20.04, are not affected, as they don't ship the security-hardened rsync.service file. The same may be true for many other Linux distros.