Comment 5 for bug 1571761

Martin Pitt (pitti) wrote : Re: [Bug 1571761] Re: zfs-import-cache.service slows boot by 60 seconds

Hello Richard,

thanks for the explanation of what these units do.

Richard Laager [2016-04-19 16:29 -0000]:
> @pitti: zfs-import-cache.service doesn't "load the ZFS cache". It
> imports zpools which are listed in the /etc/zfs/zpool.cache file. It is
> conditioned (ConditionPathExists) on the existence of
> /etc/zfs/zpool.cache. It seems to me that upstream ZoL is tending to
> deprecate the zpool.cache file.
>
> In contrast is zfs-import-scan.service, which imports pools by device
> scanning. This is conditioned on /etc/zfs/zpool.cache NOT existing. So
> one or the other service runs, depending on whether you have a
> zpool.cache file.
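
For reference, the mutually exclusive conditions in the upstream units look roughly like this (paraphrasing from memory):

  # zfs-import-cache.service
  ConditionPathExists=/etc/zfs/zpool.cache

  # zfs-import-scan.service
  ConditionPathExists=!/etc/zfs/zpool.cache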

I'm curious: how are these actually being used? They can't be relevant
for the root file system, as that already needs to be set up in the
initrd. These units also don't order themselves before
local-fs-pre.target, i. e. you can't rely on the imports having
finished by the time the entries in /etc/fstab are being mounted. The
only ordering I see is that zfs-mount.service.in is
Before=local-fs.target, i. e. the zfs-import-* bits will run in
parallel with mounting fstab.
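
If the intent was to gate fstab mounts on the imports, I would expect
something like this in the zfs-import-* units (just sketching the
ordering, not proposing it as the fix):

  [Unit]
  Wants=local-fs-pre.target
  Before=local-fs-pre.target

Local mount units are ordered After=local-fs-pre.target by default, so
that would at least guarantee the imports complete before /etc/fstab is
processed.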

> Longer-term, I agree that we need some sort of solution that isn't "wait
> for all devices"

Indeed. The concept of "wait for all devices" hasn't been well-defined
for at least the past 20 years. Is zpool import idempotent in any way,
i. e. would it be safe (and also performant) to run it whenever a
block device appears, instead of only once in the boot sequence? If
not, could it be made to accept a device name and limit its actions to
just that device? Then it would be straightforward to put this into a
udev rule and make pool import hotpluggable.
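
Something along these lines is what I have in mind (a completely
untested sketch; I'm also assuming zpool import can be pointed at a
single device with -d, which may not be quite right):

  # hypothetical /lib/udev/rules.d/90-zfs-import.rules
  ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="zfs_member", \
    RUN+="/sbin/zpool import -a -d $devnode"

i. e. whenever blkid identifies a new ZFS member device, try to import
whatever pool that device completes.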

Does this mean that ZFS isn't currently supported on hotpluggable
storage? (As there's nothing that would call zpool import on such devices.)

> Is there any prior art with btrfs or MD that's applicable here? (I
> know MD devices are assembled by the kernel in some circumstances. I
> think mdadm is involved in other cases.)

Yes, there's plenty. E. g. mdadm ships
/lib/udev/rules.d/64-md-raid-assembly.rules, which adds/removes devices
from a RAID array as they come and go. LVM is a bit more complicated:
its udev rules mostly just tell a daemon (lvmetad) about new devices,
and the daemon then decides when it has enough pieces (PVs) to bring up a VG.
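
The core of the mdadm rule is essentially one line (paraphrased, the
details differ between versions):

  SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $devnode"

mdadm --incremental adds the device to the right array and starts the
array once enough members have shown up; something with equivalent
semantics for zpools would be the natural fit here.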

There's also some really simple stuff like 85-hdparm.rules, which just
calls a script for every block device it encounters.
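
That one is pretty much just (roughly):

  ACTION=="add", SUBSYSTEM=="block", RUN+="/lib/udev/hdparm"

so even the most minimal per-device hook is an established pattern.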

Thanks,
Martin