Updated zfs pool to OpenZFS release 2.2.0 and ZFSBootMenu cannot boot from the pool, what now?

As mentioned here, ZFSBootMenu prior to version 2.2.1 is not able to boot a zpool once it has been upgraded.

It took me some time and a few tries to understand that the solution is incredibly simple.

Based on the official documentation, all you have to do is the following (see the command sketch after the list):

  • Boot from a live ISO like archzfs.iso
  • Mount your EFI partition
  • curl -LJO https://get.zfsbootmenu.org/efi
  • cp zfsbootmenu*.EFI /efi/EFI/ZBM/vmlinuz.EFI
  • Unmount your EFI partition
  • Reboot
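
Put together, it looks roughly like this; /dev/nvme0n1p1 and the /efi mount point are assumptions, adjust them to your own layout:

    # From the live ISO, assuming the EFI partition is /dev/nvme0n1p1 and
    # ZFSBootMenu lives in /efi/EFI/ZBM (adjust to your setup)
    mount /dev/nvme0n1p1 /efi
    curl -LJO https://get.zfsbootmenu.org/efi
    cp zfsbootmenu*.EFI /efi/EFI/ZBM/vmlinuz.EFI
    umount /efi
    reboot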

I've updated the archzfs ISO build script to automatically create an ISO file with the latest ZFSBootMenu already included.

SSD just lost its partition table

So, I just did a regular Arch Linux system update, of course without kernel updates because of issues/14622.

After rebooting, there was nothing left. I still had a ZFSBootMenu entry somewhere in efibootmgr, but that's it. After booting my archzfs USB ISO, I went to software/arch-linux-configuration/scripts/zfs/recover and started 01-mount.sh. What I got in return was "no pools available".
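
You can check the same thing by hand from the live ISO; zpool import without any arguments scans the disks for importable pools and prints "no pools available" when it finds none:

    # scan for importable pools; prints "no pools available" if none are found
    zpool import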

I tried them all, parted, gdisk, testdisk and fdisk, but they all told me the same thing: "shiny new SSD you got there". By the way, the SSD is an NVMe drive from Samsung. I fired up the Lenovo internal system check tools, but all they said was "super fine disk you got there, almost totally new".

What I did in the end was rebuild my partition table using sgdisk. After that, I also ran mkfs.vfat on the EFI boot partition and recreated the ZFSBootMenu image as well as the efibootmgr entries. Important note: you have to update your /etc/fstab entry for the EFI boot partition after doing that, because mkfs.vfat gives the filesystem a new UUID. Nice to know: after recreating the zfs partition via sgdisk, I was able to fully import my encrypted zfs pool.
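
A rough sketch of what that looked like; the partition sizes, type codes, device names and the pool name below are assumptions, not my exact layout:

    # Recreate a GPT with an EFI partition and a zfs partition on the NVMe drive
    sgdisk -n 1:0:+512M -t 1:ef00 -c 1:EFI /dev/nvme0n1
    sgdisk -n 2:0:0     -t 2:bf00 -c 2:zfs /dev/nvme0n1

    # New FAT filesystem -> new UUID, so /etc/fstab needs to be updated afterwards
    mkfs.vfat -F 32 /dev/nvme0n1p1

    # Put ZFSBootMenu back on the EFI partition and register it with the firmware
    mount /dev/nvme0n1p1 /efi
    mkdir -p /efi/EFI/ZBM
    curl -LJO https://get.zfsbootmenu.org/efi
    cp zfsbootmenu*.EFI /efi/EFI/ZBM/vmlinuz.EFI
    efibootmgr --create --disk /dev/nvme0n1 --part 1 --label "ZFSBootMenu" --loader '\EFI\ZBM\vmlinuz.EFI'

    # The encrypted pool imported cleanly once the zfs partition existed again
    zpool import -R /mnt <poolname>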

This is a pretty strange error and I hope I don't have to fix it again in the near future. It leaves a strange feeling in your stomach.

Lesson learned? Back up your partition table: sgdisk -b=sgdisk-sda.bin /dev/sda ref
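
In long-option form, and with the matching restore command (sgdisk-sda.bin and /dev/sda are just example names):

    # back up the GPT partition table to a file
    sgdisk --backup=sgdisk-sda.bin /dev/sda
    # restore it from that file later
    sgdisk --load-backup=sgdisk-sda.bin /dev/sda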

Fix >>failed to setup inotify handler. Please increase inotify limits<< on TrueNAS SCALE for Syncthing

I am using Syncthing on my TrueNAS SCALE box and got the following error after adding more and more directories.

failed to setup inotify handler. Please increase inotify limits, see https://docs.syncthing.net/users/faq.html#inotify-limits

The fix is simple: just log in to your TrueNAS SCALE web UI and go to System Settings -> Advanced -> Sysctl -> Add.

Add the variable fs.inotify.max_user_watches with the value 204800 and that's it. Happy syncthing!
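
To check that the setting actually took effect, you can run the usual sysctl commands from a TrueNAS shell; the web UI entry is what keeps it persistent across reboots:

    # show the current limit
    sysctl fs.inotify.max_user_watches
    # set it immediately (only until the next reboot)
    sysctl -w fs.inotify.max_user_watches=204800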