FS#71385 - [lvm2] dm-raid kernel module not added to initramfs

Attached to Project: Arch Linux
Opened by Tom Yan (tom.ty89) - Monday, 28 June 2021, 17:50 GMT
Last edited by Buggy McBugFace (bugbot) - Saturday, 25 November 2023, 20:14 GMT
Task Type: Bug Report
Category: Packages: Core
Status: Closed
Assigned To: Christian Hesse (eworm), Giancarlo Razzolini (grazzolini)
Architecture: All
Severity: Low
Priority: Normal
Reported Version:
Due in Version: Undecided
Due Date: Undecided
Percent Complete: 100%
Votes: 1
Private: No

Details

Description:
LVM RAID makes use of the kernel module dm-raid. However, the lvm2 install hook does not add the module to the initramfs, so RAID-type LVs cannot be activated. With a modified udev rule (which is included in the initramfs) I'm able to capture the following error (the LV happens not to be for the root filesystem, so I was wondering why it was quietly skipped, or so I thought):

pvscan[254] PV /dev/sda1 online, VG pi3 is complete.
pvscan[254] VG pi3 run autoactivation.
modprobe: FATAL: Module dm-raid not found in directory /lib/modules/5.11.4-1-ARCH
/usr/bin/modprobe failed: 1
Can't process LV pi3/chia: raid1 target support missing from kernel?
3 logical volume(s) in volume group "pi3" now active
pi3: autoactivation failed.



Additional info:
* package version(s) 2.03.11-5 (https://github.com/archlinux/svntogit-packages/blob/ad603bd38d20cacf9f76168980c680c191462358/trunk/lvm2_install#L8)
* config and/or log files etc.
* link to upstream bug report, if any

Steps to reproduce:
This task depends upon

Closed by  Buggy McBugFace (bugbot)
Saturday, 25 November 2023, 20:14 GMT
Reason for closing:  Moved
Additional comments about closing:  https://gitlab.archlinux.org/archlinux/packaging/packages/lvm2/issues/2
Comment by Tom Yan (tom.ty89) - Monday, 28 June 2021, 18:09 GMT
So I just noticed this:
https://wiki.archlinux.org/title/Install_Arch_Linux_on_LVM#Configure_mkinitcpio_for_RAID

Is there a really good reason that the install hook shouldn't just include them? (The modules do not look particularly large to me. On the other hand, some other dm-* modules are included even when they are not "always necessary".) The truth is, as long as the lvm2 hook is used, it does not matter whether the LV is used for root: RAID-type LVs will simply not be auto-activated as expected.

Note that the VG will be considered "online" even when some LVs fail to be activated, so even if udev/systemd triggers pvscan again after switching root, it does not cause those LVs to activate:
https://listman.redhat.com/archives/linux-lvm/2021-June/msg00050.html
Comment by loqs (loqs) - Monday, 28 June 2021, 18:47 GMT
 FS#70482  covers a different module that is also not included by the lvm2 hook.
Comment by Tom Yan (tom.ty89) - Tuesday, 29 June 2021, 03:50 GMT
I wonder if we should use an approach similar to the one in mdadm: https://github.com/archlinux/svntogit-packages/blob/packages/mdadm/trunk/mdadm_udev_install#L4

Not sure what add_checked_modules does exactly, though. The story here is a bit tricky, as what is needed does not only concern the LVs/VGs/PVs used for the root filesystem. As long as the lvm2 hook is used, ALL PVs (that are not too slow to be discovered by udev before switching root) will be scanned, and hence there will be attempts to activate ALL VGs/LVs before switching root. The attempts that are doomed to fail will then quietly prevent those LVs from being auto-activated afterwards (the reason might be more obvious to the user with the systemd hook).
Comment by Adler Jonas Gross (Betal) - Wednesday, 04 August 2021, 03:34 GMT
This issue doesn't affect dracut, but it does affect mkinitcpio with any lvmraid(7) setup, even in non-root VGs.

As explained on the Red Hat mailing list, the first pvscan that discovers a VG is complete creates a file under /run/lvm/vgs_online/ and activates its LVs, so any later pvscan will no longer do any activation. Since the dm-raid modules are missing, this pvscan leaves files under /run/lvm/vgs_online/ while the lvmraid LVs fail to get activated.
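The gating behaviour described above can be sketched as follows. This is a minimal illustration, not lvm2's actual code; a temp directory stands in for /run/lvm/vgs_online/ and the hypothetical autoactivate function stands in for pvscan:

```shell
#!/bin/sh
# Minimal sketch of pvscan's online-file gating (NOT lvm2's real code).
# A temp directory stands in for /run/lvm/vgs_online/.
vgs_online=$(mktemp -d)

autoactivate() {
    vg=$1
    if [ -e "$vgs_online/$vg" ]; then
        echo "$vg: already online, skipping activation"
        return
    fi
    # The online file is created whether or not activation succeeds,
    # so a failed activation (e.g. missing dm-raid) is never retried.
    touch "$vgs_online/$vg"
    echo "$vg: attempting activation"
}

first=$(autoactivate pi3)    # first pvscan: tries (and may fail) to activate
second=$(autoactivate pi3)   # any later pvscan: skipped
echo "$first"
echo "$second"
rm -rf "$vgs_online"
```

This is why rerunning pvscan after switch-root does not recover the failed LVs: the online file already exists.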

I can confirm that adding "dm-raid raid0 raid1 raid10 raid456" to MODULES in /etc/mkinitcpio.conf "fixes" this issue.
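For reference, that workaround amounts to something like the following config fragment (module list taken from the comment above; on a real system MODULES may already contain other entries that should be kept):

```shell
# /etc/mkinitcpio.conf (fragment) — reported workaround, not the proper fix
MODULES=(dm-raid raid0 raid1 raid10 raid456)
```

followed by regenerating the images, e.g. with `mkinitcpio -P`.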

I think the proper fix is to change /usr/lib/initcpio/install/lvm2: instead of
"for mod in dm-mod dm-snapshot dm-mirror dm-cache dm-cache-smq dm-thin-pool; do"
include the LVM RAID modules:
"for mod in dm-mod dm-snapshot dm-mirror dm-raid raid0 raid1 raid10 raid456 dm-cache dm-cache-smq dm-thin-pool; do"

This change would also mean that users no longer need the Arch Wiki section "Configure mkinitcpio for RAID".
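In context, the proposed loop would look roughly like this. A sketch only, assuming the hook keeps its current shape; add_module is mkinitcpio's helper for pulling a kernel module into the image, available inside an install hook's build() function but not in a standalone shell:

```shell
# Fragment of /usr/lib/initcpio/install/lvm2 (proposed) — runs inside
# mkinitcpio's build(), where the add_module helper is available.
for mod in dm-mod dm-snapshot dm-mirror dm-raid raid0 raid1 raid10 raid456 dm-cache dm-cache-smq dm-thin-pool; do
    add_module "$mod"
done
```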

The lvm2 hook is in the lvm2 package, so this bug could be assigned to eworm.
Comment by Giancarlo Razzolini (grazzolini) - Friday, 06 August 2021, 16:26 GMT
The modules should only be loaded if RAID is actually being used on LVM.
Comment by Hugo Sales (someonewithpc) - Wednesday, 04 January 2023, 16:32 GMT
I think this is an important bug. I wasted some ten hours on it yesterday and today.

Comment by Buggy McBugFace (bugbot) - Tuesday, 08 August 2023, 19:11 GMT
This is an automated comment as this bug has been open for more than 2 years. Please reply if you still experience this bug; otherwise this issue will be closed after 1 month.
