FS#41833 - [lvm2] lvmetad in initrd hangs, blocking further lvm calls

Attached to Project: Arch Linux
Opened by Bastien Durel (bastiendurel) - Friday, 05 September 2014, 10:08 GMT
Last edited by Christian Hesse (eworm) - Wednesday, 10 February 2021, 09:34 GMT
Task Type: Bug Report
Category: Packages: Core
Status: Closed
Assigned To: Christian Hesse (eworm)
Architecture: x86_64
Severity: Medium
Priority: Normal
Reported Version:
Due in Version: Undecided
Due Date: Undecided
Percent Complete: 100%
Votes: 16
Private: No

Details

Description: lvmetad in the initrd hangs, preventing lvm2-lvmetad.service from starting and therefore causing most LVM-related tasks (such as pvscan on boot) to hang forever (pvscan at boot is backgrounded after 90 s).

I first described the problem here: https://unix.stackexchange.com/questions/153705/lvmetad-hangs-on-startup

In short: lvmetad isn't killed by run_cleanuphook (kill $(cat /run/lvmetad.pid)), and the stale process then prevents systemd from starting it.

If I later kill this early process (with -9; otherwise it won't work), everything recovers: lvm2-lvmetad.service starts, the pvscans finish, etc.

Additional info:
* package version(s): 2.02.111-1
mkinitcpio.conf :
HOOKS="base udev autodetect modconf block lvm2 filesystems keyboard fsck"

Here is the gdb trace :
http://pastebin.com/4mpNBL8t

fd 6 is the socket:
lvmetad 60 root 6u unix 0xffff880212251f80 0t0 8269 /run/lvm/lvmetad.socket

Steps to reproduce:
I can't say; I only have this problem on one machine, which has an SSD and an HDD. Others work, but they don't have the same disk layout.

Closed by: Christian Hesse (eworm)
Wednesday, 10 February 2021, 09:34 GMT
Reason for closing: None
Additional comments about closing: lvm2 2.03.11-3
Comment by Dave Reisner (falconindy) - Friday, 05 September 2014, 12:40 GMT
Could you recompile lvm2 with debug symbols and get a backtrace of the hanging lvmetad? I'm sure upstream would be interested in it.
Comment by Bastien Durel (bastiendurel) - Friday, 05 September 2014, 12:44 GMT
Comment by Dave Reisner (falconindy) - Friday, 05 September 2014, 12:53 GMT
Whoops. So, a worker thread is blocking on a read, and the server is waiting on the thread to finish... Could you share this with upstream? There's not much for Arch to do here.
Comment by Bastien Durel (bastiendurel) - Friday, 05 September 2014, 12:57 GMT
Do you know where can I reach them ?
Comment by Dave Reisner (falconindy) - Friday, 05 September 2014, 13:22 GMT
Comment by Bastien Durel (bastiendurel) - Friday, 05 September 2014, 13:22 GMT
thanks
Comment by Petr Rockai (mornfall) - Friday, 05 September 2014, 14:23 GMT
This appears to be a problem in how lvmetad is managed in the initrd. First, you spawn processes that talk to lvmetad but can block for a long time reading block devices (pvscan); then, asynchronously with those processes, you kill lvmetad with a TERM signal and assume that it exits immediately. You need to compromise somewhere -- either wait for the pvscan processes to finish (so that when you send TERM to lvmetad it no longer talks to anyone), or do a TERM/sleep/KILL sequence (you should possibly also clean up the pvscan processes though). You could also KILL lvmetad right away, I guess (in any case, the unfinished pvscan processes will eventually fail on their own).
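A minimal sketch of the TERM/sleep/KILL sequence described here, assuming the hook records the daemon's PID in /run/lvmetad.pid as the existing cleanup hook does (an illustration, not the shipped hook):

run_cleanuphook() {
    pid="$(cat /run/lvmetad.pid 2>/dev/null)"
    [ -n "$pid" ] || return 0
    kill "$pid" 2>/dev/null      # polite SIGTERM first
    sleep 1                      # short grace period for in-flight client reads
    if kill -0 "$pid" 2>/dev/null; then
        kill -9 "$pid"           # still alive after the grace period: force it
    fi
}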

Out of curiosity, why do you run lvmetad in the initrd?
Comment by Dave Reisner (falconindy) - Friday, 05 September 2014, 14:40 GMT
> Out of curiosity, why do you run lvmetad in the initrd?
Phrased differently, where does this pvscan call come from? It isn't called from any of the hooks you mention in your config.
Comment by Bastien Durel (bastiendurel) - Friday, 05 September 2014, 14:45 GMT
I did not change anything in the initrd: it's stock.
The pvscans I'm talking about are run by systemd after the pivot. (Actually, I don't know about the pvscan with PID 117.)
The pvscans that hang on boot are launched by lvm2-pvscan@.service.

lvmetad in the initrd is launched by /usr/lib/initcpio/hooks/lvm2

Comment by Dave Reisner (falconindy) - Friday, 05 September 2014, 14:59 GMT
Then the sequence of events should be something like:

initramfs: lvmetad starts
initramfs: udevadm trigger, hardware setup, root is mounted, etc
initramfs: lvmetad is told to quit, and it doesn't do so immediately
--- switch_root ---
systemd: lvm-ish stuff happens (all according to the upstream units)

If this is correct, then it's upstream which is enforcing usage of lvmetad *and* pvscan at the same time. A udev rule (69-dm-lvm-metad.rules) generates a SYSTEMD_WANTS clause for devices, and these run in later userspace (nothing to do with early userspace since systemd isn't present in this user's configuration). This rule looks pretty wrong in several ways...
Comment by Bastien Durel (bastiendurel) - Friday, 05 September 2014, 14:59 GMT
Actually, this process:
root 117 0.0 0.0 25924 5064 ? Ss 09:26 0:00 (lvm2) pvscan --background --cache --activate ay --major 8 --minor 21
was launched in the initrd.

- the lowest PID I know of is systemd-modules-load, at 255.
- lsof on process 117 shows all opened libraries as deleted: http://pastebin.com/uNCuHnzY
- the process starts at 09:26:16, 2 seconds before systemd
Comment by Dave Reisner (falconindy) - Friday, 05 September 2014, 15:09 GMT
Uggh, sorry. The pvscan comes from our own file, /lib/initcpio/udev/69-dm-lvm-metad.rules (but my point about upstream enforcing the two in concert still holds true).

@Thomas: why did we add this? Are we just trying to replicate the upstream systemd-run-based equivalent?
Comment by Thomas Bächler (brain0) - Friday, 05 September 2014, 17:50 GMT
We added this in order to have proper detection of lvm-based file systems. The only other way would be to run udevadm settle and hope that our block devices show up, then run vgchange. This is slow, unreliable and often will not work at all.

To clear this up, this is how things happen:

* lvm2 early hook starts lvmetad.
* udev hook triggers uevents.
* udev launches all those pvscan commands (compare /usr/lib/initcpio/udev/69-dm-lvm-metad.rules to /usr/lib/udev/rules.d/69-dm-lvm-metad.rules).
* /init finds the root device and mounts it.
* lvm2 late hook sends SIGTERM to lvmetad (lvmetad does not exit according to this bug report).

The thing is, this is all solved perfectly with systemd, since everything is guaranteed to be killed properly before it switches root - only the old style sh-based initramfs shows these symptoms. The only possible solution I see is making the lvm2 late hook go on a killing spree and nuke all pvscan and lvmetad processes with SIGKILL in a loop until none remains. This is overly complex compared to the clean systemd-based solution.
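For reference, such a "killing spree" cleanup hook could look roughly like this, assuming pgrep is available in the image (a sketch, not what the lvm2 hook ships):

run_cleanuphook() {
    # keep sending SIGKILL to pvscan and lvmetad until none remain
    while :; do
        pids="$(pgrep -f pvscan; pgrep lvmetad)"
        [ -n "$pids" ] || break
        kill -9 $pids 2>/dev/null
        sleep 0.1
    done
}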
Comment by Bruno Widmann (bwid) - Friday, 05 September 2014, 19:52 GMT
I don't have much clue about the initrd system, so please accept my apologies in advance if the following is total crap, but it seems to me lvm2 currently doesn't have a late hook, it has a cleanup hook.

What if we do something like this:

run_latehook() {
    kill $(cat /run/lvmetad.pid)
}

run_cleanuphook() {
    sleep 0.2
    if [ -e "/run/lvmetad.pid" ]; then
        kill -9 $(cat /run/lvmetad.pid)
    fi
}

I looked in the lvmetad source: the parent process there waits in a select call with a timeout value of 0.25 seconds, so while the above is ugly and would add 0.2 s to everyone's boot time, it should work most of the time, I think.

Comment by Petr Rockai (mornfall) - Friday, 05 September 2014, 20:06 GMT
You can also kill -9 the lvmetad process right away, if you don't care about the fate of any LVM processes currently running (which I presume you don't).

PS: Sorry, seems that refreshing a flyspray page happily re-submits old comments, what the hell?
Comment by Bruno Widmann (bwid) - Friday, 05 September 2014, 22:49 GMT
I wasn't thinking that code I posted through well enough: if the daemon were to exit just between the if and the kill -9, I assume bad things would happen... So scratch that...
Comment by Bastien Durel (bastiendurel) - Monday, 08 September 2014, 07:32 GMT
Looks like switching the initrd to systemd+sd-lvm2 solves my problem (or at least works around it ;))
Comment by Bastien Durel (bastiendurel) - Tuesday, 09 September 2014, 07:32 GMT
Oh, it doesn't actually solve it.
Today I had to wait 1m30 during boot because the early lvmetad wasn't shut down before switching root. It was eventually killed by systemd and the service was started, so LVM works once the session is up. But the swap partition that lies on LVM was not mounted (timeout).

sept. 09 09:09:58 localhost systemd[1]: Starting LVM2 metadata daemon socket.
sept. 09 09:09:58 localhost systemd[1]: Listening on LVM2 metadata daemon socket.
sept. 09 09:09:58 localhost systemd[1]: Starting LVM2 metadata daemon...
sept. 09 09:09:58 localhost systemd[1]: Started LVM2 metadata daemon.
[...]
sept. 09 09:09:58 localhost kernel: sd 0:0:0:0: [sda] Attached SCSI disk
sept. 09 09:09:58 localhost kernel: sdb: sdb1 sdb2 < sdb5 >
sept. 09 09:09:58 localhost kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
sept. 09 09:09:58 localhost systemd[1]: Starting system-lvm2\x2dpvscan.slice.
sept. 09 09:09:58 localhost systemd[1]: Created slice system-lvm2\x2dpvscan.slice.
sept. 09 09:09:58 localhost systemd[1]: Starting LVM2 PV scan on device 8:5...
sept. 09 09:09:59 localhost systemd[1]: Starting LVM2 PV scan on device 8:21...
[...]
sept. 09 09:09:59 localhost lvm[112]: 2 logical volume(s) in volume group "ubuntu" now active
sept. 09 09:09:59 localhost systemd[1]: Found device /dev/mapper/ubuntu-archroot.
sept. 09 09:09:59 localhost systemd[1]: Started LVM2 PV scan on device 8:5.
sept. 09 09:09:59 localhost systemd[1]: Starting File System Check on /dev/mapper/ubuntu-archroot...
sept. 09 09:09:59 localhost systemd-fsck[131]: /dev/mapper/ubuntu-archroot: clean, 511127/2949120 files, 6309850/11796480 blocks
sept. 09 09:09:59 localhost lvm[116]: 20 logical volume(s) in volume group "space" now active
sept. 09 09:09:59 localhost systemd[1]: Started File System Check on /dev/mapper/ubuntu-archroot.
sept. 09 09:09:59 localhost systemd[1]: Mounting /sysroot...
sept. 09 09:09:59 localhost systemd[1]: Mounted /sysroot.
sept. 09 09:09:59 localhost systemd[1]: Starting Initrd Root File System.
sept. 09 09:09:59 localhost kernel: EXT4-fs (dm-0): mounting ext3 file system using the ext4 subsystem
sept. 09 09:09:59 localhost systemd[1]: Reached target Initrd Root File System.
[...]
sept. 09 09:09:59 localhost systemd[1]: Stopping LVM2 PV scan on device 8:21...
sept. 09 09:09:59 localhost systemd[1]: Stopping LVM2 metadata daemon...
sept. 09 09:09:59 localhost systemd[1]: Stopping LVM2 PV scan on device 8:5...
[...]
sept. 09 09:10:02 localhost systemd[1]: Starting Switch Root...
sept. 09 09:10:02 localhost systemd[1]: Switching root.
[...]
sept. 09 09:11:32 data-dev4 systemd[1]: lvm2-lvmetad.service stop-sigterm timed out. Killing.
sept. 09 09:11:32 data-dev4 systemd[1]: lvm2-pvscan@8:21.service stop-sigterm timed out. Killing.
sept. 09 09:11:32 data-dev4 systemd[1]: lvm2-lvmetad.service: main process exited, code=killed, status=9/KILL
sept. 09 09:11:32 data-dev4 systemd[1]: Stopped LVM2 metadata daemon.
sept. 09 09:11:32 data-dev4 systemd[1]: Unit lvm2-lvmetad.service entered failed state.
sept. 09 09:11:32 data-dev4 systemd[1]: lvm2-pvscan@8:21.service: main process exited, code=killed, status=9/KILL
sept. 09 09:11:32 data-dev4 systemd[1]: Stopped LVM2 PV scan on device 8:21.
sept. 09 09:11:32 data-dev4 systemd[1]: Unit lvm2-pvscan@8:21.service entered failed state.
sept. 09 09:11:32 data-dev4 systemd[1]: Job dev-disk-by\x2duuid-162f9ac5\x2d33c6\x2d4d2d\x2d8e76\x2d1e2873d6a75a.device/start timed out.
sept. 09 09:11:32 data-dev4 systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-162f9ac5\x2d33c6\x2d4d2d\x2d8e76\x2d1e2873d6a75a.device.
sept. 09 09:11:32 data-dev4 systemd[1]: Dependency failed for /dev/disk/by-uuid/162f9ac5-33c6-4d2d-8e76-1e2873d6a75a.
sept. 09 09:11:32 data-dev4 systemd[1]: Dependency failed for Swap.
[...]
sept. 09 09:11:32 data-dev4 systemd[1]: Started Login Service.
sept. 09 09:11:32 data-dev4 systemd[1]: Starting LVM2 metadata daemon...
sept. 09 09:11:32 data-dev4 systemd[1]: Started LVM2 metadata daemon.
Comment by Thomas Bächler (brain0) - Tuesday, 09 September 2014, 18:04 GMT
This is a good hint as to what is wrong. What does /dev/block/8:21 point to?
Comment by Bastien Durel (bastiendurel) - Wednesday, 10 September 2014, 07:43 GMT
brw-rw---- 1 root disk 8, 21 10 sept. 08:58 /dev/sdb5

sdb is a 1 TB hard disk

sept. 10 08:58:05 localhost kernel: scsi 1:0:0:0: Direct-Access ATA WDC WD10EZRX-00A 1A01 PQ: 0 ANSI: 5
sept. 10 08:58:05 localhost kernel: sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sept. 10 08:58:05 localhost kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
sept. 10 08:58:05 localhost kernel: sd 1:0:0:0: [sdb] Write Protect is off
sept. 10 08:58:05 localhost kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
sept. 10 08:58:05 localhost kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sept. 10 08:58:05 localhost kernel: sdb: sdb1 sdb2 < sdb5 >
sept. 10 08:58:05 localhost kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Comment by Bastien Durel (bastiendurel) - Wednesday, 10 September 2014, 12:49 GMT
One interesting thing: I got (in early systemd) the "Starting" message for both 8:5 (SSD) and 8:21 (HDD), but I only got one "Started" (for 8:5), although both PVs were read (I got the message `X logical volume(s) in volume group "Y" now active' for both).
Comment by Petr Rockai (mornfall) - Wednesday, 10 September 2014, 17:16 GMT
Well, it would help to have a trace of what pvscan for 8:21 is doing (e.g. if you can get strace into the initrd and attach to the pvscan process), and the tarball you get from lvmdump -a -l -m -u (ideally from the initrd, but from a running system would be a good start, too).
Comment by Bastien Durel (bastiendurel) - Thursday, 11 September 2014, 07:47 GMT
Here is the live lvmdump.
Comment by Bastien Durel (bastiendurel) - Friday, 12 September 2014, 07:37 GMT
I did not succeed in breaking into the initrd, although I added break to the kernel command line in the GRUB config:

$ cat /proc/cmdline
BOOT_IMAGE=/arch/vmlinuz-linux root=/dev/mapper/ubuntu-archroot rw break=premount

Is there another way ?
Comment by Dave Reisner (falconindy) - Friday, 12 September 2014, 12:13 GMT
> Is there another way ?
The only way the "break" param is honored is without systemd.
Comment by Bastien Durel (bastiendurel) - Friday, 12 September 2014, 15:31 GMT
Here is the strace. I did not get the lvmdump, as lvmdump is a script and not a binary: I was lacking dependencies. Do you need it?
   lvm.out.293 (151.3 KiB)
Comment by Thomas Bächler (brain0) - Friday, 12 September 2014, 17:08 GMT
It may possibly be your custom udev rules that break LVM.

If you want to apply specific settings to LVM volumes, you must use something like this:

SUBSYSTEM=="block", ACTION=="add|change", ENV{DM_UDEV_RULES_VSN}=="?*", ENV{DM_UDEV_DISABLE_OTHER_RULES_FLAG}!="1", ENV{DM_NAME}=="space-vm*", GROUP="bastien"

The match on DM_UDEV_RULES_VSN and DM_UDEV_DISABLE_OTHER_RULES_FLAG is important.
Comment by Thomas Bächler (brain0) - Friday, 12 September 2014, 17:11 GMT
I do not see pvscan hanging in your lvm.out.293.

Try the following: Revert back to base+udev+lvm2 initramfs, but add BINARIES="strace". Then use break=postmount and look for any "pvscan" processes that did not exit. Attach to them using strace -p $PID and see if you get something useful.
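Concretely, that debugging setup could look like this (<PID> is a placeholder for whatever pgrep reports):

# /etc/mkinitcpio.conf -- ship strace inside the image
BINARIES="strace"

# rebuild the initramfs, then reboot with break=postmount on the kernel command line
mkinitcpio -p linux

# in the break shell: find leftover pvscan processes and attach
pgrep -f pvscan
strace -p <PID>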
Comment by Petr Rockai (mornfall) - Sunday, 14 September 2014, 10:03 GMT
The only suspicious thing I have noticed is that log/activation is set to 1 in your (live) lvm.conf, which *could* cause a deadlock (as the comment says), although it seems rather unlikely. As Thomas points out, your udev rules (especially if they leak into the initrd) may be responsible. More info is probably required at this point to get further along. Trying to boot with 99-custom-lvm.rules disabled may be a good idea.
Comment by Bastien Durel (bastiendurel) - Monday, 15 September 2014, 07:32 GMT
log/activation was set to 1 to investigate the hang.

When breaking at postmount, I did not find any pvscan process in the ps list (I tried twice).

Editing my rules as brain0 suggested leads to another hang.
Comment by Bastien Durel (bastiendurel) - Monday, 15 September 2014, 08:10 GMT
99-custom-lvm.rules is not in the initrd
Comment by Bastien Durel (bastiendurel) - Tuesday, 16 September 2014, 07:28 GMT
disabling 99-custom-lvm.rules does not fix the problem
Comment by Bastien Durel (bastiendurel) - Tuesday, 23 September 2014, 07:11 GMT
Is there a way to lower the "kill timeout" of the pvscan & lvmetad services in the initrd?
Comment by Brian Kress (kressb) - Monday, 29 September 2014, 16:07 GMT
I see this on a VM server; it occurs on about half of boots and the other half are fine. It's obviously timing-related. My configuration uses the non-systemd version of the initramfs with the mdadm_udev and lvm2 hooks. The system has several MD arrays, all of which are LVM PVs.

After a lot of poking around I'm pretty sure I can see what is going on. lvmetad is started, then udev is started and udev trigger is run. This kicks off the discovery of the disks, the MD arrays, and ultimately the LVM VGs. This is done by the following command from /lib/initcpio/udev/69-dm-lvm-metad.rules:


/usr/bin/lvm pvscan --background --cache --activate ay --major $major --minor $minor


Note the "--background" switch. This means they are started and udev goes on with its life. udev trigger returns, and udev settle is called. If settle happens to be called and finishes before one of the pvscans actually gets to registering LVs, the udev hook can cause udev to exit WHILE LVs are still being processed.

This is bad enough; however, the way pvscan works, it creates and holds a semaphore across the LV activation. This semaphore is normally freed by 95-dm-notify.rules, but with udev gone, that rule never fires, so pvscan hangs waiting for the semaphore to be released.

Then lvmetad hangs because it's waiting for pvscan to finish. The other aggravating thing about this bug is that when it happens, half my LVs (I have a lot of them; the machine is a VM server) are missing.


A possible fix is just to remove "--background" from this pvscan invocation. I'm not really sure why it's there; I'd think we WANT to wait for pvscan to finish.
Comment by Thomas Bächler (brain0) - Monday, 29 September 2014, 17:31 GMT
Actually, shouldn't udev finish all uevents before exiting?

Removing --background is certainly not what the lvm guys want; they even delegate pvscan to a systemd service and return instantly when systemd is running. In the systemd case, we could likely solve this problem by delaying the root mount until all lvm2-pvscan@.service instances have finished. For the shell case, we can wait for all pvscan processes to finish in a late hook.
Comment by Brian Kress (kressb) - Monday, 29 September 2014, 17:37 GMT
As far as udev is concerned, it DID finish processing the event (it started the pvscan process). The background pvscan process forks and exits, and udev does not do any sort of cgroup-style tracking of process trees.

Systemd won't have the issue because it kills everything before switching to the real root, including the hung pvscans and the hung lvmetad. It will still have the issue of missing LVs though (one of which could, in theory, even be the root).

And yes, waiting for the pvscan processes (or systemd units) to finish before stopping udev would also solve the issue. However, isn't that more or less the same as removing the background flag? Six of one, half a dozen of the other.
Comment by Bastien Durel (bastiendurel) - Monday, 29 September 2014, 18:02 GMT
Systemd *does not* kill everything before switching root; see my comment https://bugs.archlinux.org/task/41833#comment127331

It tries to stop everything, but as pvscan hangs, it waits 90 s before killing it.
Comment by Brian Kress (kressb) - Monday, 29 September 2014, 18:06 GMT
Well, technically it did, it just waited 90 seconds before doing so. :) But point taken. That's the same issue as the non-systemd version, it just presents differently.
Comment by Bastien Durel (bastiendurel) - Monday, 29 September 2014, 20:42 GMT
True, but as the kill happens after switching root, it causes problems in the standard (non-initrd) system.
Comment by Brian Kress (kressb) - Monday, 29 September 2014, 20:45 GMT
Yes, either init system (the systemd based one or the /bin/sh one) needs to wait until all the pvscan processes are done before stopping udev and switching root.
Comment by Petr Rockai (mornfall) - Tuesday, 30 September 2014, 06:52 GMT
The problem with non-background pvscan is that udev will kill it prematurely if it takes a while to finish (this does happen in practice, with slow/many block devices). So removing the --background flag will only give you a different problem.
Comment by Brian Kress (kressb) - Wednesday, 01 October 2014, 00:32 GMT
I added the following to the end of the run_hook() function in the udev hook right after the "udevadm settle" line:


while [ "$(pgrep -f pvscan)" ]; do
sleep 0.1
done


This seems to fix the issue, as it waits for all the LVM processes to finish before going on to stop udev. I had to stick it in the udev hook instead of the lvm hook because initcpio doesn't seem to have a hook phase after the udev trigger but before the root mount. Late hooks and cleanup hooks are too late, and the main hook is possibly too early, since I'm not sure how to ensure it runs after the udev hook.

I don't know what the "official" fix for this would look like (since sticking lvm stuff in the udev hook is definitely ugly), but this shows the general idea and proves that it solves the issue. For the shell script case anyway, solving the issue for the systemd case is another matter.
Comment by Petr Rockai (mornfall) - Wednesday, 01 October 2014, 08:21 GMT
Well, udev handover is always going to be messy. We have repeatedly asked for reasonable (or configurable) timeouts on things running from udev rules, but have been rejected every time. So forking into the background or using systemd-run is the only option left, with the consequent problems during handover. This still doesn't guarantee that everything pans out correctly, but it's the best we can do under the circumstances. Basically, udev is broken and there are no clean workarounds. One thing that could help you out here is only enabling auto-activation for the root VG during initrd -- but that only helps if you have multiple VGs and it's only the non-root VGs that have slow devices in them.
Comment by Petr Rockai (mornfall) - Wednesday, 01 October 2014, 09:03 GMT
The best you can do, I think, is what Brian suggests: waiting for pvscan to finish before shutting down udev, as an interim solution. I have filed a bug upstream and we will be working towards a more palatable solution, which you can hopefully adopt when it's released. Cf. https://bugzilla.redhat.com/show_bug.cgi?id=1148352
Comment by Thomas Bächler (brain0) - Wednesday, 01 October 2014, 17:32 GMT
Brian, since udev and lvmetad quit in a cleanup hook, you should use a late hook for waiting for lvmetad.
Comment by Brian Kress (kressb) - Wednesday, 01 October 2014, 17:38 GMT
The only problem with the late hook is that it is after the root mount. Which means if the pvscan processes take long enough, mkinitcpio will time out waiting for the root device to show up. But I suppose that's like any other slow to show up device.

The late hook does seem fine for the udev interactions, yes.
Comment by Thomas Bächler (brain0) - Wednesday, 01 October 2014, 20:58 GMT
Looks good, then we only need to fix the systemd case, which should be easier.
Comment by Ivan Shapovalov (intelfx) - Friday, 31 October 2014, 00:03 GMT
I can't reproduce this bug, but shouldn't it suffice to order lvm2-pvscan@.service (as well as the pvscan instance started directly from 69-dm-lvm-metad.rules) After=systemd-udevd.service?
Comment by Brian Kress (kressb) - Friday, 31 October 2014, 00:21 GMT
No, because the bug isn't that pvscan is run before udev, it's that udev is made to exit before pvscan is done.
Comment by Ivan Shapovalov (intelfx) - Friday, 31 October 2014, 00:23 GMT
To quote systemd.unit(5):

"Note that when two units with an ordering dependency between them are shut down, the inverse of the start-up order
is applied. i.e. if a unit is configured with After= on another unit, the former is stopped before the latter if both are shut down."

EDIT: However, it can also be needed to use Requires=systemd-udevd.service or BindsTo=systemd-udevd.service, so that
the pvscan instance is actually made to stop in the same transaction as udevd.
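For anyone who wants to experiment with that ordering, a drop-in along these lines would express it (untested; whether it actually helps here is exactly what is in question):

# /etc/systemd/system/lvm2-pvscan@.service.d/udev-order.conf
[Unit]
After=systemd-udevd.service
BindsTo=systemd-udevd.service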
Comment by Bastien Durel (bastiendurel) - Friday, 21 November 2014, 09:07 GMT
Hello,

today's upgrade removed the workaround I put in /usr/lib/initcpio/hooks/udev, and I ran into the emergency shell on reboot
Comment by Marc Rechté (mrechte) - Saturday, 13 December 2014, 09:48 GMT
Hello,

Having the same problem; my config is / on an SSD and /home on an HDD.

I put the fix in /usr/lib/initcpio/hooks/udev and it seems to have solved the problem.

Thanks for this (temporary) solution.
Comment by Jolan Luff (jolan) - Wednesday, 17 December 2014, 15:32 GMT
Hi,

I did my first install of Arch last night and it went ok.

With my second install on different hardware, I bumped into this problem.

I see that there's a workaround available, but it's for the non-systemd case.

I can't seem to find how to switch init systems, since systemd is the only officially supported init system. Can someone please tell me which init system to use to employ the workaround, and how to activate it/disable systemd?

Thank you.
Comment by Bastien Durel (bastiendurel) - Wednesday, 17 December 2014, 15:53 GMT
you don't have to give up systemd in your system, only in the initrd.

In /etc/mkinitcpio.conf, you must change your HOOKS list:
HOOKS="base udev autodetect modconf block lvm2 filesystems keyboard fsck" is a hook list for script
HOOKS="base systemd autodetect modconf block sd-lvm2 filesystems keyboard fsck" is a hook list for systemd

see https://wiki.archlinux.org/index.php/mkinitcpio#HOOKS
Comment by Marc Rechté (mrechte) - Tuesday, 23 December 2014, 08:05 GMT
Warning: each systemd package update removes the work-around.
Comment by Green (The_Green_Arrow) - Tuesday, 23 December 2014, 20:36 GMT
I've noticed something very weird about this issue: when I boot into Windows 7 (which is installed on the same disk as the LVM volume group) and then reboot, I get the timeout... even with the udev workaround (a simple reboot is then enough to boot properly). This has happened several times now, so it's not just a coincidence. Can anyone with a dual-boot setup confirm this?
Comment by Eugene (Eugene_1984) - Wednesday, 24 December 2014, 19:25 GMT
Confirming the dual-boot effect: with HOOKS="base udev autodetect mdadm_udev modconf block lvm2 filesystems keyboard fsck" and the workaround in run_hook(), pvdisplay hangs when booted after Windows 7. My system has Arch (/, /boot and /home) on an SSD, some HDDs partially in mdadm RAID1 and partially in LVM, and Windows 7 on a _separate_ HDD which is not part of the LVM. I've checked this effect 3 times just now, and each time booting after Win7 led to the timeout in Arch.
Comment by Green (The_Green_Arrow) - Sunday, 11 January 2015, 17:37 GMT
As a secondary workaround, would it be enough to set use_lvmetad = 0 in /etc/lvm/lvm.conf ?
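For reference, that setting lives in the global section of lvm.conf; per the next comment, though, it did not help here:

# /etc/lvm/lvm.conf
global {
    use_lvmetad = 0
}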
Comment by Green (The_Green_Arrow) - Wednesday, 28 January 2015, 18:16 GMT
Answering myself: disabling lvmetad does not change a thing.
I don't know why, but lately I've had this timeout issue even with the workaround... Sometimes it won't even work after a cold boot. It really is pretty random.
Is there any other workaround to try? Even if it slows down the entire boot process, it will be better than waiting 90 s.
Comment by Green (The_Green_Arrow) - Saturday, 28 February 2015, 08:12 GMT
After a few weeks of testing, I can confirm that (at least for me) the workaround that actually works is to override 69-dm-lvm-metad.rules and remove the --background flag from the pvscan command.
Comment by Martin Brodbeck (beedaddy) - Saturday, 07 March 2015, 15:39 GMT
I also experienced random boot problems. I have an SSD with "/" and an HDD with "/home", both with LVM. Often, the system couldn't mount (or detect) /home. And I can confirm that The_Green_Arrow's solution (remove --background) also works in my case.
Comment by Micaël Bergeron (micaelbergeron) - Sunday, 08 March 2015, 20:16 GMT
I also experience this problem, with my "/" on an SSD and an LVM-partitioned HDD mainly used for storage. My / isn't on LVM, but my system will randomly (45-50% of the time) hang when rebooting, with the 90 s timeout. Could I defer the LVM pvscan to after startup instead of at boot?

Edit: removing the --background flag seems to work here too, but looks sub-optimal, as a system update will undo it.
Comment by John Doe (upah) - Wednesday, 11 March 2015, 23:23 GMT
I also experience this problem. I have "/" on an SSD and some partitions on an HDD. Brian's fix is not working for me; it still happens, though seemingly not as often. I haven't tried removing --background yet.
Comment by Green (The_Green_Arrow) - Saturday, 14 March 2015, 09:25 GMT
@Micaël Bergeron @John Doe => you just have to copy /lib/initcpio/udev/69-dm-lvm-metad.rules to /etc/udev/rules.d/69-dm-lvm-metad.rules to override the rule definition, and change the line from:
RUN+="/usr/bin/lvm pvscan --cache --background --activate ay --major $major --minor $minor", ENV{LVM_SCANNED}="1"
to
RUN+="/usr/bin/lvm pvscan --cache --activate ay --major $major --minor $minor", ENV{LVM_SCANNED}="1"
Comment by Micaël Bergeron (micaelbergeron) - Tuesday, 17 March 2015, 14:39 GMT
I just want to point out that in my case (my system runs on a plain ext4 partition on my SSD, and LVM is only used for storage), removing the lvm2 hook in /etc/mkinitcpio.conf did the trick.
Comment by John Doe (upah) - Wednesday, 25 March 2015, 15:25 GMT
@Green I've overridden the udev rules with /etc/udev/rules.d/69-dm-lvm-metad.rules and cut the "--background". For a few days everything was great; suddenly, today, the problem appeared again. I've checked /etc/udev/rules.d/69-dm-lvm-metad.rules and the file still has the RUN line without "--background".
Comment by Martin Brodbeck (beedaddy) - Thursday, 26 March 2015, 07:02 GMT
@upah: I too can confirm that the problem (for me a serious one) has reappeared. :(
Comment by Green (The_Green_Arrow) - Thursday, 26 March 2015, 16:19 GMT
@upah @beedaddy => Have any of you upgraded to linux 3.19?
Comment by Martin Brodbeck (beedaddy) - Thursday, 26 March 2015, 17:08 GMT
@The_Green_Arrow: Yes, I did.
Comment by Green (The_Green_Arrow) - Thursday, 26 March 2015, 18:11 GMT
I don't know if it's related...but I haven't upgraded to 3.19 yet...
Comment by Daniel Olivares (SirMyztiq) - Sunday, 29 March 2015, 00:13 GMT
I'm having the exact same issue. In fact, I've reinstalled Arch four times because I thought I was screwing up somewhere along the way. The funny thing is that it worked just fine until I was trying to debug a dhcpcd issue (not starting on boot) and enabled it with systemctl and rebooted. I got the error right on that reboot. I then disabled it, but now it's intermittent once again.

I finally found the right combination of keywords to search, which led me here. I have an SSD with two partitions: one for boot (v32), the other an LVM partition with /root and /home. I also have an HDD with one big 3 TB partition on LVM with /var and /data. It only hangs on the HDD. At first I thought it was the size of the HDD, but then I saw this.

I'm on 3.19 as well, and I can confirm that the workaround mentioned above does not work either.
Comment by Martin Brodbeck (beedaddy) - Tuesday, 31 March 2015, 12:03 GMT
Well, it needs more testing here, but after a couple of reboots I tend to say that switching from udev to systemd in /etc/mkinitcpio.conf fixes the problem in my case. So my HOOKS changed from "base udev autodetect modconf block lvm2 filesystems keyboard fsck" to "systemd autodetect modconf block sd-lvm2 filesystems keyboard fsck". Perhaps someone could try that, too?
Comment by Konstantin Y (tm4ig) - Wednesday, 01 April 2015, 17:44 GMT
The issue is more than six months old. Is it all hopeless?
Comment by Bastien Durel (bastiendurel) - Thursday, 02 April 2015, 08:04 GMT
I'm sticking with the while/pgrep/sleep workaround, which works for me but must be re-added each time udev gets updated.
Comment by Daniel Olivares (SirMyztiq) - Friday, 03 April 2015, 04:06 GMT
@bastiendurel:

I can confirm that the fix you provided works. Yes, hacky, but it fixes the core issue. Thank you for that.
Comment by Juan Luis (heavymetal) - Friday, 10 April 2015, 02:28 GMT
Changing "base udev" to "systemd" like suggested by @beedaddy did not work for me, and in fact bricked my initrd. Every time it boots it keeps waiting forever for /dev/disk/by-uuid/{ some UUID}.device.

I'm going back to @kressb patch, which worked before.
Comment by Daniel Olivares (SirMyztiq) - Friday, 10 April 2015, 03:51 GMT
@heavymetal

Maybe I'm being too literal, but did you replace "base udev" with "systemd" or did you mean "base udev" with "base systemd"?
Comment by Juan Luis (heavymetal) - Friday, 10 April 2015, 10:35 GMT
@SirMyztiq

I replaced "base udev" with "systemd", as the comment suggested.
Comment by Daniel Olivares (SirMyztiq) - Friday, 10 April 2015, 13:54 GMT
@heavymetal

Did you also change the "lvm2" hook to "sd-lvm2" when using "systemd"?

Personally I modified the udev hook with

while [ "$(pgrep -f pvscan)" ]; do
sleep 0.1
done

And it's been working great.
Comment by Juan Luis (heavymetal) - Friday, 10 April 2015, 13:57 GMT
@SirMyztiq

No, I haven't tried that. I rebuilt it with the udev hook patch (without systemd) and it's working.
Comment by Georg (georgnix) - Thursday, 23 April 2015, 10:12 GMT
The solution from bastiendurel works for me, i.e.

In /etc/mkinitcpio.conf

replace in HOOKS=...

udev -> systemd
lvm2 -> sd-lvm2

so that for example

HOOKS="base udev autodetect modconf block lvm2 filesystems keyboard fsck"

becomes

HOOKS="base systemd autodetect modconf block sd-lvm2 filesystems keyboard fsck"

Both have to be replaced!

my packages:
linux 3.19.3-3
systemd 219-5
Comment by Bastien Durel (bastiendurel) - Monday, 11 May 2015, 09:07 GMT
I've tried again with systemd/sd-lvm2, but my system continues to hang; I had to return to the pgrep/sleep workaround.
core/linux 4.0.1-1
core/systemd 219-6
Comment by Divan Santana (divansantana) - Wednesday, 08 July 2015, 20:52 GMT
Also been intermittently having this issue.
Fortunately switching to systemd/sd-lvm2 hooks seems to have solved it.
Comment by Green (The_Green_Arrow) - Friday, 10 July 2015, 15:55 GMT
Divan => I've just tried the sd-lvm2 solution and had the same issue on the next reboot... What is weird is that all the workarounds were definitely working until linux 3.19... now it's pretty random...
Comment by Green (The_Green_Arrow) - Wednesday, 11 May 2016, 19:30 GMT
For those still having this issue, you can try the following workaround:
for the LVM partition declared in your /etc/fstab, add nofail,x-systemd.device-timeout=XX
The boot will no longer wait for the nofail partition to be mounted, but when it fails you will end up with an unmounted LVM partition, which you can then activate with vgchange -ay.
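For example, an fstab entry for a non-critical LVM mount could look like this (device, mount point and timeout are placeholders to adapt):

# /etc/fstab
/dev/mapper/space-data  /data  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2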
Comment by Daniel Wendler (BMRMorph) - Tuesday, 12 July 2016, 07:44 GMT
The issue occurred for me with the update from 2.02.157 to >=2.02.158.
In my eyes there are two issues here. The first is the "wait for pvscan", which I solved as suggested with a late hook in lvm2 (not in the udev hook):

run_latehook() {
    while [ "$(pgrep -f pvscan)" ]; do
        sleep 0.2
    done
}

The second is, in my eyes, unexpected behavior of lvmetad.
I looked into this: if lvmetad isn't responding on the socket, all reads (lvs, pvs, etc.) wait indefinitely.
I watched it with strace and could see a connect to the socket, a write of "hello", and a read on the socket... which never returns.
I think there should be a timeout on the read from the socket... so maybe I should open a bug upstream?

@Snowman / @brain0
As this ticket has been open for such a long time, may I ask whether the wait loop suggested here is a solution, and whether it could be implemented in the lvm2 late hook?
Comment by Christian Hesse (eworm) - Tuesday, 12 July 2016, 21:32 GMT
I reworked the loop a bit to make it break after some seconds. I suppose we do not want an endless loop - even if pvscan processes remain.
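The bounded version of the loop would look roughly like this (a sketch; per the comment below, the packaged hook used 150 iterations of 0.2 s, i.e. about 30 seconds):

run_latehook() {
    local i=0
    while [ -n "$(pgrep -f pvscan)" ]; do
        i=$((i + 1))
        [ $i -gt 150 ] && break    # give up after ~30 s
        sleep 0.2
    done
}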

Do these packages work for you? Both are signed, so you can just install with `pacman -U`.
http://pkgbuild.com/~eworm/lvm2/device-mapper-2.02.160-1.1-x86_64.pkg.tar.xz
http://pkgbuild.com/~eworm/lvm2/lvm2-2.02.160-1.1-x86_64.pkg.tar.xz

Does anybody have objections adding something like this to the hook?
Comment by Daniel Wendler (BMRMorph) - Wednesday, 13 July 2016, 08:59 GMT
I've installed your packages, recreated the initcpio image, and could boot my system normally, so it works for me.
Maybe the timeout could be reduced from the current 30 s (150 iterations) to 10 s (50)? (Analogous to the encrypt hook's drive-detection timeout?)
Comment by Christian Hesse (eworm) - Wednesday, 13 July 2016, 09:42 GMT
I am fine with just 10 seconds if that is sufficient. ;)

Updated to lvm2 2.02.160-2 in [testing], which includes the late hook with loop.
Comment by Sam (waxymouthfeel) - Saturday, 13 January 2018, 10:44 GMT
I'm still affected by this bug on my development Xen machines. Going the systemd sd-lvm2 route doesn't work, but it *will* work if I give the lvm2 hook in the shell version more time to work. I've set it to 60 s and that seems to take care of it. So: with a lot of LVs, pvscan needs more time to do its job. It seems to take around 20 volumes before I hit the problem.

A "fix" could be making the wait count settable in mkinitcpio.conf or something like that, but I'll fully admit this looks like a real edge case.
Comment by aqua (aqua) - Friday, 23 March 2018, 08:46 GMT
Maybe my comment in this other bug report will help you: https://bugs.archlinux.org/task/57275#comment167331
Comment by Christian Hesse (eworm) - Wednesday, 10 February 2021, 09:34 GMT
With lvm2 2.03.11-3 in [core], lvmetad is now gone.
