Arch Linux

FS#34180 - [lvm2] Volume group not active at boot

Attached to Project: Arch Linux
Opened by David Rosenstrauch (darose) - Wednesday, 06 March 2013, 20:04 GMT
Last edited by Dave Reisner (falconindy) - Saturday, 26 October 2013, 14:04 GMT
Task Type: Bug Report
Category: Packages: Extra
Status: Closed
Assigned To: Thomas Bächler (brain0)
Architecture: All
Severity: High
Priority: Normal
Reported Version:
Due in Version: Undecided
Due Date: Undecided
Percent Complete: 100%
Votes: 1
Private: No


For some reason, since the last LVM upgrade, one of my LVM volumes fails to mount, causing the boot sequence to fail.

When I boot, I see the following:

[root@darsys12 ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda4  vg1  lvm2 a--  359.49g 319.49g
  /dev/sdc   vg2  lvm2 a--  931.51g 431.51g
  /dev/sdd   vg3  lvm2 a--  465.76g 165.76g
[root@darsys12 ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  vg1    1   2   0 wz--n- 359.49g 319.49g
  vg2    1   1   0 wz--n- 931.51g 431.51g
  vg3    1   1   0 wz--n- 465.76g 165.76g
[root@darsys12 ~]# lvs
  LV      VG   Attr      LSize   Pool Origin Data%  Move Log Copy%  Convert
  lvabs   vg1  -wi-ao---  10.00g
  lvcache vg1  -wi-ao---  30.00g
  lvshare vg2  -wi------ 500.00g
  lvhome  vg3  -wi-ao--- 300.00g

As a result, no /dev/vg2/lvshare device node exists, and trying to mount it (via fstab) fails.

I can sometimes fix this issue by removing the volume from /etc/fstab,
rebooting, and then manually issuing "vgchange -ay vg2". But even this
doesn't seem to work reliably. It usually tells me it activates 1 volume,
but sometimes /dev/vg2/lvshare still doesn't show up.

Any idea what the problem might be here / how to fix?

Closed by: Dave Reisner (falconindy)
Saturday, 26 October 2013, 14:04 GMT
Reason for closing: Not a bug
Additional comments about closing: Closed at reporter's request
Comment by David Rosenstrauch (darose) - Thursday, 07 March 2013, 16:09 GMT
Perhaps this is a systemd issue?

I see the following messages in dmesg:

[ 6.838330] systemd[1]: Listening on LVM2 metadata daemon socket.
[ 6.838340] systemd[1]: Expecting device dev-sda3.device...
[ 6.838406] systemd[1]: Expecting device dev-sda1.device...
[ 6.838470] systemd[1]: Expecting device dev-vg1-lvcache.device...
[ 6.838539] systemd[1]: Expecting device dev-vg1-lvabs.device...
[ 6.838607] systemd[1]: Expecting device dev-vg3-lvhome.device...
[ 6.838675] systemd[1]: Expecting device dev-sdb1.device...

Note there's no entry for dev-vg2-lvshare.device.
Comment by David Rosenstrauch (darose) - Monday, 06 May 2013, 03:50 GMT
I'm still suffering from this problem. Any idea what the issue might be?
Comment by David Rosenstrauch (darose) - Wednesday, 23 October 2013, 05:47 GMT
I'm actually still (!) having this issue. I've managed to capture a verbose LVM log during a session where I booted into single user mode. It's posted at . Perhaps someone more knowledgeable about LVM than I might be able to look through it and see the issue?

The problem, as before, is that vg2 is not activated at boot, thereby making the /dev/vg2/lvshare volume not accessible.

One thing, at the least, that I notice that's odd in the log is that I never see any line saying "Opened /etc/lvm/backup/vg2", though I do see them for vg1 and vg3.

Any help greatly appreciated!
Comment by Thomas Bächler (brain0) - Wednesday, 23 October 2013, 08:12 GMT
Maybe you should send that log to the dm-devel mailing list; I really can't make sense of why it doesn't activate. It seems to find the volume group just fine.
Comment by David Rosenstrauch (darose) - Thursday, 24 October 2013, 14:49 GMT
Finally figured out what happened here.

The single PV in vg2 is a full-disk device (/dev/sdc), not a partition. However, when I was first initializing the drive to add it to LVM, I accidentally partitioned it. Realizing the mistake, I then deleted the partitions, but that left a drive with an empty partition table rather than a drive with no partition table. So what must be happening is that when LVM scans the drive, it sees that it's partitioned but finds no partitions, and moves on, leaving the VG inactive. When I later tell LVM to activate the group explicitly, it works; it just can't do it automatically.

Wiping out the partition table (i.e., the MBR) on the drive seems to fix the issue, and LVM can now see the VG at boot.

That said, I'm now having another issue: when I boot into single-user mode, LVM works perfectly and mounts this LV from fstab, but when I boot normally it fails, telling me that mount failed / dependency failed / lvmetad timed out waiting for the device. Not sure what the issue is there (some sort of race condition?), but as it's not related to the issue here, I'll follow up separately.