FS#18153 - [lvm2] File-based locking and dmeventd error in 2.02.60-2
Attached to Project:
Arch Linux
Opened by Michael Trunner (trunneml) - Wednesday, 03 February 2010, 22:47 GMT
Last edited by Eric Belanger (Snowman) - Sunday, 13 March 2011, 05:44 GMT
Details
Description:
On startup LVM2 gives the following error message:

File-based locking initialisation failed.
Child exited with code 5
Unable to start dmeventd.

The system otherwise works just fine.

Additional info:
* lvm2 package version is 2.02.60-2
* the root filesystem is on lvm2 too
* there is no problem when I downgrade to the old version, lvm2 2.02.53

Steps to reproduce:
Just start Arch Linux with lvm2 2.02.60-2 installed.
This task depends upon
Closed by Eric Belanger (Snowman)
Sunday, 13 March 2011, 05:44 GMT
Reason for closing: Fixed
Additional comments about closing: fixed in initscripts 2011.02.1-1
> Child exited with code 5
> Unable to start dmeventd.
So when I remove all LVM mirrors from the volume group, it works.
It seems that dmeventd is not started.
Where is the startup script?
In version 2.02.53 the message was verbose output to stdout, so with "> /dev/null" nobody saw it.
In version 2.02.60 it is an error written to stderr, so "> /dev/null" no longer hides this message.
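To illustrate (a minimal shell sketch, not the actual rc.sysinit code):

# "> /dev/null" redirects only stdout; messages on stderr still reach the console:
vgchange -a y > /dev/null
# Silencing stderr as well would require an explicit redirection:
vgchange -a y > /dev/null 2>&1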
Source:
dmeventd.h:#define EXIT_OPEN_PID_FAILURE 5
So exit code 5 means dmeventd could not open its PID file, presumably because /var/run is still read-only this early in boot.
If you run any LVM command after the root fs is mounted read-write, dmeventd starts as it should.
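For example, this can be verified by hand (these exact commands are my suggestion, not from the report):

# Re-enable monitoring for all volume groups once the root fs is read-write;
# this should start dmeventd:
/sbin/vgchange --monitor y
# Check that the daemon is now running:
pgrep dmeventd && echo "dmeventd is running"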
I've created a patch for rc.sysinit.
As your system works normally, this is not critical, but I guess the dmeventd monitoring is there for a reason, so we should allow it to be started.
Anyway, I don't know what happens when dmeventd is started later or never, but in the old version of lvm2 (2.02.53) dmeventd was deactivated, so it can't be that important for a system.
See FS#18157. If we look at lvm.conf (/etc/lvm/lvm.conf), we can see there is a setting for where the lock files are kept, and a comment noting that the lock files may live on a temporary filesystem.
I don't think that for this issue alone (dmeventd) we need to mount /var/{lock,run} as a separate filesystem and make it read-write before mounting the rest of the filesystems; perhaps the better idea is to move the LVM lock file onto a tmpfs.
In FS#18157 there is a discussion about putting /var/{lock,run} on a tmpfs. As this bug report is LVM-specific, perhaps putting just the LVM lock directory on a tmpfs is the solution.
The lvm.conf comments also state that it's okay to put the lock files on a tmpfs, and the option is available in lvm.conf.
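For reference, the setting in question looks roughly like this in lvm.conf (a sketch with the default path, not the full file):

global {
    # Local directory that holds the file-based locks.
    # The stock comment notes this may be on a temporary filesystem.
    locking_dir = "/var/lock/lvm"
}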
Has there been any progress in the last couple of weeks?
In lvm.conf there's a comment that says it's possible to put the lock file on a temporary filesystem,
so I've added a line to /etc/rc.sysinit which mounts /tmp read-write on startup:
/bin/mount -n -t tmpfs none /tmp -o rw,mode=0755
right after "run_hook sysinit_start".
It seems to work, no errors whatsoever.
It does not matter if the lock file gets deleted on reboot; it's only there to prevent running another LVM command session concurrently.
This also happens with the following combination:
device-mapper 2.02.69-1 + lvm2 2.02.69-1 (both updated on July 9 2010) + kernel26-lts 2.6.32.15-1.
No symlink is created in /dev/mapper/..... and no volume group partitions can be mounted.
With kernel26 2.6.34.1-1 it works normally.
--sysinit
Indicates that vgchange(8) is being invoked from early system initialisation scripts (e.g. rc.sysinit or an initrd), before writeable filesystems are available. As such, some functionality needs to be disabled and this option acts as a shortcut which selects an appropriate set of options. Currently this is equivalent to using --ignorelockingfailure, --ignoremonitoring, --poll n and setting LVM_SUPPRESS_LOCKING_FAILURE_MESSAGES environment variable.
I do not use LVM, so I don't know what other changes have to be implemented to follow this, but at least it looks like upstream is aware of the problem and has solved it without requiring partitions to be mounted rw. I guess other distros will have solved the problem, maybe worth a look?
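A hedged sketch of how an early-boot invocation using this option might look (the placement in rc.sysinit and the flags besides --sysinit are assumptions):

# Root filesystem may still be read-only here, so activate volume groups
# with file-based locking and monitoring disabled via --sysinit:
/sbin/modprobe -q dm-mod
/sbin/vgchange --sysinit -a y >/dev/null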
Today I applied the following patches from git:
- b4c804d60d6e8361db3f19bf3a2fa6fb58ee8458 (--sysinit in rc.sysinit and rc.shutdown)
- feef447b8368244525dd98582b662a369098b2f7 (monitoring in rc.sysinit and rc.shutdown)
No further error/warning messages from LVM2.
Everything looks good to me (using luks on lvm2).
With kind regards,
# kraM