FS#25158 - [mdadm] Does not honour boot raid assignment

Attached to Project: Arch Linux
Opened by Chris Bannister (Zariel) - Monday, 18 July 2011, 15:26 GMT
Last edited by Tobias Powalowski (tpowa) - Saturday, 17 September 2011, 10:37 GMT
Task Type Bug Report
Category Packages: Core
Status Closed
Assigned To Tobias Powalowski (tpowa)
Thomas Bächler (brain0)
Architecture All
Severity High
Priority Normal
Reported Version
Due in Version Undecided
Due Date Undecided
Percent Complete 100%
Votes 1
Private No

Details

Description:
The latest mdadm does not honour the RAID naming given on the boot line; instead, default device nodes are created in /dev/. The command line is parsed correctly, producing a correct mdadm.conf in the /etc directory of the initramfs, but it seems that udev does not take this into account when creating the device nodes.

After the rootfs fails to mount, I get dropped to the ramfs shell, where I can see that the following RAID devices have been brought up:
/dev/md123
/dev/md124
/dev/md125
/dev/md126
/dev/md127

If I stop these using mdadm and then execute the following, I get the correct devices in /dev:
mdadm -A --scan

And after that I can exit the ramfs shell and continue init.
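
For reference, the manual recovery amounts to roughly the following sketch from the ramfs shell (assuming the stray arrays are the md123-md127 devices listed above):

mdadm --stop --scan   # stop every auto-assembled array
mdadm -A --scan       # reassemble using the names from /etc/mdadm.conf
exit                  # leave the ramfs shell so init can continue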

Additional info:
* package version(s):
mdadm 3.2.2-2
udev 171-2
mkinitcpio 0.7.2-1

* Config files:
/boot/grub/menu.lst:
title Arch Linux [/boot/vmlinuz26-lts]
root (hd0,0)
kernel /vmlinuz26-lts root=/dev/md3 footfstype=ext4 ro md=3,/dev/sda3,/dev/sdb3 md=1,/dev/sda1,/dev/sdb1 md=4,/dev/sda4,/dev/sdb4 md=2,/dev/sda2,/dev/sdb2 printk.time=y
initrd /kernel26-lts.img

mdadm.conf is the default. I have also tried a generated config with the arrays already defined, but the same thing happens.


Steps to reproduce:
Add custom RAID naming to the boot line and boot with the mdadm and udev hooks.
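
For example, with a mkinitcpio.conf HOOKS line along these lines (illustrative only; the reporter's exact hook list is not given in this report):

HOOKS="base udev autodetect sata mdadm filesystems"

and the md= assignments on the kernel line as shown in menu.lst above.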

Closed by  Tobias Powalowski (tpowa)
Saturday, 17 September 2011, 10:37 GMT
Reason for closing:  Fixed
Additional comments about closing:  mdadm-3.2.2-4
Comment by Chris Bannister (Zariel) - Monday, 18 July 2011, 15:45 GMT
Using no boot md assignment and relying on mdadm.conf instead, the devices are not brought up as /dev/md12X; the arrays are named correctly but contain no drives. This time the init script complains that it is unable to determine the rootfstype, which is understandable because the arrays contain no devices. Again, if I stop them all and then run mdadm -A --scan, I can mount /new_root and continue to boot.
Comment by Gerardo Exequiel Pozzi (djgera) - Monday, 18 July 2011, 15:55 GMT
footfstype=ext4 -> rootfstype=ext4 ;)
Comment by Chris Bannister (Zariel) - Monday, 18 July 2011, 17:37 GMT
It makes no difference with or without that argument.
Comment by Thomas Bächler (brain0) - Tuesday, 19 July 2011, 00:22 GMT
Try putting the mdadm hook before the udev hook, then boot with the md= options on the command line.

Also, try without the command line options, but with an mdadm.conf generated from 'mdadm --examine --scan'.
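
For example (illustrative only, not the reporter's actual configuration), that would be roughly:

# /etc/mkinitcpio.conf: mdadm hook before udev
HOOKS="base mdadm udev autodetect sata filesystems"

# generate ARRAY lines from the existing arrays, then rebuild the image
mdadm --examine --scan >> /etc/mdadm.conf
mkinitcpio -p kernel26-lts   # preset name assumed from the menu.lst above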
Comment by Chris Bannister (Zariel) - Tuesday, 19 July 2011, 08:03 GMT
This is the mdadm.conf I tried with that method, and it didn't work; the same thing happened. /dev/md{1,2,3,4} are created but contain no disks and are marked as inactive. /dev/md127 also exists, which contains /dev/sda but is marked inactive.

ARRAY /dev/md3 metadata=0.90 UUID=61187c3e:c8103606:30208963:80a1cffd devices=/dev/sda3,/dev/sdb3
ARRAY /dev/md1 metadata=0.90 UUID=433e62d5:6043c167:83d01d45:2c456f70 devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md4 metadata=0.90 UUID=6babe5be:7142d548:c9601613:be0a959d devices=/dev/sda4,/dev/sdb4
ARRAY /dev/md2 metadata=0.90 UUID=27d84331:8069300a:1344d2dd:98faa476 devices=/dev/sda2,/dev/sdb2

And this command line didn't work either; the same thing happened.

kernel /vmlinuz26-lts root=/dev/md3 ro md=3,/dev/sda3,/dev/sdb3 md=1,/dev/sda1,/dev/sdb1 md=4,/dev/sda4,/dev/sdb4 md=2,/dev/sda2,/dev/sdb2 printk.time=y
Comment by Thomas Bächler (brain0) - Tuesday, 19 July 2011, 10:52 GMT
What about changing the order of the HOOKS?
Comment by Chris Bannister (Zariel) - Tuesday, 19 July 2011, 10:54 GMT
Both attempts were with mdadm before udev.
Comment by Chris Bannister (Zariel) - Sunday, 31 July 2011, 13:54 GMT
Can be temporarily fixed by putting

[ -f $mdconfig ] && mdadm -A --scan

into the mdadm hook, with mdadm before udev; this causes the RAID to be initialized before udev has a chance to.
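
A rough sketch of where that line would go in the hook's run_hook function (the surrounding code is paraphrased, not the actual shipped hook):

# /lib/initcpio/hooks/mdadm (sketch only)
run_hook ()
{
    mdconfig="/etc/mdadm.conf"
    # ... existing code that turns md= kernel options into $mdconfig entries ...

    # workaround: assemble the arrays here, before udev can auto-assemble them
    [ -f $mdconfig ] && mdadm -A --scan
}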
Comment by tiny (tiny) - Thursday, 01 September 2011, 08:20 GMT
This temporary fix doesn't work for me; only the first five RAID devices come up. I need to bring up md5 and md6 manually and continue booting from there (see the sketch at the end of this comment). Even this manual intervention wasn't consistent across reboots; at one point the system booted and I left it in that state.

# cat /etc/mdadm.conf
ARRAY /dev/md1 level=raid0 num-devices=3 metadata=0.90 UUID=fc6fcb91:68a75a29:b88acab5:adec4669
ARRAY /dev/md2 level=raid0 num-devices=3 metadata=0.90 UUID=de4e3121:f431727a:a6faa410:2054c963
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=708ad3f4:71151a9e:2886212e:c7b400f1
ARRAY /dev/md3 level=raid0 num-devices=3 metadata=0.90 UUID=8e83828a:f27d4277:52349015:4ece80f2
ARRAY /dev/md4 level=raid0 num-devices=3 metadata=0.90 UUID=24c0389f:5157f6ca:58a3bbf9:facd50e1
ARRAY /dev/md5 level=raid1 num-devices=2 metadata=0.90 UUID=b5ac45b4:5d186d3c:530cf6cd:548af09e
ARRAY /dev/md6 level=raid1 num-devices=2 metadata=0.90 UUID=78aa9709:15503fd7:35e7b8f4:67d09af7

What's the solution here? Setting udev rules for software raid devices?
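
For reference, the manual step described above would be roughly the following, assuming the mdadm.conf listed here is present in the initramfs:

mdadm --assemble /dev/md5   # identity is taken from /etc/mdadm.conf
mdadm --assemble /dev/md6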
Comment by Tobias Powalowski (tpowa) - Wednesday, 14 September 2011, 09:54 GMT
Should be fixed with 3.2.2-4; please confirm.
Comment by Chris Bannister (Zariel) - Friday, 16 September 2011, 11:05 GMT
Confirmed fixed.
