FS#25158 - [mdadm] Does not honour boot raid assignment
Attached to Project:
Arch Linux
Opened by Chris Bannister (Zariel) - Monday, 18 July 2011, 15:26 GMT
Last edited by Tobias Powalowski (tpowa) - Saturday, 17 September 2011, 10:37 GMT
Details
Description:
The latest mdadm does not honour the RAID naming given on the boot line; instead, default device nodes are created in /dev. The command line is parsed correctly, giving a correct mdadm.conf in the /etc directory of the initramfs, but it seems udev is not taking this into account when creating the device nodes.

After the rootfs fails to mount I get dropped to the ramfs shell, where I can see that the following RAID devices have been brought up:

/dev/md123 /dev/md124 /dev/md125 /dev/md126 /dev/md127

If I stop these using mdadm and then execute the following, I get the correct devices in /dev:

mdadm -A --scan

After that I can exit the ramfs shell and continue init.

Additional info:
* package version(s): mdadm 3.2.2-2, udev 171-2, mkinitcpio 0.7.2-1
* Config files:

/boot/grub/menu.lst:
title Arch Linux [/boot/vmlinuz26-lts]
root (hd0,0)
kernel /vmlinuz26-lts root=/dev/md3 footfstype=ext4 ro md=3,/dev/sda3,/dev/sdb3 md=1,/dev/sda1,/dev/sdb1 md=4,/dev/sda4,/dev/sdb4 md=2,/dev/sda2,/dev/sdb2 printk.time=y
initrd /kernel26-lts.img

mdadm.conf is the default; I have also tried a generated config with the arrays already defined, but the same thing happens.

Steps to reproduce:
Add custom RAID naming to the boot line and boot with the mdadm and udev hooks.
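For reference, the manual recovery described above amounts to roughly the following from the ramfs shell (a sketch; the md12x names are the stale nodes listed above):

# Stop the auto-assembled arrays that came up under the wrong names
mdadm --stop /dev/md123 /dev/md124 /dev/md125 /dev/md126 /dev/md127
# Re-assemble them using the mdadm.conf generated in the initramfs /etc
mdadm -A --scan
# Leave the ramfs shell so init can mount /dev/md3 and continue
exit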
Closed by Tobias Powalowski (tpowa)
Saturday, 17 September 2011, 10:37 GMT
Reason for closing: Fixed
Additional comments about closing: mdadm-3.2.2-4
Also, try without the command line, but with an mdadm.conf generated from 'mdadm --examine --scan'.
ARRAY /dev/md3 metadata=0.90 UUID=61187c3e:c8103606:30208963:80a1cffd devices=/dev/sda3,/dev/sdb3
ARRAY /dev/md1 metadata=0.90 UUID=433e62d5:6043c167:83d01d45:2c456f70 devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md4 metadata=0.90 UUID=6babe5be:7142d548:c9601613:be0a959d devices=/dev/sda4,/dev/sdb4
ARRAY /dev/md2 metadata=0.90 UUID=27d84331:8069300a:1344d2dd:98faa476 devices=/dev/sda2,/dev/sdb2
And this command line didn't work either; the same thing happened.
kernel /vmlinuz26-lts root=/dev/md3 ro md=3,/dev/sda3,/dev/sdb3 md=1,/dev/sda1,/dev/sdb1 md=4,/dev/sda4,/dev/sdb4 md=2,/dev/sda2,/dev/sdb2 printk.time=y
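For completeness, a sketch of the regenerate-and-rebuild steps behind the 'mdadm --examine --scan' suggestion above (the kernel26-lts preset name is inferred from the menu.lst and may differ on other setups):

# Append ARRAY lines for the on-disk arrays to the existing default config
mdadm --examine --scan >> /etc/mdadm.conf
# Rebuild the initramfs so the mdadm hook picks up the new config
mkinitcpio -p kernel26-lts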
This can be worked around by adding

[ -f $mdconfig ] && mdadm -A --scan

into the mdadm hook, with mdadm placed before udev, which causes the RAID to be initialized before udev has a chance to do so.
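A rough sketch of that hook change, assuming the mkinitcpio 0.7-style layout of a sourced script defining run_hook under /lib/initcpio/hooks; the path, the mdconfig assignment, and the surrounding lines are illustrative, not the packaged hook:

# /lib/initcpio/hooks/mdadm (illustrative sketch, not the shipped hook)
run_hook ()
{
    # Assume mdconfig points at the mdadm.conf written from the md= parameters
    mdconfig="/etc/mdadm.conf"
    # Assemble the arrays explicitly, before udev auto-assembles them under default names
    [ -f $mdconfig ] && /sbin/mdadm -A --scan
}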
I need to bring up md5 and md6 manually and continue booting from there (a sketch of that manual assembly follows the config below). Even this manual intervention wasn't consistent across reboots; at one point the system booted and I left it in this state.
# cat /etc/mdadm.conf
ARRAY /dev/md1 level=raid0 num-devices=3 metadata=0.90 UUID=fc6fcb91:68a75a29:b88acab5:adec4669
ARRAY /dev/md2 level=raid0 num-devices=3 metadata=0.90 UUID=de4e3121:f431727a:a6faa410:2054c963
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=708ad3f4:71151a9e:2886212e:c7b400f1
ARRAY /dev/md3 level=raid0 num-devices=3 metadata=0.90 UUID=8e83828a:f27d4277:52349015:4ece80f2
ARRAY /dev/md4 level=raid0 num-devices=3 metadata=0.90 UUID=24c0389f:5157f6ca:58a3bbf9:facd50e1
ARRAY /dev/md5 level=raid1 num-devices=2 metadata=0.90 UUID=b5ac45b4:5d186d3c:530cf6cd:548af09e
ARRAY /dev/md6 level=raid1 num-devices=2 metadata=0.90 UUID=78aa9709:15503fd7:35e7b8f4:67d09af7
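For reference, the manual assembly amounts to something like this (a sketch; it assumes the ARRAY lines above are visible to mdadm where the arrays are being brought up):

# Assemble the two missing RAID1 arrays using the ARRAY definitions above
mdadm --assemble /dev/md5
mdadm --assemble /dev/md6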
What's the solution here? Setting udev rules for software raid devices?
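If it does come down to udev rules, a sketch of what such rules might look like is below; the rule file name, the MD_DEVNAME property, and whether mdadm 3.2.2 exports it via --detail --export are all assumptions here, not the fix that eventually shipped in mdadm-3.2.2-4:

# /etc/udev/rules.d/63-md-raid-arrays.rules (hypothetical file name)
# Ask mdadm for the array's metadata and import it as udev properties
SUBSYSTEM=="block", KERNEL=="md*", ENV{DEVTYPE}=="disk", IMPORT{program}="/sbin/mdadm --detail --export $devnode"
# Create a named symlink such as /dev/md/3 alongside the kernel node
SUBSYSTEM=="block", KERNEL=="md*", ENV{MD_DEVNAME}=="?*", SYMLINK+="md/%E{MD_DEVNAME}"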