FS#18440 - [initscripts] Activating RAID arrays believes to [FAIL]
Attached to Project:
Arch Linux
Opened by Carlos Mennens (carlwill) - Tuesday, 23 February 2010, 19:55 GMT
Last edited by Andrea Scarpino (BaSh) - Saturday, 19 March 2011, 15:11 GMT
Details
Description: During the boot process of Arch, mdadm reports [FAIL] during the "Activating RAID arrays" step.
Additional info:
* package version(s): mdadm 3.1.1-2
* config and/or log files: please see the following URL on the forums: http://bbs.archlinux.org/viewtopic.php?id=90415

Steps to reproduce:
I have followed the wiki steps to use 'mdadm' on RAID1 and RAID5 systems. Every system that runs Arch Linux and mdadm has the same problem, regardless of RAID level.

Here is my /etc/mdadm.conf:
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=ea294b49:0648ce97:da5c2305:2ea57b51

My /etc/rc.conf file is attached.
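For context (an editorial aside, not part of the original report): an ARRAY line like the one above is normally generated from the running arrays with mdadm itself, for example:

# Scan the currently running arrays and append matching ARRAY lines to mdadm.conf
/sbin/mdadm --detail --scan >> /etc/mdadm.conf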
Closed by Andrea Scarpino (BaSh)
Saturday, 19 March 2011, 15:11 GMT
Reason for closing: Fixed
Additional comments about closing: Newer kernels auto-start md devices. mdadm can be disabled in initscripts, letting the kernel do its job, via the new upstream option USEMDADM=no.
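As a rough sketch of what that option looks like in practice (the file holding it is not named in this report; it is assumed to sit alongside the other initscripts switches):

# Assumed initscripts setting: skip the mdadm assemble step and let the
# kernel/udev auto-start the md devices on their own
USEMDADM=no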
Might want to assign this to Tobias P (tpowa)
# If necessary, find md devices and manually assemble RAID arrays
if [ -f /etc/mdadm.conf -a "$(/bin/grep ^ARRAY /etc/mdadm.conf 2>/dev/null)" ]; then
    status "Activating RAID arrays" /sbin/mdadm --assemble --scan
fi
The problem arises when the root filesystem is on a RAID array, which means all RAID arrays get assembled during the initrd boot phase. The script snippet above then runs and tries to assemble an already-assembled set of arrays, so /sbin/mdadm returns a failing exit code and the status shows as FAILED. This is not really a big problem, just annoying.
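To illustrate the mechanism (a hedged sketch, not output taken from this report): re-running the scan assemble on a box whose arrays are already active makes mdadm exit non-zero, and that exit status is all the status helper looks at.

# Arrays were already assembled by the initrd, so there is nothing left to do
/sbin/mdadm --assemble --scan
echo $?   # non-zero here is what rc.sysinit reports as [FAIL]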
This step is already done via udev rules; md devices are automatically assembled incrementally (mdadm -I).
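For reference, the incremental assembly is driven by a rule along these lines (a sketch of a typical mdadm udev rule, not copied from this bug report; match keys and paths may differ):

# Sketch: whenever a block device carrying RAID metadata appears, feed it to mdadm -I
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", RUN+="/sbin/mdadm --incremental $env{DEVNAME}"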
# If necessary, find md devices and manually assemble RAID arrays
if [ -f /etc/mdadm.conf -a "$(/bin/grep ^ARRAY /etc/mdadm.conf 2>/dev/null)" ]; then
status "Activating RAID arrays" /sbin/mdadm --assemble --scan
fi
Without that snippet, everything's fine. So the arrays are already active (/ and /boot by the initrd, the others by something else, maybe udev?).
Is it really needed?
Anyway, if it is needed, we can add a check to see whether some arrays are not assembled yet before running mdadm --assemble. (I have no idea how to make a proper patch; here's a simple diff.)
140c140,143
< status "Activating RAID arrays" /sbin/mdadm --assemble --scan
---
> if [ "$(/bin/grep ^ARRAY /etc/mdadm.conf | /bin/grep -o "md[[:digit:]]\+" | /usr/bin/sort | /bin/tr -d "\n")" != \
>      "$(/bin/grep active /proc/mdstat | /bin/grep -o "md[[:digit:]]\+" | /usr/bin/sort | /bin/tr -d "\n")" ]; then
> status "Activating RAID arrays" /sbin/mdadm --assemble --scan
> fi
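A cruder variant of the same idea (hypothetical, not taken from the report): skip the scan assemble entirely whenever /proc/mdstat already lists active md devices.

# Hypothetical simpler check: only scan-assemble if no md device is active yet
if [ -f /etc/mdadm.conf ] && /bin/grep -q "^ARRAY" /etc/mdadm.conf 2>/dev/null \
   && ! /bin/grep -q "^md" /proc/mdstat 2>/dev/null; then
    status "Activating RAID arrays" /sbin/mdadm --assemble --scan
fi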
http://projects.archlinux.org/initscripts.git/commit/?id=8b2bfa7bd0073a3a9d389242faf16483c9ec5336