FS#18440 - [initscripts] Activating RAID arrays incorrectly reports [FAIL]

Attached to Project: Arch Linux
Opened by Carlos Mennens (carlwill) - Tuesday, 23 February 2010, 19:55 GMT
Last edited by Andrea Scarpino (BaSh) - Saturday, 19 March 2011, 15:11 GMT
Task Type: Bug Report
Category: Packages: Core
Status: Closed
Assigned To: Tobias Powalowski (tpowa)
             Thomas Bächler (brain0)
Architecture: All
Severity: Low
Priority: Normal
Reported Version:
Due in Version: Undecided
Due Date: Undecided
Percent Complete: 100%
Votes: 10
Private: No

Details

Description: During the boot process of Arch, mdadm reports [FAIL] during the "Activating RAID arrays" step in rc.sysinit, even though the arrays are assembled and work correctly.


Additional info:
* package version(s) mdadm 3.1.1-2
* config and/or log files etc.

* Please see the following URL on the forums:

http://bbs.archlinux.org/viewtopic.php?id=90415


Steps to reproduce: I have followed the Wiki steps to set up 'mdadm' on RAID1 and RAID5 systems. Every system that runs Arch Linux and mdadm has the same problem, regardless of RAID level.

Here is my /etc/mdadm.conf:
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=ea294b49:0648ce97:da5c2305:2ea57b51
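
For reference, an ARRAY line like the one above is usually generated from the running arrays; a minimal sketch, assuming the arrays are already assembled:

# print ARRAY definitions for all running arrays and append them to the config
/sbin/mdadm --detail --scan >> /etc/mdadm.conf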

My /etc/rc.conf file is attached:
   rc.conf (3.1 KiB)

Closed by  Andrea Scarpino (BaSh)
Saturday, 19 March 2011, 15:11 GMT
Reason for closing:  Fixed
Additional comments about closing: Newer kernels auto-start md devices. mdadm can be disabled in initscripts, letting the kernel do its job, via the new upstream option USEMDADM=no.
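
For reference, a minimal sketch of using that option, assuming the variable is read from /etc/rc.conf like the other USE* switches (the commit linked in the last comment is the authoritative source):

# /etc/rc.conf
# Skip the explicit "mdadm --assemble --scan" in rc.sysinit and leave array
# assembly to the kernel/udev.
USEMDADM="no"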
Comment by Matthew Gyurgyik (pyther) - Tuesday, 23 February 2010, 20:00 GMT

Comment by Tony Thedford (tonythed) - Tuesday, 06 July 2010, 04:17 GMT
I believe that this bug may be caused by the following in rc.sysinit (my version is initscripts 2010.06-2):

# If necessary, find md devices and manually assemble RAID arrays
if [ -f /etc/mdadm.conf -a "$(/bin/grep ^ARRAY /etc/mdadm.conf 2>/dev/null)" ]; then
        status "Activating RAID arrays" /sbin/mdadm --assemble --scan
fi

The problem arises when the root filesystem is on a RAID array, which leads to
all RAID arrays being assembled during the initrd boot phase. Afterwards, the
script snippet above runs and tries to assemble an already assembled array set,
so /sbin/mdadm returns a non-zero exit code and the status shows as FAILED.
This is not really a big problem, just annoying.
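
For illustration, here is a simplified sketch of the status() pattern rc.sysinit uses (not the exact initscripts implementation); it shows why a harmless non-zero exit from mdadm ends up on screen as [FAIL]:

# simplified stand-in for the initscripts status() helper: it only looks at
# the exit code of the command it runs
status() {
    echo -n "$1 "
    shift
    if "$@" >/dev/null 2>&1; then
        echo "[DONE]"
    else
        echo "[FAIL]"
    fi
}

# On a box whose arrays were already assembled by the initrd, mdadm exits
# non-zero (e.g. 2), so this prints [FAIL] even though the arrays are fine.
status "Activating RAID arrays" /sbin/mdadm --assemble --scan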


Comment by ChrisVS (Strider) - Sunday, 25 July 2010, 21:49 GMT
I can confirm Tony's comment: this command generates exit code 2 and hence fails. By the way, this is annoying: I just spent 2 hours looking for what was wrong with my RAID array until I stumbled on the above-mentioned thread in the forum.
Comment by Gerardo Exequiel Pozzi (djgera) - Monday, 23 August 2010, 01:36 GMT
Maybe this step should be skipped if udev is running, guarded by some other check, or removed from rc.sysinit entirely.
This step is already done via udev rules: md devices are automatically assembled incrementally (mdadm -I).
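
For illustration, this is roughly the operation those udev rules trigger for each newly appearing RAID member device (the device name below is only an example; the actual rule file shipped with mdadm wires this up via a RUN rule):

# incremental assembly: add one member device to the array it belongs to;
# the array is started once enough members have appeared
/sbin/mdadm --incremental /dev/sdb1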
Comment by Linas (Linas) - Friday, 10 September 2010, 00:24 GMT
It's not due to the root filesystem alone (or not only that; the scope may have broadened). Unhiding the output shows mdadm trying to assemble and complaining about busy devices, wrong UUIDs and so on, but in the end everything is assembled and mounts perfectly.
Comment by Niki Kovacs (kikinovak) - Saturday, 11 December 2010, 07:28 GMT
I just happened to have the same problem. I admit it's not a "serious" bug in that it doesn't break anything. But I lost a few unnerving hours trying to figure out why my RAID array "FAILS"... before stumbling over this bug report. In the meantime, a workaround consists in commenting out the lines related to mdadm in rc.sysinit. But that's a workaround, not a solution.
Comment by Hi (raylz) - Friday, 07 January 2011, 22:34 GMT
I can confirm this with an up-to-date system. RAID works fine, though.
Comment by Carlos Mennens (carlwill) - Saturday, 08 January 2011, 15:49 GMT
Is there any way to correct this? I think everyone understands and agrees this has no impact on system performance or the RAID itself, but it just looks bad when a system I dedicate all my time to has a big red FAIL listed during boot. I would think there has to be some way to correct or patch this...
Comment by silvik (silvik) - Sunday, 23 January 2011, 10:23 GMT
I'm not sure what activates the RAID arrays, but on my machine I can comment out the section that is supposed to do this

# If necessary, find md devices and manually assemble RAID arrays
if [ -f /etc/mdadm.conf -a "$(/bin/grep ^ARRAY /etc/mdadm.conf 2>/dev/null)" ]; then
status "Activating RAID arrays" /sbin/mdadm --assemble --scan
fi

and everything's fine. So the arrays are already active (/ and /boot by the initrd, the others by something else, maybe udev?).
Is it really needed?

Anyway, if it is needed, we can add a check to see whether some arrays are not assembled yet before running mdadm --assemble. (I have no idea how to make a proper patch; here's a simple diff.)

140c140,143
< status "Activating RAID arrays" /sbin/mdadm --assemble --scan
---
> if [ $(/bin/grep ^ARRAY /etc/mdadm.conf | /bin/grep -o "md[[:digit:]]" | /usr/bin/sort | /bin/tr -d "\n") != \
> $(/bin/grep active /proc/mdstat | /bin/grep -o "md[[:digit:]]" | /usr/bin/sort | /bin/tr -d "\n") ]; then
> status "Activating RAID arrays" /sbin/mdadm --assemble --scan
> fi
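
For illustration, the same idea as a standalone snippet (not the patch that went upstream; it keeps the paths and md-name matching from the diff above, with quoting added so the test does not break on empty lists):

# only run the assemble step when the arrays named in /etc/mdadm.conf do not
# all show up as active in /proc/mdstat yet
wanted=$(/bin/grep '^ARRAY' /etc/mdadm.conf | /bin/grep -o 'md[0-9]\+' | /usr/bin/sort -u)
active=$(/bin/grep ' : active' /proc/mdstat | /bin/grep -o '^md[0-9]\+' | /usr/bin/sort -u)
if [ "$wanted" != "$active" ]; then
    status "Activating RAID arrays" /sbin/mdadm --assemble --scan
fi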
Comment by Sébastien Luttringer (seblu) - Friday, 04 March 2011, 11:58 GMT
The following patch, which is now upstream, lets you set USEMDADM=no to fix your issue.

http://projects.archlinux.org/initscripts.git/commit/?id=8b2bfa7bd0073a3a9d389242faf16483c9ec5336
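
A minimal sketch of what the guarded step can look like with that option in place (illustration only; the exact variable handling in the linked commit may differ):

# If enabled, find md devices and manually assemble RAID arrays;
# USEMDADM=no skips this and leaves assembly to the kernel/udev.
if [ "$USEMDADM" != "no" -a -f /etc/mdadm.conf ] \
        && /bin/grep -q '^ARRAY' /etc/mdadm.conf; then
    status "Activating RAID arrays" /sbin/mdadm --assemble --scan
fi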
