Message-ID: <oru04wfakx.fsf@free.oliva.athome.lsd.ic.unicamp.br>
Date: Tue, 01 Aug 2006 17:46:38 -0300
From: Alexandre Oliva <aoliva@...hat.com>
To: Neil Brown <neilb@...e.de>
Cc: Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org,
linux-raid@...r.kernel.org
Subject: Re: let md auto-detect 128+ raid members, fix potential race condition

On Aug 1, 2006, Alexandre Oliva <aoliva@...hat.com> wrote:

>> I'll give it a try some time tomorrow, since I won't turn on that
>> noisy box today any more; my daughter is already asleep :-)

> But then, I could use my own desktop to test it :-)

But then, I wouldn't be testing quite the same scenario.

My boot-required RAID devices were all raid 1, whereas the larger,
separate volume group was all raid 6.

Using the mkinitrd patch that I posted before, mdadm did try to bring
up all the raid devices but, because the raid456 module was not
loaded in the initrd, the raid 6 devices were left inactive.  Then,
when rc.sysinit tried to bring them up with mdadm -A -s, that did
nothing to the inactive devices, since they were already assembled
and so didn't need assembling again.  Adding --run didn't help.

My current work-around is to add raid456 to the initrd, but that's ugly.
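
For the record, that work-around is just a matter of regenerating the
initrd with the extra module forced in; something along these lines,
assuming Fedora's mkinitrd and its --with option (the image path and
kernel version are only an example):

    # Rebuild the initrd with raid456 included; -f overwrites an
    # existing image.  Adjust the path and kernel version to taste.
    mkinitrd --with=raid456 -f /boot/initrd-2.6.17.img 2.6.17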

Scanning /proc/mdstat for inactive devices in rc.sysinit and doing
mdadm --run on them is feasible (see the sketch below), but it looks
ugly and error-prone.
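
Roughly, that would amount to something like this in rc.sysinit; it's
only a sketch, trusting the usual /proc/mdstat layout and making no
attempt to be robust:

    # Start any arrays that were assembled but left inactive
    # (e.g. because the personality module wasn't loaded yet).
    for md in $(awk '$2 == ":" && $3 == "inactive" {print $1}' /proc/mdstat); do
        mdadm --run /dev/$md
    done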

Would it be reasonable to change mdadm so as to, erhm, disassemble ;-)
the raid devices it tried to bring up but that, for whatever reason,
it couldn't activate?  (Say, missing module, not enough members,
whatever.)
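
In shell terms, what I'm asking for is roughly the equivalent of
mdadm doing this itself after a failed activation (the device name is
just an example):

    # If the newly assembled array can't be started, tear it down
    # again instead of leaving it sitting around inactive.
    mdadm --run /dev/md3 || mdadm --stop /dev/md3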

--
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Secretary for FSF Latin America http://www.fsfla.org/
Red Hat Compiler Engineer aoliva@...dhat.com, gcc.gnu.org}
Free Software Evangelist oliva@...d.ic.unicamp.br, gnu.org}