Message-Id: <200707121521.21407.LinuxKernel@jamesplace.net>
Date: Thu, 12 Jul 2007 15:21:21 -0500
From: James <LinuxKernel@...esplace.net>
To: linux-kernel@...r.kernel.org
Subject: Re: Problem recovering a failed RAID5 array with 4 drives.
> On Thu, Jul 12, 2007 at 08:49:15AM -0500, James wrote:
> > My apologies if this is not the correct forum. If there is a better place
> > to post this, please advise.
> >
> >
> > Linux localhost.localdomain 2.6.17-1.2187_FC5 #1 Mon Sep 11 01:17:06 EDT
> > 2006 i686 i686 i386 GNU/Linux
> >
> > (I was planning to upgrade to FC7 this weekend, but that is currently on
> > hold because of the following.)
> >
> > I've got a problem with a software RAID5 array managed by mdadm.
> > Drive sdc failed, causing sda to appear failed as well. Both drives were
> > marked as 'spare'.
> >
> > What follows is a record of the steps I've taken and the results. I'm
> > looking for some direction/advice to get the data back.
> >
> >
> > I've tried a few cautious things to bring the array back up with the
> > three good drives, with no luck.
> >
> > The last thing I attempted had some limited success. I was able to get all
> > the drives powered up. I checked the Event count on the three good drives
> > and they were all equal, so I assumed it would be safe to do the following.
> > I hope I was not wrong. I issued the following commands to try to bring
> > the array into a usable state.
> >
> > []# mdadm --create --verbose /dev/md0 --assume-clean --level=raid5 \
> >       --raid-devices=4 --spare-devices=0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
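[Editorial aside: before a re-create like this, the usual sanity check is to confirm the members' superblock event counters agree, as the poster says he did. A minimal sketch of that check, assuming the device names from the post; --examine only reads the superblocks, so it is safe to run:]

```shell
# Sketch only: read each remaining member's superblock and compare the
# Events counters.  --examine is read-only.  Device names are taken
# from the post; adjust for your system.
for dev in /dev/sda1 /dev/sdb1 /dev/sdd1; do
    echo "== $dev =="
    mdadm --examine "$dev" | grep -E 'Events|State'
done
```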
>
> Don't you want assemble rather than create if it already exists?
>
> How did two drives fail at the same time? Are you running PATA drives
> with two drives on a single cable? That is a no-no for RAID. PATA
> drive failures often take out the bus, and you never want two drives in a
> single RAID to share an IDE bus.
>
> You probably want to try to assemble the non-failed drives, and then
> add in the new replacement drive afterwards, since after all it is NOT
> clean. Hopefully the RAID will accept sda back even though it appeared
> failed. Then you can add the new sdc to resync the RAID.
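[Editorial aside: Len's suggestion might look something like the following. This is a sketch, not a tested recipe for this array; the device names come from the thread, and the assumption is that the replacement disk also appears as sdc1:]

```shell
# Sketch: assemble degraded from the three good members only; --run
# lets md start with 3 of 4 devices present.
mdadm --assemble --force --run /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdd1

# If it started, /proc/mdstat should show md0 running degraded.
grep -A 2 '^md0' /proc/mdstat

# Hot-add the replacement disk and let the array resync onto it.
mdadm --manage /dev/md0 --add /dev/sdc1
```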
>
> --
> Len Sorensen
>
I should have included more information. When I attempted to --assemble the
array, I received the following:

[]# mdadm --assemble [--force --run] /dev/md0 /dev/sda1 /dev/sdb1 [/dev/sdc1] /dev/sdd1
mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
From what I read, I assumed I could use the --assume-clean option with
--create to bring the array back into at least some semblance of working order.
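[Editorial aside: after a --create --assume-clean, the safest next step is usually to verify the array strictly read-only before writing anything, since a wrong member order would silently scramble the data. A hedged sketch; the mount point /mnt and the presence of an ext2/3 filesystem are assumptions:]

```shell
# Sketch: inspect the re-created array and check it read-only before
# trusting it.  Nothing here writes to the array.
mdadm --detail /dev/md0 | grep -E 'State|Events|Raid Devices'
fsck -n /dev/md0              # -n: report problems, change nothing
mount -o ro /dev/md0 /mnt     # read-only mount; /mnt is an assumption
```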
I'd like to recover as much as possible from the RAID array. I actually have a
nice new SATA configuration sitting here waiting to receive the data. This
thing failed a day too early. I'm gnashing my teeth over this one.
I'd truly appreciate any help/advice.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/