Message-ID: <73e903671003191126l6c0bed69q69c32bf37922690d@mail.gmail.com>
Date: Fri, 19 Mar 2010 18:26:18 +0000
From: Kristleifur Daðason <kristleifur@...il.com>
To: linux-raid@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: BUG: write data to degraded raid5
On Fri, Mar 19, 2010 at 6:20 PM, Joachim Otahal <Jou@....net> wrote:
> jin zhencheng wrote:
>>
>> hi;
>>
>> I am using kernel 2.6.26.2
>>
>> What I did is as follows:
>>
>> 1. I create a RAID5:
>> mdadm -C /dev/md5 -l 5 -n 4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
>> --metadata=1.0 --assume-clean
>>
>> 2. Write data to this RAID5:
>> dd if=/dev/zero of=/dev/md5 bs=1M &
>>
>> 3. mdadm --manage /dev/md5 -f /dev/sda
>>
>> 4. mdadm --manage /dev/md5 -f /dev/sdb
>>
>> After I fail the second disk, the kernel displays an Oops and goes down.
>>
>> Does somebody know why?
>>
>> Is this an MD/RAID5 bug?
>>
>
> RAID5 can only tolerate the failure of ONE drive out of ALL its members. If
> you want to be able to fail two drives, you will have to use RAID6, or RAID5
> with one hot-spare (and give it time to rebuild before failing the second
> drive). PLEASE read the documentation on RAID levels, for example on Wikipedia.
>
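(Editorial aside: the single-failure limit Joachim describes follows from how RAID5 parity works. A minimal Python sketch of the idea, using illustrative two-byte "drives" rather than md's actual striped, rotating-parity layout:)

```python
# RAID5-style parity is the XOR of all data blocks: one lost block can be
# rebuilt from the survivors plus parity, but two lost blocks cannot.
# (Illustrative only; real md stripes data and rotates parity across drives.)

data = [b"\x01\x02", b"\x10\x20", b"\xaa\xbb"]        # three "data drives"
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))   # the "parity drive"

# Fail one data drive: XOR the remaining data with parity to rebuild it.
lost = data[1]
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(data[0], data[2], parity))
assert rebuilt == lost  # one failure: fully recoverable

# Fail two drives: one XOR equation, two unknowns -- the data is gone.
# md should mark the array failed at that point, not oops.
```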
That is true,
but should we get a kernel oops and crash if two RAID5 drives are
failed? (THAT part looks like a bug!)
Jin, can you try a newer kernel, and a newer mdadm?
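If it helps, the steps above can be reproduced on loop devices instead of real disks. This is a sketch, not a tested recipe: it assumes root, mdadm, and losetup, and the device names and image sizes are illustrative.

```shell
# Reproduce the reported sequence on loop devices (run as root; destructive
# to /dev/md5 and the chosen loop devices -- illustrative sketch only).
for i in 0 1 2 3; do
  truncate -s 100M /tmp/raid$i.img
  losetup /dev/loop$i /tmp/raid$i.img
done

mdadm -C /dev/md5 -l 5 -n 4 --metadata=1.0 --assume-clean \
  /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

dd if=/dev/zero of=/dev/md5 bs=1M &    # keep writes in flight

mdadm --manage /dev/md5 -f /dev/loop0  # first failure: array degraded
mdadm --manage /dev/md5 -f /dev/loop1  # second failure: should fail the
                                       # array cleanly, not oops the kernel
```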
-- Kristleifur