Date:	Sun, 21 Mar 2010 18:29:53 +0800
From:	jin zhencheng <zhenchengjin@...il.com>
To:	Joachim Otahal <Jou@....net>
Cc:	Kristleifur Daðason <kristleifur@...il.com>,
	neilb@...e.de, linux-raid@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: BUG:write data to degrade raid5

Hi Joachim Otahal,

Thanks for your test on "Debian 2.6.26-21lenny4".
If you want to see the oops, keep writing to the RAID5 continuously and pull
2 disks out; then you may see the error.

I think that no matter what I do, even if I pull out all the disks, the
kernel should not oops.
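
For reference, this is the reproduction sequence, collected in order from the
original report quoted below (same devices, whole disks, kernel 2.6.26.2):

  # create a 4-disk RAID5 from whole disks, metadata 1.0, marked clean
  mdadm -C /dev/md5 -l 5 -n 4 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
        --metadata=1.0 --assume-clean

  # keep a write running against the array in the background
  dd if=/dev/zero of=/dev/md5 bs=1M &

  # fail two members while the write is still running; the second failure
  # takes the array below what RAID5 can survive and triggers the oops
  mdadm --manage /dev/md5 -f /dev/sda
  mdadm --manage /dev/md5 -f /dev/sdb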


On Sat, Mar 20, 2010 at 2:37 AM, Joachim Otahal <Jou@....net> wrote:
> Kristleifur Daðason wrote:
>>
>> On Fri, Mar 19, 2010 at 6:20 PM, Joachim Otahal <Jou@....net> wrote:
>>
>>    jin zhencheng wrote:
>>
>>        Hi,
>>
>>        I am using kernel 2.6.26.2.
>>
>>        What I do is as follows:
>>
>>        1. I create a RAID5:
>>        mdadm -C /dev/md5 -l 5 -n 4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
>>        --metadata=1.0 --assume-clean
>>
>>        2. dd if=/dev/zero of=/dev/md5 bs=1M &
>>
>>        to write data to this RAID5
>>
>>        3. mdadm --manage /dev/md5 -f /dev/sda
>>
>>        4. mdadm --manage /dev/md5 -f /dev/sdb
>>
>>        If I fail 2 disks, the OS kernel displays an oops error and the
>>        kernel goes down.
>>
>>        Does somebody know why?
>>
>>        Is this an MD/RAID5 bug?
>>
>>
>>    RAID5 can only tolerate the failure of ONE drive out of ALL its
>>    members. If you want to be able to fail two drives, you will have to
>>    use RAID6, or RAID5 with one hot spare (and give it time to rebuild
>>    before failing the second drive); see the mdadm sketch after the
>>    quoted thread.
>>    PLEASE read the documentation on RAID levels, for example on Wikipedia.
>>
>>
>> That is true,
>>
>> but should we get a kernel oops and crash if two RAID5 drives are failed?
>> (THAT part looks like a bug!)
>>
>> Jin, can you try a newer kernel, and a newer mdadm?
>>
>> -- Kristleifur
>
> You are probably right.
> My kernel version is "Debian 2.6.26-21lenny4", and I had no oopses during my
> hot-plug testing on the hardware I use md on. I think it may be the driver
> for his chips.
>
> Jin:
>
> Did you really use whole drives for testing, or loopback files, or
> partitions on the drives? I never did my hot-plug testing with whole drives
> in an array, only with partitions.
>
> Joachim Otahal
>
>
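
For completeness, here is a rough sketch of the two alternatives Joachim
mentions above, as mdadm create commands (device and array names are reused
from the report purely for illustration; a sketch, not something tested on
that kernel):

  # RAID6 over the same four disks: survives the loss of any two members
  mdadm -C /dev/md5 -l 6 -n 4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

  # RAID5 over three disks plus one hot spare: the spare only helps once
  # the rebuild after the first failure has completed, so a second drive
  # still cannot be failed immediately
  mdadm -C /dev/md5 -l 5 -n 3 -x 1 /dev/sda /dev/sdb /dev/sdc /dev/sdd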
