Message-ID: <alpine.DEB.2.00.0908241638551.28411@asgard.lang.hm>
Date:	Mon, 24 Aug 2009 16:42:52 -0700 (PDT)
From:	david@...g.hm
To:	Zan Lynx <zlynx@....org>
cc:	Ric Wheeler <rwheeler@...hat.com>, Pavel Machek <pavel@....cz>,
	Theodore Tso <tytso@....edu>, Florian Weimer <fweimer@....de>,
	Goswin von Brederlow <goswin-v-b@....de>,
	Rob Landley <rob@...dley.net>,
	kernel list <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...l.org>, mtk.manpages@...il.com,
	rdunlap@...otime.net, linux-doc@...r.kernel.org,
	linux-ext4@...r.kernel.org
Subject: Re: [patch] ext2/3: document conditions when reliable operation is
 possible

On Mon, 24 Aug 2009, Zan Lynx wrote:

> Ric Wheeler wrote:
>> Pavel Machek wrote:
>>> Degraded MD RAID5 does not work by design; whole stripe will be
>>> damaged on powerfail or reset or kernel bug, and ext3 can not cope
>>> with that kind of damage. [I don't see why statistics should be
>>> necessary for that; the same way we don't need statistics to see that
>>> ext2 needs fsck after powerfail.]
>>>                                     Pavel
>>> 
>> What you are describing is a double failure and RAID5 is not double failure 
>> tolerant regardless of the file system type....
>
> Are you sure he isn't talking about how RAID must write all the data chunks 
> to make a complete stripe and if there is a power-loss, some of the chunks 
> may be written and some may not?

a write to raid 5 doesn't need to write to all drives, but it does need to 
write to two drives (the drive you are modifying and the parity drive)
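
to make that concrete, here's a rough python sketch of the 
read-modify-write parity math (illustrative only, not the actual md 
implementation, which lives in drivers/md/raid5.c):

# raid 5 read-modify-write: only the changed data chunk and the parity
# chunk get rewritten; new parity = old parity XOR old data XOR new data

def xor_chunks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def rmw_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """new parity chunk after overwriting one data chunk in the stripe."""
    return xor_chunks(xor_chunks(old_parity, old_data), new_data)

# three data chunks plus parity
d0, d1, d2 = b"\x01" * 4, b"\x02" * 4, b"\x04" * 4
parity = xor_chunks(xor_chunks(d0, d1), d2)

new_d1 = b"\xff" * 4
new_parity = rmw_parity(d1, new_d1, parity)

# the incremental update matches a full recompute of the stripe
assert new_parity == xor_chunks(xor_chunks(d0, new_d1), d2)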

if you are not degraded and only succeed on one write, you will detect the 
corruption later when you try to verify the data.

if you are degraded and only succeed on one write, then the entire stripe 
gets corrupted.

but this is a double failure (one drive + unclean shutdown)
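
the two cases as a toy simulation (a sketch assuming a three-disk 
array, two data chunks plus parity; the layout is made up for 
illustration):

# torn write: the new data chunk reaches its disk, but the matching
# parity write is lost in the crash

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d0, d1 = b"\xaa" * 4, b"\x55" * 4
parity = xor(d0, d1)

new_d1 = b"\x0f" * 4       # the data write that made it to disk
stale_parity = parity      # the parity write that didn't

# not degraded: a later verify/scrub recomputes parity and flags the
# stripe as inconsistent
assert xor(d0, new_d1) != stale_parity

# degraded (d0's disk is gone): d0 has to be rebuilt from the surviving
# chunks, and with stale parity the result is silently wrong
rebuilt_d0 = xor(stale_parity, new_d1)
assert rebuilt_d0 != d0    # corrupted, with nothing left to catch it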

if you have battery-backed cache you will finish the writes when you 
reboot.

if you don't have battery-backed cache (or are using software raid and 
crashed in the middle of sending the writes to the drive) you lose, but 
unless you disable write buffers and do sync writes (which nobody is going 
to do because of the performance problems) you will lose data in an 
unclean shutdown anyway.
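
(for reference, the kind of sync write nobody does looks something 
like this in python; O_SYNC makes every write block until the data is 
on stable storage, which is where the performance hit comes from, and 
the drive's own write cache still has to be disabled or honor flushes 
for the guarantee to hold):

import os

# each write() blocks until the kernel considers the data durable
fd = os.open("/tmp/example.dat", os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
try:
    os.write(fd, b"critical data\n")
finally:
    os.close(fd)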

David Lang

> As I read Pavel's point he is saying that the incomplete write can be 
> detected by the incorrect parity chunk, but degraded RAID-5 has no working 
> parity chunk so the incomplete write would go undetected.
>
> I know this is a RAID failure mode. However, I actually thought this was a 
> problem even for an intact RAID-5. AFAIK, RAID-5 does not generally read the 
> complete stripe and perform verification unless that is requested, because 
> doing so would hurt performance and lose the entire point of the RAID-5 
> rotating parity blocks.
>
>
