Message-ID: <4A9331AA.1090905@redhat.com>
Date:	Mon, 24 Aug 2009 20:34:50 -0400
From:	Ric Wheeler <rwheeler@...hat.com>
To:	Pavel Machek <pavel@....cz>
CC:	Zan Lynx <zlynx@....org>, Ric Wheeler <rwheeler@...hat.com>,
	Theodore Tso <tytso@....edu>, Florian Weimer <fweimer@....de>,
	Goswin von Brederlow <goswin-v-b@....de>,
	Rob Landley <rob@...dley.net>,
	kernel list <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...l.org>, mtk.manpages@...il.com,
	rdunlap@...otime.net, linux-doc@...r.kernel.org,
	linux-ext4@...r.kernel.org
Subject: Re: [patch] ext2/3: document conditions when reliable operation is
 possible

Pavel Machek wrote:
> On Mon 2009-08-24 16:22:22, Zan Lynx wrote:
>   
>> Ric Wheeler wrote:
>>     
>>> Pavel Machek wrote:
>>>       
>>>> Degraded MD RAID5 is, by design, not reliable: a whole stripe can be
>>>> damaged by a power failure, reset, or kernel bug, and ext3 cannot cope
>>>> with that kind of damage. [I don't see why statistics should be
>>>> necessary for that; the same way we don't need statistics to see that
>>>> ext2 needs fsck after a power failure.]
>>>>                                     Pavel
>>>
>>> What you are describing is a double failure, and RAID5 is not
>>> double-failure tolerant regardless of the file system type....
>>>       
>> Are you sure he isn't talking about how RAID must write all the data
>> chunks to make a complete stripe, and if there is a power loss, some of
>> the chunks may be written and some may not?
>>
>> As I read Pavel's point, he is saying that the incomplete write can be
>> detected by the incorrect parity chunk, but degraded RAID-5 has no
>> working parity chunk, so the incomplete write would go undetected.
>>     
>
> Yep.
>
>   
>> I know this is a RAID failure mode. However, I actually thought this was
>> a problem even for an intact RAID-5. AFAIK, RAID-5 does not generally
>> read the complete stripe and perform verification unless that is
>> requested, because doing so would hurt performance and defeat the entire
>> point of RAID-5's rotating parity blocks.
>>     
>
> Not sure; isn't RAID expected to verify the array after an unclean
> shutdown?
>
> 									Pavel
>   
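To make Zan's reading concrete, here is a minimal C sketch of the check a
verify pass can make on an intact array. The chunk size, disk count, and
helper names are all made up for illustration - this is not md's actual
code:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CHUNK_BYTES 4096  /* illustrative chunk size, not md's default  */
#define NDATA       3     /* data chunks per stripe, toy 4-disk RAID-5  */

/*
 * Intact array: XOR the data chunks and compare against the stored
 * parity chunk. A torn (partially written) stripe shows up as a
 * mismatch. Returns 1 if the stripe is consistent, 0 if not.
 */
static int verify_stripe(const uint8_t data[NDATA][CHUNK_BYTES],
                         const uint8_t parity[CHUNK_BYTES])
{
	uint8_t calc[CHUNK_BYTES];
	size_t d, i;

	memset(calc, 0, sizeof calc);
	for (d = 0; d < NDATA; d++)
		for (i = 0; i < CHUNK_BYTES; i++)
			calc[i] ^= data[d][i];
	return memcmp(calc, parity, CHUNK_BYTES) == 0;
}

/*
 * Degraded array: the chunk on the dead disk is *reconstructed* by
 * XOR-ing the three survivors, so the XOR identity holds by
 * construction and there is nothing left over to check against.
 */
static void reconstruct_missing(const uint8_t surviving[NDATA][CHUNK_BYTES],
                                uint8_t missing[CHUNK_BYTES])
{
	size_t d, i;

	memset(missing, 0, CHUNK_BYTES);
	for (d = 0; d < NDATA; d++)
		for (i = 0; i < CHUNK_BYTES; i++)
			missing[i] ^= surviving[d][i];
}

Once a disk is gone, reconstruction consumes the same XOR redundancy that
verification would otherwise use, so a torn stripe in a degraded array is
undetectable by the array itself.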
Not usually - verifying the whole array after an unclean shutdown would
take multiple hours, roughly equivalent to doing a RAID rebuild, since
you have to read each sector of every drive (although you could do this
at full speed if the array were offline, not throttled the way we
throttle rebuilds).
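As a rough worked example with assumed numbers: a 1 TB drive read at
~100 MB/s takes 1,000,000 MB / (100 MB/s) = 10,000 seconds, so even with
every drive in the array read in parallel, a single full pass is a little
under three hours.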

That is part of what a periodic scrub does.
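With MD, that is driven through the sync_action file in sysfs. A minimal
sketch, assuming an array at md0 and eliding the wait for completion and
most error handling:

#include <stdio.h>

/*
 * Kick off an md scrub by writing "check" to sync_action, then read
 * back mismatch_cnt (a count of 512-byte sectors found inconsistent).
 * Assumes the array is md0; waiting for the scrub to finish
 * (sync_action reads "idle" again) is elided.
 */
int main(void)
{
	long mismatches = 0;
	FILE *f;

	f = fopen("/sys/block/md0/md/sync_action", "w");
	if (!f)
		return 1;
	fputs("check\n", f);
	fclose(f);

	/* ... wait here for the scrub to finish ... */

	f = fopen("/sys/block/md0/md/mismatch_cnt", "r");
	if (f) {
		if (fscanf(f, "%ld", &mismatches) != 1)
			mismatches = -1;
		fclose(f);
	}
	printf("mismatch_cnt: %ld\n", mismatches);
	return 0;
}

Writing "repair" instead of "check" makes md rewrite parity where it
finds a mismatch, rather than just counting.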

Note that once you find a bad bit of data, it is really useful to be able
to map it back to a human-understandable object and repair action. For
example, mapping a bad range back to file system metadata would translate
into an fsck run, while mapping it to data blocks would yield a list of
impacted files or directories....
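One low-level building block for the data-to-file direction on ext2/3 is
the FIBMAP ioctl, which maps a file's logical block to its physical block
number. A sketch of asking whether a known-bad block backs a given file
(helper name invented, needs root, error handling mostly elided):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <linux/fs.h>   /* FIBMAP, FIGETBSZ */

/*
 * Return 1 if bad_block (in file system block units) backs any block
 * of path, 0 if not, -1 on error. FIBMAP requires root.
 */
static int file_owns_block(const char *path, long bad_block)
{
	struct stat st;
	long nblocks, i;
	int fd, bsz = 0;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	if (ioctl(fd, FIGETBSZ, &bsz) < 0 || fstat(fd, &st) < 0) {
		close(fd);
		return -1;
	}
	nblocks = (st.st_size + bsz - 1) / bsz;
	for (i = 0; i < nblocks; i++) {
		int blk = (int)i;  /* in: logical block; out: physical */
		if (ioctl(fd, FIBMAP, &blk) == 0 && blk == bad_block) {
			close(fd);
			return 1;
		}
	}
	close(fd);
	return 0;
}

A real tool would index the whole tree once rather than scanning per
file; the sketch is just to show that the block-to-file direction is
recoverable.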

Ric

