Date:	Wed, 26 Aug 2009 09:37:49 -0400
From:	Ric Wheeler <rwheeler@...hat.com>
To:	Theodore Tso <tytso@....edu>, Pavel Machek <pavel@....cz>,
	Florian Weimer <fweimer@....de>,
	Goswin von Brederlow <goswin-v-b@....de>,
	Rob Landley <rob@...dley.net>,
	kernel list <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...l.org>, mtk.manpages@...il.com,
	rdunlap@...otime.net, linux-doc@...r.kernel.org,
	linux-ext4@...r.kernel.org, corbet@....net
Subject: Re: [patch] ext2/3: document conditions when reliable operation is
 possible

On 08/25/2009 10:55 PM, Theodore Tso wrote:
> On Wed, Aug 26, 2009 at 03:16:06AM +0200, Pavel Machek wrote:
>> Hi!
>>
>>> 3) Does that mean that you shouldn't use ext3 on RAID drives?  Of
>>> course not!  First of all, Ext3 still saves you against kernel panics
>>> and hangs caused by device driver bugs or other kernel hangs.  You
>>> will lose less data, and avoid needing to run a long and painful fsck
>>> after a forced reboot, compared to if you used ext2.  You are making
>>
>> Actually... ext3 + MD RAID5 will still have a problem on a kernel
>> panic. MD RAID5 is implemented in software, so if the kernel panics,
>> you can still get inconsistent data in your array.
>
> Only if the MD RAID array is running in degraded mode (and again, if
> the system is in this state for a long time, the bug is in the system
> administrator).  And even then, it depends on how the kernel dies.  If
> the system hangs due to some deadlock, or we get an OOPS that kills a
> process while still holding some locks, and that leads to a deadlock,
> it's likely the low-level MD driver can still complete the stripe
> write, and no data will be lost.  If the kernel ties itself in knots
> due to running out of memory, and the OOM handler is invoked, someone
> hitting the reset button to force a reboot will also be fine.
>
> If the RAID array is degraded, and we get an oops in an interrupt
> handler, such that the system is immediately halted --- then yes, data
> could get lost.  But there are many system crashes where the software
> RAID's ability to complete a stripe write would not be compromised.
>
>         	       	  	     	    	  	- Ted
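
(To make the degraded-array scenario above concrete, here is a minimal
userspace sketch of the XOR parity arithmetic involved; the 3-data-plus-
parity layout and the block values are made up for illustration, and this
is not MD code. With one member missing, a read of the absent block is
reconstructed from the surviving data plus the parity, so a crash that
lands between a data write and its matching parity update hands back
garbage for a block that was never written at all.)

/*
 * Illustrative sketch of the degraded-RAID5 write hole, one byte per
 * block.  Parity for the stripe:            P  = D0 ^ D1 ^ D2
 * Degraded-mode reconstruction of D2:       D2 = D0 ^ D1 ^ P
 */
#include <stdio.h>

int main(void)
{
	unsigned char d0 = 0x11, d1 = 0x22, d2 = 0x33;	/* made-up data */
	unsigned char p = d0 ^ d1 ^ d2;			/* consistent stripe */
	unsigned char d0_new = 0x44;			/* application rewrites D0 */
	unsigned char disk_d0, disk_p, rebuilt_d2;

	disk_d0 = d0_new;	/* the new data reached the platter ...   */
	disk_p = p;		/* ... but the parity update was lost in
				 * the crash, so the stripe is now
				 * internally inconsistent.               */

	/* Degraded mode: the member holding D2 is gone, reconstruct it. */
	rebuilt_d2 = disk_d0 ^ d1 ^ disk_p;

	printf("real D2    = 0x%02x\n", d2);
	printf("rebuilt D2 = 0x%02x (%s)\n", rebuilt_d2,
	       rebuilt_d2 == d2 ? "ok" : "corrupt");
	return 0;
}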

Just to add some real-world data: Bianca Schroeder published a really good
paper that looks at failures in national labs and includes actual measured
disk failure rates:

http://www.cs.cmu.edu/~bianca/fast07.pdf

Her numbers showed a range of failure rates, but, depending on the box,
drive type, etc., the sites lost between 1% and 6% of their installed
drives each year.
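
As a rough back-of-the-envelope sketch of what rates in that range mean for
an array or a machine room (the drive counts and the 3% midpoint below are
made up for illustration, not taken from the papers' tables): the expected
number of failures per year scales with the drive count, and the chance of
seeing at least one failure in a year climbs quickly.

/*
 * P(at least one failure in a year) = 1 - (1 - AFR)^ndrives
 * Build with -lm for pow().
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
	const double afr[] = { 0.01, 0.03, 0.06 };	/* 1-6% range, 3% as an assumed midpoint */
	const int drives[] = { 8, 100, 1000 };		/* hypothetical array / fleet sizes */
	int i, j;

	for (i = 0; i < 3; i++)
		for (j = 0; j < 3; j++)
			printf("AFR %2.0f%%, %4d drives: expect %6.1f failures/yr, "
			       "P(at least one) = %.3f\n",
			       afr[i] * 100.0, drives[j],
			       afr[i] * drives[j],
			       1.0 - pow(1.0 - afr[i], drives[j]));
	return 0;
}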

There is also a good paper from Google:

http://labs.google.com/papers/disk_failures.html

Both of the above studies cover largely Linux boxes.

There are also several other FAST papers on failures in commercial RAID
boxes, most notably from NetApp.

If reading papers is not at the top of your list of things to do, just skim
through them and look for the tables on disk failures, etc., which have
great measurements of what really failed in these systems...

Ric




