Message-ID: <19092.28371.793339.764701@notabene.brown>
Date: Wed, 26 Aug 2009 09:08:03 +1000
From: Neil Brown <neilb@...e.de>
To: Pavel Machek <pavel@....cz>
Cc: Ric Wheeler <rwheeler@...hat.com>, Theodore Tso <tytso@....edu>,
Florian Weimer <fweimer@....de>,
Goswin von Brederlow <goswin-v-b@....de>,
Rob Landley <rob@...dley.net>,
kernel list <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...l.org>, mtk.manpages@...il.com,
rdunlap@...otime.net, linux-doc@...r.kernel.org,
linux-ext4@...r.kernel.org, corbet@....net
Subject: Re: [patch] ext2/3: document conditions when reliable operation is
possible
On Tuesday August 25, pavel@....cz wrote:
>
> You can object any way you want, but running ext3 on flash or MD RAID5
> is stupid:
>
> * ext2 would be faster
>
> * ext2 would provide better protection against powerfail.
>
> "ext3 works on flash and MD RAID5, as long as you do not have
> powerfail" seems to be the accurate statement, and if you don't need
> to protect against powerfails, you can just use ext2.
> Pavel
You are over-generalising.

MD/RAID5 is only less than perfect when it is degraded.  If all
devices are present both before and after the power failure, then
there is no risk.
RAID5 only promises to protect against a single failure.
Power loss plus device loss equals a multiple failure.
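To make that failure mode concrete, here is a toy sketch (my
illustration, not anything from md itself) of the RAID5 "write hole":
one data block is rewritten, power is lost before the parity block is
updated, and a member device then fails.  Rebuilding the missing
block from the stale parity corrupts a block that was never even
being written:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Three-device RAID5 stripe: two data blocks plus XOR parity. */
        uint8_t d0 = 0xAA, d1 = 0x55;
        uint8_t parity = d0 ^ d1;                /* 0xFF */

        /* Rewrite d0; power fails before the parity write completes. */
        d0 = 0xFF;
        /* parity = d0 ^ d1;  <-- this update never reaches the disk */

        /* After the crash the device holding d1 is missing, so the
         * array reconstructs d1 from d0 and the (now stale) parity. */
        uint8_t d1_rebuilt = d0 ^ parity;

        printf("original d1:      0x%02x\n", d1);         /* 0x55 */
        printf("reconstructed d1: 0x%02x\n", d1_rebuilt); /* 0x00 */
        return 0;
    }

Note that it is d1 that comes back wrong, even though only d0 was
being written.  That is why the damage can land in files (or
metadata) that the interrupted write never touched.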
And then there is the comment Ted made about probabilities.
While you can get data corruption if a RAID5 comes back degraded
after a power failure, I believe that is a lot less likely than
ext2 metadata being left inconsistent after a power failure.
So ext3 is still a good choice (especially if you put your journal on
a separate device).
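If you want to try the separate-journal arrangement, something like
the following should work (the device names are made up for the
example; see mke2fs(8) and tune2fs(8)):

    # create a dedicated journal device, then an ext3 fs that uses it
    mke2fs -O journal_dev /dev/sdb1
    mke2fs -j -J device=/dev/sdb1 /dev/sda1

    # or move the journal of an existing (unmounted, clean) filesystem
    tune2fs -O ^has_journal /dev/sda1
    tune2fs -j -J device=/dev/sdb1 /dev/sda1

The journal device and the filesystem need the same block size.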
While I think it is, in principle, worth documenting this sort of
thing, there are an awful lot of fine details and distinctions that
would need to be considered.
NeilBrown