Message-ID: <20090811071756.GC14368@basil.fritz.box>
Date: Tue, 11 Aug 2009 09:17:56 +0200
From: Andi Kleen <andi@...stfloor.org>
To: Hidehiro Kawai <hidehiro.kawai.ez@...achi.com>
Cc: Andi Kleen <andi@...stfloor.org>, tytso@....edu, hch@...radead.org,
mfasheh@...e.com, aia21@...tab.net, hugh.dickins@...cali.co.uk,
swhiteho@...hat.com, akpm@...ux-foundation.org, npiggin@...e.de,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
fengguang.wu@...el.com,
Satoshi OSHIMA <satoshi.oshima.fk@...achi.com>,
Taketoshi Sakuraba <taketoshi.sakuraba.hc@...achi.com>
Subject: Re: [PATCH] [16/19] HWPOISON: Enable .remove_error_page for
migration aware file systems
On Tue, Aug 11, 2009 at 12:50:59PM +0900, Hidehiro Kawai wrote:
> > An application
> > that doesn't handle current IO errors correctly will also
> > not necessarily handle hwpoison correctly (it's not better and not worse)
>
> This is my main concern. I'd like to prevent re-corruption even if
> applications don't have good manners.
I don't think there's much we can do if the application doesn't
check for IO errors properly. What would you do if it doesn't
check for IO errors at all? If it checks for IO errors it simply
has to check for them on all IO operations -- if it does,
it will detect hwpoison errors correctly too.
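
To make that concrete, something like the following is all that is needed
(an illustrative sketch only, not code from the patchkit; the helper and
file name are made up):

/*
 * Sketch: an application that checks the return value of every IO
 * operation sees a poisoned dirty page as EIO on write()/fsync()/close(),
 * exactly like an ordinary failed writeback.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int write_file_checked(const char *path, const char *buf, size_t len)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
		return -errno;

	if (write(fd, buf, len) != (ssize_t)len) {	/* short write or EIO */
		int err = errno ? errno : EIO;
		close(fd);
		return -err;
	}
	if (fsync(fd) < 0) {		/* writeback errors are reported here */
		int err = errno;
		close(fd);
		return -err;
	}
	if (close(fd) < 0)		/* ... or sometimes only at close() */
		return -errno;
	return 0;
}

int main(void)
{
	int ret = write_file_checked("testfile", "hello\n", 6);

	if (ret < 0)
		fprintf(stderr, "IO error: %s\n", strerror(-ret));
	return ret ? 1 : 0;
}
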
> As for usual I/O errors, ext3/4 can now do this by using the data=ordered and
> data_err=abort mount options. Moreover, if you mount the ext3/4
> filesystem with the additional errors=panic option, the kernel
> panics on write errors instead of remounting read-only. Customers
> who regard data integrity as very important require these features.
Well, they can also set vm.memory_failure_recovery = 0 then, if they
don't care about their uptime.
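
For reference, a minimal sketch of doing that from a program (the /proc
path is just the standard mapping of the sysctl name, so treat it as an
assumption):

/*
 * Sketch: disable hwpoison recovery so that an uncorrected memory error
 * panics the machine, i.e. the equivalent of vm.memory_failure_recovery=0.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/proc/sys/vm/memory_failure_recovery", O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "0\n", 2) != 2) {
		perror("write");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}
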
> That is why I suggested this:
> >>(2) merge this patch with new panic_on_dirty_page_cache_corruption
You probably mean panic_on_non_anonymous_dirty_page_cache;
normally anonymous memory is dirty.
> >> sysctl
It's unclear to me that this special mode is really desirable.
Does it bring enough value to the user to justify the complexity
of another exotic option? The case is relatively exotic,
as in a dirty write-cache page that is mapped to a file.
Try to explain it in the documentation and you will see how ridiculous it sounds;
it simply doesn't have clean semantics
("In case you have applications with broken IO error handling on
your mission critical system ...")
> > I'm sure other enhancements for IO errors could be done too.
> > Some of the file systems also handle them still quite poorly (e.g. btrfs)
> >
> > But again I don't think it's a blocker for hwpoison.
>
> Unfortunately, it can be a blocker. As I stated, we can block the
> possible re-corruption caused by transient IO errors on ext3/4
> filesystems. But with this patch (PATCH 16/19) applied, re-corruption
> can happen even if we use data=ordered, data_err=abort and
> errors=panic mount options.
We don't corrupt data on disk. Applications
that don't check for IO errors correctly may see stale data
from the same file on disk though.
This can happen in all the cases you listed above except for panic-on-error.
If you want panic-on-error behaviour, simply set vm.memory_failure_recovery = 0.
> > (4) accept that hwpoison error handling is not better and not worse than normal
> > IO error handling.
> >
> > We opted for (4).
>
> Could you consider adopting (2) or (3)? Fengguang's sticky EIO
> approach (http://lkml.org/lkml/2009/6/11/294) is also OK.
I believe redesigned IO error handling does not belong in the
core hwpoison patchkit. It's big enough as it is and I consider it frozen
unless fatal bugs are found -- and frankly this is not a fatal
error in my estimation.
If you want to have improved IO error handling feel free to
submit it separately. I agree this area could use some work.
But it probably needs more design work first.
-Andi
--
ak@...ux.intel.com -- Speaking for myself only.