Message-ID: <20070206044146.GA11856@wotan.suse.de>
Date: Tue, 6 Feb 2007 05:41:46 +0100
From: Nick Piggin <npiggin@...e.de>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Linux Kernel <linux-kernel@...r.kernel.org>,
Linux Filesystems <linux-fsdevel@...r.kernel.org>,
Linux Memory Management <linux-mm@...ck.org>
Subject: Re: [patch 9/9] mm: fix pagecache write deadlocks
On Tue, Feb 06, 2007 at 03:25:49AM +0100, Nick Piggin wrote:
> On Sun, Feb 04, 2007 at 10:36:20AM -0800, Andrew Morton wrote:
> > On Sun, 4 Feb 2007 16:10:51 +0100 Nick Piggin <npiggin@...e.de> wrote:
> >
> > > They're not likely to hit the deadlocks, either. The deadlock becomes more
> > > likely after my patch to lock the page in the fault path. But practically,
> > > we could live without that too, because the data corruption it fixes is
> > > very rare as well. Which is exactly what we've been doing quite happily
> > > for most of 2.6, including all distro kernels (I think).
> >
> > Thing is, an application which is relying on the contents of that page is
> > already unreliable (or really peculiar), because it can get indeterminate
> > results anyway.
>
> Not necessarily -- they could read from one part of a page and write to
> another. I see this as the biggest data corruption problem.
And in fact, these are not just transient errors either. This problem can
introduce permanent corruption into the pagecache and onto disk, and it
doesn't even require two processes to race.
After zeroing out the uncopied part of the page and attempting to loop
again, we might bail out of the loop for any reason before completing the
rest of the copy, leaving the pagecache corrupted; that corruption will
soon be written out to disk.
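To show how the zeros can end up on disk, here is a small stand-alone
model of the loop's failure mode (a simplified userspace illustration I
made up, not the actual mm/filemap.c code; the helper names and sizes
are invented):

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 64

static char page[PAGE_SIZE];    /* stands in for a pagecache page */
static int page_dirty;          /* dirty pages get written to disk */

/*
 * Pretend atomic usercopy: copies at most 'limit' bytes, the way an
 * atomic copy_from_user stops short when the source page is not
 * faulted in.
 */
static size_t atomic_copy(char *dst, const char *src, size_t len,
                          size_t limit)
{
        size_t n = len < limit ? len : limit;
        memcpy(dst, src, n);
        return n;
}

int main(void)
{
        const char source[PAGE_SIZE] =
                "all of this data is supposed to reach the disk intact";
        size_t copied;

        /*
         * First pass: short copy, then zero the uncopied tail so the
         * page holds no uninitialised data, and mark the page dirty.
         */
        copied = atomic_copy(page, source, PAGE_SIZE, 16);
        memset(page + copied, 0, PAGE_SIZE - copied);
        page_dirty = 1;

        /*
         * Now suppose the loop bails out here -- signal, failed
         * fault-in, whatever -- before retrying the rest of the copy.
         * The dirty, partly zero-filled page is what writeback sees.
         */
        if (page_dirty)
                printf("on disk: \"%.*s\" followed by %zu zero bytes\n",
                       (int)copied, page, PAGE_SIZE - copied);
        return 0;
}
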
Nick