Message-ID: <20100419010805.GD2520@dastard>
Date: Mon, 19 Apr 2010 11:08:05 +1000
From: Dave Chinner <david@...morbit.com>
To: Arjan van de Ven <arjan@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Chris Mason <chris.mason@...cle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] mm: disallow direct reclaim page writeback
On Sun, Apr 18, 2010 at 05:49:44PM -0700, Arjan van de Ven wrote:
> On Mon, 19 Apr 2010 10:35:56 +1000
> Dave Chinner <david@...morbit.com> wrote:
>
> > On Sat, Apr 17, 2010 at 08:32:39PM -0400, Andrew Morton wrote:
> > >
> > > There are two issues here: stack utilisation and poor IO patterns in
> > > direct reclaim. They are different.
> > >
> > > The poor IO patterns thing is a regression. Some time several years
> > > ago (around 2.6.16, perhaps), page reclaim started to do a LOT more
> > > dirty-page writeback than it used to. AFAIK nobody attempted to
> > > work out why, nor tried to fix it.
> >
> > I think that part of the problem is that at roughly the same time
> > writeback started on a long downhill slide as well, and we've
> > really only fixed that in the last couple of kernel releases. Also,
> > it tends to take more than just writing a few large files to invoke
> > the LRU-based writeback code, as it is generally not invoked in
> > filesystem "performance" testing. Hence my bet is that
> > the effects of LRU-based writeback are rarely noticed in common
> > testing.
>
> Would this also be the time where we started real dirty accounting, and
> started playing with the dirty page thresholds?
Yes, I think that was introduced in 2.6.16/17, so it's definitely in
the ballpark.
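
For reference, those thresholds are easy to poke at from userspace.
A trivial (untested) sketch that just dumps the two global knobs,
i.e. where background writeback kicks in and where dirtiers start
getting throttled:

#include <stdio.h>
#include <stdlib.h>

/* Read a single integer from a procfs file. */
static long read_proc_long(const char *path)
{
	FILE *f = fopen(path, "r");
	long val = -1;

	if (!f) {
		perror(path);
		return -1;
	}
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	/* Global dirty thresholds, as percentages of dirtyable memory. */
	long bg = read_proc_long("/proc/sys/vm/dirty_background_ratio");
	long fg = read_proc_long("/proc/sys/vm/dirty_ratio");

	printf("background writeback starts at %ld%% dirty\n", bg);
	printf("dirtiers get throttled at %ld%% dirty\n", fg);
	return 0;
}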
> Background writeback is that interesting tradeoff between writing out
> to make the VM easier (and the data safe) and the chance of someone
> either rewriting the same data (as benchmarks do regularly... not sure
> about real workloads) or deleting the temporary file.
>
> Maybe we need to make the background dirty writes a bit more aggressive...
> or play with heuristics where we get an adaptive timeout (say, if the
> file got closed by the last opener, then do a shorter timeout)
Realistically, I'm concerned about preventing the worst case
behaviour from occurring: making the background writes more
aggressive without preventing writeback in LRU order simply means it
will be harder to test the VM corner case that triggers these
writeout patterns...
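
FWIW, the setup needed to hit that corner case isn't complicated:
dirty a pile of small files so the dirty pages end up scattered all
over the LRU, then apply anonymous memory pressure so reclaim has to
walk the LRU and write back whatever it trips over. An untested
sketch of such a reproducer (the file count and pressure size are
arbitrary; scale them to the machine):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

#define NFILES		10000		/* arbitrary, enough to spread out */
#define FILESIZE	4096		/* one page per file */
#define PRESSURE	(1UL << 30)	/* tune to the size of RAM */

int main(void)
{
	char path[64], buf[FILESIZE];
	char *mem;
	unsigned long i;

	memset(buf, 'x', sizeof(buf));
	mkdir("scratch", 0755);

	/*
	 * Dirty lots of small files so the dirty pages end up
	 * scattered across the LRU rather than in one big extent.
	 */
	for (i = 0; i < NFILES; i++) {
		int fd;

		snprintf(path, sizeof(path), "scratch/f.%lu", i);
		fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
		if (fd < 0) {
			perror(path);
			exit(1);
		}
		if (write(fd, buf, sizeof(buf)) != sizeof(buf))
			perror("write");
		close(fd);
	}

	/*
	 * Now apply anonymous memory pressure so reclaim has to walk
	 * the LRU and write back the dirty pages it finds there.
	 */
	mem = malloc(PRESSURE);
	if (!mem)
		exit(1);
	for (i = 0; i < PRESSURE; i += 4096)
		mem[i] = 1;		/* touch every page */

	pause();	/* hold the memory; watch IO with iostat/blktrace */
	return 0;
}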
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com