Message-ID: <20100418174944.7b9716ad@infradead.org>
Date: Sun, 18 Apr 2010 17:49:44 -0700
From: Arjan van de Ven <arjan@...radead.org>
To: Dave Chinner <david@...morbit.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Chris Mason <chris.mason@...cle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH] mm: disallow direct reclaim page writeback
On Mon, 19 Apr 2010 10:35:56 +1000
Dave Chinner <david@...morbit.com> wrote:
> On Sat, Apr 17, 2010 at 08:32:39PM -0400, Andrew Morton wrote:
> >
> > There are two issues here: stack utilisation and poor IO patterns in
> > direct reclaim. They are different.
> >
> > The poor IO patterns thing is a regression. Some time several years
> > ago (around 2.6.16, perhaps), page reclaim started to do a LOT more
> > dirty-page writeback than it used to. AFAIK nobody attempted to
> > work out why, nor attempted to try to fix it.
>
> I think that part of the problem is that at roughly the same time
> writeback started on a long downhill slide as well, and we've
> really only fixed that in the last couple of kernel releases. Also,
> it tends to take more than just writing a few large files to invoke
> the LRU-based writeback code, as it is generally not invoked in
> filesystem "performance" testing. Hence my bet is that the effects
> of LRU-based writeback are rarely noticed in common testing.
>
>
Would this also be the time when we started doing real dirty accounting, and
started playing with the dirty page thresholds?
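For reference, those thresholds are the /proc/sys/vm knobs that background
and direct writeback key off. A minimal userspace sketch to dump the current
values (illustrative only, not part of any patch):

#include <stdio.h>

static void show(const char *path)
{
	char buf[64];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("%-40s %s", path, buf);
	fclose(f);
}

int main(void)
{
	/* background flushing starts at this percentage of dirtyable memory */
	show("/proc/sys/vm/dirty_background_ratio");
	/* writers get throttled once this percentage is dirty */
	show("/proc/sys/vm/dirty_ratio");
	/* age (centisecs) at which dirty data is old enough to be written */
	show("/proc/sys/vm/dirty_expire_centisecs");
	/* flusher wakeup interval (centisecs) */
	show("/proc/sys/vm/dirty_writeback_centisecs");
	return 0;
}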
Background writeback is the interesting tradeoff between writing out early,
to make the VM's life easier (and the data safe), and the chance that
someone either rewrites the same data (as benchmarks regularly do... not
sure about real workloads) or deletes the temporary file.
Maybe we need to do the background dirty writes a bit more aggressively...
or play with heuristics that give us an adaptive timeout (say, if the
file got closed by the last opener, use a shorter timeout).
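To make that concrete, here is a toy userspace model of the heuristic; the
constants and the "openers" count are made up purely for illustration, this
is not kernel code:

#include <stdio.h>

#define EXPIRE_DEFAULT_SECS 30	/* roughly the usual dirty-expire age */
#define EXPIRE_CLOSED_SECS   5	/* hypothetical shorter age after last close */

/* pick a writeback delay based on whether anyone still has the file open */
static int dirty_expire_secs(int openers)
{
	return openers > 0 ? EXPIRE_DEFAULT_SECS : EXPIRE_CLOSED_SECS;
}

int main(void)
{
	printf("file still open:  write back after %d s\n", dirty_expire_secs(1));
	printf("last opener gone: write back after %d s\n", dirty_expire_secs(0));
	return 0;
}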
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org