Message-ID: <Pine.LNX.4.64.0701161416060.3545@schroedinger.engr.sgi.com>
Date: Tue, 16 Jan 2007 14:18:28 -0800 (PST)
From: Christoph Lameter <clameter@....com>
To: Andi Kleen <ak@...e.de>
cc: akpm@...l.org, Paul Menage <menage@...gle.com>,
linux-kernel@...r.kernel.org,
Nick Piggin <nickpiggin@...oo.com.au>, linux-mm@...ck.org,
Paul Jackson <pj@....com>, Dave Chinner <dgc@....com>
Subject: Re: [RFC 0/8] Cpuset aware writeback
On Wed, 17 Jan 2007, Andi Kleen wrote:
> > Secondly we modify the dirty limit calculation to be based
> > on the active cpuset.
>
> The global dirty limit definitely seems to be a problem
> in several cases, but my feeling is that the cpuset is the wrong unit
> to keep track of it. Most likely it should be more fine grained.
We already have zone reclaim, which can take care of smaller units, but why
would we start writeback if only one zone is full of dirty pages while
lots of other zones (nodes) are free?
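
For reference, the per-cpuset limit calculation amounts to something
like the sketch below (not the actual patch; cpuset_dirtyable_pages()
is a hypothetical helper that would sum the free and reclaimable pages
over the nodes of the current task's cpuset):

#include <linux/writeback.h>

/*
 * Sketch: scale the dirty limits to the memory of the current cpuset
 * instead of to total system memory. dirty_background_ratio and
 * vm_dirty_ratio are the existing sysctl percentages.
 */
static void cpuset_dirty_limits(unsigned long *pbackground,
				unsigned long *pdirty)
{
	unsigned long available = cpuset_dirtyable_pages(); /* hypothetical */

	*pbackground = (dirty_background_ratio * available) / 100;
	*pdirty = (vm_dirty_ratio * available) / 100;
}
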
> > If we are in a cpuset then we select only inodes for writeback
> > that have pages on the nodes of the cpuset.
>
> Is there any indication this change helps on smaller systems
> or is it purely a large system optimization?
The bigger the system, the larger the problem, because the dirty ratio
is currently calculated as a percentage of the dirty pages in the
system as a whole. The smaller the share of total memory a cpuset
contains, the less effective dirty_ratio and dirty_background_ratio
become.
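
A concrete illustration with made-up numbers, compilable as a
stand-alone program:

#include <stdio.h>

int main(void)
{
	/* Illustrative numbers only: a cpuset covering 10% of a 16GB
	 * machine (4KB pages, 4194304 of them) with vm.dirty_ratio = 40. */
	unsigned long total  = 4194304;          /* pages in the system */
	unsigned long limit  = total * 40 / 100; /* global dirty limit  */
	unsigned long in_set = total / 10;       /* pages in the cpuset */

	/* Even if every page in the cpuset is dirty we stay far below
	 * the global limit, so writeback never starts for the cpuset. */
	printf("dirty limit: %lu pages, cpuset size: %lu pages\n",
	       limit, in_set);
	return 0;
}
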
> > B. We add a new counter NR_UNRECLAIMABLE that is subtracted
> > from the available pages in a node. This allows us to
> > accurately calculate the dirty ratio even if large portions
> > of the node have been allocated for huge pages or for
> > slab pages.
>
> That sounds like a useful change by itself.
I can separate that one out.
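
Roughly what the dirtyable-page calculation would then look like (a
sketch only, under the assumption that NR_UNRECLAIMABLE becomes a
regular per-zone counter updated wherever huge pages or unreclaimable
slab pages are allocated and freed):

#include <linux/mmzone.h>
#include <linux/vmstat.h>

/*
 * Sketch: pages in a node that may be considered for the dirty ratio.
 * Pages that can never be reclaimed or written back (huge pages,
 * unreclaimable slab) are subtracted via the proposed counter.
 */
static unsigned long node_dirtyable_pages(pg_data_t *pgdat)
{
	unsigned long pages = 0;
	int i;

	for (i = 0; i < MAX_NR_ZONES; i++) {
		struct zone *z = &pgdat->node_zones[i];

		pages += z->present_pages;
		pages -= zone_page_state(z, NR_UNRECLAIMABLE);
	}
	return pages;
}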