Message-ID: <20060911073325.GA25255@wohnheim.fh-wedel.de>
Date: Mon, 11 Sep 2006 09:33:25 +0200
From: Jörn Engel <joern@...nheim.fh-wedel.de>
To: Andrew Morton <akpm@...l.org>
Cc: Andy Whitcroft <apw@...dowen.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/5] linear reclaim core
On Sun, 10 September 2006 17:40:51 -0700, Andrew Morton wrote:
>
> > A) With sufficient fragmentation, all inactive pages have one active
> > neighbour, so shrink_inactive_list() will never find a cluster of the
> > required order.
>
> Nope. If the clump of pages has a mix of active and inactive, the above
> design would cause the active ones to be deactivated, so now the entire
> clump is eligible for treatment by shrink_inactive_list().
Ok? More reading seems necessary before I can follow you here...
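If I read the above correctly, the intended effect is roughly the following.
This is a standalone toy model with invented names, not the actual vmscan
code:

/* Toy model of the idea above: when a higher-order clump contains a mix
 * of active and inactive pages, deactivate the active ones so that the
 * whole clump becomes a candidate for the inactive-list scan.
 * Not kernel code; all names are invented for illustration. */
#include <stdbool.h>
#include <stdio.h>

struct toy_page {
        bool active;            /* on the active LRU list?   */
        bool inactive;          /* on the inactive LRU list? */
};

/* Deactivate every active page in a clump of (1 << order) pages. */
static void deactivate_clump(struct toy_page *clump, int order)
{
        int i;

        for (i = 0; i < (1 << order); i++) {
                if (clump[i].active) {
                        clump[i].active = false;
                        clump[i].inactive = true;
                }
        }
}

/* After deactivation the whole clump is eligible for the inactive scan. */
static bool clump_reclaimable(struct toy_page *clump, int order)
{
        int i;

        for (i = 0; i < (1 << order); i++)
                if (!clump[i].inactive)
                        return false;
        return true;
}

int main(void)
{
        struct toy_page clump[4] = {
                { .active = true  }, { .inactive = true },
                { .active = true  }, { .inactive = true },
        };

        deactivate_clump(clump, 2);
        printf("order-2 clump reclaimable: %d\n", clump_reclaimable(clump, 2));
        return 0;
}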
> Bear in mind that simply moving the pages to the inactive list isn't enough
> to get them reclaimed: we still do various forms of page aging and the
> pages can still be preserved due to that. IOW, we have several different
> forms of page aging, one of which is LRU-ordering. The above design
> compromises just one of those aging steps.
Are these different forms of page aging described in written form
somewhere?
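The only two I can name off the top of my head are the LRU ordering
itself and the referenced bit giving pages a second round. As a toy
model of how those two stack (invented names, not the vmscan code):

/* Two aging mechanisms on top of each other: LRU ordering decides which
 * pages get looked at first, and a "referenced" bit can still save a
 * page when its turn comes. Not kernel code; names are invented. */
#include <stdbool.h>
#include <stdio.h>

struct toy_page {
        bool referenced;        /* touched since the last scan?      */
        struct toy_page *next;  /* LRU order: head is the oldest end */
};

/* Scan in LRU order; reclaim a page only if it was not referenced since
 * we last looked, otherwise clear the bit and give it another round. */
static unsigned long toy_shrink_inactive(struct toy_page *lru,
                                         unsigned long nr_to_scan)
{
        unsigned long reclaimed = 0;
        struct toy_page *page;

        for (page = lru; page && nr_to_scan--; page = page->next) {
                if (page->referenced) {
                        page->referenced = false;       /* second chance */
                        continue;
                }
                reclaimed++;                            /* would free it */
        }
        return reclaimed;
}

int main(void)
{
        struct toy_page a = { .referenced = true  };
        struct toy_page b = { .referenced = false };

        a.next = &b;
        /* b is reclaimed, a only loses its referenced bit: prints 1 */
        printf("reclaimed: %lu\n", toy_shrink_inactive(&a, 2));
        return 0;
}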
> I'd be more concerned about higher-order atomic allocations. If this thing
> is to work I suspect we'll need per-zone, per-order watermarks and kswapd
> will need to maintain those.
Or simply declare higher-order atomic allocations to be undesired?
Not sure how many of those we have that make sense.
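For reference, this is how I picture a per-zone, per-order watermark
check, as a standalone sketch. All names are invented and none of this
is the existing zone_watermark_ok() logic:

/* Sketch of a per-zone, per-order watermark, with invented names;
 * this is not the existing zone_watermark_ok() logic. */
#include <stdbool.h>
#include <stdio.h>

#define TOY_MAX_ORDER 11

struct toy_zone {
        unsigned long free_blocks[TOY_MAX_ORDER]; /* free blocks per order */
        unsigned long min_blocks[TOY_MAX_ORDER];  /* per-order watermark   */
};

/* An order-o atomic allocation is only OK while enough free blocks of
 * order >= o remain; kswapd's job would be to keep this true. */
static bool toy_watermark_ok(const struct toy_zone *z, unsigned int order)
{
        unsigned long free_at_or_above = 0;
        unsigned int o;

        for (o = order; o < TOY_MAX_ORDER; o++)
                free_at_or_above += z->free_blocks[o];

        return free_at_or_above > z->min_blocks[order];
}

int main(void)
{
        struct toy_zone z = { .free_blocks = { 100, 20, 4 },
                              .min_blocks  = {  32,  8, 2, 1 } };

        /* order-3 fails here: no free blocks of order >= 3 at all */
        printf("order-0 ok: %d\n", toy_watermark_ok(&z, 0));
        printf("order-3 ok: %d\n", toy_watermark_ok(&z, 3));
        return 0;
}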
> Don't think in terms of "freeing". Think in terms of "scanning". A lot of
> page reclaim's balancing tricks are cast in terms of pages-scanned,
> slabs-scanned, etc.
There is a related problem I'm sure you are aware of. When trying to
shrink the dentry_cache or the various foofs_inode_caches, we remove
tons of objects before a full slab (in most cases a page) becomes free
and can be returned. ext3_inode_cache has 8 objects per slab,
dentry_cache has 29. That is the equivalent of an order-3 or order-5
page allocation in terms of inefficiency.
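Spelled out, the equivalent order is ceil(log2(objects per slab)): to
give one slab back you must empty all of its objects, just as an
order-k allocation needs all 2^k neighbouring pages free. A trivial
standalone check of the numbers above:

#include <stdio.h>

static unsigned int equivalent_order(unsigned int objs_per_slab)
{
        unsigned int order = 0;

        while ((1u << order) < objs_per_slab)
                order++;
        return order;
}

int main(void)
{
        printf("ext3_inode_cache: %u objs/slab -> order-%u\n",
               8, equivalent_order(8));    /* order-3 */
        printf("dentry_cache:     %u objs/slab -> order-%u\n",
               29, equivalent_order(29));  /* order-5 (29 rounds up to 32) */
        return 0;
}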
And having just started thinking about the problem, the solution I
envision looks fairly similar to Andy's work on high-order allocations
here, except that I cannot think in terms of "scanning", afaict.
Jörn
--
Anything that can go wrong, will.
-- Finagle's Law