Message-ID: <alpine.DEB.1.00.0802262349090.21857@chino.kir.corp.google.com>
Date: Tue, 26 Feb 2008 23:56:39 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Rik van Riel <riel@...hat.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Nick Piggin <npiggin@...e.de>
Subject: Re: [RFC][PATCH] page reclaim throttle take2
On Wed, 27 Feb 2008, KAMEZAWA Hiroyuki wrote:
> Hmm, but kswapd, which is the main worker of page reclaiming, is per-node.
> And reclaim is done on a per-zone basis.
> Per-zone/per-node throttling seems to make sense.
>
That's another argument for not introducing the sysctl; the number of
nodes and zones is a static property of the machine that cannot change
without a reboot (numa=fake, mem=, introducing movable zones, etc.). We
don't have node hotplug that can suddenly introduce additional zones from
which to reclaim.
My point was that there doesn't appear to be any use case for tuning this
via a sysctl that isn't simply an attempt to work around some other reclaim
problem when the VM is stressed. If that's agreed upon, then deciding
whether the config option should be per-cpu or per-node should be based on
the benchmarks that you've run. At this time, per-node appears to be the
more advantageous.
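For illustration only, a per-node throttle amounts to something like the
sketch below (a minimal userspace approximation, not the RFC patch itself;
the struct, function, and limit names are invented for this example). Each
node allows a fixed number of concurrent reclaimers and blocks any extra
callers until a slot frees up:

/*
 * Userspace sketch of a per-node reclaim throttle (illustration only).
 * node_throttle and MAX_RECLAIMERS_PER_NODE are made-up names; the limit
 * of 3 mirrors the measurement quoted above.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_NODES		2
#define MAX_RECLAIMERS_PER_NODE	3

struct node_throttle {
	pthread_mutex_t lock;
	pthread_cond_t  wait;
	int             active;	/* reclaimers currently inside reclaim */
};

static struct node_throttle throttles[NR_NODES];

static void throttle_init(void)
{
	for (int i = 0; i < NR_NODES; i++) {
		pthread_mutex_init(&throttles[i].lock, NULL);
		pthread_cond_init(&throttles[i].wait, NULL);
		throttles[i].active = 0;
	}
}

/* Block until this node has a free reclaim slot. */
static void reclaim_enter(int node)
{
	struct node_throttle *t = &throttles[node];

	pthread_mutex_lock(&t->lock);
	while (t->active >= MAX_RECLAIMERS_PER_NODE)
		pthread_cond_wait(&t->wait, &t->lock);
	t->active++;
	pthread_mutex_unlock(&t->lock);
}

/* Release the slot and wake one waiter, if any. */
static void reclaim_exit(int node)
{
	struct node_throttle *t = &throttles[node];

	pthread_mutex_lock(&t->lock);
	t->active--;
	pthread_cond_signal(&t->wait);
	pthread_mutex_unlock(&t->lock);
}

int main(void)
{
	throttle_init();
	reclaim_enter(0);
	printf("node 0 active reclaimers: %d\n", throttles[0].active);
	reclaim_exit(0);
	return 0;
}

A sysctl would only let you change MAX_RECLAIMERS_PER_NODE at runtime,
which is the knob whose usefulness is being questioned here.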
> I know his environment has 4 CPUs per node, but a throttle of 3 was the best
> number in his measurement. So it seems num-per-cpu is excessive.
> (At least, a ratio (%) is better.)
That seems to indicate that the NUMA topology is more important than lock
contention for the reclaim throttle.
David