Message-ID: <alpine.DEB.1.00.0802262115270.1799@chino.kir.corp.google.com>
Date: Tue, 26 Feb 2008 21:19:12 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Balbir Singh <balbir@...ux.vnet.ibm.com>
cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Rik van Riel <riel@...hat.com>,
Lee Schermerhorn <Lee.Schermerhorn@...com>,
Nick Piggin <npiggin@...e.de>
Subject: Re: [RFC][PATCH] page reclaim throttle take2
On Wed, 27 Feb 2008, Balbir Singh wrote:
> >> config SIMULTANEOUS_PAGE_RECLAIMERS
> >> 	int
> >> 	default 3
> >> 	depends on DEBUG
> >> 	help
> >> 	  This value determines the number of threads which can do page
> >> 	  reclaim in a zone simultaneously. If this is too big, performance
> >> 	  under heavy memory pressure will decrease.
> >> 	  If unsure, use the default.
> >> ==
> >>
> >> Then, you can get performance reports from people interested in this
> >> feature during the test cycle.
> >
> > hm, interesting.
> > but a sysctl parameter is better, I think.
> >
> > OK, I'll add it in the next post.
>
> I think a sysctl would be interesting. The config option provides good
> documentation, but it is static in nature (it requires a rebuild and reboot
> to change). I wish we could have the best of both worlds.
>
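For reference, a minimal userspace sketch of the behaviour the quoted knob is
meant to bound: at most N threads reclaiming in a zone at once. This is only
an illustration, not code from the patch; MAX_RECLAIMERS, struct zone_throttle,
and the function names are hypothetical.

/*
 * Illustrative userspace sketch only (not from the patch).  It models the
 * effect of the proposed limit: at most MAX_RECLAIMERS threads may reclaim
 * in a zone at the same time.  All names here are hypothetical.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_RECLAIMERS 3	/* stand-in for the config/sysctl value */

struct zone_throttle {
	atomic_int nr_reclaimers;	/* threads currently reclaiming here */
};

/* Try to become one of the allowed reclaimers; false means back off. */
static bool reclaim_try_enter(struct zone_throttle *z)
{
	int cur = atomic_load(&z->nr_reclaimers);

	while (cur < MAX_RECLAIMERS) {
		if (atomic_compare_exchange_weak(&z->nr_reclaimers,
						 &cur, cur + 1))
			return true;	/* slot acquired */
	}
	return false;			/* limit reached */
}

static void reclaim_exit(struct zone_throttle *z)
{
	atomic_fetch_sub(&z->nr_reclaimers, 1);
}

int main(void)
{
	struct zone_throttle z = { .nr_reclaimers = 0 };
	int admitted = 0;

	for (int i = 0; i < 4; i++)	/* four would-be reclaimers */
		if (reclaim_try_enter(&z))
			admitted++;
	printf("admitted %d of 4 reclaimers\n", admitted);	/* prints 3 */

	while (admitted--)
		reclaim_exit(&z);
	return 0;
}

With the limit set to 3, the fourth caller is refused and has to wait or back
off, which is the throttling effect the option is meant to cap.
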
I disagree that a sysctl is needed here; the config option is indeed static,
but so is the NUMA topology
of the machine. It represents the maximum number of page reclaim threads
that should be allowed for that specific topology; a maximum should not
need to be redefined with yet another sysctl and should remain independent
of various workloads.
However, I would recommend adding the word "MAX" to the config option.
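To illustrate, here is a minimal userspace sketch (hypothetical names, not
code from the patch or from any existing sysctl) of how a static,
topology-derived maximum could bound a runtime sysctl, which would also give
the "best of both worlds" asked for above:

/*
 * Illustrative userspace sketch only; hypothetical names, not the actual
 * patch.  A compile-time ceiling (what a CONFIG_..._MAX style option would
 * provide, fixed for a given topology) bounds a runtime knob (what the
 * proposed sysctl would provide, tunable per workload).
 */
#include <stdio.h>

#define MAX_SIMULTANEOUS_PAGE_RECLAIMERS 3	/* static, per-topology ceiling */

static int nr_page_reclaimers = MAX_SIMULTANEOUS_PAGE_RECLAIMERS;

/* A sysctl write handler would do something like this: clamp to the max. */
static int set_nr_page_reclaimers(int val)
{
	if (val < 1)
		val = 1;
	if (val > MAX_SIMULTANEOUS_PAGE_RECLAIMERS)
		val = MAX_SIMULTANEOUS_PAGE_RECLAIMERS;
	nr_page_reclaimers = val;
	return nr_page_reclaimers;
}

int main(void)
{
	printf("requested 8, effective %d\n", set_nr_page_reclaimers(8));
	printf("requested 2, effective %d\n", set_nr_page_reclaimers(2));
	return 0;
}

The ceiling never needs retuning per workload; only the sysctl does, and it
can never exceed what the topology allows.
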
David