Message-Id: <20100725184322.40CF.A69D9226@jp.fujitsu.com>
Date: Sun, 25 Jul 2010 18:48:06 +0900 (JST)
From: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To: Balbir Singh <balbir@...ux.vnet.ibm.com>
Cc: kosaki.motohiro@...fujitsu.com,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Nishimura Daisuke <d-nishimura@....biglobe.ne.jp>
Subject: Re: [PATCH 1/7] memcg: sc.nr_to_reclaim should be initialized
> >> 1. How far does this push pages (in terms of when the limit is hit)?
> >
> > 32 pages per call to mem_cgroup_shrink_node_zone().
> >
> > That said, here is the algorithm:
> >
> > 1. call mem_cgroup_largest_soft_limit_node()
> >    to pick the cgroup with the largest soft-limit excess
> > 2. call mem_cgroup_shrink_node_zone() and shrink 32 pages
> >    (see the sketch below)
> > 3. goto 1 if the limit is still exceeded.
> >
> > If that's not your intention, can you please explain your intended algorithm?
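
For reference, a minimal C sketch of the loop described above. The function
names match the memcg soft-limit code of this era, but the signatures are
simplified; soft_limit_excess(), mctz, gfp_mask, noswap and zone are
stand-ins for the surrounding context, not exact kernel identifiers:

	/* repeat until the soft limit is no longer exceeded */
	while (soft_limit_excess(memcg)) {
		/* 1. pick the cgroup with the largest soft-limit excess */
		struct mem_cgroup *mem = mem_cgroup_largest_soft_limit_node(mctz);

		if (!mem)
			break;

		/* 2. reclaim one batch (SWAP_CLUSTER_MAX == 32 pages) */
		mem_cgroup_shrink_node_zone(mem, gfp_mask, noswap, zone);

		/* 3. loop back to step 1 and re-check the limit */
	}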
>
> We set it to 0, since we care only about single-page reclaim when the
> limit is hit. IIRC, in the past we saw excessive pushback when reclaiming
> SWAP_CLUSTER_MAX pages; I just wanted to check whether you are seeing the
> same behaviour even now, after your changes.
Actually, we have a 32-page reclaim batch size (see nr_scan_try_batch() and
related functions), so a value below 32 does not work as you intended.
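
For reference, this is roughly how that batching works, paraphrased from
memory of mm/vmscan.c of this era (treat it as a sketch, not the exact
mainline source):

	/*
	 * Scan requests are accumulated in *nr_saved_scan and only
	 * released to the scanner once they reach SWAP_CLUSTER_MAX
	 * (32 pages); a smaller request reclaims nothing by itself,
	 * it is merely deferred until the batch fills up.
	 */
	static unsigned long nr_scan_try_batch(unsigned long nr_to_scan,
					       unsigned long *nr_saved_scan)
	{
		unsigned long nr;

		*nr_saved_scan += nr_to_scan;
		nr = *nr_saved_scan;

		if (nr >= SWAP_CLUSTER_MAX)
			*nr_saved_scan = 0;	/* release a full batch */
		else
			nr = 0;			/* not enough yet; defer */

		return nr;
	}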
But please run your test again and report any bugs you find; I'll gladly fix them soon.
Thanks.