Message-ID: <AANLkTikpZ8iH1oO1k84kvo2qYYS96LYuNmmw6xJL-1QV@mail.gmail.com>
Date: Sun, 25 Jul 2010 13:55:32 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Nishimura Daisuke <d-nishimura@....biglobe.ne.jp>
Subject: Re: [PATCH 1/7] memcg: sc.nr_to_reclaim should be initialized
On Fri, Jul 23, 2010 at 1:03 PM, KOSAKI Motohiro
<kosaki.motohiro@...fujitsu.com> wrote:
>> * KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com> [2010-07-16 19:13:31]:
>>
>> > Currently, mem_cgroup_shrink_node_zone() initializes sc.nr_to_reclaim to 0.
>> > That means shrink_zone() only scans 32 pages and returns immediately even if
>> > it doesn't reclaim any pages.
>> >
>> > This patch fixes it.
>> >
>> > Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
>> > ---
>> > mm/vmscan.c | 1 +
>> > 1 files changed, 1 insertions(+), 0 deletions(-)
>> >
>> > diff --git a/mm/vmscan.c b/mm/vmscan.c
>> > index 1691ad0..bd1d035 100644
>> > --- a/mm/vmscan.c
>> > +++ b/mm/vmscan.c
>> > @@ -1932,6 +1932,7 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *mem,
>> > struct zone *zone, int nid)
>> > {
>> > struct scan_control sc = {
>> > + .nr_to_reclaim = SWAP_CLUSTER_MAX,
>> > .may_writepage = !laptop_mode,
>> > .may_unmap = 1,
>> > .may_swap = !noswap,
>>
>> Could you please do some additional testing on
>>
>> 1. How far does this push pages (i.e. how many pages are reclaimed when the limit is hit)?
>
> 32 pages per mem_cgroup_shrink_node_zone().
>
> That said, the algorithm is as follows:
>
> 1. call mem_cgroup_largest_soft_limit_node()
> calculate largest cgroup
> 2. call mem_cgroup_shrink_node_zone() and shrink 32 pages
> 3. goto 1 if the limit is still exceeded.
>
> If that's not your intention, can you please describe your intended algorithm?
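
Just to be sure I'm reading the loop you describe correctly, here is a
quick userspace model of it (the cgroup list, the page counts and the
reclaim_batch() helper are all made up for illustration; only the
pick-largest / shrink-32-pages / repeat structure is taken from your
description above):

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32

/* Toy cgroup: just a usage counter and a soft limit, both in pages. */
struct cgroup_sim {
        const char *name;
        long usage;
        long soft_limit;
};

/* Pretend reclaim: free at most 'batch' pages of the excess. */
static long reclaim_batch(struct cgroup_sim *cg, long batch)
{
        long freed = cg->usage - cg->soft_limit;

        if (freed > batch)
                freed = batch;
        if (freed < 0)
                freed = 0;
        cg->usage -= freed;
        return freed;
}

int main(void)
{
        struct cgroup_sim groups[] = {
                { "A", 500, 400 },
                { "B", 300, 290 },
        };
        int passes = 0;

        for (;;) {
                struct cgroup_sim *victim = NULL;
                long worst = 0;
                int i;

                /* step 1: pick the group farthest over its soft limit */
                for (i = 0; i < 2; i++) {
                        long excess = groups[i].usage - groups[i].soft_limit;

                        if (excess > worst) {
                                worst = excess;
                                victim = &groups[i];
                        }
                }
                /* step 3: stop once nobody is over its limit any more */
                if (!victim)
                        break;

                /* step 2: shrink at most 32 pages, then re-evaluate */
                reclaim_batch(victim, SWAP_CLUSTER_MAX);
                passes++;
        }
        printf("needed %d passes of up to %d pages each\n",
               passes, SWAP_CLUSTER_MAX);
        return 0;
}
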
We set it to 0, since we care only about reclaiming a single page when
the limit is hit. IIRC, in the past we saw excessive pushback when
reclaiming SWAP_CLUSTER_MAX pages; I just wanted to check whether you
are still seeing the same behaviour now, after your changes.
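
For reference, here is a toy model of the kind of difference I mean: it
only mimics a "stop once nr_reclaimed >= nr_to_reclaim" check over
32-page batches. The numbers and the scan_until_target() helper are
invented for this sketch; it is not the real shrink_zone() loop.

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL

/*
 * Toy scanner: scan 32 pages per pass, reclaim only 'reclaimed_per_pass'
 * of them, and stop once nr_reclaimed >= nr_to_reclaim.  Purely
 * illustrative; not the real shrink_zone() loop.
 */
static void scan_until_target(unsigned long nr_to_reclaim,
                              unsigned long lru_pages,
                              unsigned long reclaimed_per_pass)
{
        unsigned long nr_reclaimed = 0;
        unsigned long scanned = 0;

        while (lru_pages) {
                unsigned long batch = lru_pages < SWAP_CLUSTER_MAX ?
                                      lru_pages : SWAP_CLUSTER_MAX;

                lru_pages -= batch;
                scanned += batch;
                nr_reclaimed += reclaimed_per_pass < batch ?
                                reclaimed_per_pass : batch;

                /*
                 * The check the patch is about: a target of 0 is met on
                 * the very first pass, even if nothing was reclaimed.
                 */
                if (nr_reclaimed >= nr_to_reclaim)
                        break;
        }
        printf("target=%lu: scanned %lu pages, reclaimed %lu\n",
               nr_to_reclaim, scanned, nr_reclaimed);
}

int main(void)
{
        /* 1024 LRU pages, only 4 of each 32-page batch are reclaimable */
        scan_until_target(0, 1024, 4);                /* stops after one batch  */
        scan_until_target(SWAP_CLUSTER_MAX, 1024, 4); /* keeps scanning batches */
        return 0;
}
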
Balbir