Message-ID: <AANLkTinD8=XycQJ-yFBW_tJiE0kH70s2g2asfWtykEgL@mail.gmail.com>
Date: Sun, 25 Jul 2010 22:10:12 +0530
From: Balbir Singh <balbir@...ux.vnet.ibm.com>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Nishimura Daisuke <d-nishimura@....biglobe.ne.jp>
Subject: Re: [PATCH 1/7] memcg: sc.nr_to_reclaim should be initialized
On Sun, Jul 25, 2010 at 3:18 PM, KOSAKI Motohiro
<kosaki.motohiro@...fujitsu.com> wrote:
>> >> 1. How far does this push pages (in terms of when limit is hit)?
>> >
>> > 32 pages per mem_cgroup_shrink_node_zone().
>> >
>> > That said, the algorithm is as follows:
>> >
>> > 1. call mem_cgroup_largest_soft_limit_node()
>> > calculate largest cgroup
>> > 2. call mem_cgroup_shrink_node_zone() and shrink 32 pages
>> > 3. goto 1 if the limit is still exceeded.
>> >
>> > If that's not your intention, can you please describe your intended algorithm?
>>
>> We set it to 0, since we care only about a single page reclaim on
>> hitting the limit. IIRC, in the past we saw an excessive pushback
>> when reclaiming SWAP_CLUSTER_MAX pages; I just wanted to check
>> whether you are seeing the same behaviour now, after your changes.
>
> Actually, we have a 32-page reclaim batch size (see nr_scan_try_batch()
> and related functions), so a value below 32 doesn't work as you intended.
>
> But if you run your test again and report any bugs you find, I'd be
> very glad to fix them soon.
>
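To make sure I read the algorithm above correctly, here is a minimal
user-space sketch of it (not kernel code): largest_excess() and
shrink_batch() are simplified stand-ins for
mem_cgroup_largest_soft_limit_node() and mem_cgroup_shrink_node_zone(),
and the 32-page batch mirrors SWAP_CLUSTER_MAX.

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32	/* per-call reclaim batch size */

struct cgroup_sim {
	const char *name;
	long usage;		/* pages currently charged */
	long soft_limit;	/* pages allowed before soft-limit reclaim */
};

/* stand-in for mem_cgroup_largest_soft_limit_node(): pick the worst offender */
static struct cgroup_sim *largest_excess(struct cgroup_sim *g, int n)
{
	struct cgroup_sim *best = NULL;
	long best_excess = 0;
	int i;

	for (i = 0; i < n; i++) {
		long excess = g[i].usage - g[i].soft_limit;

		if (excess > best_excess) {
			best_excess = excess;
			best = &g[i];
		}
	}
	return best;	/* NULL when nobody exceeds its soft limit */
}

/* stand-in for mem_cgroup_shrink_node_zone(): reclaim up to one 32-page batch */
static long shrink_batch(struct cgroup_sim *cg)
{
	long nr = cg->usage < SWAP_CLUSTER_MAX ? cg->usage : SWAP_CLUSTER_MAX;

	cg->usage -= nr;
	return nr;
}

int main(void)
{
	struct cgroup_sim groups[] = {
		{ "A", 200, 100 },
		{ "B", 150, 140 },
	};
	struct cgroup_sim *victim;

	/* 1. pick the largest offender, 2. shrink 32 pages, 3. repeat while over limit */
	while ((victim = largest_excess(groups, 2)) != NULL) {
		long got = shrink_batch(victim);

		printf("reclaimed %ld pages from %s, usage now %ld\n",
		       got, victim->name, victim->usage);
	}
	return 0;
}

Note how cgroup "A" ends up well below its soft limit: each pass always
takes a full batch, which is exactly the behaviour being discussed.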
I understand that; the point is when to stop the reclaim (do we
really need 32 pages to stop the reclaim when we hit the limit?
something as low as a single page can help). This is quite a subtle
thing, so I'd mark it as low priority. I'll definitely come back if I
see unexpected behaviour. A rough sketch of what I mean is below.
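This is a hypothetical illustration only, not the existing kernel code
path: if the soft limit were re-checked per page, reclaim could stop as
soon as usage drops back under the limit, instead of always finishing a
full SWAP_CLUSTER_MAX batch. reclaim_until_under_limit() is a made-up
helper for the example.

/*
 * Hypothetical illustration only, not the current kernel code path:
 * re-check the soft limit per page, so reclaim can stop after as
 * little as one page instead of completing the whole 32-page batch.
 */
long reclaim_until_under_limit(long *usage, long soft_limit, long batch)
{
	long reclaimed = 0;

	while (reclaimed < batch && *usage > soft_limit) {
		(*usage)--;	/* stand-in for reclaiming one page */
		reclaimed++;
	}
	return reclaimed;	/* often far less than the 32-page batch */
}

With usage at 101 pages against a soft limit of 100, this returns after
a single page, which is the single-page behaviour I referred to above.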
Balbir