Message-Id: <20110906183358.0a305900.kamezawa.hiroyu@jp.fujitsu.com>
Date: Tue, 6 Sep 2011 18:33:58 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Johannes Weiner <jweiner@...hat.com>
Cc: Minchan Kim <minchan.kim@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
Balbir Singh <bsingharora@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [patch] memcg: skip scanning active lists based on individual
size
On Mon, 5 Sep 2011 20:25:14 +0200
Johannes Weiner <jweiner@...hat.com> wrote:
> On Thu, Sep 01, 2011 at 03:31:48PM +0900, KAMEZAWA Hiroyuki wrote:
> > On Thu, 1 Sep 2011 08:15:40 +0200
> > Johannes Weiner <jweiner@...hat.com> wrote:
> > The old implementation was supposed to make vmscan see only the memcg and
> > ignore zones. memcg doesn't take care of any zones, so it uses
> > global numbers rather than per-zone ones.
> >
> > Assume a system with 2 nodes where the whole memcg's inactive/active ratio
> > is unbalanced.
> >
> >               Node 0    Node 1
> >   Active        800M       30M
> >   Inactive      100M      200M
> >
> > If we judge the imbalance based on zones, Node 1's active list will not
> > rotate even if it is not accessed for a while.
> > If we judge the imbalance based on the total stats, both Node 0 and Node 1
> > will be rotated.
>
> But why should we deactivate on Node 1? We have good reasons not to
> on the global level; why should memcgs silently behave differently?
>
One reason was that I thought a memcg should behave as if it had one LRU list,
not divided by zones, and I wanted to ignore zones as much as possible.
The second reason was that I didn't want to increase swap-out caused by
the memcg limit.
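
For illustration, here is a rough, compilable sketch of how the two
judgements diverge on the Node 0/1 numbers above. The helper names are
made up for this example, and the threshold is simplified to a plain
inactive < active comparison; the real code applies an inactive_ratio
rather than a 1:1 compare.

#include <stdbool.h>
#include <stdio.h>

struct lru_sizes {
	unsigned long active;	/* in MB, for the example */
	unsigned long inactive;
};

/* Per-zone judgement: look only at this zone's own lists. */
bool inactive_low_per_zone(const struct lru_sizes *z)
{
	return z->inactive < z->active;
}

/* Old memcg judgement: sum up every zone and compare the totals. */
bool inactive_low_global(const struct lru_sizes *z, int nr)
{
	unsigned long active = 0, inactive = 0;

	for (int i = 0; i < nr; i++) {
		active += z[i].active;
		inactive += z[i].inactive;
	}
	return inactive < active;
}

int main(void)
{
	struct lru_sizes nodes[2] = {
		{ .active = 800, .inactive = 100 },	/* Node 0 */
		{ .active =  30, .inactive = 200 },	/* Node 1 */
	};

	/* Node 0 deactivates either way; Node 1 only under the global rule. */
	for (int i = 0; i < 2; i++)
		printf("Node %d: per-zone deactivate=%d, global deactivate=%d\n",
		       i, inactive_low_per_zone(&nodes[i]),
		       inactive_low_global(nodes, 2));
	return 0;
}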
> I mostly don't understand it on a semantic level. vmscan needs to
> know whether a certain inactive LRU list has enough reclaim candidates
> to skip scanning its corresponding active list. The global state is
> not useful to find out if a single inactive list has enough pages.
>
Ok, I agree with this. I should add other logic to do what I want.
In my series,
- passing a nodemask
- avoiding overscan
- calculating node weight
will allow me to see what I want.
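
For reference, a minimal sketch of the per-list check described above:
the decision is made from the size of the individual inactive/active
pair, with a target ratio derived from that pair's own size. The
function names and the sqrt(10 * size-in-GB) heuristic only approximate
what mm/vmscan.c does for anon pages; this is not necessarily the exact
patched code.

#include <math.h>
#include <stdbool.h>

/*
 * Target active:inactive ratio derived from the size of this list pair
 * alone, independent of memcg-wide totals.  Sizes are in pages (4K
 * page size assumed).
 */
unsigned long target_ratio(unsigned long inactive, unsigned long active)
{
	unsigned long gb = (inactive + active) >> (30 - 12);
	unsigned long ratio = gb ? (unsigned long)sqrt(10.0 * gb) : 1;

	return ratio ? ratio : 1;
}

/*
 * The active list of this pair is scanned only when its own inactive
 * list has become too small to hold enough reclaim candidates.
 */
bool inactive_list_is_low(unsigned long inactive, unsigned long active)
{
	return inactive * target_ratio(inactive, active) < active;
}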
> > Hmm, the old one doesn't work as I expected?
> >
> > But okay, as time goes by, I think Node 1's inactive list will decrease
> > and then rotation will happen even with the zone-based check.
>
> Yes, that's how the mechanism is intended to work: with a constant
> influx of used-once pages, we don't want to touch the active list.
> But when the workload changes and inactive pages get either activated
> or all reclaimed, the ratio changes and eventually we fall back to
> deactivating pages again.
>
> That's reclaim behaviour that has been around for a while and it
> shouldn't make a difference if your workload is running in
> root_mem_cgroup or another memcg.
>
ok.
> > > > But, hmm, this change may be good for softlimit and your work.
> > >
> > > Yes, I noticed those paths showing up in a profile with my patches.
> > > Lots of memcgs on a multi-node machine will trigger it too. But it's
> > > secondary, my primary reasoning was: this does not make sense at all.
> >
> > your words always sound too strong to me ;) please be soft.
>
> Sorry, I'll try to be less harsh. Please don't take it personally :)
>
> What I meant was that the computational overhead was not the primary
> reason for this patch. Although a reduction there is very welcome,
> it's that deciding to skip the list based on the list size seems more
> correct than deciding based on the overall state of the memcg, which
> can only by accident show the same proportion of inactive/active.
>
> It's a correctness fix for existing code, not an optimization or
> preparation for future changes.
>
ok.
> > > > I'll ack when you add performance numbers in changelog.
> > >
> > > It's not exactly a performance optimization but I'll happily run some
> > > workloads. Do you have suggestions what to test for? I.e. where
> > > would you expect regressions?
> > >
> > Some comparison of the amount of swap-out before/after the change would be good.
> >
> > Hm. If I do...
> > - set up x86-64 NUMA box. (fake numa is ok.)
> > - create memcg with 500M limit.
> > - running a kernel make with make -j6 (or more)
> >
> > see time of make and amount of swap-out.
>
> 4G ram, 500M swap on SSD, numa=fake=16, 10 runs of make -j11 in 500M
> memcg, standard deviation in parens:
>
>               seconds          pswpin                pswpout
> vanilla:      175.359(0.106)   6906.900(1779.135)     8913.200(1917.369)
> patched:      176.144(0.243)   8581.500(1833.432)    10872.400(2124.104)
>
Hmm, swap-in/out seems to have increased, but the stddev is large.
Is this expected? What would be the reason?
Anyway, I don't want to disturb you any more. Thanks.
-Kame