Message-Id: <20110830162050.f6c13c0c.kamezawa.hiroyu@jp.fujitsu.com>
Date: Tue, 30 Aug 2011 16:20:50 +0900
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To: Johannes Weiner <jweiner@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
Balbir Singh <bsingharora@...il.com>,
Andrew Brestic <abrestic@...gle.com>,
Ying Han <yinghan@...gle.com>, Michal Hocko <mhocko@...e.cz>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [patch] Revert "memcg: add memory.vmscan_stat"
On Tue, 30 Aug 2011 09:04:24 +0200
Johannes Weiner <jweiner@...hat.com> wrote:
> On Tue, Aug 30, 2011 at 10:12:33AM +0900, KAMEZAWA Hiroyuki wrote:
> > On Mon, 29 Aug 2011 17:51:13 +0200
> > Johannes Weiner <jweiner@...hat.com> wrote:
> >
> > > On Tue, Aug 09, 2011 at 08:33:45AM +0900, KAMEZAWA Hiroyuki wrote:
> > > > On Mon, 8 Aug 2011 14:43:33 +0200
> > > > Johannes Weiner <jweiner@...hat.com> wrote:
> > > >
> > > > > On Fri, Jul 22, 2011 at 05:15:40PM +0900, KAMEZAWA Hiroyuki wrote:
> > > > > > +When under_hierarchy is added at the tail, the number indicates the
> > > > > > +total memcg scans of its children and itself.
> > > > >
> > > > > In your implementation, statistics are only accounted to the memcg
> > > > > triggering the limit and the respectively scanned memcgs.
> > > > >
> > > > > Consider the following setup:
> > > > >
> > > > >      A
> > > > >     / \
> > > > >    B   C
> > > > >   /
> > > > >  D
> > > > >
> > > > > If D tries to charge but hits the limit of A, then B's hierarchy
> > > > > counters do not reflect the reclaim activity resulting in D.
> > > > >
> > > > yes, as I expected.
> > >
> > > Andrew,
> > >
> > > with a flawed design, the author unwilling to fix it, and two NAKs,
> > > can we please revert this before the release?
> >
> > How about this?
>
> > @@ -1710,11 +1711,18 @@ static void mem_cgroup_record_scanstat(s
> >          spin_lock(&memcg->scanstat.lock);
> >          __mem_cgroup_record_scanstat(memcg->scanstat.stats[context], rec);
> >          spin_unlock(&memcg->scanstat.lock);
> > -
> > -        memcg = rec->root;
> > -        spin_lock(&memcg->scanstat.lock);
> > -        __mem_cgroup_record_scanstat(memcg->scanstat.rootstats[context], rec);
> > -        spin_unlock(&memcg->scanstat.lock);
> > +        cgroup = memcg->css.cgroup;
> > +        do {
> > +                spin_lock(&memcg->scanstat.lock);
> > +                __mem_cgroup_record_scanstat(
> > +                        memcg->scanstat.hierarchy_stats[context], rec);
> > +                spin_unlock(&memcg->scanstat.lock);
> > +                if (!cgroup->parent)
> > +                        break;
> > +                cgroup = cgroup->parent;
> > +                memcg = mem_cgroup_from_cont(cgroup);
> > +        } while (memcg->use_hierarchy && memcg != rec->root);
>
> Okay, so this looks correct, but it sums up all parents after each
> memcg scanned, which could have a performance impact. Usually,
> hierarchy statistics are only summed up when a user reads them.
>
Hmm. But sum-at-read doesn't work.

Assume 3 cgroups in a hierarchy:

      A
     /
    B
   /
  C

C's scan count mixes 3 causes:

  C's scans caused by the limit of A,
  C's scans caused by the limit of B,
  C's scans caused by the limit of C.

If we sum up the hierarchy at read time, we compute

  B's scan_stat = B's scan_stat + C's scan_stat

but precisely, this is

  B's scan_stat = B's scan_stat caused by B +
                  B's scan_stat caused by A +
                  C's scan_stat caused by C +
                  C's scan_stat caused by B +
                  C's scan_stat caused by A.
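To put made-up numbers on it: say C scanned 100 pages because of A's limit,
50 because of B's limit, and 30 because of its own limit, while B itself
scanned 20 pages because of B's limit and 10 because of A's limit.
Sum-at-read then reports 210 for B's hierarchy scans, although only
20 + 50 = 70 of those pages were scanned because of B's limit.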
In the original version,

  B's scan_stat = B's scan_stat caused by B +
                  C's scan_stat caused by B.

After this patch,

  B's scan_stat = B's scan_stat caused by B +
                  B's scan_stat caused by A +
                  C's scan_stat caused by C +
                  C's scan_stat caused by B +
                  C's scan_stat caused by A.
Hmm... removing the hierarchy part completely seems fine to me.
> I don't get why this has to be done completely different from the way
> we usually do things, without any justification, whatsoever.
>
> Why do you want to pass a recording structure down the reclaim stack?
Just to reduce the number of variables passed down the reclaim path.
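The record is one small on-stack structure that is filled during a reclaim
pass and flushed once at the end. Roughly, the idea looks like this (a
simplified sketch; the exact layout and field names in the patch differ):

        struct memcg_scanrecord {
                struct mem_cgroup *mem;         /* memcg actually being scanned */
                struct mem_cgroup *root;        /* memcg whose limit triggered reclaim */
                int context;                    /* why we scan: limit, shrink, ... */
                unsigned long nr_scanned;       /* pages scanned in this pass */
                unsigned long nr_freed;         /* pages reclaimed in this pass */
                unsigned long elapsed;          /* time spent in this pass */
        };

Passing a single pointer to such a record through the reclaim path keeps the
function signatures short; the alternative is threading half a dozen separate
counters through every reclaim function.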
> Why not make it per-cpu counters that are only summed up, together
> with the hierarchy values, when someone is actually interested in
> them? With an interface like mem_cgroup_count_vm_event(), or maybe
> even an extension of that function?
A percpu counter seems like overkill to me because there is no heavy lock contention here.
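If I read the suggestion right, the recording side would become a lockless
per-cpu add and the read side would fold the cpus (and, for hierarchy values,
the children). A rough sketch, with made-up names (scanstat_cpu, count[], SCAN):

        /* record side: no lock, just bump this cpu's counter */
        static void memcg_record_scan(struct mem_cgroup *memcg, int context,
                                      unsigned long nr_scanned)
        {
                this_cpu_add(memcg->scanstat_cpu->count[context][SCAN], nr_scanned);
        }

        /* read side: fold all cpus; a hierarchy value would also sum the children */
        static unsigned long memcg_read_scan(struct mem_cgroup *memcg, int context)
        {
                unsigned long total = 0;
                int cpu;

                for_each_possible_cpu(cpu)
                        total += per_cpu(memcg->scanstat_cpu->count[context][SCAN], cpu);
                return total;
        }

But the current recording path takes the per-memcg spinlock only once per
reclaim pass, not per page, so the lockless fast path would not buy much here.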
Thanks,
-Kame