Message-ID: <20200515082955.GJ29153@dhcp22.suse.cz>
Date: Fri, 15 May 2020 10:29:55 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Shakeel Butt <shakeelb@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Mel Gorman <mgorman@...e.de>,
Roman Gushchin <guro@...com>,
Andrew Morton <akpm@...ux-foundation.org>,
Yafang Shao <laoar.shao@...il.com>,
Linux MM <linux-mm@...ck.org>,
Cgroups <cgroups@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] memcg: expose root cgroup's memory.stat
On Sat 09-05-20 07:06:38, Shakeel Butt wrote:
> On Fri, May 8, 2020 at 2:44 PM Johannes Weiner <hannes@...xchg.org> wrote:
> >
> > On Fri, May 08, 2020 at 10:06:30AM -0700, Shakeel Butt wrote:
> > > One way to measure the efficiency of memory reclaim is to look at the
> > > ratio (pgscan+pgrefill)/pgsteal. However, at the moment these stats are
> > > not updated consistently at the system level, so the ratio is not very
> > > meaningful: pgsteal and pgscan are updated only for global reclaim,
> > > while pgrefill gets updated for global as well as cgroup reclaim.
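[ For illustration only (not part of the patch), a minimal user-space
sketch of the ratio described above. The pgscan_*/pgsteal_* field names
assumed here are the ones found in recent /proc/vmstat and vary between
kernel versions: ]

#include <stdio.h>
#include <string.h>

/* Return the value of one /proc/vmstat counter, or 0 if absent. */
static unsigned long long vmstat_read(const char *field)
{
	char name[64];
	unsigned long long val = 0, ret = 0;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 0;
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, field)) {
			ret = val;
			break;
		}
	}
	fclose(f);
	return ret;
}

int main(void)
{
	/* Counter names as of recent kernels; they may differ per version. */
	unsigned long long scan = vmstat_read("pgscan_kswapd") +
				  vmstat_read("pgscan_direct");
	unsigned long long steal = vmstat_read("pgsteal_kswapd") +
				   vmstat_read("pgsteal_direct");
	unsigned long long refill = vmstat_read("pgrefill");

	if (steal)
		printf("(pgscan + pgrefill) / pgsteal = %.2f\n",
		       (double)(scan + refill) / steal);
	return 0;
}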
> > >
> > > Please note that this inconsistency exists only for the system-level
> > > vmstats. The cgroup stats returned by memory.stat are consistent: a
> > > cgroup's pgsteal contains the number of pages reclaimed by global as
> > > well as cgroup reclaim. So one way to get consistent system-level stats
> > > is to read them from the root's memory.stat, hence expose memory.stat
> > > for the root cgroup.
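[ Again purely a sketch: how the same counters could be read from the
root cgroup's memory.stat once it is exposed. The cgroup v2 mount point
/sys/fs/cgroup is an assumption of the example: ]

#include <stdio.h>
#include <string.h>

/* Read one counter from the root cgroup's memory.stat. */
static unsigned long long root_memstat(const char *field)
{
	char name[64];
	unsigned long long val = 0, ret = 0;
	FILE *f = fopen("/sys/fs/cgroup/memory.stat", "r");

	if (!f)
		return 0;
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, field)) {
			ret = val;
			break;
		}
	}
	fclose(f);
	return ret;
}

int main(void)
{
	/* memory.stat reports single pgscan/pgsteal/pgrefill counters
	 * covering both global and limit-induced reclaim. */
	printf("pgscan   %llu\n", root_memstat("pgscan"));
	printf("pgsteal  %llu\n", root_memstat("pgsteal"));
	printf("pgrefill %llu\n", root_memstat("pgrefill"));
	return 0;
}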
> > >
> > > from Johannes Weiner:
> > > There are subtle differences between /proc/vmstat and
> > > memory.stat, and cgroup-aware code that wants to watch the full
> > > hierarchy currently has to know about these intricacies and
> > > translate semantics back and forth.
Can we have those subtle differences documented please?
> > >
> > > Generally having the fully recursive memory.stat at the root
> > > level could help a broader range of usecases.
> >
> > The changelog raises the question of why we don't just "fix" the
> > system-level stats. It may be useful to include the conclusions from
> > that discussion, and why there is value in keeping the stats this way.
> >
>
> Right. Andrew, can you please add the following para to the changelog?
>
> Why not fix the stats by including both the global and cgroup reclaim
> activity instead of exposing the root cgroup's memory.stat? The reason
> is that there is value in metrics which expose only the reclaim
> activity that happens purely due to machine capacity, rather than the
> localized activity caused by limits throughout the cgroup tree.
> Additionally, userspace tools like sysstat (sar) read these stats to
> report system-level reclaim activity, and we should not break such
> use-cases.
>
> > > Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
> > > Suggested-by: Johannes Weiner <hannes@...xchg.org>
> >
> > Acked-by: Johannes Weiner <hannes@...xchg.org>
>
> Thanks a lot.
I was quite surprised that the patch is so simple TBH. For some reason
I still had the impression that we do not account for the root memcg
(likely because of the mem_cgroup_is_root(memcg) bail-out in
try_charge). But the stats are slightly different here. I have started
looking at the different stat counters because they are not all handled
the same way. E.g.
- mem_cgroup_charge_statistics accounts for each memcg
- memcg_charge_kernel_stack relies on pages being associated with a
memcg and that in turn relies on __memcg_kmem_charge_page which bails
out on root memcg
- memcg_charge_slab (NR_SLAB*) skips over root memcg as well
- __mod_lruvec_page_state relies on page->mem_cgroup as well but this
one is ok for paths which go through the commit_charge path.
That being said, we should really double-check which stats are
accounted properly. At least MEMCG_KERNEL_STACK_KB won't be, unless I
am misreading the code.
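[ To make that bail-out concrete, a heavily simplified sketch of the
shape of __memcg_kmem_charge_page(); this is not a literal excerpt and
the details may differ from the actual source: ]

/* Heavily simplified sketch -- not the actual kernel source. */
int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
{
	struct mem_cgroup *memcg = get_mem_cgroup_from_current();
	int ret = 0;

	if (!mem_cgroup_is_root(memcg)) {
		/* only non-root memcgs get charged... */
		ret = __memcg_kmem_charge(memcg, gfp, 1 << order);
		if (!ret)
			page->mem_cgroup = memcg;
	}
	/*
	 * ...so for the root memcg page->mem_cgroup stays NULL, and
	 * callers like memcg_charge_kernel_stack, which update
	 * MEMCG_KERNEL_STACK_KB based on page->mem_cgroup, end up
	 * accounting nothing for root.
	 */
	css_put(&memcg->css);
	return ret;
}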
I do not mind displaying the root memcg's stats, but a) a closer look
has to be taken at each counter, and b) a clarification of the
differences from the global vmstat counters would be really handy.
--
Michal Hocko
SUSE Labs