Message-ID: <20200322163738.GA3898@carbon.dhcp.thefacebook.com>
Date: Sun, 22 Mar 2020 09:37:38 -0700
From: Roman Gushchin <guro@...com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>, <linux-mm@...ck.org>,
<kernel-team@...com>, <linux-kernel@...r.kernel.org>,
Bharata B Rao <bharata@...ux.ibm.com>,
<stable@...r.kernel.org>
Subject: Re: [PATCH] mm: fork: fix kernel_stack memcg stats for various stack
implementations
On Sat, Mar 21, 2020 at 04:48:56PM -0700, Andrew Morton wrote:
> On Tue, 3 Mar 2020 15:35:50 -0800 Roman Gushchin <guro@...com> wrote:
>
> > Depending on CONFIG_VMAP_STACK and the THREAD_SIZE / PAGE_SIZE ratio,
> > the space for task stacks can be allocated using __vmalloc_node_range(),
> > alloc_pages_node() or kmem_cache_alloc_node(). In the first and the
> > second cases the page->mem_cgroup pointer is set, but in the third it's
> > not: the memcg membership of a slab page should be determined using the
> > memcg_from_slab_page() function, which looks at
> > page->slab_cache->memcg_params.memcg. In this case, using
> > mod_memcg_page_state() (as account_kernel_stack() does) is incorrect:
> > the page->mem_cgroup pointer is NULL even for pages charged to a
> > non-root memory cgroup.
> >
> > This can lead to the per-memcg kernel_stack counters permanently showing
> > 0 on some architectures (depending on the configuration).
> >
> > In order to fix it, let's introduce a mod_memcg_obj_state() helper,
> > which takes a pointer to a kernel object as its first argument, uses
> > mem_cgroup_from_obj() to get an RCU-protected memcg pointer and
> > calls mod_memcg_state(). This allows handling all possible
> > configurations (CONFIG_VMAP_STACK and various THREAD_SIZE/PAGE_SIZE
> > values) without spilling any memcg/kmem specifics into fork.c.
> >
> > Note: this patch was first posted as part of the new slab
> > controller patchset. This is a slightly updated version: a Fixes
> > tag has been added and the commit log was extended on the advice
> > of Johannes Weiner. Because it's a fix that makes sense by itself,
> > I'm re-posting it as a standalone patch.
>
> Actually, it isn't a standalone patch.
That's true. I only meant that it doesn't have to be part of the slab
accounting rework patchset.
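
For context, the fork.c side of the change is small. Below is a rough
sketch of the non-vmapped branch of account_kernel_stack() after the fix;
it's illustrative only (the CONFIG_VMAP_STACK branch is omitted and the
actual fork.c hunk isn't quoted in this mail):

/*
 * Sketch of account_kernel_stack() in kernel/fork.c after the fix
 * (illustrative, not the exact hunk; the vmapped-stack branch is
 * omitted). When the stack comes from a kmem_cache,
 * page->mem_cgroup is NULL, so mod_memcg_page_state() was a no-op.
 */
static void account_kernel_stack(struct task_struct *tsk, int account)
{
	void *stack = task_stack_page(tsk);
	struct page *first_page = virt_to_page(stack);

	mod_zone_page_state(page_zone(first_page), NR_KERNEL_STACK_KB,
			    THREAD_SIZE / 1024 * account);

	/*
	 * Charge by object pointer instead of by page:
	 * mod_memcg_obj_state() resolves the memcg via
	 * mem_cgroup_from_obj(), which works for both slab-backed and
	 * page-backed stacks.
	 */
	mod_memcg_obj_state(stack, MEMCG_KERNEL_STACK_KB,
			    account * (THREAD_SIZE / 1024));
}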
>
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -776,6 +776,17 @@ void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
> > rcu_read_unlock();
> > }
> >
> > +void mod_memcg_obj_state(void *p, int idx, int val)
> > +{
> > + struct mem_cgroup *memcg;
> > +
> > + rcu_read_lock();
> > + memcg = mem_cgroup_from_obj(p);
> > + if (memcg)
> > + mod_memcg_state(memcg, idx, val);
> > + rcu_read_unlock();
> > +}
>
> mem_cgroup_from_obj() is later added by
> http://lkml.kernel.org/r/20200117203609.3146239-1-guro@fb.com
>
> We could merge both mm-memcg-slab-introduce-mem_cgroup_from_obj.patch
> and this patch, but that's a whole lot of stuff to backport into
> -stable.
>
> Are you able to come up with a simpler suitable-for-stable fix?
I'll try.
Thank you!
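
For completeness, the dependency you point at is also small:
mem_cgroup_from_obj(), as introduced by the patch linked above, looks
roughly like this (a sketch; see the lkml link for the exact code):

/*
 * Sketch of mem_cgroup_from_obj() from the referenced patch
 * (illustrative). It is the piece that knows how to find the memcg
 * of a slab-backed object, where page->mem_cgroup is not set.
 */
struct mem_cgroup *mem_cgroup_from_obj(void *p)
{
	struct page *page;

	if (mem_cgroup_disabled())
		return NULL;

	page = virt_to_head_page(p);

	/*
	 * Slab pages don't have page->mem_cgroup set; the memcg is
	 * reachable through page->slab_cache->memcg_params.memcg.
	 */
	if (PageSlab(page))
		return memcg_from_slab_page(page);

	/* All other pages use page->mem_cgroup directly. */
	return page->mem_cgroup;
}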