Message-ID: <20200622203739.GD301338@carbon.dhcp.thefacebook.com>
Date: Mon, 22 Jun 2020 13:37:39 -0700
From: Roman Gushchin <guro@...com>
To: Shakeel Butt <shakeelb@...gle.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Linux MM <linux-mm@...ck.org>,
Vlastimil Babka <vbabka@...e.cz>,
Kernel Team <kernel-team@...com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v6 17/19] mm: memcg/slab: use a single set of kmem_caches
for all allocations

On Mon, Jun 22, 2020 at 12:21:28PM -0700, Shakeel Butt wrote:
> On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin <guro@...com> wrote:
> >
> > Instead of having two sets of kmem_caches, one for system-wide and
> > non-accounted allocations and the other shared by all accounted
> > allocations, we can use just one.
> >
> > The idea is simple: space for obj_cgroup metadata can be allocated
> > on demand and filled only for accounted allocations.
> >
> > It allows us to remove a bunch of code which is required to handle
> > kmem_cache clones for accounted allocations. There is no more need
> > to create them, accumulate statistics, propagate attributes, etc.
> > It's quite a significant simplification.
> >
> > Also, because the total number of slab_caches is reduced by almost
> > half (not all kmem_caches have a memcg clone), some additional
> > memory savings are expected. On my devvm it additionally saves
> > about 3.5% of slab memory.
> >
> > Suggested-by: Johannes Weiner <hannes@...xchg.org>
> > Signed-off-by: Roman Gushchin <guro@...com>
> > Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
> > ---
> [snip]
> > static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
> > struct obj_cgroup *objcg,
> > - size_t size, void **p)
> > + gfp_t flags, size_t size,
> > + void **p)
> > {
> > struct page *page;
> > unsigned long off;
> > size_t i;
> >
> > + if (!objcg)
> > + return;
> > +
> > + flags &= ~__GFP_ACCOUNT;
> > for (i = 0; i < size; i++) {
> > if (likely(p[i])) {
> > page = virt_to_head_page(p[i]);
> > +
> > + if (!page_has_obj_cgroups(page) &&
>
> The page is already linked into the kmem_cache, so don't you need
> synchronization for memcg_alloc_page_obj_cgroups()?

Hm, yes, in theory we need it. I guess the reason I've never seen any
issues here is the SLUB percpu partial list.
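
To make the window concrete, here is a minimal userspace sketch of the
lost-update race (hypothetical names; a plain pointer stands in for
page->obj_cgroups, and two threads stand in for two CPUs entering
memcg_alloc_page_obj_cgroups() for the same page):

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

static void *obj_cgroups;		/* stands in for page->obj_cgroups */

static void *publish(void *arg)
{
	void *vec = calloc(16, sizeof(void *));	/* stands in for kcalloc() */

	(void)arg;
	/* Plain store: if both threads get here before either sees the
	 * other's value, the loser's vector is leaked, together with any
	 * objcg pointers already written into it. */
	obj_cgroups = vec;
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, publish, NULL);
	pthread_create(&t2, NULL, publish, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("published vector: %p\n", obj_cgroups);
	return 0;
}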
So in theory we need something like:
diff --git a/mm/slab.h b/mm/slab.h
index 0a31600a0f5c..44bf57815816 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -237,7 +237,10 @@ static inline int memcg_alloc_page_obj_cgroups(struct page *page,
 	if (!vec)
 		return -ENOMEM;
 
-	page->obj_cgroups = (struct obj_cgroup **) ((unsigned long)vec | 0x1UL);
+	if (cmpxchg(&page->obj_cgroups, 0,
+		    (struct obj_cgroup **) ((unsigned long)vec | 0x1UL)))
+		kfree(vec);
+
 	return 0;
 }
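
For illustration, here is a userspace analogue of that cmpxchg pattern
(hypothetical names; GCC's __atomic builtin stands in for the kernel's
cmpxchg()): exactly one caller publishes its vector, and the loser
frees its copy.

#include <stdlib.h>

static void *obj_cgroups;		/* stands in for page->obj_cgroups */

static int alloc_obj_cgroups_once(void)
{
	void *expected = NULL;
	void *vec = calloc(16, sizeof(void *));	/* stands in for kcalloc() */

	if (!vec)
		return -1;

	/* Install vec only if the slot is still NULL; if another thread
	 * beat us to it, drop our copy and use the published vector. */
	if (!__atomic_compare_exchange_n(&obj_cgroups, &expected, vec,
					 0, __ATOMIC_SEQ_CST,
					 __ATOMIC_SEQ_CST))
		free(vec);
	return 0;
}

Note that memcg_slab_post_alloc_hook() only calls
memcg_alloc_page_obj_cgroups() when page_has_obj_cgroups() is false,
so pages that already carry a vector never pay for the cmpxchg.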

But I wonder if we might put it under #ifdef CONFIG_SLAB?
Or do you have any other ideas on how to make it less expensive?
> What's the reason to remove this from charge_slab_page()?

Because at charge_slab_page() time we don't know if we'll ever need
page->obj_cgroups: some caches might have only a few or even zero
accounted objects.
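
As a rough illustration of what eager allocation would waste (the
numbers below are assumptions, not measurements): the vector costs one
pointer per object on the page, for every page that never holds an
accounted object.

#include <stdio.h>

int main(void)
{
	unsigned long objs_per_page = 32;	/* assumed objs_per_slab_page() */
	unsigned long vec_bytes = objs_per_page * sizeof(void *);
	unsigned long unaccounted_pages = 100000;	/* assumed */

	/* Eager allocation in charge_slab_page() would pay vec_bytes for
	 * every slab page; allocating on demand skips pages that never
	 * see an accounted allocation. */
	printf("eager-allocation overhead: %lu KB\n",
	       vec_bytes * unaccounted_pages / 1024);
	return 0;
}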
>
> > + memcg_alloc_page_obj_cgroups(page, s, flags)) {
> > + obj_cgroup_uncharge(objcg, obj_full_size(s));
> > + continue;
> > + }
> > +
> > off = obj_to_index(s, page, p[i]);
> > obj_cgroup_get(objcg);
> > page_obj_cgroups(page)[off] = objcg;