Message-ID: <20210106164242.GB1110904@carbon.dhcp.thefacebook.com>
Date: Wed, 6 Jan 2021 08:42:42 -0800
From: Roman Gushchin <guro@...com>
To: Shakeel Butt <shakeelb@...gle.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Linux MM <linux-mm@...ck.org>,
Michal Hocko <mhocko@...nel.org>,
Johannes Weiner <hannes@...xchg.org>,
LKML <linux-kernel@...r.kernel.org>,
Kernel Team <kernel-team@...com>,
Imran Khan <imran.f.khan@...cle.com>
Subject: Re: [PATCH] mm: memcg/slab: optimize objcg stock draining
On Tue, Jan 05, 2021 at 10:05:20PM -0800, Shakeel Butt wrote:
> On Tue, Jan 5, 2021 at 8:22 PM Roman Gushchin <guro@...com> wrote:
> >
> > Imran Khan reported a regression in hackbench results caused by the
> > commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects
> > instead of pages"). The regression is noticeable in the case of
> > consecutive allocations of several relatively large slab objects,
> > e.g. skb's. As soon as the amount of stocked bytes exceeds PAGE_SIZE,
> > drain_obj_stock() and __memcg_kmem_uncharge() are called, which leads
> > to a number of atomic operations in page_counter_uncharge().
> >
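> > Schematically, the draining trigger looks like this (a simplified
> > sketch only; locking and caching of the objcg pointer are omitted,
> > function and field names follow the kernel source):
> >
> >   /* Sketch: per-cpu object stock refill with drain on overflow */
> >   static void refill_obj_stock(struct obj_cgroup *objcg,
> >                                unsigned int nr_bytes)
> >   {
> >           struct memcg_stock_pcp *stock = this_cpu_ptr(&memcg_stock);
> >
> >           stock->nr_bytes += nr_bytes;
> >
> >           /*
> >            * Relatively large objects push the stock over PAGE_SIZE
> >            * after just a few allocations, so drain_obj_stock() and
> >            * the uncharge path below it run frequently.
> >            */
> >           if (stock->nr_bytes > PAGE_SIZE)
> >                   drain_obj_stock(stock);
> >   }
> >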
> > The corresponding call graph is below (provided by Imran Khan):
> > |__alloc_skb
> > | |
> > | |__kmalloc_reserve.isra.61
> > | | |
> > | | |__kmalloc_node_track_caller
> > | | | |
> > | | | |slab_pre_alloc_hook.constprop.88
> > | | | obj_cgroup_charge
> > | | | | |
> > | | | | |__memcg_kmem_charge
> > | | | | | |
> > | | | | | |page_counter_try_charge
> > | | | | |
> > | | | | |refill_obj_stock
> > | | | | | |
> > | | | | | |drain_obj_stock.isra.68
> > | | | | | | |
> > | | | | | | |__memcg_kmem_uncharge
> > | | | | | | | |
> > | | | | | | | |page_counter_uncharge
> > | | | | | | | | |
> > | | | | | | | | |page_counter_cancel
> > | | | |
> > | | | |
> > | | | |__slab_alloc
> > | | | | |
> > | | | | |___slab_alloc
> > | | | | |
> > | | | |slab_post_alloc_hook
> >
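> > The uncharge side is costly because page_counter_uncharge() walks
> > the page_counter hierarchy and performs an atomic update per level,
> > roughly (simplified sketch of the existing code):
> >
> >   void page_counter_uncharge(struct page_counter *counter,
> >                              unsigned long nr_pages)
> >   {
> >           struct page_counter *c;
> >
> >           /* one atomic sub per hierarchy level on every drain */
> >           for (c = counter; c; c = c->parent)
> >                   page_counter_cancel(c, nr_pages);
> >   }
> >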
> > Instead of directly uncharging the accounted kernel memory, it's
> > possible to refill the generic page-sized per-cpu stock. This is
> > a much faster operation, especially on the default hierarchy. As
> > a bonus, __memcg_kmem_uncharge_page() also gets faster, so the
> > freeing of page-sized kernel allocations (e.g. large kmallocs)
> > speeds up as well.
> >
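> > In code, the change boils down to roughly the following sketch of
> > __memcg_kmem_uncharge() (details may differ from the actual diff):
> >
> >   void __memcg_kmem_uncharge(struct mem_cgroup *memcg,
> >                              unsigned int nr_pages)
> >   {
> >           if (!cgroup_subsys_on_dfl(memcg_cgrp_subsys))
> >                   page_counter_uncharge(&memcg->kmem, nr_pages);
> >
> >           /*
> >            * Refill the per-cpu stock instead of uncharging
> >            * memcg->memory directly: a subsequent charge on this
> >            * cpu can then be served without touching the atomic
> >            * page counters at all.
> >            */
> >           refill_stock(memcg, nr_pages);
> >   }
> >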
> > A similar change was made earlier for socket memory by the commit
> > 475d0487a2ad ("mm: memcontrol: use per-cpu stocks for socket memory
> > uncharging").
> >
> > Signed-off-by: Roman Gushchin <guro@...com>
> > Reported-by: Imran Khan <imran.f.khan@...cle.com>
>
> I remember seeing this somewhere
> https://lore.kernel.org/linux-mm/20190423154405.259178-1-shakeelb@google.com/
Yes, we've discussed it a couple of times, as I remember. It looks like
we now finally have good reasoning and a benchmark, thanks to Imran.
>
> Reviewed-by: Shakeel Butt <shakeelb@...gle.com>
Thank you for the review!