Message-ID: <CAOJsxLHnTDjY3X69zVVaZ0MbpRab-1+Rk3L5AnZD3oQEceHD7Q@mail.gmail.com>
Date: Thu, 20 Aug 2020 16:58:31 +0300
From: Pekka Enberg <penberg@...il.com>
To: Christopher Lameter <cl@...ux.com>
Cc: Vlastimil Babka <vbabka@...e.cz>,
Xunlei Pang <xlpang@...ux.alibaba.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Wen Yang <wenyang@...ux.alibaba.com>,
Yang Shi <yang.shi@...ux.alibaba.com>,
Roman Gushchin <guro@...com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH 1/2] mm/slub: Introduce two counters for the partial objects
Hi Christopher,
On Tue, Aug 11, 2020 at 3:52 PM Christopher Lameter <cl@...ux.com> wrote:
>
> On Fri, 7 Aug 2020, Pekka Enberg wrote:
>
> > Why do you consider this to be a fast path? This is all partial list
> > accounting when we allocate/deallocate a slab, no? Just like
> > ___slab_alloc() says, I assumed this to be the slow path... What am I
> > missing?
>
> I thought these were per object counters? If you just want to count the
> number of slabs then you do not need the lock at all. We already have a
> counter for the number of slabs.
The patch attempts to speed up count_partial(), which holds on to the
"n->list_lock" (with IRQs off) for the whole duration it takes to walk
the partial slab list:
        spin_lock_irqsave(&n->list_lock, flags);
        list_for_each_entry(page, &n->partial, slab_list)
                x += get_count(page);
        spin_unlock_irqrestore(&n->list_lock, flags);
It's counting the number of *objects*, but the counters are only
updated in bulk when we add/remove a slab to/from the partial list.
The counter updates are therefore *not* in the fast-path AFAICT.
Xunlei, please correct me if I'm reading your patches wrong.
On Tue, Aug 11, 2020 at 3:52 PM Christopher Lameter <cl@...ux.com> wrote:
> > No objections to alternative fixes, of course, but wrapping the
> > counters under CONFIG_DEBUG seems like just hiding the actual issue...
>
> CONFIG_DEBUG is on by default. It just compiles in the debug code and
> disables it so we can enable it with a kernel boot option. This is because
> we have had numerous issues in the past with "production" kernels that
> could not be recompiled with debug options. So just running the prod
> kernel with another option will allow you to find hard to debug issues in
> a full scale production deployment with potentially proprietary modules
> etc.
Yeah, it's been too long since I last looked at the code; I didn't
realize that even count_partial() is wrapped in CONFIG_DEBUG. So by all
means, let's wrap the new counters with that too.
- Pekka