Message-ID: <CAOJsxLHX62P0yvHZcXdje41zm_2demzTraqvHXAvfhVPp2HKsA@mail.gmail.com>
Date: Fri, 7 Aug 2020 10:25:59 +0300
From: Pekka Enberg <penberg@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Xunlei Pang <xlpang@...ux.alibaba.com>,
Christoph Lameter <cl@...ux.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Wen Yang <wenyang@...ux.alibaba.com>,
Yang Shi <yang.shi@...ux.alibaba.com>,
Roman Gushchin <guro@...com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH 1/2] mm/slub: Introduce two counters for the partial objects
On Thu, Aug 6, 2020 at 3:42 PM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> On 7/2/20 10:32 AM, Xunlei Pang wrote:
> > count_partial() spends a long time iterating under the node
> > list_lock when the partial page lists are large, which can cause a
> > thundering-herd effect on list_lock contention; e.g. it causes
> > business response-time jitter when "/proc/slabinfo" is read in our
> > production environments.
> >
> > This patch introduces two counters to maintain the actual number
> > of partial objects dynamically instead of iterating the partial
> > page lists with list_lock held.
> >
> > The new kmem_cache_node counters are pfree_objects and ptotal_objects.
> > They are mainly updated under list_lock in the slow path, so the
> > performance impact is minimal.
> >
> > Co-developed-by: Wen Yang <wenyang@...ux.alibaba.com>
> > Signed-off-by: Xunlei Pang <xlpang@...ux.alibaba.com>
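
(For readers skimming the thread: as I read the changelog, the
bookkeeping amounts to roughly the sketch below. The field layout and
the helper name are my guesses from the description above, not copied
from the actual patch.)

struct kmem_cache_node {
	spinlock_t list_lock;
	unsigned long nr_partial;
	struct list_head partial;
	/* new: running totals for the partial lists */
	atomic_long_t pfree_objects;	/* free objects on partial slabs */
	atomic_long_t ptotal_objects;	/* all objects on partial slabs */
	/* ... */
};

/*
 * Called from the alloc/free slow paths that already hold list_lock,
 * whenever a page is added to or removed from n->partial.
 */
static inline void partial_counters_add(struct kmem_cache_node *n,
					struct page *page, int sign)
{
	atomic_long_add(sign * (page->objects - page->inuse),
			&n->pfree_objects);
	atomic_long_add(sign * page->objects, &n->ptotal_objects);
}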
>
> This or something similar seems to be reported every few months now;
> the last time was here [1], AFAIK. The solution then was to just stop
> counting at some point.
>
> Shall we perhaps add these counters under CONFIG_SLUB_DEBUG then and be done
> with it? If anyone needs the extreme performance and builds without
> CONFIG_SLUB_DEBUG, I'd assume they also don't have userspace programs reading
> /proc/slabinfo periodically anyway?
I think we can just default to the counters. After all, if I
understood correctly, we're talking about up to 100 ms with IRQs
disabled whenever count_partial() is called. As this is triggerable
from user space, that's a performance bug whichever way you look at
it.
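For reference, count_partial() is essentially a full walk of the
partial list with IRQs off -- roughly the following, quoting mm/slub.c
from memory rather than verbatim:

static unsigned long count_partial(struct kmem_cache_node *n,
				   int (*get_count)(struct page *))
{
	unsigned long flags;
	unsigned long x = 0;
	struct page *page;

	/*
	 * The whole walk runs with IRQs disabled and list_lock held,
	 * so a node with a huge partial list stalls everything else
	 * for the duration -- hence the ~100 ms figure above.
	 */
	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(page, &n->partial, slab_list)
		x += get_count(page);
	spin_unlock_irqrestore(&n->list_lock, flags);
	return x;
}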
Whoever needs to eliminate these counters from the fast path can wrap
them in a CONFIG_MAKE_SLABINFO_EXTREMELY_SLOW option, along the lines
of the sketch below.
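Something like this, where the helper name is made up for
illustration:

#ifdef CONFIG_MAKE_SLABINFO_EXTREMELY_SLOW
static unsigned long partial_free_objects(struct kmem_cache_node *n)
{
	/* No counter updates in the fast path: pay at read time. */
	return count_partial(n, count_free);
}
#else
static unsigned long partial_free_objects(struct kmem_cache_node *n)
{
	/* Cheap snapshot maintained by the new counters. */
	return atomic_long_read(&n->pfree_objects);
}
#endif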
So for this patch, given the updated information about the severity of
the problem and the hackbench numbers:
Acked-by: Pekka Enberg <penberg@...nel.org>
Christoph, others, any objections?
- Pekka