Message-ID: <8813897d-4a52-37a0-fe44-a9157716be9b@google.com>
Date: Mon, 3 Jul 2023 13:17:53 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Julian Pidancet <julian.pidancet@...cle.com>
cc: Christoph Lameter <cl@...ux.com>,
"Lameter, Christopher" <cl@...amperecomputing.com>,
Pekka Enberg <penberg@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
Jonathan Corbet <corbet@....net>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, Matthew Wilcox <willy@...radead.org>,
Kees Cook <keescook@...omium.org>,
Rafael Aquini <aquini@...hat.com>
Subject: Re: [PATCH v2] mm/slub: disable slab merging in the default
configuration
On Mon, 3 Jul 2023, Julian Pidancet wrote:
> On Mon Jul 3, 2023 at 02:09, David Rientjes wrote:
> > I think we need more data beyond just kernbench. Christoph's point about
> > different page sizes is interesting. In the above results, I don't know
> > the page orders for the various slab caches that this workload will
> > stress. I think the memory overhead data may be different depending on
> > how slab_max_order is being used, if at all.
> >
> > We should be able to run this through a variety of different benchmarks
> > and measure peak slab usage at the same time for due diligence. I support
> > the change in the default; I would just prefer to know what the
> > implications of it are.
> >
> > Is it possible to collect data for other microbenchmarks and real-world
> > workloads? And perhaps also with different page sizes where this will
> > impact memory overhead more? I can help running more workloads once we
> > have the next set of data.
> >
>
> David,
>
> I agree about the need to perform those tests on hardware using larger
> pages. I will collect data if I have the chance to get my hands on one
> of these systems.
>
Thanks. I think arm64 should suffice for testing things like the 64KB
page size that Christoph was referring to.
We may also want to play around with slub_min_order on the kernel command
line, since that will inflate the size of slab pages and the larger pages
may produce different results.
> Do you have specific tests or workloads in mind? Compiling the kernel
> with files sitting on an XFS partition is not exhaustive, but it is the
> only test I could think of that is both easy to set up and can be
> reproduced while keeping external interference to a minimum.
>
The ones that Binder, cc'd, used to evaluate SLAB vs SLUB memory overhead:
hackbench
netperf
redis
specjbb2015
unixbench
will-it-scale
And Vlastimil had also suggested a few XFS specific benchmarks.
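For any of these, peak slab usage can be sampled alongside the run with
something very simple, e.g. (rough sketch, not a polished harness, and
the hackbench invocation is just an example):

  # sample the slab footprint once a second while the benchmark runs
  while sleep 1; do grep '^Slab:' /proc/meminfo; done > slab.log &
  SAMPLER=$!
  hackbench 10 process 1000
  kill $SAMPLER
  sort -nk2 slab.log | tail -1    # highest sampled value ~= peak usage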
I can help run any benchmarks that you're not able to, or step in if you
can't get your hands on an arm64 system.
Additionally, I wouldn't consider this to be super urgent: slab cache
merging has been this way for several years, so we have some time to
assess the implications of changing an important aspect of kernel memory
allocation that will affect everybody. I agree with the patch if we can
make it work; I'd just like to study its effects more fully beyond some
kernbench runs.
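For anyone reproducing this, the amount of merging on a given system is
easy to inspect (quick sketch; the slabinfo tool lives in tools/mm/ in
recent trees, tools/vm/ in older ones):

  cat /sys/kernel/slab/kmalloc-256/aliases   # caches folded into this one
  slabinfo -a                                # list all merge aliases
  # booting with slab_nomerge gives the behavior this patch would default to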