Message-ID: <r3qfus4j6awmixdbcopgva3lx2l3lrvlvuoqqns64q6qp33qep@2hsrrvfsojsm>
Date: Tue, 27 Jan 2026 11:34:35 -0500
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Harry Yoo <harry.yoo@...cle.com>, Petr Tesarik <ptesarik@...e.com>,
Christoph Lameter <cl@...two.org>,
David Rientjes <rientjes@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>, Hao Li <hao.li@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Uladzislau Rezki <urezki@...il.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Alexei Starovoitov <ast@...nel.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-rt-devel@...ts.linux.dev,
bpf@...r.kernel.org, kasan-dev@...glegroups.com
Subject: Re: [PATCH v4 06/22] slab: add sheaves to most caches
* Vlastimil Babka <vbabka@...e.cz> [260123 01:53]:
> In the first step to replace cpu (partial) slabs with sheaves, enable
> sheaves for almost all caches. Treat args->sheaf_capacity as a minimum,
> and calculate sheaf capacity with a formula that roughly follows the
> formula for number of objects in cpu partial slabs in set_cpu_partial().
>
> This should achieve roughly similar contention on the barn spin lock as
> there's currently for node list_lock without sheaves, to make
> benchmarking results comparable. It can be further tuned later.
>
> Don't enable sheaves for bootstrap caches as that wouldn't work. In
> order to recognize them by SLAB_NO_OBJ_EXT, make sure the flag exists
> even for !CONFIG_SLAB_OBJ_EXT.
>
> This limitation will be lifted for kmalloc caches after the necessary
> bootstrapping changes.
>
> Also do not enable sheaves for SLAB_NOLEAKTRACE caches to avoid
> recursion with kmemleak tracking (thanks to Breno Leitao).
>
> Reviewed-by: Suren Baghdasaryan <surenb@...gle.com>
> Reviewed-by: Harry Yoo <harry.yoo@...cle.com>
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
Is there a way to force a specific limit on the sheaf capacity if you
want a lower number than what calculate_sheaf_capacity() computes? As
written, the max() at the end means an explicit args->sheaf_capacity can
only raise the capacity above the calculated value, never lower it. I'm
not sure a smaller number is practical to want, though.
Reviewed-by: Liam R. Howlett <Liam.Howlett@...cle.com>
> ---
> include/linux/slab.h | 6 ------
> mm/slub.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++++----
> 2 files changed, 52 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 2482992248dc..2682ee57ec90 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -57,9 +57,7 @@ enum _slab_flag_bits {
> #endif
> _SLAB_OBJECT_POISON,
> _SLAB_CMPXCHG_DOUBLE,
> -#ifdef CONFIG_SLAB_OBJ_EXT
> _SLAB_NO_OBJ_EXT,
> -#endif
> _SLAB_FLAGS_LAST_BIT
> };
>
> @@ -238,11 +236,7 @@ enum _slab_flag_bits {
> #define SLAB_TEMPORARY SLAB_RECLAIM_ACCOUNT /* Objects are short-lived */
>
> /* Slab created using create_boot_cache */
> -#ifdef CONFIG_SLAB_OBJ_EXT
> #define SLAB_NO_OBJ_EXT __SLAB_FLAG_BIT(_SLAB_NO_OBJ_EXT)
> -#else
> -#define SLAB_NO_OBJ_EXT __SLAB_FLAG_UNUSED
> -#endif
>
> /*
> * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
> diff --git a/mm/slub.c b/mm/slub.c
> index 9d86c0505dcd..594f5fac39b3 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -7880,6 +7880,53 @@ static void set_cpu_partial(struct kmem_cache *s)
> #endif
> }
>
> +static unsigned int calculate_sheaf_capacity(struct kmem_cache *s,
> + struct kmem_cache_args *args)
> +
> +{
> + unsigned int capacity;
> + size_t size;
> +
> +
> + if (IS_ENABLED(CONFIG_SLUB_TINY) || s->flags & SLAB_DEBUG_FLAGS)
> + return 0;
> +
> + /*
> + * Bootstrap caches can't have sheaves for now (SLAB_NO_OBJ_EXT).
> + * SLAB_NOLEAKTRACE caches (e.g., kmemleak's object_cache) must not
> + * have sheaves to avoid recursion when sheaf allocation triggers
> + * kmemleak tracking.
> + */
> + if (s->flags & (SLAB_NO_OBJ_EXT | SLAB_NOLEAKTRACE))
> + return 0;
> +
> + /*
> + * For now we use roughly similar formula (divided by two as there are
> + * two percpu sheaves) as what was used for percpu partial slabs, which
> + * should result in similar lock contention (barn or list_lock)
> + */
> + if (s->size >= PAGE_SIZE)
> + capacity = 4;
> + else if (s->size >= 1024)
> + capacity = 12;
> + else if (s->size >= 256)
> + capacity = 26;
> + else
> + capacity = 60;
> +
> + /* Increment capacity to make sheaf exactly a kmalloc size bucket */
> + size = struct_size_t(struct slab_sheaf, objects, capacity);
> + size = kmalloc_size_roundup(size);
> + capacity = (size - struct_size_t(struct slab_sheaf, objects, 0)) / sizeof(void *);
> +
> + /*
> + * Respect an explicit request for capacity that's typically motivated by
> + * expected maximum size of kmem_cache_prefill_sheaf() to not end up
> + * using low-performance oversize sheaves
> + */
> + return max(capacity, args->sheaf_capacity);
> +}
> +
> /*
> * calculate_sizes() determines the order and the distribution of data within
> * a slab object.
> @@ -8014,6 +8061,10 @@ static int calculate_sizes(struct kmem_cache_args *args, struct kmem_cache *s)
> if (s->flags & SLAB_RECLAIM_ACCOUNT)
> s->allocflags |= __GFP_RECLAIMABLE;
>
> + /* kmalloc caches need extra care to support sheaves */
> + if (!is_kmalloc_cache(s))
> + s->sheaf_capacity = calculate_sheaf_capacity(s, args);
> +
> /*
> * Determine the number of objects per slab
> */
> @@ -8618,15 +8669,12 @@ int do_kmem_cache_create(struct kmem_cache *s, const char *name,
>
> set_cpu_partial(s);
>
> - if (args->sheaf_capacity && !IS_ENABLED(CONFIG_SLUB_TINY)
> - && !(s->flags & SLAB_DEBUG_FLAGS)) {
> + if (s->sheaf_capacity) {
> s->cpu_sheaves = alloc_percpu(struct slub_percpu_sheaves);
> if (!s->cpu_sheaves) {
> err = -ENOMEM;
> goto out;
> }
> - // TODO: increase capacity to grow slab_sheaf up to next kmalloc size?
> - s->sheaf_capacity = args->sheaf_capacity;
> }
>
> #ifdef CONFIG_NUMA
>
> --
> 2.52.0
>
>