Message-ID: <mxrcthlqj6rbecg5z33lc7oqnbicr5fn5lmvni2tjo2dc3oe76@u5vettfyypl4>
Date: Tue, 20 Jan 2026 10:55:25 +0800
From: Hao Li <hao.li@...ux.dev>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Harry Yoo <harry.yoo@...cle.com>, Petr Tesarik <ptesarik@...e.com>,
Christoph Lameter <cl@...two.org>, David Rientjes <rientjes@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>, Andrew Morton <akpm@...ux-foundation.org>,
Uladzislau Rezki <urezki@...il.com>, "Liam R. Howlett" <Liam.Howlett@...cle.com>,
Suren Baghdasaryan <surenb@...gle.com>, Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Alexei Starovoitov <ast@...nel.org>, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-rt-devel@...ts.linux.dev, bpf@...r.kernel.org, kasan-dev@...glegroups.com
Subject: Re: [PATCH v3 09/21] slab: add optimized sheaf refill from partial
list
On Fri, Jan 16, 2026 at 03:40:29PM +0100, Vlastimil Babka wrote:
> At this point we have sheaves enabled for all caches, but their refill
> is done via __kmem_cache_alloc_bulk() which relies on cpu (partial)
> slabs - now a redundant caching layer that we are about to remove.
>
> The refill will thus be done from slabs on the node partial list.
> Introduce new functions that can do that in an optimized way as it's
> easier than modifying the __kmem_cache_alloc_bulk() call chain.
>
> Extend struct partial_context so it can return a list of slabs from the
> partial list with the sum of free objects in them within the requested
> min and max.
>
> Introduce get_partial_node_bulk() that removes the slabs from the
> partial list and returns them in the list.
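
For my own understanding, I sketched what this helper presumably looks
like based on the description - detach slabs under n->list_lock until
the summed free-object count reaches pc->min_objects without overshooting
pc->max_objects. The details below are my guess, not the actual patch:

    /* Sketch only: bulk-detach slabs from the node partial list. */
    static unsigned int get_partial_node_bulk(struct kmem_cache *s,
                                              struct kmem_cache_node *n,
                                              struct partial_context *pc)
    {
        struct slab *slab, *tmp;
        unsigned int total_free = 0;
        unsigned long flags;

        INIT_LIST_HEAD(&pc->slabs);

        spin_lock_irqsave(&n->list_lock, flags);
        list_for_each_entry_safe(slab, tmp, &n->partial, slab_list) {
            unsigned int free = slab->objects - slab->inuse;

            /* skip slabs that would overshoot max, unless we have none yet */
            if (total_free && total_free + free > pc->max_objects)
                continue;

            remove_partial(n, slab);
            list_add_tail(&slab->slab_list, &pc->slabs);
            total_free += free;

            if (total_free >= pc->min_objects)
                break;
        }
        spin_unlock_irqrestore(&n->list_lock, flags);

        return total_free;
    }

The free count is of course stale the moment the lock is dropped, which
is presumably why __refill_objects() below has to handle the surplus.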
>
> Introduce get_freelist_nofreeze() which grabs the freelist without
> freezing the slab.
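
I'd expect this one to mirror the existing get_freelist() cmpxchg loop,
just with new.frozen left at zero, roughly (again my guess, not the
actual patch):

    /* Sketch: atomically take the whole freelist, slab stays unfrozen. */
    static void *get_freelist_nofreeze(struct kmem_cache *s, struct slab *slab)
    {
        struct slab new;
        unsigned long counters;
        void *freelist;

        do {
            freelist = slab->freelist;
            counters = slab->counters;

            new.counters = counters;
            /* all objects become accounted as in use */
            new.inuse = slab->objects;
            new.frozen = 0;
        } while (!__slab_update_freelist(s, slab,
            freelist, counters,
            NULL, new.counters,
            "get_freelist_nofreeze"));

        return freelist;
    }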
>
> Introduce alloc_from_new_slab() which can allocate multiple objects from
> a newly allocated slab where we don't need to synchronize with freeing.
> In some aspects it's similar to alloc_single_from_new_slab() but assumes
> the cache is a non-debug one so it can avoid some actions.
>
> Introduce __refill_objects() that uses the functions above to fill an
> array of objects. It has to handle the possibility that the slabs will
> contain more objects than were requested, due to concurrent freeing of
> objects to those slabs. When no more slabs on partial lists are
> available, it will allocate new slabs. It is intended to be used only
> in contexts where spinning is allowed, so add a WARN_ON_ONCE check there.
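
So if I follow, the overall flow is roughly the following (a very rough
sketch; surplus-object and error handling omitted, and the exact
signature is my guess):

    static unsigned int __refill_objects(struct kmem_cache *s, void **p,
                                         gfp_t gfp, unsigned int min,
                                         unsigned int max)
    {
        struct partial_context pc = { .flags = gfp, .min_objects = min,
                                      .max_objects = max };
        unsigned int refilled = 0;
        struct slab *slab, *tmp;

        /* refilling takes n->list_lock, so spinning must be allowed */
        if (WARN_ON_ONCE(!gfpflags_allow_spinning(gfp)))
            return 0;

        get_partial_node_bulk(s, get_node(s, numa_mem_id()), &pc);

        list_for_each_entry_safe(slab, tmp, &pc.slabs, slab_list) {
            void *object = get_freelist_nofreeze(s, slab);

            /*
             * Concurrent frees may have grown the freelist beyond what
             * get_partial_node_bulk() accounted for; stop at max and
             * return any surplus objects (not shown here).
             */
            while (object && refilled < max) {
                p[refilled++] = object;
                object = get_freepointer(s, object);
            }
        }

        /* partial list exhausted - fall back to newly allocated slabs */
        while (refilled < min) {
            slab = new_slab(s, gfp, NUMA_NO_NODE);
            if (!slab)
                break;
            refilled += alloc_from_new_slab(s, slab, p + refilled,
                                            min - refilled, true);
        }

        return refilled;
    }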
>
> Finally, switch refill_sheaf() to use __refill_objects(). Sheaves are
> only refilled from contexts that allow spinning, or even blocking.
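
Presumably the call site then becomes as simple as something like this
(the bounds and return convention are my guesses):

    static int refill_sheaf(struct kmem_cache *s, struct slab_sheaf *sheaf,
                            gfp_t gfp)
    {
        unsigned int to_fill = sheaf->capacity - sheaf->size;

        sheaf->size += __refill_objects(s, &sheaf->objects[sheaf->size],
                                        gfp, to_fill, to_fill);

        return (sheaf->size == sheaf->capacity) ? 0 : -ENOMEM;
    }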
>
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> mm/slub.c | 284 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 264 insertions(+), 20 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 9bea8a65e510..dce80463f92c 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -246,6 +246,9 @@ struct partial_context {
> gfp_t flags;
> unsigned int orig_size;
> void *object;
> + unsigned int min_objects;
> + unsigned int max_objects;
> + struct list_head slabs;
> };
>
...
> +static unsigned int alloc_from_new_slab(struct kmem_cache *s, struct slab *slab,
> + void **p, unsigned int count, bool allow_spin)
> +{
> + unsigned int allocated = 0;
> + struct kmem_cache_node *n;
> + unsigned long flags;
> + void *object;
> +
> + if (!allow_spin && (slab->objects - slab->inuse) > count) {
I was wondering: given that slab->inuse is 0 for a newly allocated slab, is
there a reason to use "slab->objects - slab->inuse" instead of simply
slab->objects?
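
I.e., it could presumably be simplified to:

    /* slab->inuse is 0 for a newly allocated slab */
    if (!allow_spin && slab->objects > count) {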
> +
> + n = get_node(s, slab_nid(slab));
> +
> + if (!spin_trylock_irqsave(&n->list_lock, flags)) {
> + /* Unlucky, discard newly allocated slab */
> + defer_deactivate_slab(slab, NULL);
> + return 0;
> + }
> + }
> +
> + object = slab->freelist;
> + while (object && allocated < count) {
> + p[allocated] = object;
> + object = get_freepointer(s, object);
> + maybe_wipe_obj_freeptr(s, p[allocated]);
> +
> + slab->inuse++;
> + allocated++;
> + }
> + slab->freelist = object;
> +
> + if (slab->freelist) {
> +
> + if (allow_spin) {
> + n = get_node(s, slab_nid(slab));
> + spin_lock_irqsave(&n->list_lock, flags);
> + }
> + add_partial(n, slab, DEACTIVATE_TO_HEAD);
> + spin_unlock_irqrestore(&n->list_lock, flags);
> + }
> +
> + inc_slabs_node(s, slab_nid(slab), slab->objects);
> + return allocated;
> +}
> +
...