Message-ID: <a3e1d8cc-f7f1-40bc-91e2-1ce5c5b53eaf@suse.cz>
Date: Wed, 21 Jan 2026 11:52:51 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: Harry Yoo <harry.yoo@...cle.com>, Petr Tesarik <ptesarik@...e.com>,
Christoph Lameter <cl@...two.org>, David Rientjes <rientjes@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>, Hao Li <hao.li@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Uladzislau Rezki <urezki@...il.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Alexei Starovoitov <ast@...nel.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-rt-devel@...ts.linux.dev,
bpf@...r.kernel.org, kasan-dev@...glegroups.com
Subject: Re: [PATCH v3 06/21] slab: introduce percpu sheaves bootstrap
On 1/17/26 03:11, Suren Baghdasaryan wrote:
>> @@ -7379,7 +7405,7 @@ void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
>> * freeing to sheaves is so incompatible with the detached freelist so
>> * once we go that way, we have to do everything differently
>> */
>> - if (s && s->cpu_sheaves) {
>> + if (s && cache_has_sheaves(s)) {
>> free_to_pcs_bulk(s, size, p);
>> return;
>> }
>> @@ -7490,8 +7516,7 @@ int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size,
>> size--;
>> }
>>
>> - if (s->cpu_sheaves)
>> - i = alloc_from_pcs_bulk(s, size, p);
>> + i = alloc_from_pcs_bulk(s, size, p);
>
> Doesn't the above change make this fastpath a bit longer? IIUC,
> instead of bailing out right here we call alloc_from_pcs_bulk() and
> bail out from there because pcs->main->size is 0.
But only for caches with no sheaves, and those should be the exception. So
the fast path avoids the check; it's the slow path we're making longer. The
strategy is the same as for single-object alloc, and is described in the
changelog. Or what am I missing?