Message-ID: <8a8271f1-a695-4eeb-9a98-3d6268ed0d45@suse.cz>
Date: Wed, 29 Oct 2025 18:46:08 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>,
 Chris Mason <clm@...a.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
 Christoph Lameter <cl@...two.org>, David Rientjes <rientjes@...gle.com>,
 Roman Gushchin <roman.gushchin@...ux.dev>, Harry Yoo <harry.yoo@...cle.com>,
 Uladzislau Rezki <urezki@...il.com>,
 "Liam R. Howlett" <Liam.Howlett@...cle.com>,
 Suren Baghdasaryan <surenb@...gle.com>,
 Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
 Alexei Starovoitov <ast@...nel.org>, linux-mm <linux-mm@...ck.org>,
 LKML <linux-kernel@...r.kernel.org>, linux-rt-devel@...ts.linux.dev,
 bpf <bpf@...r.kernel.org>, kasan-dev <kasan-dev@...glegroups.com>
Subject: Re: [PATCH RFC 07/19] slab: make percpu sheaves compatible with
 kmalloc_nolock()/kfree_nolock()
On 10/24/25 21:43, Alexei Starovoitov wrote:
> On Thu, Oct 23, 2025 at 6:53 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>>
>> Before we enable percpu sheaves for kmalloc caches, we need to make sure
>> kmalloc_nolock() and kfree_nolock() will continue working properly and
>> not spin when not allowed to.
>>
>> Percpu sheaves themselves use local_trylock() so they are already
>> compatible. We just need to be careful with the barn->lock spin_lock.
>> Pass a new allow_spin parameter where necessary to use
>> spin_trylock_irqsave().
>>
>> In kmalloc_nolock_noprof() we can now attempt alloc_from_pcs() safely,
>> for now it will always fail until we enable sheaves for kmalloc caches
>> next. Similarly in kfree_nolock() we can attempt free_to_pcs().
>>
>> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
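To make the last point concrete, the barn->lock handling ends up looking
roughly like this — a simplified sketch of barn_get_empty_sheaf(), not the
exact hunk; the list/counter names are as in the sheaves series:

static struct slab_sheaf *barn_get_empty_sheaf(struct node_barn *barn,
					       bool allow_spin)
{
	struct slab_sheaf *empty = NULL;
	unsigned long flags;

	/* kmalloc_nolock() callers must not spin on the barn lock */
	if (allow_spin)
		spin_lock_irqsave(&barn->lock, flags);
	else if (!spin_trylock_irqsave(&barn->lock, flags))
		return NULL;

	if (barn->nr_empty) {
		empty = list_first_entry(&barn->sheaves_empty,
					 struct slab_sheaf, barn_list);
		list_del(&empty->barn_list);
		barn->nr_empty--;
	}

	spin_unlock_irqrestore(&barn->lock, flags);

	return empty;
}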
...
>> @@ -5720,6 +5735,13 @@ void *kmalloc_nolock_noprof(size_t size, gfp_t gfp_flags, int node)
>>                  */
>>                 return NULL;
>>
>> +       ret = alloc_from_pcs(s, alloc_gfp, node);
>> +
> 
> I would remove the empty line here.
Ack.
>> @@ -6093,6 +6117,11 @@ __pcs_replace_full_main(struct kmem_cache *s, struct slub_percpu_sheaves *pcs)
>>                 return pcs;
>>         }
>>
>> +       if (!allow_spin) {
>> +               local_unlock(&s->cpu_sheaves->lock);
>> +               return NULL;
>> +       }
> 
> and would add a comment here to elaborate that the next
> steps like sheaf_flush_unused() and alloc_empty_sheaf()
> cannot handle !allow_spin.
Will do.
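Something like this, perhaps (sketch, exact comment wording TBD):

	if (!allow_spin) {
		/*
		 * The slow paths below (sheaf_flush_unused(),
		 * alloc_empty_sheaf()) may spin or block, which we can't do
		 * here, so give up and let the caller fall back to the
		 * non-sheaf path.
		 */
		local_unlock(&s->cpu_sheaves->lock);
		return NULL;
	}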
>> +
>>         if (PTR_ERR(empty) == -E2BIG) {
>>                 /* Since we got here, spare exists and is full */
>>                 struct slab_sheaf *to_flush = pcs->spare;
>> @@ -6160,7 +6189,7 @@ __pcs_replace_full_main(struct kmem_cache *s, struct slub_percpu_sheaves *pcs)
>>   * The object is expected to have passed slab_free_hook() already.
>>   */
>>  static __fastpath_inline
>> -bool free_to_pcs(struct kmem_cache *s, void *object)
>> +bool free_to_pcs(struct kmem_cache *s, void *object, bool allow_spin)
>>  {
>>         struct slub_percpu_sheaves *pcs;
>>
>> @@ -6171,7 +6200,7 @@ bool free_to_pcs(struct kmem_cache *s, void *object)
>>
>>         if (unlikely(pcs->main->size == s->sheaf_capacity)) {
>>
>> -               pcs = __pcs_replace_full_main(s, pcs);
>> +               pcs = __pcs_replace_full_main(s, pcs, allow_spin);
>>                 if (unlikely(!pcs))
>>                         return false;
>>         }
>> @@ -6278,7 +6307,7 @@ bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj)
>>                         goto fail;
>>                 }
>>
>> -               empty = barn_get_empty_sheaf(barn);
>> +               empty = barn_get_empty_sheaf(barn, true);
>>
>>                 if (empty) {
>>                         pcs->rcu_free = empty;
>> @@ -6398,7 +6427,7 @@ static void free_to_pcs_bulk(struct kmem_cache *s, size_t size, void **p)
>>                 goto no_empty;
>>
>>         if (!pcs->spare) {
>> -               empty = barn_get_empty_sheaf(barn);
>> +               empty = barn_get_empty_sheaf(barn, true);
> 
> I'm allergic to booleans in arguments. They make callsites
> hard to read. Especially if there are multiple bools.
> We have horrendous lines in the verifier that we still need
> to clean up due to bools:
> check_load_mem(env, insn, true, false, false, "atomic_load");
> 
> barn_get_empty_sheaf(barn, true); looks benign,
> but I would still use enum { DONT_SPIN, ALLOW_SPIN }
> and use that in all functions instead of 'bool allow_spin'.
I'll put it on the TODO list. But I think it's just following the pattern of
what you did in all the work leading to kmalloc_nolock() :)
And it's a single bool for an internal function with limited exposure, so it
might be overkill. Will see.
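If I end up doing the conversion, I'd expect it to look something like:

	enum spin_mode {
		DONT_SPIN,
		ALLOW_SPIN,
	};

with the callsites then becoming e.g.:

	empty = barn_get_empty_sheaf(barn, ALLOW_SPIN);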
> Aside from that I got worried that the sheaves fast path
> may not be optimized well by the compiler:
> if (unlikely(pcs->main->size == 0)) ...
> object = pcs->main->objects[pcs->main->size - 1];
> // object is accessed here
only by virt_to_folio(), which takes a const void *x and is probably inlined
anyway...
> pcs->main->size--;
> 
> since object may alias into pcs->main and the compiler
> may be tempted to reload 'main'.
Interesting, I wouldn't have thought about that possibility.
> Looks like it's fine, since the object pointer is not actually read or written.
Wonder if it figures that out or just assumes it would be undefined behavior
(or would we need strict aliasing to allow that assumption?). But good to know
it looks OK, thanks!
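FWIW, if the reload ever did show up, caching the sheaf pointer in a local
would make the single load explicit. An untested sketch (not the actual
slub.c code):

	struct slab_sheaf *main = pcs->main;

	if (unlikely(main->size == 0))
		return NULL;	/* slow path in the real code */

	/* one load of size, one store back, no reload of pcs->main */
	object = main->objects[--main->size];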
> gcc15 asm looks good:
>         movq    8(%rbx), %rdx   # _68->main, _69
>         movl    24(%rdx), %eax  # _69->size, _70
> # ../mm/slub.c:5129:    if (unlikely(pcs->main->size == 0)) {
>         testl   %eax, %eax      # _70
>         je      .L2076  #,
> .L1953:
> # ../mm/slub.c:5135:    object = pcs->main->objects[pcs->main->size - 1];
>         leal    -1(%rax), %esi  #,
> # ../mm/slub.c:5135:    object = pcs->main->objects[pcs->main->size - 1];
>         movq    32(%rdx,%rsi,8), %rdi   # prephitmp_309->objects[_81], object
> # ../mm/slub.c:5135:    object = pcs->main->objects[pcs->main->size - 1];
>         movq    %rsi, %rax      #,
> # ../mm/slub.c:5137:    if (unlikely(node_requested)) {
>         testb   %r15b, %r15b    # node_requested
>         jne     .L2077  #,
> .L1954:
> # ../mm/slub.c:5149:    pcs->main->size--;
>         movl    %eax, 24(%rdx)  # _81, prephitmp_30->size