Message-ID: <gv3ixsxai47hjv2pzpnptcjeqw7ikt5nnds22hkxlbtk7wgnfd@rzzcijtth6f6>
Date: Mon, 19 Jan 2026 20:06:35 +0800
From: Hao Li <hao.li@...ux.dev>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Harry Yoo <harry.yoo@...cle.com>, Petr Tesarik <ptesarik@...e.com>, 
	Christoph Lameter <cl@...two.org>, David Rientjes <rientjes@...gle.com>, 
	Roman Gushchin <roman.gushchin@...ux.dev>, Andrew Morton <akpm@...ux-foundation.org>, 
	Uladzislau Rezki <urezki@...il.com>, "Liam R. Howlett" <Liam.Howlett@...cle.com>, 
	Suren Baghdasaryan <surenb@...gle.com>, Sebastian Andrzej Siewior <bigeasy@...utronix.de>, 
	Alexei Starovoitov <ast@...nel.org>, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	linux-rt-devel@...ts.linux.dev, bpf@...r.kernel.org, kasan-dev@...glegroups.com
Subject: Re: [PATCH v3 07/21] slab: make percpu sheaves compatible with
 kmalloc_nolock()/kfree_nolock()

On Mon, Jan 19, 2026 at 11:23:04AM +0100, Vlastimil Babka wrote:
> On 1/19/26 11:09, Vlastimil Babka wrote:
> > On 1/19/26 05:31, Harry Yoo wrote:
> >> On Fri, Jan 16, 2026 at 03:40:27PM +0100, Vlastimil Babka wrote:
> >>> Before we enable percpu sheaves for kmalloc caches, we need to make sure
> >>> kmalloc_nolock() and kfree_nolock() will continue working properly and
> >>> not spin when not allowed to.
> >>> 
> >>> Percpu sheaves themselves use local_trylock() so they are already
> >>> compatible. We just need to be careful with the barn->lock spin_lock.
> >>> Pass a new allow_spin parameter where necessary to use
> >>> spin_trylock_irqsave().
> >>> 
> >>> In kmalloc_nolock_noprof() we can now attempt alloc_from_pcs() safely;
> >>> for now it will always fail, until we enable sheaves for kmalloc caches
> >>> next. Similarly, in kfree_nolock() we can attempt free_to_pcs().
> >>> 
> >>> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
> >>> ---
> >> 
> >> Looks good to me,
> >> Reviewed-by: Harry Yoo <harry.yoo@...cle.com>
> > 
> > Thanks.
> > 
> >> 
> >> with a nit below.
> >> 
> >>>  mm/slub.c | 79 ++++++++++++++++++++++++++++++++++++++++++++-------------------
> >>>  1 file changed, 56 insertions(+), 23 deletions(-)
> >>> 
> >>> diff --git a/mm/slub.c b/mm/slub.c
> >>> index 706cb6398f05..b385247c219f 100644
> >>> --- a/mm/slub.c
> >>> +++ b/mm/slub.c
> >>> @@ -6703,7 +6735,7 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
> >>>  
> >>>  	if (likely(!IS_ENABLED(CONFIG_NUMA) || slab_nid(slab) == numa_mem_id())
> >>>  	    && likely(!slab_test_pfmemalloc(slab))) {
> >>> -		if (likely(free_to_pcs(s, object)))
> >>> +		if (likely(free_to_pcs(s, object, true)))
> >>>  			return;
> >>>  	}
> >>>  
> >>> @@ -6964,7 +6996,8 @@ void kfree_nolock(const void *object)
> >>>  	 * since kasan quarantine takes locks and not supported from NMI.
> >>>  	 */
> >>>  	kasan_slab_free(s, x, false, false, /* skip quarantine */true);
> >>> -	do_slab_free(s, slab, x, x, 0, _RET_IP_);
> >>> +	if (!free_to_pcs(s, x, false))
> >>> +		do_slab_free(s, slab, x, x, 0, _RET_IP_);
> >>>  }
> >> 
> >> nit: Maybe it's not that common, but should we bypass sheaves if the
> >> object is from a remote NUMA node, just like slab_free() does?
> > 
> > Right, will do.
> 
> However that means sheaves will help less with the defer_free() avoidance
> here. It becomes more obvious after "slab: remove the do_slab_free()
> fastpath". All remote object frees will be deferred. Guess we can revisit
> later if we see there are too many and have no better solution...

This makes sense to me, and the commit looks good as well. Thanks!
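
For anyone following the series, my mental model of the barn->lock handling
is roughly the sketch below. This is illustrative only -- the helper name
and struct fields are made up for this mail, not the actual mm/slub.c code:

	/*
	 * Sketch of the allow_spin pattern from the commit message.
	 * barn_try_get_full() and the field names are hypothetical.
	 */
	static struct slab_sheaf *barn_try_get_full(struct node_barn *barn,
						    bool allow_spin)
	{
		struct slab_sheaf *sheaf = NULL;
		unsigned long flags;

		if (allow_spin) {
			spin_lock_irqsave(&barn->lock, flags);
		} else if (!spin_trylock_irqsave(&barn->lock, flags)) {
			/* nolock context: must not spin, so just fail */
			return NULL;
		}

		if (!list_empty(&barn->sheaves_full)) {
			sheaf = list_first_entry(&barn->sheaves_full,
						 struct slab_sheaf, barn_list);
			list_del(&sheaf->barn_list);
		}

		spin_unlock_irqrestore(&barn->lock, flags);
		return sheaf;
	}

With allow_spin=false the caller simply falls back instead of spinning,
which is what makes it safe for kmalloc_nolock()/kfree_nolock() to attempt
the sheaf paths at all.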

Reviewed-by: Hao Li <hao.li@...ux.dev>
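
As for the remote-node bypass Harry suggested, I assume it would just
mirror the slab_free() check quoted above, roughly like this (untested
sketch, based on the hunks in this patch):

	/* Bypass sheaves for remote-node or pfmemalloc objects, as slab_free() does */
	if (likely(!IS_ENABLED(CONFIG_NUMA) || slab_nid(slab) == numa_mem_id())
	    && likely(!slab_test_pfmemalloc(slab))) {
		if (free_to_pcs(s, x, false))
			return;
	}
	do_slab_free(s, slab, x, x, 0, _RET_IP_);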

> 
> >>>  EXPORT_SYMBOL_GPL(kfree_nolock);
> >>>  
> >>> @@ -7516,7 +7549,7 @@ int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size,
> >>>  		size--;
> >>>  	}
> >>>  
> >>> -	i = alloc_from_pcs_bulk(s, size, p);
> >>> +	i = alloc_from_pcs_bulk(s, flags, size, p);
> >>>  
> >>>  	if (i < size) {
> >>>  		/*
> >>> 
> >> 
> > 
> 
