Message-ID: <938fa0c7-86c3-44d4-b583-0612458aed98@suse.cz>
Date: Thu, 15 May 2025 10:59:14 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Harry Yoo <harry.yoo@...cle.com>
Cc: Suren Baghdasaryan <surenb@...gle.com>,
 "Liam R. Howlett" <Liam.Howlett@...cle.com>, Christoph Lameter
 <cl@...ux.com>, David Rientjes <rientjes@...gle.com>,
 Roman Gushchin <roman.gushchin@...ux.dev>,
 Uladzislau Rezki <urezki@...il.com>, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, rcu@...r.kernel.org,
 maple-tree@...ts.infradead.org
Subject: Re: [PATCH v4 9/9] mm, slub: skip percpu sheaves for remote object
 freeing

On 5/7/25 12:39, Harry Yoo wrote:
> On Fri, Apr 25, 2025 at 10:27:29AM +0200, Vlastimil Babka wrote:
>> Since we don't control the NUMA locality of objects in percpu sheaves,
>> allocations with node restrictions bypass them. Allocations without
>> restrictions may, however, still expect to get local objects with high
>> probability, and the introduction of sheaves can decrease it due to
>> objects freed from a remote node ending up in percpu sheaves.
>> 
>> The fraction of such remote frees seems low (5% on an 8-node machine)
>> but it can be expected that some cache- or workload-specific corner cases
>> exist. We can either conclude that this is not a problem due to the low
>> fraction, or we can make remote frees bypass percpu sheaves and go
>> directly to their slabs. This will make the remote frees more expensive,
>> but if it's only a small fraction, most frees will still benefit from
>> the lower overhead of percpu sheaves.
>> 
>> This patch thus makes remote object freeing bypass percpu sheaves,
>> including bulk freeing, and kfree_rcu() via the rcu_free sheaf. However,
>> it's not intended to be a 100% guarantee that percpu sheaves will only
>> contain local objects. The refill from slabs does not provide that
>> guarantee in the first place, and there might be cpu migrations
>> happening when we need to unlock the local_lock. Avoiding all of that
>> would be possible but complicated, so we leave investigating whether it
>> would be worth it for later. It can be expected that the more selective
>> freeing will itself prevent accumulation of remote objects in percpu
>> sheaves, so any such violations would have only short-term effects.
>> 
>> Another possible optimization to investigate is whether it would be
>> beneficial for node-restricted or strict_numa allocations to attempt to
>> obtain an object from percpu sheaves if the node or mempolicy (i.e.
>> MPOL_LOCAL) happens to want the local node of the allocating cpu. Right
>> now such allocations bypass sheaves, but they could probably first check
>> whether the first available object in percpu sheaves is local, and with
>> high probability succeed - and only bypass the sheaves in case it's not
>> local.
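
A rough sketch of that idea, for illustration only - the helper name and
the sheaf layout assumed below (pcs->main->objects[] with a size count)
are assumptions for the sketch, not necessarily what the series
implements:

static void *try_alloc_local_from_pcs(struct kmem_cache *s, int node)
{
	struct slub_percpu_sheaves *pcs;
	void *object = NULL;

	local_lock(&s->cpu_sheaves->lock);
	pcs = this_cpu_ptr(s->cpu_sheaves);

	/* peek at the first available object without committing to it */
	if (pcs->main->size > 0) {
		void *top = pcs->main->objects[pcs->main->size - 1];

		/* take it only when it happens to be on the wanted node */
		if (slab_nid(virt_to_slab(top)) == node)
			object = pcs->main->objects[--pcs->main->size];
	}

	local_unlock(&s->cpu_sheaves->lock);

	/* NULL tells the caller to bypass the sheaves as it does today */
	return object;
}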
>> 
>> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
>> ---
>>  mm/slab_common.c |  7 +++++--
>>  mm/slub.c        | 43 +++++++++++++++++++++++++++++++++++++------
>>  2 files changed, 42 insertions(+), 8 deletions(-)
>> 
>> diff --git a/mm/slub.c b/mm/slub.c
>> index cc273cc45f632e16644355831132cdc391219cec..2bf83e2b85b23f4db2b311edaded4bef6b7d01de 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -5924,8 +5948,15 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
>>  	if (unlikely(!slab_free_hook(s, object, slab_want_init_on_free(s), false)))
>>  		return;
>>  
>> -	if (!s->cpu_sheaves || !free_to_pcs(s, object))
>> -		do_slab_free(s, slab, object, object, 1, addr);
>> +	if (s->cpu_sheaves) {
>> +		if (likely(!IS_ENABLED(CONFIG_NUMA) ||
>> +			   slab_nid(slab) == numa_node_id())) {
>> +			free_to_pcs(s, object);
> 
> Shouldn't it call do_slab_free() when free_to_pcs() failed?

Oops yes, thanks!
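
Something along these lines should do it, i.e. fall through to
do_slab_free() also when free_to_pcs() fails (untested sketch):

	if (s->cpu_sheaves) {
		if (likely(!IS_ENABLED(CONFIG_NUMA) ||
			   slab_nid(slab) == numa_node_id())) {
			if (likely(free_to_pcs(s, object)))
				return;
		}
	}

	do_slab_free(s, slab, object, object, 1, addr);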

> 
>> +			return;
>> +		}
>> +	}
>> +
>> +	do_slab_free(s, slab, object, object, 1, addr);
>>  }
>>  
>>  #ifdef CONFIG_MEMCG
>> 
>> -- 
>> 2.49.0
>> 
>> 
> 

