Message-ID: <20250915001139.7101-1-hdanton@sina.com>
Date: Mon, 15 Sep 2025 08:11:37 +0800
From: Hillf Danton <hdanton@...a.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 01/14] slab: add opt-in caching layer of percpu sheaves

On Sun, 14 Sep 2025 22:24:19 +0200 Vlastimil Babka wrote:
>On 9/14/25 04:22, Hillf Danton wrote:
>> On Wed, 23 Jul 2025 15:34:34 +0200 Vlastimil Babka wrote:
>>> +
>>> +/*
>>> + * Caller needs to make sure migration is disabled in order to fully flush
>>> + * single cpu's sheaves
>>> + *
>> This is misleading; see the workqueue case below.
>> 
>>> + * must not be called from an irq
>>> + *
>>> + * flushing operations are rare so let's keep it simple and flush to slabs
>>> + * directly, skipping the barn
>>> + */
>>> +static void pcs_flush_all(struct kmem_cache *s)
>>> +{
>>> +	struct slub_percpu_sheaves *pcs;
>>> +	struct slab_sheaf *spare;
>>> +
>>> +	local_lock(&s->cpu_sheaves->lock);
>>> +	pcs = this_cpu_ptr(s->cpu_sheaves);
>>> +
>>> +	spare = pcs->spare;
>>> +	pcs->spare = NULL;
>>> +
>>> +	local_unlock(&s->cpu_sheaves->lock);
>>> +
>>> +	if (spare) {
>>> +		sheaf_flush_unused(s, spare);
>>> +		free_empty_sheaf(s, spare);
>>> +	}
>>> +
>>> +	sheaf_flush_main(s);
>>> +}
>>> +
>>> @@ -3326,30 +3755,18 @@ struct slub_flush_work {
>>>  static void flush_cpu_slab(struct work_struct *w)
>>>  {
>>>  	struct kmem_cache *s;
>>> -	struct kmem_cache_cpu *c;
>>>  	struct slub_flush_work *sfw;
>>>  
>>>  	sfw = container_of(w, struct slub_flush_work, work);
>>>  
>>>  	s = sfw->s;
>>> -	c = this_cpu_ptr(s->cpu_slab);
>>>  
>>> -	if (c->slab)
>>> -		flush_slab(s, c);
>>> +	if (s->cpu_sheaves)
>>> +		pcs_flush_all(s);
>>>  
>> Migration is not disabled.
>
> Can you elaborate how it's not? There's a comment above the function saying
> "Called from CPU work handler with migration disabled." and we have relied
> on this before sheaves. queue_work_on() says it will run on the specific
> cpu. AFAIK the workqueue workers are bound, which effectively disables
> migration (we hold the cpu hotplug lock).
>
CPU affinity and migrate_disable() are two different things regardless of
queue_work_on(), no?
I think you are right by accident rather than by design in this case.
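
To make the distinction concrete: a work item queued with queue_work_on()
runs on a kworker bound to that CPU, but the handler itself does not run
under migrate_disable() unless it calls it explicitly. A minimal sketch of
what making the pcs_flush_all() precondition explicit could look like,
based only on the code quoted above (illustrative, not a proposed patch):

static void flush_cpu_slab(struct work_struct *w)
{
	struct slub_flush_work *sfw = container_of(w, struct slub_flush_work, work);
	struct kmem_cache *s = sfw->s;

	/* make the "migration disabled" requirement of pcs_flush_all() explicit */
	migrate_disable();
	if (s->cpu_sheaves)
		pcs_flush_all(s);
	migrate_enable();
}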
