Date:   Wed, 24 Aug 2022 15:54:54 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Matthew Wilcox <willy@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Lameter <cl@...ux.com>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Pekka Enberg <penberg@...nel.org>, linux-mm@...ck.org
Subject: Re: [PATCH 1/9] slub: Make PREEMPT_RT support less convoluted

On 8/24/22 15:25, Sebastian Andrzej Siewior wrote:
> On 2022-08-23 19:15:43 [+0200], Vlastimil Babka wrote:
>>> +#define slub_local_irq_save(flags)	local_irq_save(flags)
>>> +#define slub_local_irq_restore(flags)	local_irq_restore(flags)
>>
>> Note these won't be necessary anymore after
>> https://lore.kernel.org/linux-mm/20220823170400.26546-6-vbabka@suse.cz/T/#u
> 
> Okay, let me postpone that one and rebase what is left on top.
> 
>>> @@ -482,7 +488,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab
>>>   		void *freelist_new, unsigned long counters_new,
>>>   		const char *n)
>>>   {
>>> -	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
>>> +	if (use_lockless_fast_path())
>>>   		lockdep_assert_irqs_disabled();
>>
>> This test would stay after the patch I referenced above. But while this
>> change will keep testing the technically correct thing, the name would
>> IMHO be misleading here: semantically this is not about the lockless fast
>> path, but about needing irqs disabled to avoid a deadlock when an irq
>> arrives while we hold the bit_spin_lock() and its handler then tries to
>> acquire the same lock.
> 
> Color me confused. Memory is never allocated in-IRQ context on
> PREEMPT_RT. Therefore I don't understand why interrupts must be disabled
> for the fast path (unless that comment only applied to !RT).

Yes, that comment only applied to !RT, which is why the assert is there only for !RT.
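
To spell out the !RT scenario the assert guards against, here is a minimal
sketch (illustrative only, with made-up names, not the actual SLUB code):
the slow path takes the per-slab bit spinlock with irqs off, because an
interrupt arriving while we hold it, whose handler then allocates from the
same cache, would spin on that very lock on the same CPU and never get
anywhere.

/* Illustrative sketch, not the real SLUB code: why the bit spinlock
 * must be taken with irqs disabled on !PREEMPT_RT. */
#include <linux/bit_spinlock.h>
#include <linux/irqflags.h>
#include <linux/lockdep.h>

#define SLAB_LOCK_BIT	0		/* made-up bit in a made-up flags word */
static unsigned long slab_flags;	/* stands in for the slab's flags */

static void slow_path_update(void)
{
	unsigned long flags;

	local_irq_save(flags);		/* keep IRQ-context allocations out */
	bit_spin_lock(SLAB_LOCK_BIT, &slab_flags);

	/*
	 * Had we not disabled irqs above, an interrupt arriving here
	 * whose handler allocates from the same cache would spin forever
	 * on the bit lock we hold -> deadlock on this CPU. Hence the
	 * assert below.
	 */
	lockdep_assert_irqs_disabled();

	/* ... update freelist/counters under the lock ... */

	bit_spin_unlock(SLAB_LOCK_BIT, &slab_flags);
	local_irq_restore(flags);
}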

> It could be about preemption, since spinlock and local_lock don't disable
> preemption and so another allocation on the same CPU is possible. But
> then you say "we hold the bit_spin_lock()" and that one does disable
> preemption. This means nothing can stop the bit_spin_lock() owner from
> making progress, and since there is no memory allocation in-IRQ, we can't
> block on the same bit_spin_lock() on the same CPU.

Yeah, realizing that this is true on RT led to the recent patch I
referenced. When converting SLUB to RT last year I initially didn't
realize this detail, and instead replaced the irq disabling that e.g.
local_lock_irqsave() does (only on !RT) with a manual local_irq_save().
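
For completeness, a rough sketch of the primitives involved (illustrative
names only, not the actual SLUB fast path): on !RT, local_lock_irqsave()
disables interrupts just like local_irq_save(), while on PREEMPT_RT it
takes a per-CPU (sleeping) spinlock and leaves interrupts on, which is fine
there because RT never allocates from hard irq context.

/* Illustrative sketch, not the real SLUB fast path. */
#include <linux/local_lock.h>
#include <linux/percpu.h>

struct cpu_cache {
	local_lock_t lock;
	void *freelist;			/* stands in for the per-CPU freelist */
};

static DEFINE_PER_CPU(struct cpu_cache, cpu_cache) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void *fastpath_alloc(void)
{
	struct cpu_cache *c;
	unsigned long flags;
	void *object;

	/* !RT: disables irqs; RT: takes the per-CPU lock, irqs stay on */
	local_lock_irqsave(&cpu_cache.lock, flags);
	c = this_cpu_ptr(&cpu_cache);
	object = c->freelist;
	/* ... pop the object off the freelist ... */
	local_unlock_irqrestore(&cpu_cache.lock, flags);

	return object;
}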

> Sebastian
