Message-ID: <ffb4012b-e038-a92e-c84c-bb1d061fe29f@suse.cz>
Date:   Tue, 23 Aug 2022 19:15:43 +0200
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        linux-kernel@...r.kernel.org
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Matthew Wilcox <willy@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Lameter <cl@...ux.com>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Pekka Enberg <penberg@...nel.org>, linux-mm@...ck.org
Subject: Re: [PATCH 1/9] slub: Make PREEMPT_RT support less convoluted

On 8/17/22 18:26, Sebastian Andrzej Siewior wrote:
> From: Thomas Gleixner <tglx@...utronix.de>
> 
> The slub code already has a few helpers depending on PREEMPT_RT. Add a few
> more and get rid of the CONFIG_PREEMPT_RT conditionals all over the place.
> 
> No functional change.
> 
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Christoph Lameter <cl@...ux.com>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
> Cc: Pekka Enberg <penberg@...nel.org>
> Cc: Vlastimil Babka <vbabka@...e.cz>
> Cc: linux-mm@...ck.org
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> ---
>  mm/slub.c | 66 +++++++++++++++++++++++++------------------------------
>  1 file changed, 30 insertions(+), 36 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 862dbd9af4f52..5f7c5b5bd49f9 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -100,9 +100,11 @@
>   *   except the stat counters. This is a percpu structure manipulated only by
>   *   the local cpu, so the lock protects against being preempted or interrupted
>   *   by an irq. Fast path operations rely on lockless operations instead.
> - *   On PREEMPT_RT, the local lock does not actually disable irqs (and thus
> - *   prevent the lockless operations), so fastpath operations also need to take
> - *   the lock and are no longer lockless.
> + *
> + *   On PREEMPT_RT, the local lock neither disables interrupts nor preemption
> + *   which means the lockless fastpath cannot be used as it might interfere with
> + *   an in-progress slow path operation. In this case the local lock is always
> + *   taken but it still utilizes the freelist for the common operations.
>   *
>   *   lockless fastpaths
>   *
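
(Aside, to make the comment above concrete: the "lockless fastpath" is
the tid-protected this_cpu_cmpxchg_double() in the allocation/free hot
paths. Roughly, as a from-memory sketch and not code touched by this
patch:

	tid = READ_ONCE(c->tid);
	object = c->freelist;

	/*
	 * Succeeds only if freelist and tid still hold the values read
	 * above, i.e. no other alloc/free ran on this cpu in between.
	 */
	if (unlikely(!this_cpu_cmpxchg_double(
			s->cpu_slab->freelist, s->cpu_slab->tid,
			object, tid,
			get_freepointer(s, object), next_tid(tid))))
		goto redo;

On PREEMPT_RT a slow path holding the local lock can be preempted or
interrupted in the middle of rewriting the freelist, and a fastpath
cmpxchg from the preempting context could still succeed and interfere
with it, hence the fastpath also takes the lock there.)
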
> @@ -163,8 +165,11 @@
>   * function call even on !PREEMPT_RT, use inline preempt_disable() there.
>   */
>  #ifndef CONFIG_PREEMPT_RT
> -#define slub_get_cpu_ptr(var)	get_cpu_ptr(var)
> -#define slub_put_cpu_ptr(var)	put_cpu_ptr(var)
> +#define slub_get_cpu_ptr(var)		get_cpu_ptr(var)
> +#define slub_put_cpu_ptr(var)		put_cpu_ptr(var)
> +#define use_lockless_fast_path()	(true)

> +#define slub_local_irq_save(flags)	local_irq_save(flags)
> +#define slub_local_irq_restore(flags)	local_irq_restore(flags)

Note these won't be necessary anymore after
https://lore.kernel.org/linux-mm/20220823170400.26546-6-vbabka@suse.cz/T/#u

>  #else
>  #define slub_get_cpu_ptr(var)		\
>  ({					\
> @@ -176,6 +181,9 @@ do {					\
>  	(void)(var);			\
>  	migrate_enable();		\
>  } while (0)
> +#define use_lockless_fast_path()	(false)
> +#define slub_local_irq_save(flags)	do { } while (0)
> +#define slub_local_irq_restore(flags)	do { } while (0)
>  #endif
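
(For readers of the archive: the RT variants of slub_get_cpu_ptr() /
slub_put_cpu_ptr() are mostly hidden by the hunk boundary above. As a
rough paraphrase of the pre-existing mm/slub.c code, not something this
patch changes, they are:

#define slub_get_cpu_ptr(var)		\
({					\
	migrate_disable();		\
	this_cpu_ptr(var);		\
})
#define slub_put_cpu_ptr(var)		\
do {					\
	(void)(var);			\
	migrate_enable();		\
} while (0)

i.e. on RT they only disable migration, not preemption or irqs, which is
why the new use_lockless_fast_path() evaluates to (false) there.)
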
>  
>  #ifdef CONFIG_SLUB_DEBUG
> @@ -460,16 +468,14 @@ static __always_inline void __slab_unlock(struct slab *slab)
>  
>  static __always_inline void slab_lock(struct slab *slab, unsigned long *flags)
>  {
> -	if (IS_ENABLED(CONFIG_PREEMPT_RT))
> -		local_irq_save(*flags);
> +	slub_local_irq_save(*flags);
>  	__slab_lock(slab);
>  }
>  
>  static __always_inline void slab_unlock(struct slab *slab, unsigned long *flags)
>  {
>  	__slab_unlock(slab);
> -	if (IS_ENABLED(CONFIG_PREEMPT_RT))
> -		local_irq_restore(*flags);
> +	slub_local_irq_restore(*flags);
>  }

Ditto.

>  /*
> @@ -482,7 +488,7 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct slab *slab
>  		void *freelist_new, unsigned long counters_new,
>  		const char *n)
>  {
> -	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
> +	if (use_lockless_fast_path())
>  		lockdep_assert_irqs_disabled();

This test would stay even after the patch I referenced above. The change
keeps testing the technically correct thing, but the name is IMHO
misleading here: semantically this is not about the lockless fast path,
but about whether irqs have to be disabled to avoid a deadlock when an
irq comes in while we hold the bit_spin_lock() and its handler then
tries to acquire it as well.
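
To spell out the deadlock the assertion guards against (my sketch, not
code from the series): with irqs left enabled on !PREEMPT_RT we could
get

	task on CPU0                        hard irq on the same CPU
	---------------------------         ---------------------------
	slab_lock(slab)
	  __slab_lock()
	    bit_spin_lock()  <- held
	<interrupt>
	                                    slab op on the same slab
	                                      slab_lock(slab)
	                                        bit_spin_lock() spins
	                                        forever, the holder can't
	                                        run until the irq returns

so the !RT path needs irqs disabled here regardless of whether a
lockless fast path exists.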
