Message-ID: <09abc75e-2ffb-1ab5-d0fc-1c15c943948d@redhat.com>
Date: Tue, 28 Jun 2022 14:13:39 -0400
From: Waiman Long <longman@...hat.com>
To: guoren@...nel.org, palmer@...osinc.com, arnd@...db.de,
mingo@...hat.com, will@...nel.org, boqun.feng@...il.com
Cc: linux-riscv@...ts.infradead.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org, Guo Ren <guoren@...ux.alibaba.com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH V7 4/5] asm-generic: spinlock: Add combo spinlock (ticket
& queued)
On 6/28/22 04:17, guoren@...nel.org wrote:
> From: Guo Ren <guoren@...ux.alibaba.com>
>
> Some architectures have flexible requirements on the type of spinlock.
> Some LL/SC ISAs don't force the micro-architecture to provide a strong
> forward-progress guarantee, so micro-architectures with different memory
> models can implement the same ISA. The ticket lock suits LL/SC
> micro-architectures built around an exclusive monitor, with a limited
> number of cores and no NUMA. The queued spinlock handles NUMA and
> large-scale scenarios on LL/SC micro-architectures that do provide a
> strong forward-progress guarantee.
>
> So, make the spinlock a combo of both.
>
> Signed-off-by: Guo Ren <guoren@...ux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@...nel.org>
> Cc: Peter Zijlstra (Intel) <peterz@...radead.org>
> Cc: Arnd Bergmann <arnd@...db.de>
> Cc: Palmer Dabbelt <palmer@...osinc.com>
> ---
> include/asm-generic/spinlock.h | 43 ++++++++++++++++++++++++++++++++--
> kernel/locking/qspinlock.c | 2 ++
> 2 files changed, 43 insertions(+), 2 deletions(-)
>
> diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
> index f41dc7c2b900..a9b43089bf99 100644
> --- a/include/asm-generic/spinlock.h
> +++ b/include/asm-generic/spinlock.h
> @@ -28,34 +28,73 @@
> #define __ASM_GENERIC_SPINLOCK_H
>
> #include <asm-generic/ticket_spinlock.h>
> +#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
> +#include <linux/jump_label.h>
> +#include <asm-generic/qspinlock.h>
> +
> +DECLARE_STATIC_KEY_TRUE(use_qspinlock_key);
> +#endif
> +
> +#undef arch_spin_is_locked
> +#undef arch_spin_is_contended
> +#undef arch_spin_value_unlocked
> +#undef arch_spin_lock
> +#undef arch_spin_trylock
> +#undef arch_spin_unlock
>
> static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
> {
> -	ticket_spin_lock(lock);
> +#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
> +	if (static_branch_likely(&use_qspinlock_key))
> +		queued_spin_lock(lock);
> +	else
> +#endif
> +	ticket_spin_lock(lock);
> }

Why do you use a static key to control whether to use qspinlock or the
ticket lock? In the next patch, you have

+#if !defined(CONFIG_NUMA) && defined(CONFIG_QUEUED_SPINLOCKS)
+	static_branch_disable(&use_qspinlock_key);
+#endif

So the current config setting determines whether qspinlock will be used,
not a boot-time parameter that the user needs to specify. This patch just
adds useless code to the lock/unlock sites. I don't see any benefit in
doing that.
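
Just to illustrate the alternative (a rough, untested sketch, not your
patch; the CONFIG_* guards below only mirror the hunks quoted above): if
the decision really is purely a Kconfig choice, the implementation could
be selected entirely at compile time and the fast paths would carry no
extra code at all:

/*
 * Untested sketch: compile-time selection only, no static key.
 * Assumes the same includes as the quoted header (ticket_spinlock.h,
 * qspinlock.h) and that CONFIG_NUMA + CONFIG_QUEUED_SPINLOCKS is the
 * only input to the decision, as in the hunk quoted from the next patch.
 */
static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
{
#if defined(CONFIG_QUEUED_SPINLOCKS) && defined(CONFIG_NUMA)
	queued_spin_lock(lock);		/* NUMA / large-scale systems */
#else
	ticket_spin_lock(lock);		/* small LL/SC systems, no NUMA */
#endif
}
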
Cheers,
Longman