Message-ID: <YXFwNJHHBydbZYtM@antec>
Date: Thu, 21 Oct 2021 22:50:44 +0900
From: Stafford Horne <shorne@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Will Deacon <will@...nel.org>, Boqun Feng <boqun.feng@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Waiman Long <longman@...hat.com>,
Arnd Bergmann <arnd@...db.de>, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org, Guo Ren <guoren@...nel.org>,
Palmer Dabbelt <palmerdabbelt@...gle.com>,
Anup Patel <anup@...infault.org>,
linux-riscv <linux-riscv@...ts.infradead.org>,
Christoph Müllner <christophm30@...il.com>
Subject: Re: [PATCH] locking: Generic ticket lock
On Thu, Oct 21, 2021 at 03:12:25PM +0200, Peter Zijlstra wrote:
> On Thu, Oct 21, 2021 at 03:05:15PM +0200, Peter Zijlstra wrote:
> >
> > There's currently a number of architectures that want/have graduated
> > from test-and-set locks and are looking at qspinlock.
> >
> > *HOWEVER* qspinlock is very complicated and requires a lot of an
> > architecture to actually work correctly. Specifically it requires
> > forward progress between a fair number of atomic primitives, including
> > an xchg16 operation, which I've seen a fair number of fundamentally
> > broken implementations of in the tree (specifically for qspinlock no
> > less).
> >
> > The benefit of qspinlock over ticket lock is also non-obvious, esp.
> > at low contention (the vast majority of cases in the kernel), and it
> > takes a fairly large number of CPUs (typically also NUMA) to make
> > qspinlock beat ticket locks.
> >
> > Esp. things like ARM64's WFE can move the balance a lot in favour of
> > simpler locks by reducing the cacheline pressure due to waiters (see
> > their smp_cond_load_acquire() implementation for details).
> >
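(For illustration only -- this is my rough sketch, not the arm64 code itself:
spinning through smp_cond_load_acquire() lets the architecture replace the
busy loop, and arm64 backs it with LDXR/WFE so a waiter parks until the lock
cacheline is written instead of hammering it.)

  #include <linux/atomic.h>

  /* Wait, with acquire ordering, until *lock_word reads as zero. */
  static inline void wait_for_unlocked(int *lock_word)
  {
  	smp_cond_load_acquire(lock_word, VAL == 0);
  }
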
> > Unless you've audited qspinlock for your architecture and found it
> > sound *and* can show actual benefit, simpler is better.
For OpenRISC we originally had a custom ticket locking mechanism, but it was
suggested that we switch to qspinlocks, since using the generic implementation
meant less code. That change was made here:
https://yhbt.net/lore/all/86vaix5fmr.fsf@arm.com/T/
I think the move to qspinlocks was suggested by you. But now that we have this
generic infrastructure, I am happy to switch.
> > Therefore provide ticket locks, which depend on a single atomic
> > operation (fetch_add) while still providing fairness.
> >
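(For readers following along, here is a rough sketch of the idea only -- not
the code from the patch, and it glosses over the endianness handling of the
half-word store:)

  #include <linux/atomic.h>
  #include <linux/types.h>

  /*
   * The 32-bit lock word holds the "next" ticket in the high half and the
   * "owner" ticket in the low half.  Taking a ticket is a single fetch_add;
   * unlock only bumps the owner half with a store-release.  Fairness comes
   * from waiters being served strictly in ticket order.
   */
  static inline void ticket_lock_sketch(atomic_t *lock)
  {
  	u32 val = atomic_fetch_add(1 << 16, lock);	/* grab the next ticket */
  	u16 ticket = val >> 16;

  	if (ticket == (u16)val)				/* uncontended fast path */
  		return;

  	/* spin until the owner half equals our ticket */
  	atomic_cond_read_acquire(lock, ticket == (u16)VAL);
  }

  static inline void ticket_unlock_sketch(atomic_t *lock)
  {
  	u32 val = atomic_read(lock);

  	/* hand the lock to the next ticket; the release orders the critical section */
  	smp_store_release((u16 *)lock, (u16)val + 1);
  }
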
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> > ---
> > include/asm-generic/qspinlock.h | 30 +++++++++
> > include/asm-generic/ticket_lock_types.h | 11 +++
> > include/asm-generic/ticket_lock.h | 97 ++++++++++++++++++++++++++++++++
> > 3 files changed, 138 insertions(+)
>
> A few notes...
>
> > + * It relies on smp_store_release() + atomic_*_acquire() to be RCsc (or no
> > + * weaker than RCtso if you're Power, also see smp_mb__after_unlock_lock()),
>
> This should hold true to RISC-V in its current form, AFAICT
> atomic_fetch_add ends up using AMOADD, and therefore the argument made
> in the unlock+lock thread [1], gives that this results in RW,RW
> ordering.
>
> [1] https://lore.kernel.org/lkml/5412ab37-2979-5717-4951-6a61366df0f2@nvidia.com/
>
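(To make the RCsc/RCtso point concrete, here is my sketch of the usual
litmus-style test, assuming P1 ends up taking the lock after P0 has released
it; it is not taken from the patch or the linked thread.  With an RCsc
unlock+lock, P0's store to x is ordered before P1's load from y even though
that is a write->read pair, so r1 == 0 && r2 == 0 is forbidden; with only
RCtso -- the Power case mentioned in the comment -- that pair is not ordered
and the outcome becomes possible unless something like
smp_mb__after_unlock_lock() is used.)

  #include <linux/spinlock.h>
  #include <linux/compiler.h>
  #include <asm/barrier.h>

  static int x, y;

  static void P0(spinlock_t *l)
  {
  	spin_lock(l);
  	WRITE_ONCE(x, 1);
  	spin_unlock(l);
  }

  static void P1(spinlock_t *l, int *r1)
  {
  	spin_lock(l);			/* assumed to run after P0's unlock */
  	*r1 = READ_ONCE(y);
  	spin_unlock(l);
  }

  static void P2(int *r2)
  {
  	WRITE_ONCE(y, 1);
  	smp_mb();
  	*r2 = READ_ONCE(x);
  }
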
>
> I've compile tested on openrisc/simple_smp_defconfig using the below.
>
> --- a/arch/openrisc/Kconfig
> +++ b/arch/openrisc/Kconfig
> @@ -30,7 +30,6 @@ config OPENRISC
> select HAVE_DEBUG_STACKOVERFLOW
> select OR1K_PIC
> select CPU_NO_EFFICIENT_FFS if !OPENRISC_HAVE_INST_FF1
> - select ARCH_USE_QUEUED_SPINLOCKS
> select ARCH_USE_QUEUED_RWLOCKS
> select OMPIC if SMP
> select ARCH_WANT_FRAME_POINTERS
> --- a/arch/openrisc/include/asm/Kbuild
> +++ b/arch/openrisc/include/asm/Kbuild
> @@ -1,9 +1,8 @@
> # SPDX-License-Identifier: GPL-2.0
> generic-y += extable.h
> generic-y += kvm_para.h
> -generic-y += mcs_spinlock.h
> -generic-y += qspinlock_types.h
> -generic-y += qspinlock.h
> +generic-y += ticket_lock_types.h
> +generic-y += ticket_lock.h
> generic-y += qrwlock_types.h
> generic-y += qrwlock.h
> generic-y += user.h
> --- a/arch/openrisc/include/asm/spinlock.h
> +++ b/arch/openrisc/include/asm/spinlock.h
> @@ -15,7 +15,7 @@
> #ifndef __ASM_OPENRISC_SPINLOCK_H
> #define __ASM_OPENRISC_SPINLOCK_H
>
> -#include <asm/qspinlock.h>
> +#include <asm/ticket_lock.h>
>
> #include <asm/qrwlock.h>
>
> --- a/arch/openrisc/include/asm/spinlock_types.h
> +++ b/arch/openrisc/include/asm/spinlock_types.h
> @@ -1,7 +1,7 @@
> #ifndef _ASM_OPENRISC_SPINLOCK_TYPES_H
> #define _ASM_OPENRISC_SPINLOCK_TYPES_H
>
> -#include <asm/qspinlock_types.h>
> +#include <asm/ticket_lock_types.h>
> #include <asm/qrwlock_types.h>
>
> #endif /* _ASM_OPENRISC_SPINLOCK_TYPES_H */
This looks good to me. Do you want to commit it along with the generic
ticket lock patch? Otherwise I can queue it once the generic ticket lock
code is upstream. Another option is that I could help merge the generic
ticket lock code via the OpenRISC branch.
Let me know what works.
-Stafford