Date: Sat, 1 Jun 2024 14:18:30 +0800
From: Guo Ren <guoren@...nel.org>
To: Andrea Parri <parri.andrea@...il.com>
Cc: Alexandre Ghiti <alex@...ti.fr>, Alexandre Ghiti <alexghiti@...osinc.com>, 
	Jonathan Corbet <corbet@....net>, Paul Walmsley <paul.walmsley@...ive.com>, 
	Palmer Dabbelt <palmer@...belt.com>, Albert Ou <aou@...s.berkeley.edu>, 
	Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>, 
	Waiman Long <longman@...hat.com>, Boqun Feng <boqun.feng@...il.com>, Arnd Bergmann <arnd@...db.de>, 
	Leonardo Bras <leobras@...hat.com>, linux-doc@...r.kernel.org, 
	linux-kernel@...r.kernel.org, linux-riscv@...ts.infradead.org, 
	linux-arch@...r.kernel.org
Subject: Re: [PATCH 7/7] riscv: Add qspinlock support based on Zabha extension

On Fri, May 31, 2024 at 11:52 PM Andrea Parri <parri.andrea@...il.com> wrote:
>
> > > > + select ARCH_USE_QUEUED_SPINLOCKS if TOOLCHAIN_HAS_ZABHA
> > > IIUC, we should make sure qspinlocks run with ARCH_WEAK_RELEASE_ACQUIRE,
> > > perhaps a similar select for the latter?  (not a kconfig expert)
> >
> >
> > Where did you see this dependency? And if that is really a dependency of
> > qspinlocks, shouldn't this be under CONFIG_QUEUED_SPINLOCKS? (not a Kconfig
> > expert either).
>
> The comment on smp_mb__after_unlock_lock() in include/linux/rcupdate.h
> (the barrier is currently only used by the RCU subsystem) recalls:
>
>   /*
>    * Place this after a lock-acquisition primitive to guarantee that
>    * an UNLOCK+LOCK pair acts as a full barrier.  This guarantee applies
>    * if the UNLOCK and LOCK are executed by the same CPU or if the
>    * UNLOCK and LOCK operate on the same lock variable.
>    */
>   #ifdef CONFIG_ARCH_WEAK_RELEASE_ACQUIRE
>   #define smp_mb__after_unlock_lock()   smp_mb()  /* Full ordering for lock. */
>   #else /* #ifdef CONFIG_ARCH_WEAK_RELEASE_ACQUIRE */
>   #define smp_mb__after_unlock_lock()   do { } while (0)
>   #endif /* #else #ifdef CONFIG_ARCH_WEAK_RELEASE_ACQUIRE */
>
> Architectures whose UNLOCK+LOCK implementation does not (already) meet
> the required "full barrier" ordering property (currently, only powerpc)
> can override the "default"/common #define for this barrier (a NOP) and
> meet the ordering by opting in to ARCH_WEAK_RELEASE_ACQUIRE.
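
(A minimal Kconfig sketch of such an opt-in, keeping the
TOOLCHAIN_HAS_ZABHA condition from this patch; the second select is
only illustrative, not what the patch currently does:

        config RISCV
                select ARCH_USE_QUEUED_SPINLOCKS if TOOLCHAIN_HAS_ZABHA
                select ARCH_WEAK_RELEASE_ACQUIRE if ARCH_USE_QUEUED_SPINLOCKS

powerpc selects ARCH_WEAK_RELEASE_ACQUIRE unconditionally in
arch/powerpc/Kconfig.)
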
>
> The (current) "generic" ticket lock implementation provides "the full
> barrier" in its LOCK operations (hence, in particular, in UNLOCK+LOCK), cf.
>
>   arch_spin_trylock() -> atomic_try_cmpxchg()
>   arch_spin_lock() -> atomic_fetch_add()
>                    -> atomic_cond_read_acquire(); smp_mb()
>
> but the "UNLOCK+LOCK pairs act as a full barrier" property doesn't hold
> true for riscv (and powerpc) when switching over to queued spinlocks.
Yes.
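
To make the required guarantee concrete, a minimal sketch in the style
of the rcupdate.h comment (the variables x and y and the three-CPU split
are hypothetical):

        #include <linux/bug.h>
        #include <linux/rcupdate.h>
        #include <linux/spinlock.h>

        static DEFINE_SPINLOCK(s);
        static int x, y;

        /* CPU0: publish x under the lock. */
        static void cpu0(void)
        {
                spin_lock(&s);
                WRITE_ONCE(x, 1);
                spin_unlock(&s);
        }

        /* CPU1: the UNLOCK (cpu0) + LOCK (cpu1) on the same lock,
         * upgraded by smp_mb__after_unlock_lock(), must order cpu0's
         * store to x before cpu1's store to y as seen by all CPUs. */
        static void cpu1(void)
        {
                spin_lock(&s);
                smp_mb__after_unlock_lock();
                WRITE_ONCE(y, 1);
                spin_unlock(&s);
        }

        /* CPU2: observing y == 1 must imply x == 1 is visible too. */
        static void cpu2(void)
        {
                int r1 = READ_ONCE(y);

                smp_mb();
                BUG_ON(r1 == 1 && READ_ONCE(x) == 0); /* forbidden outcome */
        }

Without the full-barrier property, cpu2 could see y == 1 but x == 0,
which is what RCU must rule out.
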

> OTOH, I see no particular reason for other "users" of queued spinlocks
> (notably, x86 and arm64) for selecting ARCH_WEAK_RELEASE_ACQUIRE.
I looked at the ppo section of the riscv-unprivileged spec; it seems a
.rl store followed by an .aq operation is ordered when both carry RCsc
annotations.
ref:
Explicit Synchronization
 5. a has an acquire annotation
 6. b has a release annotation
 7. a and b both have RCsc annotations

And for qspinlock:
unlock:
        smp_store_release(&lock->locked, 0);

lock:
        if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))

If the hardware has store-release and CAS instructions, both obey the
Explicit Synchronization rules, so RISC-V "UNLOCK+LOCK" pairs act as a
full barrier, right?
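
(The mapping I'm assuming; only a sketch, since current riscv
smp_store_release() emits "fence rw, w" + a plain store, while a byte
release AMO would need Zabha and the CAS needs Zacas:

        /* unlock: smp_store_release(&lock->locked, 0) */
        amoswap.b.rl    zero, zero, (a0)  /* a0 = &lock->locked; release, RCsc */

        /* lock fast path: atomic_try_cmpxchg_acquire(&lock->val, ...) */
        amocas.w.aq     a1, a2, (a0)      /* a0 = &lock->val; acquire, RCsc */

With that mapping, on the same hart rule 6 orders earlier accesses
before the .rl store, rule 7 orders the .rl store before the .aq CAS,
and rule 5 orders the .aq CAS before later accesses, so the chain
behaves like a full barrier there; the cross-CPU case additionally
relies on the release/acquire pairing through the lock word.)
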

>
> But does this address your concern?  Let me know if I misunderstood it.
>
>   Andrea



-- 
Best Regards
 Guo Ren
