Date: Fri, 31 May 2024 17:52:09 +0200
From: Andrea Parri <parri.andrea@...il.com>
To: Alexandre Ghiti <alex@...ti.fr>
Cc: Alexandre Ghiti <alexghiti@...osinc.com>,
	Jonathan Corbet <corbet@....net>,
	Paul Walmsley <paul.walmsley@...ive.com>,
	Palmer Dabbelt <palmer@...belt.com>,
	Albert Ou <aou@...s.berkeley.edu>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
	Waiman Long <longman@...hat.com>, Boqun Feng <boqun.feng@...il.com>,
	Arnd Bergmann <arnd@...db.de>, Leonardo Bras <leobras@...hat.com>,
	Guo Ren <guoren@...nel.org>, linux-doc@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-riscv@...ts.infradead.org,
	linux-arch@...r.kernel.org
Subject: Re: [PATCH 7/7] riscv: Add qspinlock support based on Zabha extension

> > > +	select ARCH_USE_QUEUED_SPINLOCKS if TOOLCHAIN_HAS_ZABHA
> > IIUC, we should make sure qspinlocks run with ARCH_WEAK_RELEASE_ACQUIRE,
> > perhaps a similar select for the latter?  (not a Kconfig expert)
> 
> 
> Where did you see this dependency? And if that is really a dependency of
> qspinlocks, shouldn't this be under CONFIG_QUEUED_SPINLOCKS? (not a Kconfig
> expert either).

The comment on smp_mb__after_unlock_lock() in include/linux/rcupdate.h
(the barrier is currently used only by the RCU subsystem) reads:

  /*
   * Place this after a lock-acquisition primitive to guarantee that
   * an UNLOCK+LOCK pair acts as a full barrier.  This guarantee applies
   * if the UNLOCK and LOCK are executed by the same CPU or if the
   * UNLOCK and LOCK operate on the same lock variable.
   */
  #ifdef CONFIG_ARCH_WEAK_RELEASE_ACQUIRE
  #define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
  #else /* #ifdef CONFIG_ARCH_WEAK_RELEASE_ACQUIRE */
  #define smp_mb__after_unlock_lock()	do { } while (0)
  #endif /* #else #ifdef CONFIG_ARCH_WEAK_RELEASE_ACQUIRE */

Architectures whose UNLOCK+LOCK implementation does not (already) meet
the required "full barrier" ordering property (currently, only powerpc)
can override the default/common #define for this barrier (a NOP) and
meet the required ordering by opting in to ARCH_WEAK_RELEASE_ACQUIRE.
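
To illustrate the intended usage (a minimal sketch; the two locks and
the variables x and y are made up for the example):

  spin_lock(&a->lock);
  WRITE_ONCE(x, 1);
  spin_unlock(&a->lock);          /* UNLOCK */

  spin_lock(&b->lock);            /* LOCK: same CPU, different lock */
  smp_mb__after_unlock_lock();    /* UNLOCK+LOCK now acts as smp_mb() */
  r0 = READ_ONCE(y);              /* can't move before the store to x */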

The current generic ticket lock implementation provides the full
barrier in its LOCK operations (hence, in particular, in UNLOCK+LOCK), cf.

  arch_spin_trylock() -> atomic_try_cmpxchg()
  arch_spin_lock() -> atomic_fetch_add()
                   -> atomic_cond_read_acquire(); smp_mb()
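
In code, the LOCK paths look roughly as follows (an abridged sketch of
the current include/asm-generic/spinlock.h; comments mine):

  static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
  {
          u32 val = atomic_fetch_add(1 << 16, &lock->val);  /* fully-ordered RMW */
          u16 ticket = val >> 16;

          if (ticket == (u16)val)
                  return;  /* uncontended: the fetch_add() was a full barrier */

          atomic_cond_read_acquire(&lock->val, ticket == (u16)VAL);
          smp_mb();  /* contended: upgrade the acquire read to a full barrier */
  }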

but the "UNLOCK+LOCK pairs act as a full barrier" property doesn't hold
true for riscv (and powerpc) when switching over to queued spinlocks.
OTOH, I see no particular reason for other "users" of queued spinlocks
(notably, x86 and arm64) for selecting ARCH_WEAK_RELEASE_ACQUIRE.
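
For contrast, the qspinlock fast paths rely on acquire/release ordering
only (an abridged sketch of include/asm-generic/qspinlock.h; comments
mine), which is where the weaker UNLOCK+LOCK ordering on riscv comes
from:

  static __always_inline void queued_spin_lock(struct qspinlock *lock)
  {
          int val = 0;

          /* acquire-ordered RMW: no full barrier on this fast path */
          if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
                  return;

          queued_spin_lock_slowpath(lock, val);
  }

  static __always_inline void queued_spin_unlock(struct qspinlock *lock)
  {
          /*
           * release-ordered store: on riscv, a release store followed
           * by an acquire RMW need not add up to a full (W->R) barrier.
           */
          smp_store_release(&lock->locked, 0);
  }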

But does this address your concern?  Let me know if I misunderstood it.

  Andrea
