Date:	Tue, 13 Oct 2015 22:47:52 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Waiman Long <waiman.long@....com>
Cc:	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
	linux-kernel@...r.kernel.org,
	Scott J Norton <scott.norton@....com>,
	Douglas Hatch <doug.hatch@....com>,
	Davidlohr Bueso <dave@...olabs.net>,
	Will Deacon <will.deacon@....com>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	boqun.feng@...il.com
Subject: Re: [PATCH v7 1/5] locking/qspinlock: relaxes cmpxchg & xchg ops in
 native code

On Tue, Oct 13, 2015 at 04:38:19PM -0400, Waiman Long wrote:
> On 10/13/2015 02:02 PM, Peter Zijlstra wrote:
> >On Tue, Sep 22, 2015 at 04:50:40PM -0400, Waiman Long wrote:
> >>This patch replaces the cmpxchg() and xchg() calls in the native
> >>qspinlock code with more relaxed versions of those calls to enable
> >>other architectures to adopt queued spinlocks with less performance
> >>overhead.
> >>@@ -62,7 +63,7 @@ static __always_inline int queued_spin_is_contended(struct qspinlock *lock)
> >>  static __always_inline int queued_spin_trylock(struct qspinlock *lock)
> >>  {
> >>  	if (!atomic_read(&lock->val) &&
> >>-	   (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
> >>+	   (atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL) == 0))
> >>  		return 1;
> >>  	return 0;
> >>  }
> >>@@ -77,7 +78,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
> >>  {
> >>  	u32 val;
> >>
> >>-	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
> >>+	val = atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL);
> >>  	if (likely(val == 0))
> >>  		return;
> >>  	queued_spin_lock_slowpath(lock, val);
> >>@@ -319,7 +329,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> >>  		if (val == new)
> >>  			new |= _Q_PENDING_VAL;
> >>
> >>-		old = atomic_cmpxchg(&lock->val, val, new);
> >>+		old = atomic_cmpxchg_acquire(&lock->val, val, new);
> >>  		if (old == val)
> >>  			break;
> >>
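(Aside, a minimal C11 sketch -- not the kernel code, example_trylock() is
a made-up helper -- of what the _acquire variant buys on this path: on
success the CAS is an acquire, so accesses inside the critical section
can't be hoisted above it, but unlike the fully ordered cmpxchg it does
not order earlier accesses against later ones.)

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical illustration only, not arch code. */
static bool example_trylock(atomic_uint *lock)
{
	unsigned int expected = 0;

	/* Acquire ordering on success, relaxed on failure. */
	return atomic_compare_exchange_strong_explicit(lock, &expected, 1,
						       memory_order_acquire,
						       memory_order_relaxed);
}
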
> >So given recent discussion, all this _release/_acquire stuff is starting
> >to worry me.
> >
> >So we've not declared whether they should be RCsc or RCpc, and given this
> >patch (and the previous ones) these lock primitives turn into RCpc if
> >the underlying atomic primitives are RCpc.
> >
> >So far only the proposed PPC implementation is RCpc -- and their current
> >spinlock implementation is also RCpc, but that is a point of discussion.
> >
> >Just saying..
> 
> Davidlohr's patches to make similar changes in other locking code will also
> have this issue. 

Yes, I only fully appreciated the RCpc pain last week :/
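
To make that concrete, the kind of litmus test this decision governs
(a sketch, not from the patch) is whether UNLOCK followed by LOCK on
the same CPU acts as a full barrier:

/*
 * Initially x == y == 0; s and t are spinlocks.
 *
 *	CPU0				CPU1
 *	----				----
 *	WRITE_ONCE(x, 1);		WRITE_ONCE(y, 1);
 *	spin_unlock(&s);		smp_mb();
 *	spin_lock(&t);			r2 = READ_ONCE(x);
 *	r1 = READ_ONCE(y);
 *
 * RCsc: the UNLOCK+LOCK pair on CPU0 is a full barrier, so the store
 * to x is ordered before the load of y and the outcome
 * r1 == 0 && r2 == 0 is forbidden.
 *
 * RCpc: a release followed by an acquire gives no store->load
 * ordering, so r1 == 0 && r2 == 0 is allowed.
 */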

> Anyway, the goal of this patch is to make the generic
> qspinlock code less costly when ported to other architectures. 

As long as we stay away from PPC, this will be fine ;-) Luckily they are
unlikely to start using it, since their LPAR hypervisor thingy isn't
really co-operative.

> This change
> will have no effect on the x86 architecture, which is the only one using
> qspinlock at the moment.

I know ARM and ARGH64 will want to start using this; luckily both are
RCsc, so no worries there.
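
(For completeness, and from memory so take it as a sketch: the generic
fallbacks in include/linux/atomic.h are roughly the below, so an arch
like x86 that only provides the fully ordered atomic_cmpxchg() sees no
change at all from this patch.)

#ifndef atomic_cmpxchg_relaxed
#define atomic_cmpxchg_relaxed		atomic_cmpxchg
#define atomic_cmpxchg_acquire		atomic_cmpxchg
#define atomic_cmpxchg_release		atomic_cmpxchg
#endif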
