Date:	Tue, 13 Oct 2015 20:02:25 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Waiman Long <Waiman.Long@....com>
Cc:	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
	linux-kernel@...r.kernel.org,
	Scott J Norton <scott.norton@....com>,
	Douglas Hatch <doug.hatch@....com>,
	Davidlohr Bueso <dave@...olabs.net>,
	Will Deacon <will.deacon@....com>,
	Paul McKenney <paulmck@...ux.vnet.ibm.com>,
	boqun.feng@...il.com
Subject: Re: [PATCH v7 1/5] locking/qspinlock: relaxes cmpxchg & xchg ops in
 native code

On Tue, Sep 22, 2015 at 04:50:40PM -0400, Waiman Long wrote:
> This patch replaces the cmpxchg() and xchg() calls in the native
> qspinlock code with more relaxed versions of those calls to enable
> other architectures to adopt queued spinlocks with less performance
> overhead.

> @@ -62,7 +63,7 @@ static __always_inline int queued_spin_is_contended(struct qspinlock *lock)
>  static __always_inline int queued_spin_trylock(struct qspinlock *lock)
>  {
>  	if (!atomic_read(&lock->val) &&
> -	   (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
> +	   (atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL) == 0))
>  		return 1;
>  	return 0;
>  }
> @@ -77,7 +78,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
>  {
>  	u32 val;
>  
> -	val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
> +	val = atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL);
>  	if (likely(val == 0))
>  		return;
>  	queued_spin_lock_slowpath(lock, val);

> @@ -319,7 +329,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>  		if (val == new)
>  			new |= _Q_PENDING_VAL;
>  
> -		old = atomic_cmpxchg(&lock->val, val, new);
> +		old = atomic_cmpxchg_acquire(&lock->val, val, new);
>  		if (old == val)
>  			break;
>  
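
For context, the net effect of the hunks above is to swap the full-barrier
atomic_cmpxchg() for atomic_cmpxchg_acquire() on the lock word: a
successful CAS then only provides ACQUIRE ordering, which is enough to keep
the critical section after the lock acquisition but is no longer a full
memory barrier. A rough user-space analogue, sketched with C11 atomics
rather than the kernel atomic_t API (the function names here are made up
for illustration):

#include <stdatomic.h>
#include <stdbool.h>

#define LOCKED	1u

/* Full-barrier CAS, comparable to kernel atomic_cmpxchg(). */
static bool trylock_full(atomic_uint *lock)
{
	unsigned int expected = 0;

	return atomic_compare_exchange_strong(lock, &expected, LOCKED);
}

/*
 * ACQUIRE on success, RELAXED on failure, comparable to
 * atomic_cmpxchg_acquire(): orders the critical section after the lock
 * acquisition, but implies no full barrier.
 */
static bool trylock_acquire(atomic_uint *lock)
{
	unsigned int expected = 0;

	return atomic_compare_exchange_strong_explicit(lock, &expected, LOCKED,
						       memory_order_acquire,
						       memory_order_relaxed);
}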

So given recent discussion, all this _release/_acquire stuff is starting
to worry me.

So we've not declared whether they should be RCsc or RCpc, and with this
patch (and the previous ones) these lock primitives turn into RCpc
whenever the underlying atomic primitives are RCpc.

So far only the proposed PPC implementation is RCpc -- and their current
spinlock implementation is also RCpc, but that is a point of discussion.
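
A concrete way to see the difference (variable names purely illustrative,
and assuming the RCsc-vs-RCpc question is whether an UNLOCK followed by a
LOCK of the same lock on another CPU acts as a full barrier for third
parties):

	int x, y;
	spinlock_t s;

	/* CPU0 */
	spin_lock(&s);
	WRITE_ONCE(x, 1);
	spin_unlock(&s);

	/* CPU1 -- acquires s after CPU0 has released it */
	spin_lock(&s);
	r1 = READ_ONCE(y);
	spin_unlock(&s);

	/* CPU2 */
	WRITE_ONCE(y, 1);
	smp_mb();
	r2 = READ_ONCE(x);

With RCsc lock primitives the UNLOCK+LOCK pair acts as a full barrier, so
the outcome r1 == 0 && r2 == 0 is forbidden; with RCpc primitives (plain
release/acquire, e.g. lwsync-based on PPC) that outcome can, as far as I
can tell, be observed.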

Just saying..

Also, I think we should annotate the control dependencies in these
things.
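
As a (made-up) illustration of the kind of annotation that could help: a
marked load feeding a conditional branch orders that load before any store
inside the branch, but only load->store and only for as long as the
compiler cannot collapse the branch, and today nothing in the source says
the code is relying on that. Something as simple as a comment at the
branch would do (the variable written below is hypothetical, not from the
patch):

	val = atomic_read(&lock->val);
	if (!val) {
		/*
		 * CTRL DEP: the atomic_read() above is ordered before the
		 * store below; load->store only, and it depends on the
		 * compiler keeping the branch.
		 */
		WRITE_ONCE(other_flag, 1);
	}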
