Date:	Tue, 9 Jun 2015 14:30:24 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Vineet Gupta <Vineet.Gupta1@...opsys.com>
Cc:	linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
	arnd@...db.de, arc-linux-dev@...opsys.com,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [PATCH 18/28] ARC: add smp barriers around atomics per
 memory-barriers.txt

On Tue, Jun 09, 2015 at 05:18:18PM +0530, Vineet Gupta wrote:

Please try and provide at least _some_ Changelog body.

<snip all atomic ops that return values>

> diff --git a/arch/arc/include/asm/spinlock.h b/arch/arc/include/asm/spinlock.h
> index b6a8c2dfbe6e..8af8eaad4999 100644
> --- a/arch/arc/include/asm/spinlock.h
> +++ b/arch/arc/include/asm/spinlock.h
> @@ -22,24 +22,32 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
>  {
>  	unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;
>  
> +	smp_mb();
> +
>  	__asm__ __volatile__(
>  	"1:	ex  %0, [%1]		\n"
>  	"	breq  %0, %2, 1b	\n"
>  	: "+&r" (tmp)
>  	: "r"(&(lock->slock)), "ir"(__ARCH_SPIN_LOCK_LOCKED__)
>  	: "memory");
> +
> +	smp_mb();
>  }
>  
>  static inline int arch_spin_trylock(arch_spinlock_t *lock)
>  {
>  	unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;
>  
> +	smp_mb();
> +
>  	__asm__ __volatile__(
>  	"1:	ex  %0, [%1]		\n"
>  	: "+r" (tmp)
>  	: "r"(&(lock->slock))
>  	: "memory");
>  
> +	smp_mb();
> +
>  	return (tmp == __ARCH_SPIN_LOCK_UNLOCKED__);
>  }
>  

Both of these are only required to provide an ACQUIRE barrier; if all you
have is smp_mb(), the second one (after the atomic op) is sufficient.
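That is, something like the below (untested sketch, keeping your ex/breq
loop exactly as-is) should be all the lock side needs:

static inline void arch_spin_lock(arch_spinlock_t *lock)
{
	unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;

	__asm__ __volatile__(
	"1:	ex  %0, [%1]		\n"
	"	breq  %0, %2, 1b	\n"
	: "+&r" (tmp)
	: "r"(&(lock->slock)), "ir"(__ARCH_SPIN_LOCK_LOCKED__)
	: "memory");

	/*
	 * ACQUIRE: keep the critical section from leaking up past the
	 * lock acquisition; no barrier is needed before it.
	 */
	smp_mb();
}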

Also note that a failed trylock is not required to provide _any_ barrier
at all.
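So trylock could issue the smp_mb() on the success path only, e.g.
(again, an untested sketch):

static inline int arch_spin_trylock(arch_spinlock_t *lock)
{
	unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;

	__asm__ __volatile__(
	"1:	ex  %0, [%1]		\n"
	: "+r" (tmp)
	: "r"(&(lock->slock))
	: "memory");

	if (tmp != __ARCH_SPIN_LOCK_UNLOCKED__)
		return 0;	/* lock was already held: no barrier required */

	smp_mb();		/* ACQUIRE, only when we actually took the lock */
	return 1;
}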

> @@ -47,6 +55,8 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
>  {
>  	unsigned int tmp = __ARCH_SPIN_LOCK_UNLOCKED__;
>  
> +	smp_mb();
> +
>  	__asm__ __volatile__(
>  	"	ex  %0, [%1]		\n"
>  	: "+r" (tmp)

This requires a RELEASE barrier; again, if all you have is smp_mb(),
placing it before the store is indeed correct.
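In other words, only the barrier before the store that drops the lock is
needed (untested sketch):

static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	unsigned int tmp = __ARCH_SPIN_LOCK_UNLOCKED__;

	/*
	 * RELEASE: keep the critical section from leaking down past the
	 * store that drops the lock; no barrier is needed after it.
	 */
	smp_mb();

	__asm__ __volatile__(
	"	ex  %0, [%1]		\n"
	: "+r" (tmp)
	: "r"(&(lock->slock))
	: "memory");
}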

Describing some of this would make for a fine Changelog body :-)