Message-ID: <C2D7FE5348E1B147BCA15975FBA23075F4E9EF11@us01wembx1.internal.synopsys.com>
Date:	Fri, 22 Apr 2016 10:50:41 +0000
From:	Vineet Gupta <Vineet.Gupta1@...opsys.com>
To:	Peter Zijlstra <peterz@...radead.org>,
	"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>,
	"mingo@...nel.org" <mingo@...nel.org>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	"will.deacon@....com" <will.deacon@....com>,
	"paulmck@...ux.vnet.ibm.com" <paulmck@...ux.vnet.ibm.com>,
	"boqun.feng@...il.com" <boqun.feng@...il.com>,
	"waiman.long@....com" <waiman.long@....com>,
	"fweisbec@...il.com" <fweisbec@...il.com>
CC:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
	"rth@...ddle.net" <rth@...ddle.net>,
	"Vineet.Gupta1@...opsys.com" <Vineet.Gupta1@...opsys.com>,
	"linux@....linux.org.uk" <linux@....linux.org.uk>,
	"egtvedt@...fundet.no" <egtvedt@...fundet.no>,
	"realmz6@...il.com" <realmz6@...il.com>,
	"ysato@...rs.sourceforge.jp" <ysato@...rs.sourceforge.jp>,
	"rkuo@...eaurora.org" <rkuo@...eaurora.org>,
	"tony.luck@...el.com" <tony.luck@...el.com>,
	"geert@...ux-m68k.org" <geert@...ux-m68k.org>,
	"james.hogan@...tec.com" <james.hogan@...tec.com>,
	"ralf@...ux-mips.org" <ralf@...ux-mips.org>,
	"dhowells@...hat.com" <dhowells@...hat.com>,
	"jejb@...isc-linux.org" <jejb@...isc-linux.org>,
	"mpe@...erman.id.au" <mpe@...erman.id.au>,
	"schwidefsky@...ibm.com" <schwidefsky@...ibm.com>,
	"dalias@...c.org" <dalias@...c.org>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"cmetcalf@...lanox.com" <cmetcalf@...lanox.com>,
	"jcmvbkbc@...il.com" <jcmvbkbc@...il.com>,
	"arnd@...db.de" <arnd@...db.de>, "dbueso@...e.de" <dbueso@...e.de>,
	"fengguang.wu@...el.com" <fengguang.wu@...el.com>
Subject: Re: [RFC][PATCH 03/31] locking,arc: Implement
 atomic_fetch_{add,sub,and,andnot,or,xor}()

On Friday 22 April 2016 03:13 PM, Peter Zijlstra wrote:
> Implement FETCH-OP atomic primitives. These are very similar to the
> existing OP-RETURN primitives, except that they return the value of
> the atomic variable _before_ modification.
>
> This is especially useful for irreversible operations -- such as
> bitops -- where the state prior to modification cannot be
> reconstructed from the result.
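
(Side note, purely for illustration and not part of this patch: the
difference matters for callers that need the old state. A hypothetical
test-and-clear of bit 0, which only the FETCH form can express:

static inline bool test_and_clear_bit0(atomic_t *v)
{
	int old = atomic_fetch_andnot(1, v);	/* value _before_ the clear */

	return old & 1;				/* was the bit set? */
}

With only the OP-RETURN form, the value _after_ clearing cannot tell us
whether the bit had been set.)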
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  arch/arc/include/asm/atomic.h |   69 ++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 64 insertions(+), 5 deletions(-)
>
> --- a/arch/arc/include/asm/atomic.h
> +++ b/arch/arc/include/asm/atomic.h
> @@ -102,6 +102,38 @@ static inline int atomic_##op##_return(i
>  	return val;							\
>  }
>  
> +#define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
> +static inline int atomic_fetch_##op(int i, atomic_t *v)			\
> +{									\
> +	unsigned int val, result;			                \
> +	SCOND_FAIL_RETRY_VAR_DEF                                        \
> +									\
> +	/*								\
> +	 * Explicit full memory barrier needed before/after as		\
> +	 * LLOCK/SCOND themselves don't provide any such semantics	\
> +	 */								\
> +	smp_mb();							\
> +									\
> +	__asm__ __volatile__(						\
> +	"1:	llock   %[val], [%[ctr]]		\n"		\
> +	"	mov %[result], %[val]			\n"		\

Calling it "result" could be a bit confusing, since it is really meant to
be the "orig" value. It is indeed the "result" of the API, but from the
atomic operation's point of view it is the pristine, pre-modification value.

Also, we can optimize away that MOV, given there are plenty of registers.
So instead of this:

> +	"	" #asm_op " %[val], %[val], %[i]	\n"		\
> +	"	scond   %[val], [%[ctr]]		\n"		\

have:

+	"	" #asm_op " %[result], %[val], %[i]	\n"		\
+	"	scond   %[result], [%[ctr]]		\n"		\



> +	"						\n"		\
> +	SCOND_FAIL_RETRY_ASM						\
> +									\
> +	: [val]	"=&r"	(val),						\
> +	  [result] "=&r" (result)					\
> +	  SCOND_FAIL_RETRY_VARS						\
> +	: [ctr]	"r"	(&v->counter),					\
> +	  [i]	"ir"	(i)						\
> +	: "cc");							\
> +									\
> +	smp_mb();							\
> +									\
> +	return result;							\

This needs to be

+	return val;							\
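
Putting the two changes together, the LLSC loop body would then read
(a sketch; the SCOND-failure retry shown as a plain bnz, the exponential
backoff plumbing elided):

	__asm__ __volatile__(						\
	"1:	llock   %[val], [%[ctr]]		\n"		\
	"	" #asm_op " %[result], %[val], %[i]	\n"		\
	"	scond   %[result], [%[ctr]]		\n"		\
	"	bnz     1b				\n"		\
	: [val]    "=&r" (val),						\
	  [result] "=&r" (result)					\
	: [ctr]	"r"	(&v->counter),					\
	  [i]	"ir"	(i)						\
	: "cc");							\
									\
	/* val still holds the pre-modification value */		\
	return val;							\

(Granted, "result" then becomes an even worse name for what is really the
scratch/new value; "newval" or some such would be clearer.)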



> +}
> +
>  #else	/* !CONFIG_ARC_HAS_LLSC */
>  
>  #ifndef CONFIG_SMP
> @@ -164,23 +196,50 @@ static inline int atomic_##op##_return(i
>  	return temp;							\
>  }
>  
> +#define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
> +static inline int atomic_fetch_##op(int i, atomic_t *v)			\
> +{									\
> +	unsigned long flags;						\
> +	unsigned long temp, result;					\

Same as above: I wouldn't call it "result" here!
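
Something like this, say (just a hypothetical rename, "orig" instead of
"result"; same logic otherwise):

	unsigned long flags;
	unsigned long temp, orig;

	atomic_ops_lock(flags);
	orig = temp = v->counter;
	temp c_op i;
	v->counter = temp;
	atomic_ops_unlock(flags);

	return orig;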

Also, per your other comments/patches, converting ARC to _relaxed atomics
sounds trivial. I can provide a fixup patch once your series is stable-ish
and you point me to your git tree or some such.
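
For concreteness, a rough sketch of what I have in mind, assuming the
generic <linux/atomic.h> __atomic_op_fence() wrappers from your series
build the fully ordered variants on top: provide only the _relaxed
flavour and drop the explicit barriers:

#define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
{									\
	unsigned int val, result;					\
									\
	/*								\
	 * No smp_mb() before/after: LLOCK/SCOND are unordered, which	\
	 * is exactly the _relaxed semantics; the generic layer then	\
	 * adds the fences for the fully ordered atomic_fetch_##op()	\
	 */								\
	__asm__ __volatile__(						\
	"1:	llock   %[val], [%[ctr]]		\n"		\
	"	" #asm_op " %[result], %[val], %[i]	\n"		\
	"	scond   %[result], [%[ctr]]		\n"		\
	"	bnz     1b				\n"		\
	: [val] "=&r" (val), [result] "=&r" (result)			\
	: [ctr] "r" (&v->counter), [i] "ir" (i)			\
	: "cc");							\
									\
	return val;							\
}

/* and advertise the relaxed forms to the generic layer, e.g.: */
#define atomic_fetch_add_relaxed	atomic_fetch_add_relaxed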

Thx,
-Vineet

> +									\
> +	/*								\
> +	 * spin lock/unlock provides the needed smp_mb() before/after	\
> +	 */								\
> +	atomic_ops_lock(flags);						\
> +	result = temp = v->counter;					\
> +	temp c_op i;							\
> +	v->counter = temp;						\
> +	atomic_ops_unlock(flags);					\
> +									\
> +	return result;							\
> +}
> +
>  #endif /* !CONFIG_ARC_HAS_LLSC */
>  
>  #define ATOMIC_OPS(op, c_op, asm_op)					\
>  	ATOMIC_OP(op, c_op, asm_op)					\
> -	ATOMIC_OP_RETURN(op, c_op, asm_op)
> +	ATOMIC_OP_RETURN(op, c_op, asm_op)				\
> +	ATOMIC_FETCH_OP(op, c_op, asm_op)
>  
>  ATOMIC_OPS(add, +=, add)
>  ATOMIC_OPS(sub, -=, sub)
>  
>  #define atomic_andnot atomic_andnot
>  
> -ATOMIC_OP(and, &=, and)
> -ATOMIC_OP(andnot, &= ~, bic)
> -ATOMIC_OP(or, |=, or)
> -ATOMIC_OP(xor, ^=, xor)
> +#define atomic_fetch_or atomic_fetch_or
> +
> +#undef ATOMIC_OPS
> +#define ATOMIC_OPS(op, c_op, asm_op)					\
> +	ATOMIC_OP(op, c_op, asm_op)					\
> +	ATOMIC_FETCH_OP(op, c_op, asm_op)
> +
> +ATOMIC_OPS(and, &=, and)
> +ATOMIC_OPS(andnot, &= ~, bic)
> +ATOMIC_OPS(or, |=, or)
> +ATOMIC_OPS(xor, ^=, xor)
>  
>  #undef ATOMIC_OPS
> +#undef ATOMIC_FETCH_OP
>  #undef ATOMIC_OP_RETURN
>  #undef ATOMIC_OP
>  #undef SCOND_FAIL_RETRY_VAR_DEF
