Message-ID: <C2D7FE5348E1B147BCA15975FBA23075F4E9F138@us01wembx1.internal.synopsys.com>
Date: Mon, 25 Apr 2016 04:26:54 +0000
From: Vineet Gupta <Vineet.Gupta1@...opsys.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: "torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>,
"mingo@...nel.org" <mingo@...nel.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"will.deacon@....com" <will.deacon@....com>,
"paulmck@...ux.vnet.ibm.com" <paulmck@...ux.vnet.ibm.com>,
"boqun.feng@...il.com" <boqun.feng@...il.com>,
"waiman.long@....com" <waiman.long@....com>,
"fweisbec@...il.com" <fweisbec@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"rth@...ddle.net" <rth@...ddle.net>,
"linux@....linux.org.uk" <linux@....linux.org.uk>,
"egtvedt@...fundet.no" <egtvedt@...fundet.no>,
"realmz6@...il.com" <realmz6@...il.com>,
"ysato@...rs.sourceforge.jp" <ysato@...rs.sourceforge.jp>,
"rkuo@...eaurora.org" <rkuo@...eaurora.org>,
"tony.luck@...el.com" <tony.luck@...el.com>,
"geert@...ux-m68k.org" <geert@...ux-m68k.org>,
"james.hogan@...tec.com" <james.hogan@...tec.com>,
"ralf@...ux-mips.org" <ralf@...ux-mips.org>,
"dhowells@...hat.com" <dhowells@...hat.com>,
"jejb@...isc-linux.org" <jejb@...isc-linux.org>,
"mpe@...erman.id.au" <mpe@...erman.id.au>,
"schwidefsky@...ibm.com" <schwidefsky@...ibm.com>,
"dalias@...c.org" <dalias@...c.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"cmetcalf@...lanox.com" <cmetcalf@...lanox.com>,
"jcmvbkbc@...il.com" <jcmvbkbc@...il.com>,
"arnd@...db.de" <arnd@...db.de>, "dbueso@...e.de" <dbueso@...e.de>,
"fengguang.wu@...el.com" <fengguang.wu@...el.com>
Subject: Re: [RFC][PATCH 03/31] locking,arc: Implement
atomic_fetch_{add,sub,and,andnot,or,xor}()
On Friday 22 April 2016 07:46 PM, Peter Zijlstra wrote:
> On Fri, Apr 22, 2016 at 10:50:41AM +0000, Vineet Gupta wrote:
>
>>> > > +#define ATOMIC_FETCH_OP(op, c_op, asm_op) \
>>> > > +static inline int atomic_fetch_##op(int i, atomic_t *v) \
>>> > > +{ \
>>> > > + unsigned int val, result; \
>>> > > + SCOND_FAIL_RETRY_VAR_DEF \
>>> > > + \
>>> > > + /* \
>>> > > + * Explicit full memory barrier needed before/after as \
>>> > > +	 * LLOCK/SCOND themselves don't provide any such semantics	\
>>> > > + */ \
>>> > > + smp_mb(); \
>>> > > + \
>>> > > + __asm__ __volatile__( \
>>> > > + "1: llock %[val], [%[ctr]] \n" \
>>> > > + " mov %[result], %[val] \n" \
>> >
>> > Calling it "result" could be a bit confusing: this is meant to be the "orig" value.
>> > It is indeed the "result" of the API, but for the atomic operation it is the
>> > pristine (pre-modification) value.
>> >
>> > Also, we can optimize away that MOV, given there are plenty of regs. So where we currently have
>> >
>>> > > + " " #asm_op " %[val], %[val], %[i] \n" \
>>> > > + " scond %[val], [%[ctr]] \n" \
>> >
>> > Instead have
>> >
>> > + " " #asm_op " %[result], %[val], %[i] \n" \
>> > + " scond %[result], [%[ctr]] \n" \
>> >
>> >
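
For reference, a minimal sketch of what the macro could look like with the above
rename and MOV removal folded in. This is an illustration only, not the actual
patch below: the SCOND_FAIL_RETRY_* backoff helpers from the quoted code are
omitted and a plain bnz retry loop is shown instead.

#define ATOMIC_FETCH_OP(op, c_op, asm_op)					\
static inline int atomic_fetch_##op(int i, atomic_t *v)			\
{										\
	unsigned int val, orig;							\
										\
	/*									\
	 * Explicit full memory barrier needed before/after as			\
	 * LLOCK/SCOND themselves don't provide any such semantics		\
	 */									\
	smp_mb();								\
										\
	__asm__ __volatile__(							\
	"1:	llock   %[orig], [%[ctr]]		\n"			\
	"	" #asm_op " %[val], %[orig], %[i]	\n"			\
	"	scond   %[val], [%[ctr]]		\n"			\
	"	bnz     1b				\n"			\
	: [val]  "=&r"	(val),							\
	  [orig] "=&r"	(orig)							\
	: [ctr]  "r"	(&v->counter),						\
	  [i]    "ir"	(i)							\
	: "cc");								\
										\
	smp_mb();								\
										\
	/* pristine value, i.e. the value _before_ the update */		\
	return orig;								\
}

The op writes straight into the scratch register, SCOND stores that, and the
pristine value left in %[orig] is what gets returned.
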
> Indeed, how about something like so?
>
> ---
> Subject: locking,arc: Implement atomic_fetch_{add,sub,and,andnot,or,xor}()
> From: Peter Zijlstra <peterz@...radead.org>
> Date: Mon Apr 18 01:16:09 CEST 2016
>
> Implement FETCH-OP atomic primitives; these are very similar to the
> OP-RETURN primitives we already have, except they return the value of
> the atomic variable _before_ modification.
>
> This is especially useful for irreversible operations -- such as
> bitops (because it becomes impossible to reconstruct the state prior
> to modification).
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Vineet Gupta <vgupta@...opsys.com>
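
To make the changelog's point about irreversible operations concrete, here is a
hypothetical helper (not part of this series; flag_test_and_clear is just an
illustrative name) built on the new API. With only atomic_andnot() or
atomic_andnot_return() the prior state of the bit is lost once it is cleared,
whereas the FETCH variant hands it back:

/*
 * Hypothetical illustration only: recover the prior state of a flag
 * bit while clearing it, something the OP-RETURN form cannot do once
 * the bit is gone.
 */
static inline bool flag_test_and_clear(atomic_t *flags, unsigned int bit)
{
	int old = atomic_fetch_andnot(BIT(bit), flags);

	return (old >> bit) & 1;	/* bit state _before_ the clear */
}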