Open Source and information security mailing list archives
Date: Fri, 22 Apr 2016 12:35:27 +0100
From: Will Deacon <will.deacon@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: torvalds@...ux-foundation.org, mingo@...nel.org, tglx@...utronix.de,
	paulmck@...ux.vnet.ibm.com, boqun.feng@...il.com, waiman.long@....com,
	fweisbec@...il.com, linux-kernel@...r.kernel.org,
	linux-arch@...r.kernel.org, rth@...ddle.net, vgupta@...opsys.com,
	linux@....linux.org.uk, egtvedt@...fundet.no, realmz6@...il.com,
	ysato@...rs.sourceforge.jp, rkuo@...eaurora.org, tony.luck@...el.com,
	geert@...ux-m68k.org, james.hogan@...tec.com, ralf@...ux-mips.org,
	dhowells@...hat.com, jejb@...isc-linux.org, mpe@...erman.id.au,
	schwidefsky@...ibm.com, dalias@...c.org, davem@...emloft.net,
	cmetcalf@...lanox.com, jcmvbkbc@...il.com, arnd@...db.de,
	dbueso@...e.de, fengguang.wu@...el.com
Subject: Re: [RFC][PATCH 04/31] locking,arm: Implement atomic{,64}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}()

On Fri, Apr 22, 2016 at 11:04:17AM +0200, Peter Zijlstra wrote:
> Implement FETCH-OP atomic primitives, these are very similar to the
> existing OP-RETURN primitives we already have, except they return the
> value of the atomic variable _before_ modification.
>
> This is especially useful for irreversible operations -- such as
> bitops (because it becomes impossible to reconstruct the state prior
> to modification).
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  arch/arm/include/asm/atomic.h | 108 ++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 98 insertions(+), 10 deletions(-)
>
> --- a/arch/arm/include/asm/atomic.h
> +++ b/arch/arm/include/asm/atomic.h
> @@ -77,8 +77,36 @@ static inline int atomic_##op##_return_r
>  	return result;					\
>  }

[...]
> +static inline long long						\
> +atomic64_fetch_##op##_relaxed(long long i, atomic64_t *v)	\
> +{								\
> +	long long result, val;					\
> +	unsigned long tmp;					\
> +								\
> +	prefetchw(&v->counter);					\
> +								\
> +	__asm__ __volatile__("@ atomic64_fetch_" #op "\n"	\
> +"1:	ldrexd	%0, %H0, [%4]\n"				\
> +"	" #op1 " %Q1, %Q0, %Q5\n"				\
> +"	" #op2 " %R1, %R0, %R5\n"				\
> +"	strexd	%2, %1, %H0, [%4]\n"				\

You want %H1 here. With that:

Acked-by: Will Deacon <will.deacon@....com>

Will