Message-ID: <20160422142316.GI10289@arm.com>
Date: Fri, 22 Apr 2016 15:23:16 +0100
From: Will Deacon <will.deacon@....com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: torvalds@...ux-foundation.org, mingo@...nel.org,
tglx@...utronix.de, paulmck@...ux.vnet.ibm.com,
boqun.feng@...il.com, waiman.long@....com, fweisbec@...il.com,
linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
rth@...ddle.net, vgupta@...opsys.com, linux@....linux.org.uk,
egtvedt@...fundet.no, realmz6@...il.com,
ysato@...rs.sourceforge.jp, rkuo@...eaurora.org,
tony.luck@...el.com, geert@...ux-m68k.org, james.hogan@...tec.com,
ralf@...ux-mips.org, dhowells@...hat.com, jejb@...isc-linux.org,
mpe@...erman.id.au, schwidefsky@...ibm.com, dalias@...c.org,
davem@...emloft.net, cmetcalf@...lanox.com, jcmvbkbc@...il.com,
arnd@...db.de, dbueso@...e.de, fengguang.wu@...el.com
Subject: Re: [RFC][PATCH 05/31] locking,arm64: Implement
atomic{,64}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}()
On Fri, Apr 22, 2016 at 11:04:18AM +0200, Peter Zijlstra wrote:
> Implement FETCH-OP atomic primitives. These are very similar to the
> existing OP-RETURN primitives, except that they return the value of
> the atomic variable _before_ modification.
>
> This is especially useful for irreversible operations -- such as
> bitops (because it becomes impossible to reconstruct the state prior
> to modification).
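
(As an aside, a minimal sketch of that bitops case -- the helper name is
made up, and the GCC __atomic_fetch_or() builtin stands in for the new
primitive, so this is illustration only, not the kernel implementation:)

#include <stdbool.h>

/*
 * test_and_set_bit()-style operation: we need the value of the word
 * *before* the OR.  An OR-return primitive only yields the value
 * *after* the OR, from which the old bit state cannot be reconstructed
 * (OR is irreversible); a fetch-OR makes it trivial.
 */
static bool test_and_set_bit_sketch(unsigned int nr, unsigned long *word)
{
	unsigned long mask = 1UL << nr;
	unsigned long old = __atomic_fetch_or(word, mask, __ATOMIC_SEQ_CST);

	return (old & mask) != 0;
}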
The LSE bits will take me some time, but you're also missing some stuff
for the LL/SC variants. Fixup below.
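
For reference, with the ordering below fixed up, the fully-ordered
atomic64_fetch_add() generated by ATOMIC64_FETCH_OP(, dmb ish, , l,
"memory", add, add) comes out roughly like this (hand-expanded and
simplified for illustration; not the literal macro output):

typedef struct { long counter; } atomic64_t;	/* simplified stand-in */

static inline long atomic64_fetch_add_sketch(long i, atomic64_t *v)
{
	long result, val;
	unsigned long tmp;

	asm volatile(
	/* LL/SC retry loop: load-exclusive, add, store-release-exclusive */
	"1:	ldxr	%0, %3\n"	/* result = old value		*/
	"	add	%1, %0, %4\n"	/* val = old + i		*/
	"	stlxr	%w2, %1, %3\n"	/* try to store (release)	*/
	"	cbnz	%w2, 1b\n"	/* retry if the store failed	*/
	"	dmb	ish"		/* full barrier ("mb" argument)	*/
	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter)
	: "Ir" (i)
	: "memory");

	return result;			/* value _before_ the add */
}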
Will
--->8
From ff2863445fb2a11dcd0cab4aaaeebe28aa5c9937 Mon Sep 17 00:00:00 2001
From: Will Deacon <will.deacon@....com>
Date: Fri, 22 Apr 2016 14:30:54 +0100
Subject: [PATCH] fixup! locking,arm64: Implement
atomic{,64}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}()
Get the ll/sc stuff building and working
Signed-off-by: Will Deacon <will.deacon@....com>
---
arch/arm64/include/asm/atomic.h | 30 ++++++++++++++++++++++++++++++
arch/arm64/include/asm/atomic_ll_sc.h | 8 ++++----
2 files changed, 34 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h
index 83b74b67c04b..c0235e0ff849 100644
--- a/arch/arm64/include/asm/atomic.h
+++ b/arch/arm64/include/asm/atomic.h
@@ -155,6 +155,36 @@
#define atomic64_dec_return_release(v) atomic64_sub_return_release(1, (v))
#define atomic64_dec_return(v) atomic64_sub_return(1, (v))
+#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed
+#define atomic64_fetch_add_acquire atomic64_fetch_add_acquire
+#define atomic64_fetch_add_release atomic64_fetch_add_release
+#define atomic64_fetch_add atomic64_fetch_add
+
+#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed
+#define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire
+#define atomic64_fetch_sub_release atomic64_fetch_sub_release
+#define atomic64_fetch_sub atomic64_fetch_sub
+
+#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed
+#define atomic64_fetch_and_acquire atomic64_fetch_and_acquire
+#define atomic64_fetch_and_release atomic64_fetch_and_release
+#define atomic64_fetch_and atomic64_fetch_and
+
+#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed
+#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire
+#define atomic64_fetch_andnot_release atomic64_fetch_andnot_release
+#define atomic64_fetch_andnot atomic64_fetch_andnot
+
+#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed
+#define atomic64_fetch_or_acquire atomic64_fetch_or_acquire
+#define atomic64_fetch_or_release atomic64_fetch_or_release
+#define atomic64_fetch_or atomic64_fetch_or
+
+#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed
+#define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire
+#define atomic64_fetch_xor_release atomic64_fetch_xor_release
+#define atomic64_fetch_xor atomic64_fetch_xor
+
#define atomic64_xchg_relaxed atomic_xchg_relaxed
#define atomic64_xchg_acquire atomic_xchg_acquire
#define atomic64_xchg_release atomic_xchg_release
diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
index f92806390c9a..2b29db9593c7 100644
--- a/arch/arm64/include/asm/atomic_ll_sc.h
+++ b/arch/arm64/include/asm/atomic_ll_sc.h
@@ -127,6 +127,7 @@ ATOMIC_OPS(or, orr)
ATOMIC_OPS(xor, eor)
#undef ATOMIC_OPS
+#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
@@ -195,11 +196,10 @@ __LL_SC_EXPORT(atomic64_##op##_return##name);
#define ATOMIC64_OPS(...) \
ATOMIC64_OP(__VA_ARGS__) \
ATOMIC64_OP_RETURN(, dmb ish, , l, "memory", __VA_ARGS__) \
- ATOMIC64_FETCH_OP (, dmb ish, , l, "memory", __VA_ARGS__) \
- ATOMIC64_OPS(__VA_ARGS__) \
ATOMIC64_OP_RETURN(_relaxed,, , , , __VA_ARGS__) \
ATOMIC64_OP_RETURN(_acquire,, a, , "memory", __VA_ARGS__) \
ATOMIC64_OP_RETURN(_release,, , l, "memory", __VA_ARGS__) \
+ ATOMIC64_FETCH_OP (, dmb ish, , l, "memory", __VA_ARGS__) \
ATOMIC64_FETCH_OP (_relaxed,, , , , __VA_ARGS__) \
ATOMIC64_FETCH_OP (_acquire,, a, , "memory", __VA_ARGS__) \
ATOMIC64_FETCH_OP (_release,, , l, "memory", __VA_ARGS__)
@@ -207,11 +207,10 @@ __LL_SC_EXPORT(atomic64_##op##_return##name);
ATOMIC64_OPS(add, add)
ATOMIC64_OPS(sub, sub)
-#undef ATOMIC_OPS
+#undef ATOMIC64_OPS
#define ATOMIC64_OPS(...) \
ATOMIC64_OP(__VA_ARGS__) \
ATOMIC64_FETCH_OP (, dmb ish, , l, "memory", __VA_ARGS__) \
- ATOMIC64_OPS(__VA_ARGS__) \
ATOMIC64_FETCH_OP (_relaxed,, , , , __VA_ARGS__) \
ATOMIC64_FETCH_OP (_acquire,, a, , "memory", __VA_ARGS__) \
ATOMIC64_FETCH_OP (_release,, , l, "memory", __VA_ARGS__)
@@ -222,6 +221,7 @@ ATOMIC64_OPS(or, orr)
ATOMIC64_OPS(xor, eor)
#undef ATOMIC64_OPS
+#undef ATOMIC64_FETCH_OP
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP
--
2.1.4