Message-Id: <1508792849-3115-18-git-send-email-paulmck@linux.vnet.ibm.com>
Date: Mon, 23 Oct 2017 14:07:28 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: mingo@...nel.org
Cc: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
will.deacon@....com, mark.rutland@....com, snitzer@...hat.com,
thor.thayer@...ux.intel.com, viro@...iv.linux.org.uk,
davem@...emloft.net, shuah@...nel.org, mpe@...erman.id.au,
tj@...nel.org, torvalds@...ux-foundation.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: [PATCH 18/19] alpha: atomics: Add smp_read_barrier_depends() to release/relaxed atomics
From: Will Deacon <will.deacon@....com>
As part of the fight against smp_read_barrier_depends(), we require
dependency ordering to be preserved when a dependency is headed by a load
performed using an atomic operation.
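For illustration only (this snippet is not part of the patch, and the
names "table", "latest" and "val" are made up), consider a reader whose
address dependency is headed by the value returned from a relaxed
atomic RMW:

	/* CPU 0: initialise the entry, then publish its index with a
	 * release store so the data is visible before the index.      */
	table[7].val = 1;
	atomic_set_release(&latest, 7);

	/* CPU 1: consume the index with a relaxed RMW; the address of
	 * the later load depends on the value the atomic returned.    */
	i = atomic_fetch_and_relaxed(0, &latest);	/* read and clear */
	v = READ_ONCE(table[i].val);			/* dependent load */

On Alpha, nothing in the LL/SC sequence implementing the relaxed RMW
orders the dependent load against it, so CPU 1 could observe the
pre-initialisation value of table[i].val unless an
smp_read_barrier_depends() follows the atomic operation.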
This patch adds smp_read_barrier_depends() to the _release and _relaxed
atomics on alpha, which otherwise lack anything to enforce dependency
ordering.
Signed-off-by: Will Deacon <will.deacon@....com>
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
---
arch/alpha/include/asm/atomic.h | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index 498933a7df97..16961a3f45ba 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -13,6 +13,15 @@
* than regular operations.
*/
+/*
+ * To ensure dependency ordering is preserved for the _relaxed and
+ * _release atomics, an smp_read_barrier_depends() is unconditionally
+ * inserted into the _relaxed variants, which are used to build the
+ * barriered versions. To avoid redundant back-to-back fences, we can
+ * define the _acquire and _fence versions explicitly.
+ */
+#define __atomic_op_acquire(op, args...) op##_relaxed(args)
+#define __atomic_op_fence __atomic_op_release
#define ATOMIC_INIT(i) { (i) }
#define ATOMIC64_INIT(i) { (i) }
@@ -60,6 +69,7 @@ static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \
".previous" \
:"=&r" (temp), "=m" (v->counter), "=&r" (result) \
:"Ir" (i), "m" (v->counter) : "memory"); \
+ smp_read_barrier_depends(); \
return result; \
}
@@ -77,6 +87,7 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \
".previous" \
:"=&r" (temp), "=m" (v->counter), "=&r" (result) \
:"Ir" (i), "m" (v->counter) : "memory"); \
+ smp_read_barrier_depends(); \
return result; \
}
@@ -111,6 +122,7 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
".previous" \
:"=&r" (temp), "=m" (v->counter), "=&r" (result) \
:"Ir" (i), "m" (v->counter) : "memory"); \
+ smp_read_barrier_depends(); \
return result; \
}
@@ -128,6 +140,7 @@ static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \
".previous" \
:"=&r" (temp), "=m" (v->counter), "=&r" (result) \
:"Ir" (i), "m" (v->counter) : "memory"); \
+ smp_read_barrier_depends(); \
return result; \
}
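For reference (not part of the patch): the _acquire, _release and
_fence variants are normally generated from the _relaxed ones by the
generic wrappers in include/linux/atomic.h, which at the time looked
approximately like:

	#define __atomic_op_acquire(op, args...)			\
	({								\
		typeof(op##_relaxed(args)) __ret = op##_relaxed(args);	\
		smp_mb__after_atomic();					\
		__ret;							\
	})

	#define __atomic_op_release(op, args...)			\
	({								\
		smp_mb__before_atomic();				\
		op##_relaxed(args);					\
	})

	#define __atomic_op_fence(op, args...)				\
	({								\
		typeof(op##_relaxed(args)) __ret;			\
		smp_mb__before_atomic();				\
		__ret = op##_relaxed(args);				\
		smp_mb__after_atomic();					\
		__ret;							\
	})

These defaults apply only when the architecture does not provide its
own definitions. On Alpha, smp_read_barrier_depends() is a full "mb",
so once it is appended to the _relaxed operations each of them already
ends with a full barrier; stacking smp_mb__after_atomic() on top would
be redundant. That is why the first hunk above overrides
__atomic_op_acquire to be simply the _relaxed operation and
__atomic_op_fence to be __atomic_op_release (smp_mb() before the
operation, with the trailing barrier supplied by the _relaxed body).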
--
2.5.2