Message-Id: <20190424124421.693353463@infradead.org>
Date: Wed, 24 Apr 2019 14:36:59 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: stern@...land.harvard.edu, akiyks@...il.com,
andrea.parri@...rulasolutions.com, boqun.feng@...il.com,
dlustig@...dia.com, dhowells@...hat.com, j.alglave@....ac.uk,
luc.maranget@...ia.fr, npiggin@...il.com, paulmck@...ux.ibm.com,
peterz@...radead.org, will.deacon@....com
Cc: linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
Huacai Chen <chenhc@...ote.com>,
Huang Pei <huangpei@...ngson.cn>,
Paul Burton <paul.burton@...s.com>
Subject: [RFC][PATCH 3/5] mips/atomic: Optimize loongson3_llsc_mb()

Now that every single LL/SC loop has loongson_llsc_mb() in front, we
can NO-OP smp_mb__before_llsc() in that case.

While there, remove the superfluous __smp_mb__before_llsc().
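
For context, a minimal sketch of what such an LL/SC loop looks like with
loongson_llsc_mb() in front of it. This is illustration only, not code
from the tree: sketch_atomic_t, sketch_atomic_add and the trimmed-down
asm (no .set directives, no !kernel_uses_llsc fallback) are made up here.

/* Self-contained stand-ins for the kernel definitions. */
typedef struct { int counter; } sketch_atomic_t;

#ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS
#define loongson_llsc_mb()	__asm__ __volatile__("sync" : : : "memory")
#else
#define loongson_llsc_mb()	do { } while (0)
#endif

static inline void sketch_atomic_add(int i, sketch_atomic_t *v)
{
	int temp;

	loongson_llsc_mb();	/* "sync" on Loongson-3, no-op elsewhere */
	__asm__ __volatile__(
	"1:	ll	%0, %1		\n"	/* load-linked */
	"	addu	%0, %2		\n"
	"	sc	%0, %1		\n"	/* store-conditional */
	"	beqz	%0, 1b		\n"	/* retry if SC failed */
	: "=&r" (temp), "+m" (v->counter)
	: "Ir" (i)
	: "memory");
}

The idea is that on Loongson-3 the workaround "sync" already sits in
front of every LL, so a separate smp_mb__before_llsc() barrier ahead of
the loop has nothing left to add there.
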
Cc: Huacai Chen <chenhc@...ote.com>
Cc: Huang Pei <huangpei@...ngson.cn>
Cc: Paul Burton <paul.burton@...s.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 arch/mips/include/asm/barrier.h | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--- a/arch/mips/include/asm/barrier.h
+++ b/arch/mips/include/asm/barrier.h
@@ -221,15 +221,12 @@
 
 #ifdef CONFIG_CPU_CAVIUM_OCTEON
 #define smp_mb__before_llsc() smp_wmb()
-#define __smp_mb__before_llsc() __smp_wmb()
 /* Cause previous writes to become visible on all CPUs as soon as possible */
 #define nudge_writes() __asm__ __volatile__(".set push\n\t"		\
 					    ".set arch=octeon\n\t"	\
 					    "syncw\n\t"			\
 					    ".set pop" : : : "memory")
 #else
-#define smp_mb__before_llsc() smp_llsc_mb()
-#define __smp_mb__before_llsc() smp_llsc_mb()
 #define nudge_writes() mb()
 #endif
 
@@ -264,11 +261,19 @@
  * This case affects all current Loongson 3 CPUs.
  */
 #ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS /* Loongson-3's LLSC workaround */
+#define smp_mb__before_llsc() do { } while (0)
 #define loongson_llsc_mb() __asm__ __volatile__("sync" : : :"memory")
 #else
 #define loongson_llsc_mb() do { } while (0)
 #endif
 
+#ifndef smp_mb__before_llsc
+#define smp_mb__before_llsc() smp_llsc_mb()
+#endif
+
+#define __smp_mb__before_atomic() smp_mb__before_llsc()
+#define __smp_mb__after_atomic() smp_llsc_mb()
+
 static inline void sync_ginv(void)
 {
 	asm volatile("sync\t%0" :: "i"(STYPE_GINV));
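
For reference, a summary (paraphrasing the hunks above, not a quote from
the resulting header) of what the barriers resolve to once this patch is
applied:

/*
 * CONFIG_CPU_CAVIUM_OCTEON:          smp_mb__before_llsc() -> smp_wmb()
 * CONFIG_CPU_LOONGSON3_WORKAROUNDS:  smp_mb__before_llsc() -> no-op
 *                                    (loongson_llsc_mb() already emits a
 *                                     "sync" ahead of each LL/SC loop)
 * everything else:                   smp_mb__before_llsc() -> smp_llsc_mb()
 *
 * In all cases __smp_mb__before_atomic() maps to smp_mb__before_llsc()
 * and __smp_mb__after_atomic() to smp_llsc_mb().
 */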