Message-id: <1394429506.566981444014475210.JavaMail.weblogic@epmlwas08c>
Date: Mon, 05 Oct 2015 03:07:58 +0000 (GMT)
From: Sarbojit Ganguly <ganguly.s@...sung.com>
To: linux@....linux.org.uk, catalin.marinas@....com,
will.deacon@....com
Cc: CatalinMarinas@...sung.com (Catalin.Marinas@....com),
linux-arm-kernel@...ts.infradead.org, peterz@...radead.org,
Waiman.Long@...com, linux-kernel@...r.kernel.org,
torvalds@...ux-foundation.org, suneel@...sung.com,
SHARAN ALLUR <sharan.allur@...sung.com>,
VIKRAM MUPPARTHI <vikram.m@...sung.com>
Subject: [PATCH v2] arm: Add support for atomic half-word exchange

Hello Will,

This is the second version of the patch; it also covers the byte-exclusive case you pointed out. Please share your opinion on it.
v1-->v2: Extended the guard code to cover the byte-exchange case as
well, following Will Deacon's review. Checkpatch has been run and the
reported issues were addressed.
__xchg() had no support for half-word atomic exchange, which qspinlock
on ARM requires, so this patch adds a 2-byte case. ldrex{b,h} are not
available on ARMv6 and earlier (they were introduced with ARMv6K), so
guard code has been added to prevent build breaks.
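
For context, here is a minimal sketch (illustration only, not part of
the patch; the struct layout and names are hypothetical, loosely
modelled on qspinlock's tail exchange) of the kind of caller that
needs a 16-bit xchg():

#include <linux/types.h>	/* u16 */
#include <linux/atomic.h>	/* xchg() */

/*
 * Hypothetical illustration only, not kernel code. qspinlock keeps a
 * 16-bit "tail" field that must be swapped atomically. xchg()
 * dispatches on sizeof(*ptr), so a u16 operand reaches the new
 * size == 2 case in __xchg() added by this patch.
 */
struct example_lock {
	u16 locked_pending;	/* low half-word */
	u16 tail;		/* high half-word, exchanged atomically */
};

static inline u16 example_xchg_tail(struct example_lock *lock, u16 new_tail)
{
	/* sizeof(lock->tail) == 2 selects the ldrexh/strexh path */
	return xchg(&lock->tail, new_tail);
}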
Signed-off-by: Sarbojit Ganguly <ganguly.s@...sung.com>
---
arch/arm/include/asm/cmpxchg.h | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
index 916a274..a53cbeb 100644
--- a/arch/arm/include/asm/cmpxchg.h
+++ b/arch/arm/include/asm/cmpxchg.h
@@ -39,6 +39,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 
 	switch (size) {
 #if __LINUX_ARM_ARCH__ >= 6
+#if !defined(CONFIG_CPU_V6)
 	case 1:
 		asm volatile("@ __xchg1\n"
 		"1:	ldrexb	%0, [%3]\n"
@@ -49,6 +50,22 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 			: "r" (x), "r" (ptr)
 			: "memory", "cc");
 		break;
+
+	/*
+	 * Half-word atomic exchange, required
+	 * for Qspinlock support on ARM.
+	 */
+	case 2:
+		asm volatile("@ __xchg2\n"
+		"1:	ldrexh	%0, [%3]\n"
+		"	strexh	%1, %2, [%3]\n"
+		"	teq	%1, #0\n"
+		"	bne	1b"
+			: "=&r" (ret), "=&r" (tmp)
+			: "r" (x), "r" (ptr)
+			: "memory", "cc");
+		break;
+#endif
 	case 4:
 		asm volatile("@ __xchg4\n"
 		"1:	ldrex	%0, [%3]\n"
--
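
For readers less familiar with ARM's exclusive monitors, the following
is an illustrative C rendering of what the new __xchg2 loop does.
ldrexh() and strexh() here are hypothetical stand-ins for the
instructions, not real kernel helpers:

/*
 * Illustration only: ldrexh()/strexh() are hypothetical stand-ins for
 * the ARM instructions, not real kernel functions. strexh stores the
 * new value only if this CPU still holds the exclusive monitor for
 * ptr; it returns 0 on success and non-zero if exclusivity was lost
 * (e.g. another CPU wrote the location), in which case the loop
 * retries -- exactly the teq/bne 1b sequence in the patch.
 */
static inline unsigned short xchg2_sketch(unsigned short new,
					  volatile unsigned short *ptr)
{
	unsigned short ret;
	unsigned int failed;

	do {
		ret = ldrexh(ptr);		/* exclusive load of old value */
		failed = strexh(new, ptr);	/* conditional exclusive store */
	} while (failed);			/* lost exclusivity: retry */

	return ret;
}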