Date:	Tue, 18 Aug 2015 08:17:52 +0000 (GMT)
From:	Sarbojit Ganguly <ganguly.s@...sung.com>
To:	SHARANALLUR <sharan.allur@samsung.com>,
	VIKRAMMUPPARTHI <vikram.m@samsung.com>,
	Sarbojit Ganguly <ganguly.s@...sung.com>, tglx@...utronix.de,
	mingo@...hat.com, peterz@...radead.org, Waiman.Long@...com,
	oleg@...hat.com, linux-kernel@...r.kernel.org,
	torvalds@...ux-foundation.org, catalin.marinas@....com,
	"RaghavendraKT<raghavendra.kt"@linux.vnet.ibm.com
Subject: [PATCH] arm: Adding support for atomic half word exchange


<Ping>

Since a 16-bit halfword exchange was not implemented and Waiman's MCS-based qspinlock requires an atomic exchange on a halfword in xchg_tail(), here is a small modification to the __xchg() code to support it.
ARMv6 and lower do not support LDREXH, so we need to make sure things do not break when compiling for ARMv6.
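
For context (not part of the patch), here is a minimal userspace sketch of the semantics the new size == 2 case provides, written with the GCC/Clang __atomic builtins. The struct and function names below are made up for illustration and only mimic what qspinlock's xchg_tail() needs:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical lock layout: the tail lives in its own 16-bit field. */
struct fake_qspinlock {
	uint16_t locked_pending;
	uint16_t tail;			/* halfword exchanged atomically */
};

/*
 * Same idea as qspinlock's xchg_tail(): atomically publish the new tail
 * and return the previous one, without touching locked_pending.  On
 * ARMv6K+/ARMv7 the compiler emits an ldrexh/strexh retry loop here,
 * i.e. the same loop this patch adds to __xchg() for size == 2.
 */
static uint16_t fake_xchg_tail(struct fake_qspinlock *lock, uint16_t newtail)
{
	return __atomic_exchange_n(&lock->tail, newtail, __ATOMIC_SEQ_CST);
}

int main(void)
{
	struct fake_qspinlock lock = { .locked_pending = 0, .tail = 3 };
	uint16_t prev = fake_xchg_tail(&lock, 7);

	printf("previous tail %u, new tail %u\n", prev, lock.tail);  /* 3, 7 */
	return 0;
}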

Signed-off-by: Sarbojit Ganguly <ganguly.s@...sung.com>
---
 arch/arm/include/asm/cmpxchg.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
index 1692a05..547101d 100644
--- a/arch/arm/include/asm/cmpxchg.h
+++ b/arch/arm/include/asm/cmpxchg.h
@@ -50,6 +50,24 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 			: "r" (x), "r" (ptr)
 			: "memory", "cc");
 		break;
+#if !defined (CONFIG_CPU_V6)
+		/*
+		 * Halfword exclusive exchange
+		 * New case: qspinlock's xchg_tail() needs a
+		 * 16-bit atomic exchange.
+		 * LDREXH/STREXH are not available on ARMv6.
+		 */
+	case 2:
+		asm volatile("@ __xchg2\n"
+		"1:     ldrexh  %0, [%3]\n"
+		"       strexh  %1, %2, [%3]\n"
+		"       teq     %1, #0\n"
+		"       bne     1b"
+		: "=&r" (ret), "=&r" (tmp)
+		: "r" (x), "r" (ptr)
+		: "memory", "cc");
+		break;
+#endif
 	case 4:
 		asm volatile("@	__xchg4\n"
 		"1:	ldrex	%0, [%3]\n"
-- 
Sarbojit
