Message-id: <1357250509.710431444228599576.JavaMail.weblogic@epmlwas04a>
Date:	Wed, 07 Oct 2015 14:36:42 +0000 (GMT)
From:	Sarbojit Ganguly <ganguly.s@...sung.com>
To:	rmk+kernel@....linux.co.uk,
	Sarbojit Ganguly <ganguly.s@...sung.com>,
	"linux@....linux.org.uk" <linux@....linux.org.uk>,
	"catalin.marinas@....com" <catalin.marinas@....com>
Cc:	Will Deacon <will.deacon@....com>,
	"Waiman.Long@...com" <Waiman.Long@...com>,
	"peterz@...radead.org" <peterz@...radead.org>,
	VIKRAM MUPPARTHI <vikram.m@...sung.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	SUNEEL KUMAR SURIMANI <suneel@...sung.com>,
	SHARAN ALLUR <sharan.allur@...sung.com>,
	"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>,
	"linux-arm-kernel@...ts.infradead.org" 
	<linux-arm-kernel@...ts.infradead.org>
Subject: Re: Re: Re: Re: Re: [PATCH v3] arm: Adding support for atomic half
 word exchange

Hello Russell,

Please have a look at this patch and let me know if any modification is required.
I have also submitted it to your patch system.

v2 -> v3: Removed the comment related to Qspinlock; changed !defined to #ifndef.
v1 -> v2: Extended the guard code to cover the byte exchange case as well,
following Will Deacon's suggestion. Checkpatch has been run and the reported
issues have been addressed.

Since half-word atomic exchange was not supported and Qspinlock on ARM requires
it, modify __xchg() to add support for it. ARMv6 and lower do not support
ldrex{b,h}, so add guard code to prevent build breaks on those architectures.

Signed-off-by: Sarbojit Ganguly <ganguly.s@...sung.com>
---
 arch/arm/include/asm/cmpxchg.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
index 916a274..97882f9 100644
--- a/arch/arm/include/asm/cmpxchg.h
+++ b/arch/arm/include/asm/cmpxchg.h
@@ -39,6 +39,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 
 	switch (size) {
 #if __LINUX_ARM_ARCH__ >= 6
+#ifndef CONFIG_CPU_V6 /* MIN ARCH >= V6K */
 	case 1:
 		asm volatile("@	__xchg1\n"
 		"1:	ldrexb	%0, [%3]\n"
@@ -49,6 +50,17 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 			: "r" (x), "r" (ptr)
 			: "memory", "cc");
 		break;
+	case 2:
+		asm volatile("@	__xchg2\n"
+		"1:	ldrexh	%0, [%3]\n"
+		"	strexh	%1, %2, [%3]\n"
+		"	teq	%1, #0\n"
+		"	bne	1b"
+			: "=&r" (ret), "=&r" (tmp)
+			: "r" (x), "r" (ptr)
+			: "memory", "cc");
+		break;
+#endif
 	case 4:
 		asm volatile("@	__xchg4\n"
 		"1:	ldrex	%0, [%3]\n"
-- 
1.9.1
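
For reference, here is a small userspace sketch (illustrative only, not part of
the patch; the file, variable and function names are made up) showing the
semantics the new case 2 provides. On ARMv6K and above, GCC/Clang typically
lower a 16-bit __atomic_exchange_n() to the same kind of ldrexh/strexh retry
loop that __xchg() now emits for size == 2:

/* Build with e.g.: gcc -O2 -march=armv7-a xchg16_demo.c -o xchg16_demo */
#include <stdint.h>
#include <stdio.h>

static uint16_t lock_owner;	/* 16-bit field handed over atomically */

int main(void)
{
	uint16_t me = 42;

	/* Atomically store 'me' into lock_owner and return the previous value. */
	uint16_t prev = __atomic_exchange_n(&lock_owner, me, __ATOMIC_SEQ_CST);

	printf("previous owner = %u, new owner = %u\n",
	       (unsigned int)prev, (unsigned int)lock_owner);
	return 0;
}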


Regards,
Sarbojit

------- Original Message -------
Sender : Will Deacon<will.deacon@....com>
Date : Oct 06, 2015 20:24 (GMT+05:30)
Title : Re: Re: Re: Re: [PATCH v3] arm: Adding support for atomic half word exchange

On Tue, Oct 06, 2015 at 08:03:02AM +0000, Sarbojit Ganguly wrote:
> Here is the version 3 of the patch correcting earlier issues.

This looks good to me now:

  Acked-by: Will Deacon 

> v2 -> v3 : Removed the comment related to Qspinlock, changed !defined to
> #ifndef.
> v1 -> v2 : Extended the guard code to cover the byte exchange case as 
> well following opinion of Will Deacon.
> Checkpatch has been run and issues were taken care of.

The part of your text up until here doesn't belong in the commit message.
You'll also need to send this to Russell's patch system.

Will

> Since support for half-word atomic exchange was not there and Qspinlock
> on ARM requires it, modified __xchg() to add support for that as well.
> ARMv6 and lower does not support ldrex{b,h} so, added a guard code
> to prevent build breaks.
> 
> Signed-off-by: Sarbojit Ganguly 
> ---
>  arch/arm/include/asm/cmpxchg.h | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
> index 916a274..c6436c1 100644
> --- a/arch/arm/include/asm/cmpxchg.h
> +++ b/arch/arm/include/asm/cmpxchg.h
> @@ -39,6 +39,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
>  
>  	switch (size) {
>  #if __LINUX_ARM_ARCH__ >= 6
> +#ifndef CONFIG_CPU_V6 /* MIN ARCH >= V6K */
>  	case 1:
>  		asm volatile("@	__xchg1\n"
>  		"1:	ldrexb	%0, [%3]\n"
> @@ -49,6 +50,17 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
>  			: "r" (x), "r" (ptr)
>  			: "memory", "cc");
>  		break;
> +	case 2:
> +		asm volatile("@	__xchg2\n"
> +		"1:	ldrexh	%0, [%3]\n"
> +		"	strexh	%1, %2, [%3]\n"
> +		"	teq	%1, #0\n"
> +		"	bne	1b"
> +			: "=&r" (ret), "=&r" (tmp)
> +			: "r" (x), "r" (ptr)
> +			: "memory", "cc");
> +		break;
> +#endif
>  	case 4:
>  		asm volatile("@	__xchg4\n"
>  		"1:	ldrex	%0, [%3]\n"
> -- 
> 1.9.1
