Date:	Mon,  6 May 2013 21:01:05 +0100
From:	Will Deacon <will.deacon@....com>
To:	linux-alpha@...r.kernel.org
Cc:	linux-kernel@...r.kernel.org, Will Deacon <will.deacon@....com>,
	Richard Henderson <rth@...ddle.net>,
	Ivan Kokshaysky <ink@...assic.park.msu.ru>,
	Matt Turner <mattst88@...il.com>
Subject: [PATCH] alpha: spinlock: don't perform memory access in locked critical section

The Alpha Architecture Reference Manual states that any memory access
performed between an LDx_L and a STx_C instruction may cause the
store-conditional to fail unconditionally and, as such, `no useful
program should do this'.

Linux is a useful program, so fix up the Alpha spinlock implementation
to use logical operations rather than load-address instructions for
generating immediates.
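
(Illustration only, not part of the patch: a minimal user-space C11 sketch
of the behaviour the ldl_l/stl_c loops below implement, i.e. "atomically
flip the lock word from 0 to 1 with acquire ordering, retry on failure".
The type and function names here are made up for the example and this is
not how arch_spin_lock is actually built; it only shows the equivalent
semantics in portable C.

	#include <stdatomic.h>

	typedef struct { atomic_int lock; } sketch_spinlock_t;	/* hypothetical */

	static inline void sketch_spin_lock(sketch_spinlock_t *l)
	{
		int expected;

		do {
			/* A failed CAS writes the observed value back, so reset. */
			expected = 0;
		} while (!atomic_compare_exchange_weak_explicit(&l->lock,
				&expected, 1,
				memory_order_acquire, memory_order_relaxed));
	}

	static inline void sketch_spin_unlock(sketch_spinlock_t *l)
	{
		/* Release store; the lock word goes back to 0. */
		atomic_store_explicit(&l->lock, 0, memory_order_release);
	}

On an LL/SC machine the compiler expands the compare-and-swap into a
load-locked/store-conditional retry loop itself, so the question of what
may sit between the two instructions does not arise in the C version.)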

Cc: Richard Henderson <rth@...ddle.net>
Cc: Ivan Kokshaysky <ink@...assic.park.msu.ru>
Cc: Matt Turner <mattst88@...il.com>
Signed-off-by: Will Deacon <will.deacon@....com>
---
 arch/alpha/include/asm/spinlock.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/alpha/include/asm/spinlock.h b/arch/alpha/include/asm/spinlock.h
index 3bba21e..0c357cd 100644
--- a/arch/alpha/include/asm/spinlock.h
+++ b/arch/alpha/include/asm/spinlock.h
@@ -29,7 +29,7 @@ static inline void arch_spin_lock(arch_spinlock_t * lock)
 	__asm__ __volatile__(
 	"1:	ldl_l	%0,%1\n"
 	"	bne	%0,2f\n"
-	"	lda	%0,1\n"
+	"	mov	1,%0\n"
 	"	stl_c	%0,%1\n"
 	"	beq	%0,2f\n"
 	"	mb\n"
@@ -86,7 +86,7 @@ static inline void arch_write_lock(arch_rwlock_t *lock)
 	__asm__ __volatile__(
 	"1:	ldl_l	%1,%0\n"
 	"	bne	%1,6f\n"
-	"	lda	%1,1\n"
+	"	mov	1,%1\n"
 	"	stl_c	%1,%0\n"
 	"	beq	%1,6f\n"
 	"	mb\n"
@@ -106,7 +106,7 @@ static inline int arch_read_trylock(arch_rwlock_t * lock)
 
 	__asm__ __volatile__(
 	"1:	ldl_l	%1,%0\n"
-	"	lda	%2,0\n"
+	"	mov	0,%2\n"
 	"	blbs	%1,2f\n"
 	"	subl	%1,2,%2\n"
 	"	stl_c	%2,%0\n"
@@ -128,9 +128,9 @@ static inline int arch_write_trylock(arch_rwlock_t * lock)
 
 	__asm__ __volatile__(
 	"1:	ldl_l	%1,%0\n"
-	"	lda	%2,0\n"
+	"	mov	0,%2\n"
 	"	bne	%1,2f\n"
-	"	lda	%2,1\n"
+	"	mov	1,%2\n"
 	"	stl_c	%2,%0\n"
 	"	beq	%2,6f\n"
 	"2:	mb\n"
-- 
1.8.2.2
