Message-Id: <200611211659.06576.baldrick@free.fr>
Date: Tue, 21 Nov 2006 16:59:05 +0100
From: Duncan Sands <baldrick@...e.fr>
To: linux-kernel@...r.kernel.org
Cc: Linus Torvalds <torvalds@...l.org>
Subject: [PATCH] fix asm constraints in i386 atomic_add_return
Since v->counter is both read and written by the xaddl instruction, it
should be listed as an output of the asm as well as an input. The
current code only gets away with listing it as an input because counter
is declared volatile. Also, according to Documentation/atomic_ops.txt,
atomic_add_return should provide a memory barrier, in particular a
compiler barrier, so the asm should also be marked as clobbering memory.
Test case attached.
Signed-off-by: Duncan Sands <baldrick@...e.fr>
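
To make the first point concrete, here is a minimal userspace sketch
(an illustration only, not the attached t.c; old_add_return and
observed_delta are made-up names, and a literal "lock; " stands in for
LOCK_PREFIX) of the old constraints applied to a plain, non-volatile
counter. Because the asm lists the memory operand only as an input,
the compiler is entitled to assume *counter is unchanged across the
asm and may reuse an earlier load:

/* Hypothetical illustration: old-style constraints on a non-volatile int.
 * The memory operand appears only in the input list, so as far as the
 * compiler knows the asm never modifies it. */
static inline int old_add_return(int i, int *counter)
{
	int __i = i;
	__asm__ __volatile__(
		"lock; xaddl %0, %1;"
		: "=r" (i)
		: "m" (*counter), "0" (i));
	return i + __i;
}

int observed_delta(int *counter)
{
	int before = *counter;
	old_add_return(1, counter);
	int after = *counter;	/* may be reused from 'before'... */
	return after - before;	/* ...so this can legally be folded to 0 */
}

With the patched constraints below ("+r", "+m" and a "memory" clobber)
the compiler knows both that the memory operand is written and that it
must not cache other memory accesses across the asm, which is what the
compiler-barrier requirement amounts to.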
diff --git a/include/asm-i386/atomic.h b/include/asm-i386/atomic.h
index 51a1662..6aab7a1 100644
--- a/include/asm-i386/atomic.h
+++ b/include/asm-i386/atomic.h
@@ -187,9 +187,9 @@ static __inline__ int atomic_add_return(
 	/* Modern 486+ processor */
 	__i = i;
 	__asm__ __volatile__(
-		LOCK_PREFIX "xaddl %0, %1;"
-		:"=r"(i)
-		:"m"(v->counter), "0"(i));
+		LOCK_PREFIX "xaddl %0, %1"
+		:"+r" (i), "+m" (v->counter)
+		: : "memory");
 	return i + __i;
 
 #ifdef CONFIG_M386
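
For reference, the same fix in a self-contained form that can be
compiled and run on x86 to sanity-check the return value (again only a
sketch with made-up names, using a literal "lock; " in place of
LOCK_PREFIX):

#include <stdio.h>

/* Sketch of the fixed constraints: both i and *counter are read and
 * written ("+r" / "+m"), and the "memory" clobber turns the asm into
 * a compiler barrier as well. */
static inline int new_add_return(int i, int *counter)
{
	int __i = i;
	__asm__ __volatile__(
		"lock; xaddl %0, %1"
		: "+r" (i), "+m" (*counter)
		: : "memory");
	return i + __i;	/* old value + increment == new value */
}

int main(void)
{
	int counter = 40;
	int ret = new_add_return(2, &counter);
	printf("returned %d, counter now %d\n", ret, counter);	/* 42 and 42 */
	return 0;
}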
[Attachment: "t.c", text/x-csrc, 619 bytes]