Message-Id: <20070723160603.22137.16516.sendpatchset@cselinux1.cse.iitk.ac.in>
Date: Mon, 23 Jul 2007 21:36:03 +0530
From: Satyam Sharma <ssatyam@....iitk.ac.in>
To: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Cc: David Howells <dhowells@...hat.com>,
Nick Piggin <nickpiggin@...oo.com.au>, Andi Kleen <ak@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 7/8] i386: bitops: Kill needless usage of __asm__ __volatile__
[7/8] i386: bitops: Kill needless usage of __asm__ __volatile__
Another oddity I noticed in this file. The semantics of __volatile__,
when used to qualify inline __asm__, are that the compiler will not
(1) elide, (2) reorder, or (3) intersperse our inline asm with the rest
of the generated code.
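
To make that concrete, here is a quick sketch (my illustration, not part
of the patch; bit_test_nonvolatile() is a made-up name). Without
__volatile__, GCC treats an asm with outputs as a pure function of its
inputs, so it may delete the asm when its result is unused, move it
around, or combine identical copies:

static inline int bit_test_nonvolatile(int nr, const unsigned long *addr)
{
	int oldbit;

	/* No __volatile__: GCC may elide, reorder or CSE this asm */
	__asm__("btl %2,%1\n\tsbbl %0,%0"
		:"=r" (oldbit)
		:"m" (*addr), "r" (nr));
	return oldbit;
}

void demo(const unsigned long *addr)
{
	bit_test_nonvolatile(0, addr);	/* result unused: may be dropped */
}
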
However, we do not need these guarantees in the unlocked variants of the
bitops functions. Also note that test_bit() is *always* an unlocked
operation on i386 -- the i386 instruction set disallows the LOCK prefix
with the "btl" instruction (the CPU raises an invalid-opcode exception
if it is used, because btl does not write memory). Hence every caller of
variable_test_bit() must implement the required locking at a higher
level anyway, and the operation itself is necessarily guarantee-less.
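
To see what "locking at a higher level" means in practice, here is a
user-space sketch (my illustration, not from the patch; the mutex and
helper names are made up):

#include <pthread.h>

static unsigned long flags;
static pthread_mutex_t flags_lock = PTHREAD_MUTEX_INITIALIZER;

static int test_bit_asm(int nr, const unsigned long *addr)
{
	int oldbit;

	/* Plain btl -- "lock btl" does not exist, so the test itself
	 * can never be an atomic RMW operation. */
	__asm__("btl %2,%1\n\tsbbl %0,%0"
		:"=r" (oldbit)
		:"m" (*addr), "r" (nr));
	return oldbit;
}

void set_if_clear(int nr)
{
	pthread_mutex_lock(&flags_lock);	/* caller provides locking */
	if (!test_bit_asm(nr, &flags))
		flags |= 1UL << nr;
	pthread_mutex_unlock(&flags_lock);
}
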
So let's remove the __volatile__ qualification from the inline asm in
variable_test_bit() as well -- and give the compiler a chance to
optimize / combine multiple bitops where possible.
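
As a sketch of the intended payoff (my example, not part of the patch):
once the asm is no longer volatile, GCC is free to fold repeated tests
of the same word instead of always emitting independent btl
instructions:

int both_bits_set(const unsigned long *addr)
{
	/* Two non-volatile asms: the compiler may now schedule them
	 * together or share common subexpressions between them. */
	return variable_test_bit(0, addr) && variable_test_bit(1, addr);
}
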
Signed-off-by: Satyam Sharma <ssatyam@....iitk.ac.in>
Cc: David Howells <dhowells@...hat.com>
Cc: Nick Piggin <nickpiggin@...oo.com.au>
---
include/asm-i386/bitops.h | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/asm-i386/bitops.h b/include/asm-i386/bitops.h
index f37b8a2..4f1fda5 100644
--- a/include/asm-i386/bitops.h
+++ b/include/asm-i386/bitops.h
@@ -96,7 +96,7 @@ static inline void clear_bit(int nr, unsigned long *addr)
  */
 static inline void __clear_bit(int nr, unsigned long *addr)
 {
-	__asm__ __volatile__(
+	__asm__(
 		"btrl %1,%0"
 		:"=m" (*addr)
 		:"r" (nr));
@@ -123,7 +123,7 @@ static inline void __clear_bit(int nr, unsigned long *addr)
  */
 static inline void __change_bit(int nr, unsigned long *addr)
 {
-	__asm__ __volatile__(
+	__asm__(
 		"btcl %1,%0"
 		:"=m" (*addr)
 		:"r" (nr));
@@ -251,7 +251,7 @@ static inline int __test_and_change_bit(int nr, unsigned long *addr)
 {
 	int oldbit;
 
-	__asm__ __volatile__(
+	__asm__(
 		"btcl %2,%1\n\tsbbl %0,%0"
 		:"=r" (oldbit),"=m" (*addr)
 		:"r" (nr));
@@ -297,7 +297,7 @@ static inline int variable_test_bit(int nr, const unsigned long *addr)
 {
 	int oldbit;
 
-	__asm__ __volatile__(
+	__asm__(
 		"btl %2,%1\n\tsbbl %0,%0"
 		:"=r" (oldbit)
 		:"m" (*addr),"r" (nr));