Message-Id: <1453842730-28463-2-git-send-email-bp@alien8.de>
Date: Tue, 26 Jan 2016 22:12:01 +0100
From: Borislav Petkov <bp@...en8.de>
To: Ingo Molnar <mingo@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: [PATCH 01/10] x86/asm: Add condition codes clobber to memory barrier macros
From: "Michael S. Tsirkin" <mst@...hat.com>
ADDL clobbers the flags register (CF and friends), but barrier.h didn't
tell gcc about it. Historically, gcc hasn't needed the "cc" clobber on
x86 and always considers the flags clobbered; for this reason we are
probably missing it in a *lot* of places.
But even if not necessary, it's probably a good thing to add for
documentation, and in case gcc semantics ever change.
Reported-by: Borislav Petkov <bp@...en8.de>
Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
Cc: Andrey Konovalov <andreyknvl@...gle.com>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Arnd Bergmann <arnd@...db.de>
Cc: Davidlohr Bueso <dave@...olabs.net>
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: virtualization <virtualization@...ts.linux-foundation.org>
Link: http://lkml.kernel.org/r/1452715911-12067-2-git-send-email-mst@redhat.com
Signed-off-by: Borislav Petkov <bp@...e.de>
---
arch/x86/include/asm/barrier.h | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 0681d2532527..5bce7865b623 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -15,9 +15,12 @@
* Some non-Intel clones support out of order store. wmb() ceases to be a
* nop for these.
*/
-#define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2)
-#define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2)
-#define wmb() alternative("lock; addl $0,0(%%esp)", "sfence", X86_FEATURE_XMM)
+#define mb() asm volatile(ALTERNATIVE("lock; addl $0,0(%%esp)", "mfence", \
+				      X86_FEATURE_XMM2) ::: "memory", "cc")
+#define rmb() asm volatile(ALTERNATIVE("lock; addl $0,0(%%esp)", "lfence", \
+				      X86_FEATURE_XMM2) ::: "memory", "cc")
+#define wmb() asm volatile(ALTERNATIVE("lock; addl $0,0(%%esp)", "sfence", \
+				      X86_FEATURE_XMM) ::: "memory", "cc")
#else
#define mb() asm volatile("mfence":::"memory")
#define rmb() asm volatile("lfence":::"memory")
--
2.3.5