Message-Id: <20200901161857.566142-1-namit@vmware.com>
Date: Tue, 1 Sep 2020 09:18:57 -0700
From: Nadav Amit <nadav.amit@...il.com>
To: linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>
Cc: Nadav Amit <namit@...are.com>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>,
Kees Cook <keescook@...omium.org>, stable@...r.kernel.org
Subject: [PATCH] x86/special_insn: reverse __force_order logic
From: Nadav Amit <namit@...are.com>

The __force_order logic seems to be inverted. __force_order is
supposedly used to make the compiler's memory dependency analysis
enforce an ordering between CR writes and CR reads. Therefore, the
dummy memory operand should behave like the "CR" itself: when the CR
is read, the memory should be "read" by the inline assembly, so
__force_order should be an input; when the CR is written, the memory
should be "written", so __force_order should be an output.

This change should allow the "volatile" qualifier to be removed from
CR reads in a later patch.
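
For illustration, a minimal sketch of the intended constraint pattern
(the sketch_*() names are made up; the cr3 accessors simply mirror the
hunks below):

/*
 * The dummy variable stands in for the CR itself: readers take it as
 * an "m" input, writers produce it as an "=m" output, so the
 * compiler's memory-dependency analysis keeps a CR read ordered after
 * a preceding CR write.
 */
extern unsigned long __force_order;

static inline unsigned long sketch_read_cr3(void)
{
	unsigned long val;

	/* Read: __force_order is an input. */
	asm volatile("mov %%cr3,%0" : "=r" (val) : "m" (__force_order));
	return val;
}

static inline void sketch_write_cr3(unsigned long val)
{
	/* Write: __force_order is an output. */
	asm volatile("mov %1,%%cr3" : "=m" (__force_order) : "r" (val));
}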

While at it, remove the extra newline ("\n\t") from the inline
assembly, as it only skews GCC's estimate of the inline assembly's
cost.

Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: x86@...nel.org
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Kees Cook <keescook@...omium.org>
Cc: stable@...r.kernel.org
Signed-off-by: Nadav Amit <namit@...are.com>
---
Unless I misunderstand the logic, __force_order should also be used by
rdpkru() and wrpkru(), which currently have no dependency on
__force_order. I also do not understand why native_write_cr0() has a
read/write dependency on __force_order, or why native_write_cr4() no
longer has any dependency on __force_order.
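
For illustration, a rough sketch of what I have in mind for the PKRU
accessors (this is only my assumption, not part of this patch; the
sketch_*() names are made up and the code relies on the existing
extern declaration of __force_order):

/* rdpkru (0f 01 ee): PKRU -> EAX, clears EDX, requires ECX = 0. */
static inline u32 sketch_rdpkru(void)
{
	u32 ecx = 0, edx, pkru;

	/* Reader: take __force_order as an "m" input. */
	asm volatile(".byte 0x0f,0x01,0xee"
		     : "=a" (pkru), "=d" (edx)
		     : "c" (ecx), "m" (__force_order));
	return pkru;
}

/* wrpkru (0f 01 ef): EAX -> PKRU, requires ECX = EDX = 0. */
static inline void sketch_wrpkru(u32 pkru)
{
	u32 ecx = 0, edx = 0;

	/* Writer: produce __force_order as an "=m" output. */
	asm volatile(".byte 0x0f,0x01,0xef"
		     : "=m" (__force_order)
		     : "a" (pkru), "c" (ecx), "d" (edx));
}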
---
 arch/x86/include/asm/special_insns.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 5999b0b3dd4a..dff5e5b01a3c 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -24,32 +24,32 @@ void native_write_cr0(unsigned long val);
 static inline unsigned long native_read_cr0(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr0,%0" : "=r" (val) : "m" (__force_order));
 	return val;
 }
 
 static __always_inline unsigned long native_read_cr2(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr2,%0" : "=r" (val) : "m" (__force_order));
 	return val;
 }
 
 static __always_inline void native_write_cr2(unsigned long val)
 {
-	asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
+	asm volatile("mov %1,%%cr2" : "=m" (__force_order) : "r" (val));
 }
 
 static inline unsigned long __native_read_cr3(void)
 {
 	unsigned long val;
-	asm volatile("mov %%cr3,%0\n\t" : "=r" (val), "=m" (__force_order));
+	asm volatile("mov %%cr3,%0" : "=r" (val) : "m" (__force_order));
 	return val;
 }
 
 static inline void native_write_cr3(unsigned long val)
 {
-	asm volatile("mov %0,%%cr3": : "r" (val), "m" (__force_order));
+	asm volatile("mov %1,%%cr3" : "=m" (__force_order) : "r" (val));
 }
 
 static inline unsigned long native_read_cr4(void)
@@ -64,10 +64,10 @@ static inline unsigned long native_read_cr4(void)
asm volatile("1: mov %%cr4, %0\n"
"2:\n"
_ASM_EXTABLE(1b, 2b)
- : "=r" (val), "=m" (__force_order) : "0" (0));
+ : "=r" (val) : "m" (__force_order), "0" (0));
#else
/* CR4 always exists on x86_64. */
- asm volatile("mov %%cr4,%0\n\t" : "=r" (val), "=m" (__force_order));
+ asm volatile("mov %%cr4,%0" : "=r" (val) : "m" (__force_order));
#endif
return val;
}
--
2.25.1