Date:   Wed, 6 Mar 2019 05:31:56 -0800
From:   tip-bot for Kees Cook <>
Subject: [tip:x86/asm] x86/asm: Pin sensitive CR0 bits

Commit-ID:  d884bc119c4ebe7ac53b61fc0750bbc89b4d63db
Author:     Kees Cook <>
AuthorDate: Wed, 27 Feb 2019 12:01:30 -0800
Committer:  Thomas Gleixner <>
CommitDate: Wed, 6 Mar 2019 13:25:55 +0100

x86/asm: Pin sensitive CR0 bits

With sensitive CR4 bits pinned now, it's possible that the WP bit for CR0
might become a target as well. Following the same reasoning for the CR4
pinning, pin CR0's WP bit (but this can be done with a static value).

As before, to convince the compiler not to optimize away the check of the
WP bit after the set, mark "val" as an output of the asm() block. This
protects against jumping into the function past the point where the masking
happens: the check that the mask was applied must happen after the set. Due
to how the compiler can build this function (especially with frame pointers
removed), jumping into its middle frequently requires no stack manipulation
to construct a stack frame (there may be only a retq without pops, which is
sufficient for use with exploits like timer overwrites).

Additionally, this defers the WARN() until after the bit has been reset,
to minimize the window in which the bit is left unset.

Suggested-by: Peter Zijlstra <>
Signed-off-by: Kees Cook <>
Signed-off-by: Thomas Gleixner <>
Cc: Solar Designer <>
Cc: Greg KH <>
Cc: Jann Horn <>
Cc: Sean Christopherson <>
Cc: Dominik Brodowski <>
Cc: Kernel Hardening <>

 arch/x86/include/asm/special_insns.h | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 99607f142cad..7fa4fe880395 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -5,6 +5,7 @@
 #ifdef __KERNEL__
+#include <asm/processor-flags.h>
 #include <asm/nops.h>
@@ -25,7 +26,28 @@ static inline unsigned long native_read_cr0(void)
 
 static inline void native_write_cr0(unsigned long val)
 {
-	asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
+	bool warn = false;
+
+again:
+	val |= X86_CR0_WP;
+	/*
+	 * In order to have the compiler not optimize away the check
+	 * after the cr0 write, mark "val" as being also an output ("+r")
+	 * of this asm() block so it will perform an explicit check, as
+	 * if it were "volatile".
+	 */
+	asm volatile("mov %0,%%cr0" : "+r" (val) : "m" (__force_order));
+	/*
+	 * If the MOV above was used directly as a ROP gadget we can
+	 * notice the lack of pinned bits in "val" and start the function
+	 * from the beginning to gain the WP bit for sure. And do it
+	 * without first taking the exception for a WARN().
+	 */
+	if ((val & X86_CR0_WP) != X86_CR0_WP) {
+		warn = true;
+		goto again;
+	}
+	WARN_ONCE(warn, "Attempt to unpin X86_CR0_WP, cr0 bypass attack?!\n");
 }
 
 static inline unsigned long native_read_cr2(void)
