Message-ID: <tip-1201dc68361cdb83ba314bef565b89400a68f5a5@git.kernel.org>
Date:   Wed, 6 Mar 2019 01:55:48 -0800
From:   tip-bot for Kees Cook <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     solar@...nwall.com, jannh@...gle.com, mingo@...nel.org,
        kernel-hardening@...ts.openwall.com, keescook@...omium.org,
        linux@...inikbrodowski.net, tglx@...utronix.de,
        peterz@...radead.org, gregkh@...uxfoundation.org,
        sean.j.christopherson@...el.com, hpa@...or.com,
        linux-kernel@...r.kernel.org
Subject: [tip:x86/asm] x86/asm: Avoid taking an exception before cr4 restore

Commit-ID:  1201dc68361cdb83ba314bef565b89400a68f5a5
Gitweb:     https://git.kernel.org/tip/1201dc68361cdb83ba314bef565b89400a68f5a5
Author:     Kees Cook <keescook@...omium.org>
AuthorDate: Wed, 27 Feb 2019 12:01:31 -0800
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Wed, 6 Mar 2019 10:49:50 +0100

x86/asm: Avoid taking an exception before cr4 restore

Instead of taking a full WARN() exception before restoring a potentially
missed CR4 bit, retain the missing bit for later reporting. This matches
the logic used for CR0 pinning. Additionally, update the comments to note
the required use of "volatile".

Suggested-by: Solar Designer <solar@...nwall.com>
Signed-off-by: Kees Cook <keescook@...omium.org>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Greg KH <gregkh@...uxfoundation.org>
Cc: Jann Horn <jannh@...gle.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Dominik Brodowski <linux@...inikbrodowski.net>
Cc: Kernel Hardening <kernel-hardening@...ts.openwall.com>
Link: https://lkml.kernel.org/r/20190227200132.24707-3-keescook@chromium.org

---
 arch/x86/include/asm/special_insns.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 1f01dc3f6c64..6020cb1de66e 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -97,18 +97,24 @@ extern volatile unsigned long cr4_pin;
 
 static inline void native_write_cr4(unsigned long val)
 {
+	unsigned long warn = 0;
+
 again:
 	val |= cr4_pin;
 	asm volatile("mov %0,%%cr4": : "r" (val), "m" (__force_order));
 	/*
 	 * If the MOV above was used directly as a ROP gadget we can
 	 * notice the lack of pinned bits in "val" and start the function
-	 * from the beginning to gain the cr4_pin bits for sure.
+	 * from the beginning to gain the cr4_pin bits for sure. Note
+	 * that "val" must be volatile to keep the compiler from
+	 * optimizing away this check.
 	 */
-	if (WARN_ONCE((val & cr4_pin) != cr4_pin,
-		      "Attempt to unpin cr4 bits: %lx, cr4 bypass attack?!",
-		      ~val & cr4_pin))
+	if ((val & cr4_pin) != cr4_pin) {
+		warn = ~val & cr4_pin;
 		goto again;
+	}
+	WARN_ONCE(warn, "Attempt to unpin cr4 bits: %lx; bypass attack?!\n",
+		  warn);
 }
 
 #ifdef CONFIG_X86_64
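
For readers who want to see the pin-and-retry flow in isolation, below is a
minimal user-space sketch of the pattern the patch implements. It is not
kernel code: the register, the pin mask, and all names (fake_cr4,
cr4_pin_sim, write_reg_sim) are made up for illustration, and a plain
assignment stands in for the privileged "mov %0,%%cr4" write.

#include <stdio.h>

/*
 * Simulated pinned bits; volatile (as noted in the patch comments) so the
 * compiler cannot assume the post-write check is redundant and drop it.
 */
static volatile unsigned long cr4_pin_sim = 0x00300000UL;

/* Simulated hardware register. */
static unsigned long fake_cr4;

static void write_reg_sim(unsigned long val)
{
	unsigned long warn = 0;

again:
	val |= cr4_pin_sim;	/* force the pinned bits on */
	fake_cr4 = val;		/* stands in for the CR4 write */

	/*
	 * If the write was reached without the OR above (e.g. used
	 * directly as a ROP gadget), the pinned bits are missing:
	 * remember which ones and redo the write. In normal control
	 * flow this branch is never taken.
	 */
	if ((val & cr4_pin_sim) != cr4_pin_sim) {
		warn = ~val & cr4_pin_sim;
		goto again;
	}

	/* Report only after the pinned bits are back in place. */
	if (warn)
		fprintf(stderr,
			"Attempt to unpin bits: %lx; bypass attack?!\n",
			warn);
}

int main(void)
{
	write_reg_sim(0x1UL);	/* pinned bits get forced on */
	printf("fake_cr4 = %#lx\n", fake_cr4);
	return 0;
}

As in the patch, the report is deferred until after the retry has restored
the pinned bits, rather than taking the warning path first with the bits
still cleared.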
