Message-ID: <tip-3c8dbef26202a83207611b3d6f4d2db5e5cbbc0f@git.kernel.org>
Date: Wed, 6 Mar 2019 05:31:15 -0800
From: tip-bot for Kees Cook <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: kernel-hardening@...ts.openwall.com, keescook@...omium.org,
	hpa@...or.com, sean.j.christopherson@...el.com, jannh@...gle.com,
	solar@...nwall.com, linux@...inikbrodowski.net, mingo@...nel.org,
	tglx@...utronix.de, linux-kernel@...r.kernel.org,
	gregkh@...uxfoundation.org, peterz@...radead.org
Subject: [tip:x86/asm] x86/asm: Avoid taking an exception before cr4 restore

Commit-ID:  3c8dbef26202a83207611b3d6f4d2db5e5cbbc0f
Gitweb:     https://git.kernel.org/tip/3c8dbef26202a83207611b3d6f4d2db5e5cbbc0f
Author:     Kees Cook <keescook@...omium.org>
AuthorDate: Wed, 27 Feb 2019 12:01:31 -0800
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Wed, 6 Mar 2019 13:25:55 +0100

x86/asm: Avoid taking an exception before cr4 restore

Instead of taking a full WARN() exception before restoring a potentially
missed CR4 bit, retain the missing bit for later reporting.
Additionally update the comments to note the required use of "volatile".

Suggested-by: Solar Designer <solar@...nwall.com>
Signed-off-by: Kees Cook <keescook@...omium.org>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Greg KH <gregkh@...uxfoundation.org>
Cc: Jann Horn <jannh@...gle.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Dominik Brodowski <linux@...inikbrodowski.net>
Cc: Kernel Hardening <kernel-hardening@...ts.openwall.com>
Link: https://lkml.kernel.org/r/20190227200132.24707-3-keescook@chromium.org
---
 arch/x86/include/asm/special_insns.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index fabda1400137..99607f142cad 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -76,18 +76,24 @@ extern volatile unsigned long cr4_pin;
 
 static inline void native_write_cr4(unsigned long val)
 {
+	unsigned long warn = 0;
+
 again:
 	val |= cr4_pin;
 	asm volatile("mov %0,%%cr4": : "r" (val), "m" (__force_order));
 	/*
 	 * If the MOV above was used directly as a ROP gadget we can
 	 * notice the lack of pinned bits in "val" and start the function
-	 * from the beginning to gain the cr4_pin bits for sure.
+	 * from the beginning to gain the cr4_pin bits for sure. Note
+	 * that "cr4_pin" must be volatile to keep the compiler from
+	 * optimizing away this check.
 	 */
-	if (WARN_ONCE((val & cr4_pin) != cr4_pin,
-		      "Attempt to unpin cr4 bits: %lx, cr4 bypass attack?!",
-		      ~val & cr4_pin))
+	if ((val & cr4_pin) != cr4_pin) {
+		warn = ~val & cr4_pin;
 		goto again;
+	}
+	WARN_ONCE(warn, "Attempt to unpin cr4 bits: %lx; bypass attack?!\n",
+		  warn);
 }
 
 #ifdef CONFIG_X86_64
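
For readers who want to experiment with the control flow, below is a
minimal user-space model of the hunk above: a sketch, not the kernel
implementation. Here fake_cr4, the example pin mask 0x00100000, and
fprintf() are stand-ins for the real control register, the architectural
CR4 bits, and WARN_ONCE(). It illustrates the two properties the patch
relies on: missing pinned bits are restored before any reporting
happens, and the volatile qualifier on cr4_pin forces a second load in
the recheck, so the compiler cannot conclude from "val |= cr4_pin" that
the condition is always false and delete it.

#include <stdio.h>

/*
 * Pinned-bit mask; volatile so every use re-reads it and the
 * post-write recheck below cannot be folded away. The value is an
 * arbitrary example bit, not a real CR4 constant.
 */
static volatile unsigned long cr4_pin = 0x00100000;

/*
 * Stand-in for the control register; the kernel's
 * asm volatile("mov %0,%%cr4" ...) becomes a plain store here.
 */
static unsigned long fake_cr4;

static void native_write_cr4(unsigned long val)
{
	unsigned long warn = 0;

again:
	val |= cr4_pin;
	fake_cr4 = val;
	/*
	 * In the kernel this recheck matters when the MOV is entered
	 * mid-function as a ROP gadget, skipping the OR above; the
	 * missing bits are restored first and reported only afterwards.
	 * A user-space caller cannot jump into the middle of the
	 * function, so the branch never fires in this model.
	 */
	if ((val & cr4_pin) != cr4_pin) {
		warn = ~val & cr4_pin;
		goto again;
	}
	if (warn)	/* the kernel uses WARN_ONCE() here */
		fprintf(stderr,
			"Attempt to unpin cr4 bits: %lx; bypass attack?!\n",
			warn);
}

int main(void)
{
	native_write_cr4(0x6f0);		/* a normal write */
	printf("cr4 = %#lx\n", fake_cr4);	/* pinned bit set: 0x1006f0 */
	return 0;
}

Building with "gcc -O2" and inspecting the assembly should show two
loads of cr4_pin per call; without the volatile qualifier the compiler
is free to assume the mask did not change between the OR and the
comparison, and to drop the recheck entirely.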