Message-Id: <05e787a0d0661d0bfb40e44db39bf5ead5f7e4ef.1612113550.git.luto@kernel.org>
Date: Sun, 31 Jan 2021 09:24:37 -0800
From: Andy Lutomirski <luto@...nel.org>
To: x86@...nel.org
Cc: LKML <linux-kernel@...r.kernel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Yonghong Song <yhs@...com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: [PATCH 06/11] x86/fault: Improve kernel-executing-user-memory handling

Right now we treat the case of the kernel trying to execute from user
memory more or less just like the kernel getting a page fault on a user
access. In the failure path, we check for erratum #93, try to otherwise
fix up the error, and then oops.

If we manage to jump to the user address space, with or without SMEP, we
should not try to resolve the page fault. This is an error, pure and
simple. Rearrange the code so that we catch this case early, check for
erratum #93, and bail out.

Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Signed-off-by: Andy Lutomirski <luto@...nel.org>
---
arch/x86/mm/fault.c | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)
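
A note for reviewers (below the cut, so it won't end up in the commit):
the early check keys purely off the hardware error code. It fires when
the instruction-fetch bit is set but the user-mode bit is not, i.e. the
kernel itself faulted while fetching code at this address, with or
without SMEP. Below is a minimal user-space sketch of that bit test; the
X86_PF_* values mirror arch/x86/include/asm/trap_pf.h, and the sample
error codes are purely illustrative.

/*
 * Standalone user-space sketch (not kernel code): which page fault error
 * codes the new early check in do_user_addr_fault() would catch.
 */
#include <stdio.h>

#define X86_PF_PROT	(1u << 0)	/* protection violation (page present) */
#define X86_PF_WRITE	(1u << 1)	/* write access */
#define X86_PF_USER	(1u << 2)	/* fault taken in user (CPL 3) mode */
#define X86_PF_RSVD	(1u << 3)	/* reserved PTE bit set */
#define X86_PF_INSTR	(1u << 4)	/* instruction fetch */

/* Mirrors the condition the patch adds to do_user_addr_fault(). */
static int kernel_exec_of_user_memory(unsigned int error_code)
{
	return (error_code & (X86_PF_USER | X86_PF_INSTR)) == X86_PF_INSTR;
}

int main(void)
{
	const unsigned int cases[] = {
		X86_PF_INSTR,			/* kernel ifetch, page not present: caught */
		X86_PF_INSTR | X86_PF_PROT,	/* kernel ifetch blocked by SMEP/NX: caught */
		X86_PF_INSTR | X86_PF_USER,	/* user ifetch: ordinary user fault */
		X86_PF_WRITE,			/* kernel data write: unaffected */
	};

	for (unsigned int i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
		printf("error_code=%#04x -> %s\n", cases[i],
		       kernel_exec_of_user_memory(cases[i]) ?
		       "early bail (erratum #93 check, then bad_area_nosemaphore)" :
		       "normal fault handling");
	return 0;
}
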
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 602cdf8e070a..1939e546beae 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -406,8 +406,11 @@ static void dump_pagetable(unsigned long address)
 static int is_errata93(struct pt_regs *regs, unsigned long address)
 {
 #if defined(CONFIG_X86_64) && defined(CONFIG_CPU_SUP_AMD)
-	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD
-	    || boot_cpu_data.x86 != 0xf)
+	if (likely(boot_cpu_data.x86_vendor != X86_VENDOR_AMD
+		   || boot_cpu_data.x86 != 0xf))
+		return 0;
+
+	if (user_mode(regs))
 		return 0;
 
 	if (address != regs->ip)
@@ -707,9 +710,6 @@ no_context(struct pt_regs *regs, unsigned long error_code,
 	if (is_prefetch(regs, error_code, address))
 		return;
 
-	if (is_errata93(regs, address))
-		return;
-
 	/*
 	 * Buggy firmware could access regions which might page fault, try to
 	 * recover from such faults.
@@ -1202,6 +1202,19 @@ void do_user_addr_fault(struct pt_regs *regs,
 	tsk = current;
 	mm = tsk->mm;
 
+	if (unlikely((error_code & (X86_PF_USER | X86_PF_INSTR)) == X86_PF_INSTR)) {
+		/*
+		 * Whoops, this is kernel mode code trying to execute from
+		 * user memory. Unless this is AMD erratum #93, we are toast.
+		 * Don't even try to look up the VMA.
+		 */
+		if (is_errata93(regs, address))
+			return;
+
+		bad_area_nosemaphore(regs, error_code, address);
+		return;
+	}
+
 	/* kprobes don't want to hook the spurious faults: */
 	if (unlikely(kprobe_page_fault(regs, X86_TRAP_PF)))
 		return;
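
For anyone who doesn't have the K8 erratum handy: #93 ("RET Instruction
May Return to Incorrect EIP") can leave %rip with its upper 32 bits
cleared, so the "user" address the kernel appears to be executing from
may really be a truncated kernel address, which is why the new early
bail still gives is_errata93() a chance first. The stand-alone sketch
below is my reading of the fixup idea, not the kernel's actual code;
FAKE_STEXT/FAKE_ETEXT are invented stand-ins for _stext/_etext.

/*
 * Sketch of the erratum #93 fixup idea: on affected AMD K8 parts a near
 * RET can clear the upper 32 bits of %rip, so a kernel ip such as
 * 0xffffffff81123456 shows up as the apparently-user address 0x81123456.
 * Restoring the lost upper bits recovers the intended kernel text address.
 */
#include <stdio.h>
#include <stdint.h>

#define FAKE_STEXT	0xffffffff81000000ULL	/* made-up stand-in for _stext */
#define FAKE_ETEXT	0xffffffff82000000ULL	/* made-up stand-in for _etext */

/* Returns the fixed-up ip, or 0 if this doesn't look like erratum #93. */
static uint64_t errata93_fixup(uint64_t fault_address, uint64_t ip)
{
	if (fault_address != ip)	/* only an ifetch fault at ip qualifies */
		return 0;
	if (fault_address >> 32)	/* upper bits intact: not erratum #93 */
		return 0;

	fault_address |= 0xffffffffULL << 32;	/* restore the lost upper bits */
	if (fault_address >= FAKE_STEXT && fault_address <= FAKE_ETEXT)
		return fault_address;		/* plausibly truncated kernel text */
	return 0;
}

int main(void)
{
	uint64_t truncated = 0x81123456ULL;	/* kernel ip, upper 32 bits lost */

	printf("%#llx -> %#llx\n",
	       (unsigned long long)truncated,
	       (unsigned long long)errata93_fixup(truncated, truncated));
	return 0;
}
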
--
2.29.2