Message-ID: <tip-367e3f1d3fc9bbf1e626da7aea527f40babf8079@git.kernel.org>
Date: Tue, 9 Oct 2018 08:05:54 -0700
From: tip-bot for Dave Hansen <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: peterz@...radead.org, mingo@...nel.org, tglx@...utronix.de,
dave.hansen@...ux.intel.com, jannh@...gle.com,
linux-kernel@...r.kernel.org, hpa@...or.com, luto@...nel.org,
sean.j.christopherson@...el.com
Subject: [tip:x86/mm] x86/mm: Remove spurious fault pkey check
Commit-ID: 367e3f1d3fc9bbf1e626da7aea527f40babf8079
Gitweb: https://git.kernel.org/tip/367e3f1d3fc9bbf1e626da7aea527f40babf8079
Author: Dave Hansen <dave.hansen@...ux.intel.com>
AuthorDate: Fri, 28 Sep 2018 09:02:31 -0700
Committer: Peter Zijlstra <peterz@...radead.org>
CommitDate: Tue, 9 Oct 2018 16:51:16 +0200

x86/mm: Remove spurious fault pkey check

Spurious faults only ever occur in the kernel's address space. They
are also constrained specifically to faults with one of these error codes:

	X86_PF_WRITE | X86_PF_PROT
	X86_PF_INSTR | X86_PF_PROT

So, it's never even possible to reach spurious_kernel_fault_check() with
X86_PF_PK set.
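
To make that concrete, here is a minimal stand-alone C sketch; it is
an illustration, not kernel code. might_be_spurious() is a hypothetical
stand-in for the caller-side filter, and the X86_PF_* values match the
kernel's page fault error code bits:

#include <assert.h>
#include <stdio.h>

/* Page fault error code bits, matching the kernel's definitions. */
#define X86_PF_PROT	(1UL << 0)
#define X86_PF_WRITE	(1UL << 1)
#define X86_PF_INSTR	(1UL << 4)
#define X86_PF_PK	(1UL << 5)

/*
 * Hypothetical stand-in for the caller-side filter: only these two
 * exact error codes are ever passed on to the spurious fault check.
 */
static int might_be_spurious(unsigned long error_code)
{
	return error_code == (X86_PF_WRITE | X86_PF_PROT) ||
	       error_code == (X86_PF_INSTR | X86_PF_PROT);
}

int main(void)
{
	unsigned long ec;

	/* Any error code with X86_PF_PK set fails the filter. */
	for (ec = 0; ec < (1UL << 6); ec++)
		if (ec & X86_PF_PK)
			assert(!might_be_spurious(ec));

	puts("X86_PF_PK can never reach the spurious fault check");
	return 0;
}

Since neither permitted value includes X86_PF_PK, the test removed
below was dead code.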
In addition, the kernel's address space never has pages with user-mode
protections. Protection Keys are only enforced on pages with user-mode
protection.

This gives us lots of reasons to not check for protection keys in our
spurious kernel fault handling.
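
For illustration only, the architectural rule reduces to a single
page-table-entry bit test. pkey_enforced() below is a hypothetical
helper, not kernel code; the _PAGE_USER value matches the kernel's
pgtable definitions:

/* _PAGE_USER is bit 2 of a PTE, as in the kernel's pgtable_types.h. */
#define _PAGE_USER	(1UL << 2)

/*
 * Hypothetical predicate: the CPU only evaluates protection keys for
 * user-accessible translations, and kernel mappings never set
 * _PAGE_USER, so a pkey fault on a kernel address is not expected.
 */
static inline int pkey_enforced(unsigned long pte_flags)
{
	return (pte_flags & _PAGE_USER) != 0;
}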
But, let's also add some warnings to ensure that these assumptions about
protection keys hold true.
Cc: x86@...nel.org
Cc: Jann Horn <jannh@...gle.com>
Cc: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Andy Lutomirski <luto@...nel.org>
Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: http://lkml.kernel.org/r/20180928160231.243A0D6A@viggo.jf.intel.com
---
arch/x86/mm/fault.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 7e0fa7e24168..a16652982f98 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1037,12 +1037,6 @@ static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte)
 	if ((error_code & X86_PF_INSTR) && !pte_exec(*pte))
 		return 0;
 
-	/*
-	 * Note: We do not do lazy flushing on protection key
-	 * changes, so no spurious fault will ever set X86_PF_PK.
-	 */
-	if ((error_code & X86_PF_PK))
-		return 1;
 
 	return 1;
 }
@@ -1217,6 +1211,13 @@ static void
 do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
 		   unsigned long address)
 {
+	/*
+	 * Protection keys exceptions only happen on user pages. We
+	 * have no user pages in the kernel portion of the address
+	 * space, so do not expect them here.
+	 */
+	WARN_ON_ONCE(hw_error_code & X86_PF_PK);
+
 	/*
 	 * We can fault-in kernel-space virtual memory on-demand. The
 	 * 'reference' page table is init_mm.pgd.