Message-Id: <20171108194653.D6C7EFF4@viggo.jf.intel.com>
Date: Wed, 08 Nov 2017 11:46:53 -0800
From: Dave Hansen <dave.hansen@...ux.intel.com>
To: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, dave.hansen@...ux.intel.com,
moritz.lipp@...k.tugraz.at, daniel.gruss@...k.tugraz.at,
michael.schwarz@...k.tugraz.at, richard.fellner@...dent.tugraz.at,
luto@...nel.org, torvalds@...ux-foundation.org,
keescook@...gle.com, hughd@...gle.com, x86@...nel.org
Subject: [PATCH 04/30] x86, kaiser: disable global pages by default with KAISER
From: Dave Hansen <dave.hansen@...ux.intel.com>
Global pages stay in the TLB across context switches. Since all
contexts share the same kernel mapping, we use global pages to
allow kernel entries in the TLB to survive when we context
switch.

But, even having these entries in the TLB opens up something that
an attacker can use [1].

Disable global pages so that kernel TLB entries are flushed when
we run userspace. This way, all accesses to kernel memory result
in a TLB miss whether there is good data there or not. Without
this, even when KAISER switches page tables, the kernel entries
might remain in the TLB.

We keep _PAGE_GLOBAL available so that we can use it for things
that are global even with KAISER, like the entry/exit code and
data.

1. The double-page-fault attack:
http://www.ieee-security.org/TC/SP2013/papers/4977a191.pdf
Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Moritz Lipp <moritz.lipp@...k.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@...k.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@...k.tugraz.at>
Cc: Richard Fellner <richard.fellner@...dent.tugraz.at>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Kees Cook <keescook@...gle.com>
Cc: Hugh Dickins <hughd@...gle.com>
Cc: x86@...nel.org
---
b/arch/x86/include/asm/pgtable_types.h | 14 +++++++++++++-
b/arch/x86/mm/pageattr.c | 16 ++++++++--------
2 files changed, 21 insertions(+), 9 deletions(-)
diff -puN arch/x86/include/asm/pgtable_types.h~kaiser-prep-disable-global-pages arch/x86/include/asm/pgtable_types.h
--- a/arch/x86/include/asm/pgtable_types.h~kaiser-prep-disable-global-pages 2017-11-08 10:45:27.525681399 -0800
+++ b/arch/x86/include/asm/pgtable_types.h 2017-11-08 10:45:27.530681399 -0800
@@ -179,8 +179,20 @@ enum page_cache_mode {
#define PAGE_READONLY_EXEC __pgprot(_PAGE_PRESENT | _PAGE_USER | \
_PAGE_ACCESSED)
+/*
+ * Disable global pages for anything using the default
+ * __PAGE_KERNEL* macros. PGE will still be enabled
+ * and _PAGE_GLOBAL may still be used carefully.
+ */
+#ifdef CONFIG_KAISER
+#define __PAGE_KERNEL_GLOBAL 0
+#else
+#define __PAGE_KERNEL_GLOBAL _PAGE_GLOBAL
+#endif
+
#define __PAGE_KERNEL_EXEC \
- (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_GLOBAL)
+ (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED | \
+ __PAGE_KERNEL_GLOBAL)
#define __PAGE_KERNEL (__PAGE_KERNEL_EXEC | _PAGE_NX)
#define __PAGE_KERNEL_RO (__PAGE_KERNEL & ~_PAGE_RW)
diff -puN arch/x86/mm/pageattr.c~kaiser-prep-disable-global-pages arch/x86/mm/pageattr.c
--- a/arch/x86/mm/pageattr.c~kaiser-prep-disable-global-pages 2017-11-08 10:45:27.527681399 -0800
+++ b/arch/x86/mm/pageattr.c 2017-11-08 10:45:27.531681399 -0800
@@ -585,9 +585,9 @@ try_preserve_large_page(pte_t *kpte, uns
* for the ancient hardware that doesn't support it.
*/
if (pgprot_val(req_prot) & _PAGE_PRESENT)
- pgprot_val(req_prot) |= _PAGE_PSE | _PAGE_GLOBAL;
+ pgprot_val(req_prot) |= _PAGE_PSE | __PAGE_KERNEL_GLOBAL;
else
- pgprot_val(req_prot) &= ~(_PAGE_PSE | _PAGE_GLOBAL);
+ pgprot_val(req_prot) &= ~(_PAGE_PSE | __PAGE_KERNEL_GLOBAL);
req_prot = canon_pgprot(req_prot);
@@ -705,9 +705,9 @@ __split_large_page(struct cpa_data *cpa,
* for the ancient hardware that doesn't support it.
*/
if (pgprot_val(ref_prot) & _PAGE_PRESENT)
- pgprot_val(ref_prot) |= _PAGE_GLOBAL;
+ pgprot_val(ref_prot) |= __PAGE_KERNEL_GLOBAL;
else
- pgprot_val(ref_prot) &= ~_PAGE_GLOBAL;
+ pgprot_val(ref_prot) &= ~__PAGE_KERNEL_GLOBAL;
/*
* Get the target pfn from the original entry:
@@ -938,9 +938,9 @@ static void populate_pte(struct cpa_data
* support it.
*/
if (pgprot_val(pgprot) & _PAGE_PRESENT)
- pgprot_val(pgprot) |= _PAGE_GLOBAL;
+ pgprot_val(pgprot) |= __PAGE_KERNEL_GLOBAL;
else
- pgprot_val(pgprot) &= ~_PAGE_GLOBAL;
+ pgprot_val(pgprot) &= ~__PAGE_KERNEL_GLOBAL;
pgprot = canon_pgprot(pgprot);
@@ -1242,9 +1242,9 @@ repeat:
* support it.
*/
if (pgprot_val(new_prot) & _PAGE_PRESENT)
- pgprot_val(new_prot) |= _PAGE_GLOBAL;
+ pgprot_val(new_prot) |= __PAGE_KERNEL_GLOBAL;
else
- pgprot_val(new_prot) &= ~_PAGE_GLOBAL;
+ pgprot_val(new_prot) &= ~__PAGE_KERNEL_GLOBAL;
/*
* We need to keep the pfn from the existing PTE,
_