Open Source and information security mailing list archives
Date:   Mon, 27 Nov 2017 14:20:37 +0100
From:   Ingo Molnar <>
To:     Thomas Gleixner <>
Cc:     Rik van Riel <>,
        Dave Hansen <>
Subject: [PATCH v2] x86/mm/kaiser: Disable global pages by default with KAISER

* Thomas Gleixner <> wrote:

> On Sun, 26 Nov 2017, Ingo Molnar wrote:
> >  * Disable global pages for anything using the default
> >  * __PAGE_KERNEL* macros.
> >  *
> >  * PGE will still be enabled and _PAGE_GLOBAL may still be used carefully
> >  * for a few selected kernel mappings which must be visible to userspace,
> >  * when KAISER is enabled, like the entry/exit code and data.
> >  */
> > #ifdef CONFIG_KAISER
> > #define __PAGE_KERNEL_GLOBAL	0
> > #else
> > #define __PAGE_KERNEL_GLOBAL	_PAGE_GLOBAL
> > #endif
> > 
> > ... and I've added your Reviewed-by tag which I assume now applies?
> Ideally we replace the whole patch with the __supported_pte_mask one which
> I posted as a delta patch.

Yeah, so I squashed these two patches:

  09d76fc407e0: x86/mm/kaiser: Disable global pages by default with KAISER
  bac79112ee4a: x86/mm/kaiser: Simplify disabling of global pages

into a single patch, which results in the single patch below, with an updated 
changelog that reflects the cleanups. I kept Dave's authorship and credited you 
for the simplification.

Note that the squashed commit had some whitespace noise which I skipped, further 
simplifying the patch.

Is it OK this way? If yes then I'll reshuffle the tree with this variant.



From 12cffe1598c3ebdad76453c72acb8c606f22a747 Mon Sep 17 00:00:00 2001
From: Dave Hansen <>
Date: Wed, 22 Nov 2017 16:34:41 -0800
Subject: [PATCH] x86/mm/kaiser: Disable global pages by default with KAISER

Global pages stay in the TLB across context switches.  Since all contexts
share the same kernel mapping, these mappings are marked as global pages
so kernel entries in the TLB are not flushed out on a context switch.

But, even having these entries in the TLB opens up something that an
attacker can use, such as the double-page-fault attack:

  http://www.ieee-security.org/TC/SP2013/papers/4977a191.pdf

That means that even when KAISER switches page tables on return to user
space, the global pages would stay in the TLB cache.

Disable global pages so that kernel TLB entries can be flushed before
returning to user space. This way, all accesses to kernel addresses from
userspace result in a TLB miss independent of the existence of a kernel
mapping.

Suppress global pages via the __supported_pte_mask. The shadow mappings
set PAGE_GLOBAL for the minimal kernel mappings which are required
for entry/exit. These mappings are set up manually so the filtering does not
take place.

[ The __supported_pte_mask simplification was written by Thomas Gleixner. ]

Signed-off-by: Dave Hansen <>
Signed-off-by: Thomas Gleixner <>
Reviewed-by: Borislav Petkov <>
Reviewed-by: Thomas Gleixner <>
Reviewed-by: Rik van Riel <>
Cc: Andy Lutomirski <>
Cc: Brian Gerst <>
Cc: Denys Vlasenko <>
Cc: H. Peter Anvin <>
Cc: Josh Poimboeuf <>
Cc: Linus Torvalds <>
Cc: Peter Zijlstra <>
Signed-off-by: Ingo Molnar <>
---
 arch/x86/mm/init.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index a22c2b95e513..4a2df8babd29 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -161,6 +161,13 @@ struct map_range {
 
 static int page_size_mask;
 
+static void enable_global_pages(void)
+{
+#ifndef CONFIG_KAISER
+	__supported_pte_mask |= _PAGE_GLOBAL;
+#endif
+}
+
 static void __init probe_page_size_mask(void)
 {
@@ -179,11 +186,11 @@ static void __init probe_page_size_mask(void)
 		cr4_set_bits_and_update_boot(X86_CR4_PSE);
 
 	/* Enable PGE if available */
+	__supported_pte_mask &= ~_PAGE_GLOBAL;
 	if (boot_cpu_has(X86_FEATURE_PGE)) {
 		cr4_set_bits_and_update_boot(X86_CR4_PGE);
-		__supported_pte_mask |= _PAGE_GLOBAL;
-	} else
-		__supported_pte_mask &= ~_PAGE_GLOBAL;
+		enable_global_pages();
+	}
 
 	/* Enable 1 GB linear kernel mappings if available: */
 	if (direct_gbpages && boot_cpu_has(X86_FEATURE_GBPAGES)) {
