Message-ID: <tip-23b2a4ddebdd17fad265b4bb77256c2e4ec37dee@git.kernel.org>
Date: Thu, 23 Mar 2017 02:14:40 -0700
From: tip-bot for Andy Lutomirski <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: peterz@...radead.org, linux-kernel@...r.kernel.org, hpa@...or.com,
boris.ostrovsky@...cle.com, bp@...en8.de, jpoimboe@...hat.com,
torvalds@...ux-foundation.org, matt@...eblueprint.co.uk,
luto@...nel.org, brgerst@...il.com, tglx@...utronix.de,
thgarnie@...gle.com, mingo@...nel.org, dvlasenk@...hat.com,
jgross@...e.com, ard.biesheuvel@...aro.org
Subject: [tip:x86/mm] x86/boot/32: Defer resyncing initial_page_table until
per-cpu is set up
Commit-ID: 23b2a4ddebdd17fad265b4bb77256c2e4ec37dee
Gitweb: http://git.kernel.org/tip/23b2a4ddebdd17fad265b4bb77256c2e4ec37dee
Author: Andy Lutomirski <luto@...nel.org>
AuthorDate: Wed, 22 Mar 2017 14:32:32 -0700
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Thu, 23 Mar 2017 08:25:08 +0100
x86/boot/32: Defer resyncing initial_page_table until per-cpu is set up

The x86 smpboot trampoline expects initial_page_table to have the
GDT mapped.  If the GDT ends up in a virtually mapped per-cpu page,
then it won't be in the page tables at all until per-cpu areas are
set up.  The result will be a triple fault the first time that the
CPU attempts to access the GDT after LGDT loads the per-cpu GDT.

This appears to be an old bug, but somehow the GDT fixmap rework
is triggering it.  This seems to have something to do with the
memory layout.
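
For context, clone_pgd_range() does nothing more than copy top-level
page-directory entries; its definition in arch/x86/include/asm/pgtable.h
boils down to the memcpy() sketched below.  Because only PGD slots are
copied, later PMD/PTE updates propagate to both directories automatically,
but a brand-new top-level mapping (such as a freshly vmapped per-cpu
area) does not, hence the need to redo the copy after per-cpu setup:

	/* Essence of the helper; see arch/x86/include/asm/pgtable.h. */
	static inline void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
	{
		memcpy(dst, src, count * sizeof(pgd_t));
	}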
Signed-off-by: Andy Lutomirski <luto@...nel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@...aro.org>
Cc: Boris Ostrovsky <boris.ostrovsky@...cle.com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Brian Gerst <brgerst@...il.com>
Cc: Denys Vlasenko <dvlasenk@...hat.com>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Juergen Gross <jgross@...e.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Matt Fleming <matt@...eblueprint.co.uk>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Garnier <thgarnie@...gle.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: linux-efi@...r.kernel.org
Link: http://lkml.kernel.org/r/a553264a5972c6a86f9b5caac237470a0c74a720.1490218061.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/x86/kernel/setup.c        | 15 ---------------
 arch/x86/kernel/setup_percpu.c | 21 +++++++++++++++++++++
 2 files changed, 21 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 4bf0c89..56b1177 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1226,21 +1226,6 @@ void __init setup_arch(char **cmdline_p)
 
 	kasan_init();
 
-#ifdef CONFIG_X86_32
-	/* sync back kernel address range */
-	clone_pgd_range(initial_page_table + KERNEL_PGD_BOUNDARY,
-			swapper_pg_dir + KERNEL_PGD_BOUNDARY,
-			KERNEL_PGD_PTRS);
-
-	/*
-	 * sync back low identity map too. It is used for example
-	 * in the 32-bit EFI stub.
-	 */
-	clone_pgd_range(initial_page_table,
-			swapper_pg_dir + KERNEL_PGD_BOUNDARY,
-			min(KERNEL_PGD_PTRS, KERNEL_PGD_BOUNDARY));
-#endif
-
 	tboot_probe();
 
 	map_vsyscall();
diff --git a/arch/x86/kernel/setup_percpu.c b/arch/x86/kernel/setup_percpu.c
index 11338b0..bb1e8cc 100644
--- a/arch/x86/kernel/setup_percpu.c
+++ b/arch/x86/kernel/setup_percpu.c
@@ -288,4 +288,25 @@ void __init setup_per_cpu_areas(void)
 
 	/* Setup cpu initialized, callin, callout masks */
 	setup_cpu_local_masks();
+
+#ifdef CONFIG_X86_32
+	/*
+	 * Sync back kernel address range.  We want to make sure that
+	 * all kernel mappings, including percpu mappings, are available
+	 * in the smpboot asm.  We can't reliably pick up percpu
+	 * mappings using vmalloc_fault(), because exception dispatch
+	 * needs percpu data.
+	 */
+	clone_pgd_range(initial_page_table + KERNEL_PGD_BOUNDARY,
+			swapper_pg_dir + KERNEL_PGD_BOUNDARY,
+			KERNEL_PGD_PTRS);
+
+	/*
+	 * sync back low identity map too. It is used for example
+	 * in the 32-bit EFI stub.
+	 */
+	clone_pgd_range(initial_page_table,
+			swapper_pg_dir + KERNEL_PGD_BOUNDARY,
+			min(KERNEL_PGD_PTRS, KERNEL_PGD_BOUNDARY));
+#endif
 }
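
To make the two calls concrete, here is a self-contained user-space
sketch of what they accomplish on 32-bit non-PAE.  The pgd_t stand-in,
the MIN() macro, and the constants 1024/768 are illustrative assumptions
for this sketch, not the kernel's definitions:

    /*
     * Illustration: copy the kernel half of the live page directory
     * into the boot page directory, then alias it into the low
     * identity-map slots.
     */
    #include <stdio.h>
    #include <string.h>

    typedef unsigned long pgd_t;             /* stand-in for the kernel type */
    #define PTRS_PER_PGD         1024        /* 4 GiB / 4 MiB per PGD entry  */
    #define KERNEL_PGD_BOUNDARY  768         /* pgd_index(0xC0000000)        */
    #define KERNEL_PGD_PTRS      (PTRS_PER_PGD - KERNEL_PGD_BOUNDARY)
    #define MIN(a, b)            ((a) < (b) ? (a) : (b))

    static void clone_pgd_range(pgd_t *dst, pgd_t *src, int count)
    {
            memcpy(dst, src, count * sizeof(pgd_t)); /* same as the kernel helper */
    }

    int main(void)
    {
            static pgd_t swapper_pg_dir[PTRS_PER_PGD];     /* "live" tables     */
            static pgd_t initial_page_table[PTRS_PER_PGD]; /* trampoline tables */

            /* Pretend setup_per_cpu_areas() just mapped the per-cpu GDT here. */
            swapper_pg_dir[KERNEL_PGD_BOUNDARY] = 0x1234;

            /* Sync the kernel address range into the boot tables... */
            clone_pgd_range(initial_page_table + KERNEL_PGD_BOUNDARY,
                            swapper_pg_dir + KERNEL_PGD_BOUNDARY,
                            KERNEL_PGD_PTRS);
            /* ...and alias it into the low slots used by the 32-bit EFI stub. */
            clone_pgd_range(initial_page_table,
                            swapper_pg_dir + KERNEL_PGD_BOUNDARY,
                            MIN(KERNEL_PGD_PTRS, KERNEL_PGD_BOUNDARY));

            printf("low alias: %#lx  kernel slot: %#lx\n",
                   initial_page_table[0],
                   initial_page_table[KERNEL_PGD_BOUNDARY]);
            return 0;
    }

The ordering point of the patch is visible here: the copy only captures
whatever swapper_pg_dir contains at the moment it runs, so performing it
before setup_per_cpu_areas() would miss a virtually mapped per-cpu GDT,
and the smpboot trampoline (which runs on initial_page_table) would
triple-fault at LGDT.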