Message-ID: <tip-91ed140d6c1e168b11bbbddac4f6066f40a0c6b5@git.kernel.org>
Date: Wed, 13 Apr 2016 04:47:38 -0700
From: tip-bot for Borislav Petkov <tipbot@...or.com>
To: linux-tip-commits@...r.kernel.org
Cc: torvalds@...ux-foundation.org, tglx@...utronix.de, bp@...e.de,
hpa@...or.com, mingo@...nel.org, thomas.lendacky@....com,
linux-kernel@...r.kernel.org, brgerst@...il.com,
peterz@...radead.org, mika.penttila@...tfour.com
Subject: [tip:x86/asm] x86/asm: Make sure verify_cpu() has a good stack
Commit-ID: 91ed140d6c1e168b11bbbddac4f6066f40a0c6b5
Gitweb: http://git.kernel.org/tip/91ed140d6c1e168b11bbbddac4f6066f40a0c6b5
Author: Borislav Petkov <bp@...e.de>
AuthorDate: Thu, 31 Mar 2016 16:21:02 +0200
Committer: Ingo Molnar <mingo@...nel.org>
CommitDate: Wed, 13 Apr 2016 11:52:19 +0200

x86/asm: Make sure verify_cpu() has a good stack

04633df0c43d ("x86/cpu: Call verify_cpu() after having entered long mode too")
added the call to verify_cpu() for sanitizing CPU configuration.

verify_cpu() makes (minimal) use of the stack, and we can land in startup_64()
directly from a 64-bit bootloader, in which case we want to be running on our
own, known-good stack. Set one up before the call.

APs don't need this as the trampoline sets up a stack for them.

Reported-by: Tom Lendacky <thomas.lendacky@....com>
Signed-off-by: Borislav Petkov <bp@...e.de>
Cc: Brian Gerst <brgerst@...il.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mika Penttilä <mika.penttila@...tfour.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/1459434062-31055-1-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 arch/x86/kernel/head_64.S         | 8 ++++++++
 include/asm-generic/vmlinux.lds.h | 4 +++-
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 3de91a7..5df831e 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -65,6 +65,14 @@ startup_64:
 	 * tables and then reload them.
 	 */
 
+	/*
+	 * Setup stack for verify_cpu(). "-8" because stack_start is defined
+	 * this way, see below. Our best guess is a NULL ptr for stack
+	 * termination heuristics and we don't want to break anything which
+	 * might depend on it (kgdb, ...).
+	 */
+	leaq	(__end_init_task - 8)(%rip), %rsp
+
 	/* Sanitize CPU configuration */
 	call verify_cpu
 
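
[ For reference: the "-8" mirrors how stack_start is defined elsewhere in
  head_64.S of this vintage; a rough sketch of that definition, shown here
  for context only and not part of this patch:

	GLOBAL(stack_start)
	.quad	init_thread_union + THREAD_SIZE - 8

  i.e. the pre-existing boot stack pointer likewise points one quadword
  below the top of the init task's stack. ]
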
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 339125b..6a67ab9 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -245,7 +245,9 @@
 
 #define INIT_TASK_DATA(align)						\
 	. = ALIGN(align);						\
-	*(.data..init_task)
+	VMLINUX_SYMBOL(__start_init_task) = .;				\
+	*(.data..init_task)						\
+	VMLINUX_SYMBOL(__end_init_task) = .;
 
 /*
  * Read only Data
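
[ For reference: on configurations without a symbol prefix (such as x86),
  VMLINUX_SYMBOL(sym) expands to plain sym, so with this change
  INIT_TASK_DATA(align) expands roughly to the following linker-script
  fragment (a sketch for illustration, not part of this patch):

	. = ALIGN(align);
	__start_init_task = .;
	*(.data..init_task)
	__end_init_task = .;

  which is what lets head_64.S above reference __end_init_task directly. ]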