Message-ID: <158954160454.17951.15828011095215471629.tip-bot2@tip-bot2>
Date: Fri, 15 May 2020 11:20:04 -0000
From: "tip-bot2 for Borislav Petkov" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Sergei Trofimovich <slyfox@...too.org>,
Borislav Petkov <bp@...e.de>,
Kalle Valo <kvalo@...eaurora.org>, <stable@...r.kernel.org>,
x86 <x86@...nel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: [tip: x86/urgent] x86: Fix early boot crash on gcc-10, third try
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: a9a3ed1eff3601b63aea4fb462d8b3b92c7c1e7e
Gitweb: https://git.kernel.org/tip/a9a3ed1eff3601b63aea4fb462d8b3b92c7c1e7e
Author: Borislav Petkov <bp@...e.de>
AuthorDate: Wed, 22 Apr 2020 18:11:30 +02:00
Committer: Borislav Petkov <bp@...e.de>
CommitterDate: Fri, 15 May 2020 11:48:01 +02:00
x86: Fix early boot crash on gcc-10, third try
... or the odyssey of trying to disable the stack protector for the
function which generates the stack canary value.
The whole story started with Sergei reporting a boot crash with a kernel
built with gcc-10:
Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5-00235-gfffb08b37df9 #139
Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M-D3H, BIOS F12 11/14/2013
Call Trace:
dump_stack
panic
? start_secondary
__stack_chk_fail
start_secondary
secondary_startup_64
---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
This happens because gcc-10 tail-call optimizes the last function call
in start_secondary() - cpu_startup_entry() - and thus emits a stack
canary check which fails because the canary value changes after the
boot_init_stack_canary() call.
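
To make the failure mode concrete, here is a simplified sketch (not the
actual kernel code or generated assembly) of what the stack protector
ends up checking:

  static void notrace start_secondary(void *unused)
  {
          /* On entry, the canary for this frame is copied from the guard. */

          /* ... early AP setup elided ... */

          /* Writes a new random value into the per-CPU stack guard. */
          boot_init_stack_canary();

          /* ... more setup elided ... */

          /*
           * gcc-10 turns this last call into a tail call, so the stack
           * protector check runs before the jump. It compares the canary
           * saved at function entry against the guard which
           * boot_init_stack_canary() changed above, the check fails and
           * __stack_chk_fail() panics.
           */
          cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
  }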
To fix that, the initial attempt was to mark the one function which
generates the stack canary with:
__attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)
however, using the optimize attribute doesn't work cumulatively:
the attribute does not add to, but rather replaces, the previously
supplied optimization options - roughly all -fxxx options.
The key one among them is -fno-omit-frame-pointer; losing it means the
resulting function has no frame pointer, which the kernel needs.
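
Spelled out, that first, rejected attempt would have looked roughly like
this (a sketch, not the actual patch):

  /*
   * First attempt (rejected): the optimize attribute does not append to
   * the command-line options but replaces them, so options such as
   * -fno-omit-frame-pointer are silently dropped for this function.
   */
  static void notrace
  __attribute__((optimize("-fno-stack-protector")))
  start_secondary(void *unused)
  {
          /* ... */
  }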
The next attempt to prevent compilers from tail-call optimizing
the last function call, cpu_startup_entry(), shy of carving out
start_secondary() into a separate compilation unit and building it with
-fno-stack-protector, was to add an empty asm("").
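
That second attempt amounted to roughly the following (again a sketch):

          boot_init_stack_canary();

          /* ... */

          cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);

          /*
           * Second attempt: an empty asm statement after the last call,
           * meant to keep the compiler from turning that call into a
           * tail call.
           */
          asm("");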
This current solution was short and sweet and, reportedly, is supported
by both compilers, but we didn't get very far this time: future (LTO?)
optimization passes could potentially eliminate this, which leads us
to the third attempt: having an actual memory barrier there which the
compiler cannot ignore or move around etc.
That should hold for a long time, but hey we said that about the other
two solutions too so...
Reported-by: Sergei Trofimovich <slyfox@...too.org>
Signed-off-by: Borislav Petkov <bp@...e.de>
Tested-by: Kalle Valo <kvalo@...eaurora.org>
Cc: <stable@...r.kernel.org>
Link: https://lkml.kernel.org/r/20200314164451.346497-1-slyfox@gentoo.org
---
arch/x86/include/asm/stackprotector.h | 7 ++++++-
arch/x86/kernel/smpboot.c | 8 ++++++++
arch/x86/xen/smp_pv.c | 1 +
include/linux/compiler.h | 6 ++++++
init/main.c | 2 ++
5 files changed, 23 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h
index 91e29b6..9804a79 100644
--- a/arch/x86/include/asm/stackprotector.h
+++ b/arch/x86/include/asm/stackprotector.h
@@ -55,8 +55,13 @@
/*
* Initialize the stackprotector canary value.
*
- * NOTE: this must only be called from functions that never return,
+ * NOTE: this must only be called from functions that never return
* and it must always be inlined.
+ *
+ * In addition, it should be called from a compilation unit for which
+ * stack protector is disabled. Alternatively, the caller should not end
+ * with a function call which gets tail-call optimized as that would
+ * lead to checking a modified canary value.
*/
static __always_inline void boot_init_stack_canary(void)
{
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 8c89e4d..2f24c33 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -266,6 +266,14 @@ static void notrace start_secondary(void *unused)
wmb();
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+
+ /*
+ * Prevent tail call to cpu_startup_entry() because the stack protector
+ * guard has been changed a couple of function calls up, in
+ * boot_init_stack_canary() and must not be checked before tail calling
+ * another function.
+ */
+ prevent_tail_call_optimization();
}
/**
diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
index 8fb8a50..f2adb63 100644
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -93,6 +93,7 @@ asmlinkage __visible void cpu_bringup_and_idle(void)
cpu_bringup();
boot_init_stack_canary();
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+ prevent_tail_call_optimization();
}
void xen_smp_intr_free_pv(unsigned int cpu)
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 034b0a6..448c91b 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -356,4 +356,10 @@ static inline void *offset_to_ptr(const int *off)
/* &a[0] degrades to a pointer: a different type from an array */
#define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
+/*
+ * This is needed in functions which generate the stack canary, see
+ * arch/x86/kernel/smpboot.c::start_secondary() for an example.
+ */
+#define prevent_tail_call_optimization() mb()
+
#endif /* __LINUX_COMPILER_H */
diff --git a/init/main.c b/init/main.c
index 1a5da2c..ad3812b 100644
--- a/init/main.c
+++ b/init/main.c
@@ -1036,6 +1036,8 @@ asmlinkage __visible void __init start_kernel(void)
/* Do the rest non-__init'ed, we're now alive */
arch_call_rest_init();
+
+ prevent_tail_call_optimization();
}
/* Call all constructor functions linked into the kernel. */