Message-Id: <20170501212601.265075022@linuxfoundation.org>
Date: Mon, 1 May 2017 14:27:43 -0700
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Paul Menzel <pmenzel@...gen.mpg.de>,
Josh Poimboeuf <jpoimboe@...hat.com>,
"Steven Rostedt (VMware)" <rostedt@...dmis.org>,
"Rafael J . Wysocki" <rjw@...ysocki.net>,
linux-acpi@...r.kernel.org, Borislav Petkov <bp@...en8.de>,
Len Brown <lenb@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH 4.4 43/43] ftrace/x86: Fix triple fault with graph tracing and suspend-to-ram
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Josh Poimboeuf <jpoimboe@...hat.com>
commit 34a477e5297cbaa6ecc6e17c042a866e1cbe80d6 upstream.
On x86-32, with CONFIG_FIRMWARE and multiple CPUs, if you enable function
graph tracing and then suspend to RAM, it will triple fault and reboot when
it resumes.
The first fault happens when booting a secondary CPU:
  startup_32_smp()
    load_ucode_ap()
      prepare_ftrace_return()
        ftrace_graph_is_dead()
          (accesses 'kill_ftrace_graph')
The early head_32.S code calls into load_ucode_ap(), which has an
ftrace hook, so it calls prepare_ftrace_return(), which calls
ftrace_graph_is_dead(), which tries to access the global
'kill_ftrace_graph' variable with a virtual address, causing a fault
because the CPU is still in real mode.
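[Editor's illustration, not part of the patch: before paging is enabled the CPU
treats every address as physical, while kernel globals such as
'kill_ftrace_graph' are linked at virtual addresses above PAGE_OFFSET.
Pre-paging code like head_32.S copes with this by subtracting __PAGE_OFFSET
(its pa() macro) before touching memory; ordinary compiled C such as
prepare_ftrace_return() does no such translation. A rough standalone sketch of
that idiom, assuming the default 0xC0000000 split; the addresses and
example_* names are invented:

#include <stdio.h>
#include <inttypes.h>

/* Default x86-32 split: kernel virtual addresses start at PAGE_OFFSET. */
#define EXAMPLE_PAGE_OFFSET 0xC0000000u

/* Mirrors the pa() conversion that pre-paging code such as head_32.S uses. */
#define example_pa(x) ((uint32_t)(x) - EXAMPLE_PAGE_OFFSET)

int main(void)
{
	/* Invented link-time (virtual) address of a kernel global. */
	uint32_t linked_addr = 0xC1a2b3c4u;

	/*
	 * With paging still off, dereferencing linked_addr as-is would hit
	 * physical address 0xC1A2B3C4, which typically is not valid RAM, so
	 * the access faults.  Early boot code avoids this by translating to
	 * a physical address first; plain C like prepare_ftrace_return()
	 * performs no such translation, hence the crash described above.
	 */
	printf("virtual 0x%08" PRIx32 " -> physical 0x%08" PRIx32 "\n",
	       linked_addr, example_pa(linked_addr));
	return 0;
}]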
The fix is to add a check in prepare_ftrace_return() to make sure it's
running in protected mode before continuing. The check makes sure the
stack pointer is a virtual kernel address. It's a bit of a hack, but
it's not very intrusive and it works well enough.
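[Editor's illustration, not part of the patch: the check relies on the default
x86-32 split, where kernel virtual addresses start at PAGE_OFFSET (0xC0000000),
so viewed as a signed 32-bit value they are negative, while an early
real-mode/physical stack address is a small non-negative value. A hedged,
standalone sketch of that sign-bit test; the addresses and helper name are
invented, and int32_t stands in for the 32-bit 'long' the patch casts to:

#include <stdio.h>
#include <inttypes.h>

/*
 * Same idea as the patch's (long)__builtin_frame_address(0) >= 0 test:
 * on x86-32 'long' is 32 bits, so any address at or above the default
 * PAGE_OFFSET of 0xC0000000 has the top bit set and looks negative.
 */
static int looks_like_kernel_virtual(uint32_t addr)
{
	return (int32_t)addr < 0;
}

int main(void)
{
	uint32_t virt_stack = 0xC15f7e38u;  /* invented kernel stack address */
	uint32_t real_stack = 0x00009e00u;  /* invented early real-mode stack */

	printf("0x%08" PRIx32 " -> %s\n", virt_stack,
	       looks_like_kernel_virtual(virt_stack) ?
	       "virtual, safe to continue" : "not virtual, bail out");
	printf("0x%08" PRIx32 " -> %s\n", real_stack,
	       looks_like_kernel_virtual(real_stack) ?
	       "virtual, safe to continue" : "not virtual, bail out");
	return 0;
}]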
For reference, here are a few other (more difficult) ways this could
have potentially been fixed:
- Move startup_32_smp()'s call to load_ucode_ap() down to *after* paging
is enabled. (No idea what that would break.)
- Track down load_ucode_ap()'s entire callee tree and mark all the
functions 'notrace'. (Probably not realistic.)
- Pause graph tracing in ftrace_suspend_notifier_call() or bringup_cpu()
or __cpu_up(), and ensure that the pause facility can be queried from
real mode.
Reported-by: Paul Menzel <pmenzel@...gen.mpg.de>
Signed-off-by: Josh Poimboeuf <jpoimboe@...hat.com>
Tested-by: Paul Menzel <pmenzel@...gen.mpg.de>
Reviewed-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
Cc: "Rafael J . Wysocki" <rjw@...ysocki.net>
Cc: linux-acpi@...r.kernel.org
Cc: Borislav Petkov <bp@...en8.de>
Cc: Len Brown <lenb@...nel.org>
Link: http://lkml.kernel.org/r/5c1272269a580660703ed2eccf44308e790c7a98.1492123841.git.jpoimboe@redhat.com
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/x86/kernel/ftrace.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -977,6 +977,18 @@ void prepare_ftrace_return(unsigned long
 	unsigned long return_hooker = (unsigned long)
 				&return_to_handler;
 
+	/*
+	 * When resuming from suspend-to-ram, this function can be indirectly
+	 * called from early CPU startup code while the CPU is in real mode,
+	 * which would fail miserably.  Make sure the stack pointer is a
+	 * virtual address.
+	 *
+	 * This check isn't as accurate as virt_addr_valid(), but it should be
+	 * good enough for this purpose, and it's fast.
+	 */
+	if (unlikely((long)__builtin_frame_address(0) >= 0))
+		return;
+
 	if (unlikely(ftrace_graph_is_dead()))
 		return;
 