Message-Id: <20210315165800.5948-9-madvenka@linux.microsoft.com>
Date: Mon, 15 Mar 2021 11:58:00 -0500
From: madvenka@...ux.microsoft.com
To: broonie@...nel.org, mark.rutland@....com, jpoimboe@...hat.com,
jthierry@...hat.com, catalin.marinas@....com, will@...nel.org,
linux-arm-kernel@...ts.infradead.org,
live-patching@...r.kernel.org, linux-kernel@...r.kernel.org,
madvenka@...ux.microsoft.com
Subject: [RFC PATCH v2 8/8] arm64: Implement arch_stack_walk_reliable()
From: "Madhavan T. Venkataraman" <madvenka@...ux.microsoft.com>
unwind_frame() already sets the reliable flag in the stack frame during
a stack walk to indicate whether the stack trace is reliable.

Implement arch_stack_walk_reliable() along the lines of
arch_stack_walk(), but abort the stack walk and return an error as soon
as the reliable flag becomes false for any reason.
Signed-off-by: Madhavan T. Venkataraman <madvenka@...ux.microsoft.com>
---
arch/arm64/Kconfig | 1 +
arch/arm64/kernel/stacktrace.c | 35 ++++++++++++++++++++++++++++++++++
2 files changed, 36 insertions(+)
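
Note (not part of the patch): a reliable-stacktrace consumer is expected
to drive this interface roughly as in the sketch below. The names
trace_cookie, record_entry() and save_reliable_trace() are made up for
illustration; the real generic consumer, stack_trace_save_tsk_reliable()
in kernel/stacktrace.c, follows the same basic pattern of collecting PCs
through the callback and treating a non-zero return as an unreliable
trace.

#include <linux/sched.h>
#include <linux/stacktrace.h>

/* Hypothetical cookie used by the example callback below. */
struct trace_cookie {
        unsigned long *entries;         /* PC values collected so far */
        unsigned int len;               /* number of entries filled in */
        unsigned int size;              /* capacity of the entries array */
};

/* Consume one PC; returning false makes the walk fail with -EINVAL. */
static bool record_entry(void *cookie, unsigned long pc)
{
        struct trace_cookie *c = cookie;

        if (c->len >= c->size)
                return false;
        c->entries[c->len++] = pc;
        return true;
}

/*
 * Hypothetical wrapper: returns the number of entries saved, or -EINVAL
 * if the stack of @task could not be unwound reliably. The caller must
 * guarantee that @task is not running anywhere for the duration of the
 * call (or pass current).
 */
static int save_reliable_trace(struct task_struct *task,
                               unsigned long *store, unsigned int size)
{
        struct trace_cookie c = {
                .entries = store,
                .size = size,
        };
        int ret;

        ret = arch_stack_walk_reliable(record_entry, &c, task);
        return ret ? ret : c.len;
}

A caller such as livepatch would typically treat an error return as
"cannot patch this task right now" and retry the task later.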
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1f212b47a48a..954f60c35b26 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -167,6 +167,7 @@ config ARM64
                 if $(cc-option,-fpatchable-function-entry=2)
         select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY \
                 if DYNAMIC_FTRACE_WITH_REGS
+        select HAVE_RELIABLE_STACKTRACE
         select HAVE_EFFICIENT_UNALIGNED_ACCESS
         select HAVE_FAST_GUP
         select HAVE_FTRACE_MCOUNT_RECORD
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 752b77f11c61..5d15c111f3aa 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -361,4 +361,39 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
         walk_stackframe(task, &frame, consume_entry, cookie);
 }
 
+/*
+ * Walk the stack like arch_stack_walk() but stop the walk as soon as
+ * some unreliability is detected in the stack.
+ */
+int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
+                             void *cookie, struct task_struct *task)
+{
+        struct stackframe frame;
+        int ret = 0;
+
+        if (task == current) {
+                start_backtrace(&frame,
+                                (unsigned long)__builtin_frame_address(0),
+                                (unsigned long)arch_stack_walk_reliable);
+        } else {
+                /*
+                 * The task must not be running anywhere for the duration of
+                 * arch_stack_walk_reliable(). The caller must guarantee
+                 * this.
+                 */
+                start_backtrace(&frame, thread_saved_fp(task),
+                                thread_saved_pc(task));
+        }
+
+        while (!ret) {
+                if (!frame.reliable)
+                        return -EINVAL;
+                if (!consume_entry(cookie, frame.pc))
+                        return -EINVAL;
+                ret = unwind_frame(task, &frame);
+        }
+
+        return ret == -ENOENT ? 0 : -EINVAL;
+}
+
 #endif
--
2.25.1