Message-Id: <20211014170552.a588e29947e1cd63cdf0c5b5@kernel.org>
Date: Thu, 14 Oct 2021 17:05:52 +0900
From: Masami Hiramatsu <mhiramat@...nel.org>
To: Will Deacon <will@...nel.org>
Cc: Steven Rostedt <rostedt@...dmis.org>,
"Naveen N . Rao" <naveen.n.rao@...ux.vnet.ibm.com>,
Ananth N Mavinakayanahalli <ananth@...ux.ibm.com>,
Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
Sven Schnelle <svens@...ux.ibm.com>,
Catalin Marinas <catalin.marinas@....com>,
Russell King <linux@...linux.org.uk>,
Nathan Chancellor <nathan@...nel.org>,
Nick Desaulniers <ndesaulniers@...gle.com>,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH 5/8] arm64: Recover kretprobe modified return address in
stacktrace
On Wed, 13 Oct 2021 09:14:16 +0100
Will Deacon <will@...nel.org> wrote:
> On Fri, Oct 08, 2021 at 09:28:58PM +0900, Masami Hiramatsu wrote:
> > Since the kretprobe replaces the function return address on the
> > stack with the kretprobe_trampoline, the stack unwinder shows
> > the trampoline instead of the correct return address.
> >
> > This patch checks whether the next return address is
> > __kretprobe_trampoline(), and if so, tries to find the correct
> > return address from the kretprobe instance list.
> >
> > With this fix, arm64 can now enable
> > CONFIG_ARCH_CORRECT_STACKTRACE_ON_KRETPROBE and pass the
> > kprobe self-tests.
> >
> > Signed-off-by: Masami Hiramatsu <mhiramat@...nel.org>
> > ---
> > arch/arm64/Kconfig | 1 +
> > arch/arm64/include/asm/stacktrace.h | 2 ++
> > arch/arm64/kernel/stacktrace.c | 3 +++
> > 3 files changed, 6 insertions(+)
> >
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index 5c7ae4c3954b..edde5171ffb2 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -11,6 +11,7 @@ config ARM64
> > select ACPI_PPTT if ACPI
> > select ARCH_HAS_DEBUG_WX
> > select ARCH_BINFMT_ELF_STATE
> > + select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
> > select ARCH_ENABLE_HUGEPAGE_MIGRATION if HUGETLB_PAGE && MIGRATION
> > select ARCH_ENABLE_MEMORY_HOTPLUG
> > select ARCH_ENABLE_MEMORY_HOTREMOVE
> > diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
> > index 8aebc00c1718..8f997a602651 100644
> > --- a/arch/arm64/include/asm/stacktrace.h
> > +++ b/arch/arm64/include/asm/stacktrace.h
> > @@ -9,6 +9,7 @@
> > #include <linux/sched.h>
> > #include <linux/sched/task_stack.h>
> > #include <linux/types.h>
> > +#include <linux/llist.h>
> >
> > #include <asm/memory.h>
> > #include <asm/ptrace.h>
> > @@ -59,6 +60,7 @@ struct stackframe {
> > #ifdef CONFIG_FUNCTION_GRAPH_TRACER
> > int graph;
> > #endif
> > + struct llist_node *kr_cur;
> > };
>
> Please update the comment above this structure to describe the new member
> you're adding.
OK, let me update that.
> If it's only relevant for kprobes, then let's define it
> conditionally too (based on CONFIG_KRETPROBES ?)
Ah, good point! Yes, it is only valid when CONFIG_KRETPROBES=y.
Thank you,
>
> > extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
> > diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> > index 8982a2b78acf..f1eef5745542 100644
> > --- a/arch/arm64/kernel/stacktrace.c
> > +++ b/arch/arm64/kernel/stacktrace.c
> > @@ -129,6 +129,8 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
> > frame->pc = ret_stack->ret;
> > }
> > #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
> > + if (is_kretprobe_trampoline(frame->pc))
> > + frame->pc = kretprobe_find_ret_addr(tsk, (void *)frame->fp, &frame->kr_cur);
> >
> > frame->pc = ptrauth_strip_insn_pac(frame->pc);
> >
> > @@ -224,6 +226,7 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
> > {
> > struct stackframe frame;
> >
> > + memset(&frame, 0, sizeof(frame));
>
> Why do we need this?
>
> Will
--
Masami Hiramatsu <mhiramat@...nel.org>