Date:   Fri, 22 Oct 2021 19:11:54 +0100
From:   Mark Rutland <mark.rutland@....com>
To:     madvenka@...ux.microsoft.com
Cc:     broonie@...nel.org, jpoimboe@...hat.com, ardb@...nel.org,
        nobuta.keiya@...itsu.com, sjitindarsingh@...il.com,
        catalin.marinas@....com, will@...nel.org, jmorris@...ei.org,
        linux-arm-kernel@...ts.infradead.org,
        live-patching@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v10 02/11] arm64: Make perf_callchain_kernel() use
 arch_stack_walk()

On Thu, Oct 14, 2021 at 09:58:38PM -0500, madvenka@...ux.microsoft.com wrote:
> From: "Madhavan T. Venkataraman" <madvenka@...ux.microsoft.com>
> 
> Currently, perf_callchain_kernel() in ARM64 code walks the stack using
> start_backtrace() and walk_stackframe(). Make it use arch_stack_walk()
> instead. This makes maintenance easier.
> 
> Signed-off-by: Madhavan T. Venkataraman <madvenka@...ux.microsoft.com>

This looks good to me; bailing out when perf_callchain_store() can't
accept any more entries absolutely makes sense.
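
For anyone reading along, the reason the "== 0" conversion does the right
thing: arch_stack_walk()'s consume_entry callback returns bool, the
unwinder stops as soon as the callback returns false, and
perf_callchain_store() returns 0 only while the entry buffer still has
room. A simplified sketch of the regs-based path (walk_stackframe()
folded into arch_stack_walk() purely for illustration; not the literal
kernel code):

| void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
| 		     struct task_struct *task, struct pt_regs *regs)
| {
| 	struct stackframe frame;
|
| 	start_backtrace(&frame, regs->regs[29], regs->pc);
|
| 	while (1) {
| 		/* Stop as soon as the callback refuses the entry, e.g.
| 		 * because perf_callchain_store() ran out of room. */
| 		if (!consume_entry(cookie, frame.pc))
| 			break;
| 		if (unwind_frame(task, &frame) < 0)
| 			break;
| 	}
| }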

I gave this a spin with:

| #  perf record -g -c1 ls
| #  perf report

... and the recorded callchains look sane.

Reviewed-by: Mark Rutland <mark.rutland@....com>
Tested-by: Mark Rutland <mark.rutland@....com>

As mentioned on patch 1, I'd like to get this rebased atop Peter's
untangling of ARCH_STACKWALK from STACKTRACE.

Thanks,
Mark.

> ---
>  arch/arm64/kernel/perf_callchain.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
> index 4a72c2727309..f173c448e852 100644
> --- a/arch/arm64/kernel/perf_callchain.c
> +++ b/arch/arm64/kernel/perf_callchain.c
> @@ -140,22 +140,18 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
>  static bool callchain_trace(void *data, unsigned long pc)
>  {
>  	struct perf_callchain_entry_ctx *entry = data;
> -	perf_callchain_store(entry, pc);
> -	return true;
> +	return perf_callchain_store(entry, pc) == 0;
>  }
>  
>  void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
>  			   struct pt_regs *regs)
>  {
> -	struct stackframe frame;
> -
>  	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
>  		/* We don't support guest os callchain now */
>  		return;
>  	}
>  
> -	start_backtrace(&frame, regs->regs[29], regs->pc);
> -	walk_stackframe(current, &frame, callchain_trace, entry);
> +	arch_stack_walk(callchain_trace, entry, current, regs);
>  }
>  
>  unsigned long perf_instruction_pointer(struct pt_regs *regs)
> -- 
> 2.25.1
> 
