Message-ID: <Z9SLL50yuiLOGGYI@x1>
Date: Fri, 14 Mar 2025 17:01:51 -0300
From: Arnaldo Carvalho de Melo <acme@...nel.org>
To: Ian Rogers <irogers@...gle.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Kan Liang <kan.liang@...ux.intel.com>,
James Clark <james.clark@...aro.org>,
Yicong Yang <yangyicong@...ilicon.com>,
Howard Chu <howardchu95@...il.com>, Andi Kleen <ak@...ux.intel.com>,
Michael Petlan <mpetlan@...hat.com>,
Anne Macedo <retpolanne@...teo.net>,
"Dr. David Alan Gilbert" <linux@...blig.org>,
Dmitry Vyukov <dvyukov@...gle.com>,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 1/2] perf machine: Factor creating a "live" machine
out of dwarf-unwind
On Fri, Mar 14, 2025 at 05:00:58PM -0300, Arnaldo Carvalho de Melo wrote:
> On Fri, Mar 14, 2025 at 02:18:49PM -0300, Arnaldo Carvalho de Melo wrote:
> > On Wed, Mar 12, 2025 at 10:29:51PM -0700, Ian Rogers wrote:
> > > Factor out for use in places other than the dwarf unwinding tests for
> > > libunwind.
> >
> > Testing with another patchset being reviewed/tested, seems to work, if
> > it showed the line number would be even better!
>
> But it gets the lines, at least in this second attempt, after applying
> Namhyung's fix for the previous problem (int16_t):
Never mind, this time I built with DEBUG=1, so DWARF debug info, probably.
- Arnaldo
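(For reference, a minimal sketch of how the two backtraces below differ: the
DEBUG=1 build carries DWARF debug info, so dump_stack() can resolve frames to
file:line instead of raw offsets. The commands assume an in-tree perf build;
the example address is taken from the second trace below and is illustrative
only.)

```shell
# Rebuild perf with DWARF debug info so backtraces show file:line
# (DEBUG=1 is the tools/perf make knob that enables -g and disables
# optimizations):
make -C tools/perf DEBUG=1

# Without DEBUG=1, a raw frame like "cmd_trace perf[491bc1]" can still
# be resolved manually against the binary with binutils addr2line:
addr2line -f -e tools/perf/perf 0x491bc1
```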
> root@...ber:~# perf trace -e landlock_add_rule perf test -w landlock
> perf: Segmentation fault
> #0 0x6698d0 in dump_stack debug.c:355
> #1 0x66994c in sighandler_dump_stack debug.c:367
> #2 0x7f784be95fd0 in __restore_rt libc.so.6[40fd0]
> #3 0x4d0e56 in trace__find_usable_bpf_prog_entry builtin-trace.c:3882
> #4 0x4cf3de in trace__init_syscalls_bpf_prog_array_maps builtin-trace.c:4040
> #5 0x4bf626 in trace__run builtin-trace.c:4477
> #6 0x4bb7a9 in cmd_trace builtin-trace.c:5741
> #7 0x4d873f in run_builtin perf.c:351
> #8 0x4d7df3 in handle_internal_command perf.c:404
> #9 0x4d860f in run_argv perf.c:451
> #10 0x4d7a4f in main perf.c:558
> #11 0x7f784be7f088 in __libc_start_call_main libc.so.6[2a088]
> #12 0x7f784be7f14b in __libc_start_main@@GLIBC_2.34 libc.so.6[2a14b]
> #13 0x410ff5 in _start perf[410ff5]
> Segmentation fault (core dumped)
> root@...ber:~#
>
> > I'll continue working on that other case with this applied just before
> > that series and finally will give my Tested-by.
> >
> > - Arnaldo
> >
> > root@...ber:~# perf trace -e landlock_add_rule perf test -w landlock
> > perf: Segmentation fault
> > #0 0x5be81d in dump_stack perf[5be81d]
> > #1 0x5be879 in sighandler_dump_stack perf[5be879]
> > #2 0x7f313d24efd0 in __restore_rt libc.so.6[40fd0]
> > #3 0x491bc1 in cmd_trace perf[491bc1]
> > #4 0x497090 in run_builtin perf.c:0
> > #5 0x4973ab in handle_internal_command perf.c:0
> > #6 0x413483 in main perf[413483]
> > #7 0x7f313d238088 in __libc_start_call_main libc.so.6[2a088]
> > #8 0x7f313d23814b in __libc_start_main@@GLIBC_2.34 libc.so.6[2a14b]
> > #9 0x413ad5 in _start perf[413ad5]
> > Segmentation fault (core dumped)
> > root@...ber:~#