Message-ID: <20240913133629.GV4723@noisy.programming.kicks-ass.net>
Date: Fri, 13 Sep 2024 15:36:29 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Josh Poimboeuf <jpoimboe@...nel.org>
Cc: Namhyung Kim <namhyung@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
linux-kernel@...r.kernel.org, x86@...nel.org,
Indu Bhagat <indu.bhagat@...cle.com>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>, Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
linux-perf-users@...r.kernel.org, Mark Brown <broonie@...nel.org>,
linux-toolchains@...r.kernel.org
Subject: Re: [PATCH RFC 04/10] perf: Introduce deferred user callchains
On Fri, Sep 13, 2024 at 06:08:34AM -0700, Josh Poimboeuf wrote:
> On Mon, Nov 20, 2023 at 03:03:34PM +0100, Peter Zijlstra wrote:
> > On Wed, Nov 15, 2023 at 08:13:31AM -0800, Namhyung Kim wrote:
> >
> > > ---8<---
> > > diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
> > > index 39c6a250dd1b..a3765ff59798 100644
> > > --- a/include/uapi/linux/perf_event.h
> > > +++ b/include/uapi/linux/perf_event.h
> > > @@ -456,7 +456,8 @@ struct perf_event_attr {
> > > inherit_thread : 1, /* children only inherit if cloned with CLONE_THREAD */
> > > remove_on_exec : 1, /* event is removed from task on exec */
> > > sigtrap : 1, /* send synchronous SIGTRAP on event */
> > > - __reserved_1 : 26;
> > > + defer_callchain: 1, /* generate DEFERRED_CALLCHAINS records for userspace */
> > > + __reserved_1 : 25;
> > >
> > > union {
> > > __u32 wakeup_events; /* wakeup every n events */
> > > @@ -1207,6 +1208,20 @@ enum perf_event_type {
> > > */
> > > PERF_RECORD_AUX_OUTPUT_HW_ID = 21,
> > >
> > > + /*
> > > + * Deferred user stack callchains (for SFrame). Previous samples would
> >
> > Possibly also useful for ShadowStack-based unwinders. And, since it
> > can save work whenever multiple consecutive samples hit the same
> > kernel section, possibly useful for everything.
>
> [ necroing old thread as I'm finally working on a v2 ]
>
> Peter, can you elaborate? What did you mean by "same kernel section"?
>
> Like if there's a duplicate kernel callchain? Or something else?
Yeah, multiple samples hitting the same syscall invocation will, by
necessity, have the same user callchain.