Date: Sat, 11 Nov 2023 10:54:30 -0800
From: Josh Poimboeuf <jpoimboe@...nel.org>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
linux-kernel@...r.kernel.org, x86@...nel.org,
Indu Bhagat <indu.bhagat@...cle.com>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>, Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
linux-perf-users@...r.kernel.org, Mark Brown <broonie@...nel.org>,
linux-toolchains@...r.kernel.org
Subject: Re: [PATCH RFC 04/10] perf: Introduce deferred user callchains

On Sat, Nov 11, 2023 at 10:49:10AM -0800, Josh Poimboeuf wrote:
> On Fri, Nov 10, 2023 at 10:57:58PM -0800, Namhyung Kim wrote:
> > > +static void perf_pending_task_unwind(struct perf_event *event)
> > > +{
> > > +	struct pt_regs *regs = task_pt_regs(current);
> > > +	struct perf_output_handle handle;
> > > +	struct perf_event_header header;
> > > +	struct perf_sample_data data;
> > > +	struct perf_callchain_entry *callchain;
> > > +
> > > +	callchain = kmalloc(sizeof(struct perf_callchain_entry) +
> > > +			    (sizeof(__u64) * event->attr.sample_max_stack) +
> > > +			    (sizeof(__u64) * 1) /* one context */,
> > > +			    GFP_KERNEL);
> >
> > Any chance it can reuse get_perf_callchain() instead of
> > allocating the callchains every time?
>
> I don't think so, because if it gets preempted, the new task might also
> need to do an unwind. But there's only one task-context callchain per
> CPU.
BTW it's not just preemption, this code can also block when the unwinder
tries to copy from user space. So disabling preemption isn't an option.
--
Josh