Message-ID: <20180913093754.GV24124@hirez.programming.kicks-ass.net>
Date: Thu, 13 Sep 2018 11:37:54 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Jiri Olsa <jolsa@...hat.com>
Cc:	Jiri Olsa <jolsa@...nel.org>,
	Arnaldo Carvalho de Melo <acme@...nel.org>,
	lkml <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...nel.org>,
	Namhyung Kim <namhyung@...nel.org>,
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
	Andi Kleen <andi@...stfloor.org>
Subject: Re: [PATCH] perf: Prevent recursion in ring buffer

On Thu, Sep 13, 2018 at 09:46:07AM +0200, Jiri Olsa wrote:
> On Thu, Sep 13, 2018 at 09:07:40AM +0200, Peter Zijlstra wrote:
> > On Wed, Sep 12, 2018 at 09:33:17PM +0200, Jiri Olsa wrote:
> > > Some of the scheduling tracepoints allow the perf_tp_event
> > > code to write to the ring buffer on a different cpu than the
> > > one the code is running on.
> >
> > ARGH.. that is indeed borken.
> 
> I was first thinking of just leaving it on the current cpu,
> but I'm not sure current users would be ok with that ;-)
> 
> ---
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index abaed4f8bb7f..9b534a2ecf17 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -8308,6 +8308,8 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
>  			continue;
>  		if (event->attr.config != entry->type)
>  			continue;
> +		if (event->cpu != smp_processor_id())
> +			continue;
>  		if (perf_tp_event_match(event, &data, regs))
>  			perf_swevent_event(event, count, &data, regs);
>  	}
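
For context, the hunk lands in the @task delivery path of
perf_tp_event(). A rough sketch of that path (paraphrased from
kernel/events/core.c of this era; exact context lines may differ
between kernel versions):

	/*
	 * If we got a target task, also iterate its context and
	 * deliver the event there; the tracepoint may fire on a CPU
	 * other than the one an event is bound to.
	 */
	if (task && task != current) {
		struct perf_event_context *ctx;
		struct perf_event *event;

		rcu_read_lock();
		ctx = rcu_dereference(task->perf_event_ctxp[perf_sw_context]);
		if (!ctx)
			goto unlock;

		list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
			if (event->attr.type != PERF_TYPE_TRACEPOINT)
				continue;
			if (event->attr.config != entry->type)
				continue;
			/* the new check: skip events bound to another CPU */
			if (event->cpu != smp_processor_id())
				continue;
			if (perf_tp_event_match(event, &data, regs))
				perf_swevent_event(event, count, &data, regs);
		}
unlock:
		rcu_read_unlock();
	}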

That might indeed be the best we can do.

So the whole TP muck would be responsible for placing only matching
events on the hlist, which is where our normal CPU filter is, I think.

The above then does the same for @task, which without this check would
also be getting nr_cpus copies of the event, I think.
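
To make the nr_cpus point concrete: a tool that traces a task
typically opens one event per CPU for the same tracepoint, roughly
like this (a hypothetical user-space sketch, not actual perf tool
code):

	#include <linux/perf_event.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/*
	 * One tracepoint event per CPU, all targeting the same task.
	 * Without the CPU check above, a single hit delivered via the
	 * @task path would match every one of these events, so the
	 * sample gets written nr_cpus times.
	 */
	static int open_tp_per_cpu(struct perf_event_attr *attr, pid_t pid,
				   int *fds, int nr_cpus)
	{
		int cpu;

		for (cpu = 0; cpu < nr_cpus; cpu++) {
			fds[cpu] = syscall(__NR_perf_event_open, attr, pid,
					   cpu, -1 /* group_fd */, 0);
			if (fds[cpu] < 0)
				return -1;
		}
		return 0;
	}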

It does mean not getting any events if the @task only has a per-task
buffer, but there's nothing to be done about that. And I'm not even sure
we can create a useful warning for that :/
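
That per-task case is an event opened with cpu == -1; a hypothetical
sketch of such an open, for illustration:

	#include <linux/perf_event.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/*
	 * A purely per-task tracepoint event: cpu == -1, so it follows
	 * the task across CPUs. With the check above, event->cpu (-1)
	 * never equals smp_processor_id(), so samples delivered for
	 * @task from another CPU are silently dropped.
	 */
	static int open_tp_per_task(__u64 tp_id, pid_t pid)
	{
		struct perf_event_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.type = PERF_TYPE_TRACEPOINT;
		attr.size = sizeof(attr);
		attr.config = tp_id;	/* tracepoint id from tracefs */
		attr.sample_period = 1;

		return syscall(__NR_perf_event_open, &attr, pid,
			       -1 /* any cpu */, -1, 0);
	}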