Message-ID: <20180913074042.GU24124@hirez.programming.kicks-ass.net>
Date: Thu, 13 Sep 2018 09:40:42 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Jiri Olsa <jolsa@...nel.org>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
lkml <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Andi Kleen <andi@...stfloor.org>
Subject: Re: [PATCH] perf: Prevent recursion in ring buffer

On Wed, Sep 12, 2018 at 09:33:17PM +0200, Jiri Olsa wrote:
> # perf record -e 'sched:sched_switch,sched:sched_wakeup' perf bench sched messaging
>
> The reason for the corruption is that some of the scheduling tracepoints
> have __perf_task defined and thus allow storing data to another CPU's
> ring buffer:
>
> sched_waking
> sched_wakeup
> sched_wakeup_new
> sched_stat_wait
> sched_stat_sleep
> sched_stat_iowait
> sched_stat_blocked
>
> The code then iterates the events of the 'task' and stores the sample
> for any of the task's events that pass the tracepoint checks:
>
>	ctx = rcu_dereference(task->perf_event_ctxp[perf_sw_context]);
>
>	list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
>		if (event->attr.type != PERF_TYPE_TRACEPOINT)
>			continue;
>		if (event->attr.config != entry->type)
>			continue;
>
>		perf_swevent_event(event, count, &data, regs);
>	}
>
> The above code can race with the same code running on another CPU,
> ending up with two CPUs trying to store into the same ring buffer,
> which is not handled at the moment.

It can, yes. However, the only way I can see this breaking is if we use
!inherited events with a strict per-task buffer, and your record command
doesn't use that.
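
To illustrate the failure mode in the abstract, here is a minimal
userspace analogy (NOT the kernel ring-buffer code): a buffer whose head
is reserved with a plain read-modify-write is only safe for a single
writer, and two concurrent writers can reserve the same slot.

/*
 * Minimal userspace analogy, not the kernel code: the head is reserved
 * with a plain read-modify-write, which is only safe for a single
 * writer.  Two concurrent writers can observe the same head and
 * overwrite each other's records.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define BUF_SIZE	4096
#define REC_SIZE	64

static char buf[BUF_SIZE];
static unsigned long head;	/* single-writer assumption, no locking */

static void write_record(const char *tag)
{
	unsigned long offset = head;	/* load head */
	head = offset + REC_SIZE;	/* reserve the record */
	/* a second writer can race in here and pick the same offset */
	memset(buf + (offset % BUF_SIZE), 0, REC_SIZE);
	strcpy(buf + (offset % BUF_SIZE), tag);	/* commit */
}

static void *writer(void *arg)
{
	for (int i = 0; i < 100000; i++)
		write_record(arg);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, writer, "writer-A");
	pthread_create(&b, NULL, writer, "writer-B");
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("head=%lu, records from A and B may have clobbered each other\n",
	       head);
	return 0;
}

Per-CPU buffers avoid this by construction, which is why the cross-CPU
store path is the interesting part here.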

Now, your test case uses inherited events, which would all share the
buffer. However, IIRC inherited events require per-task-per-cpu buffers,
because there is no guarantee the various tasks run on the same CPU in
the first place.

This means we _should_ write to the @task's local CPU buffer, and that
would work again.
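
For reference, a rough sketch (not perf's actual code; the parameters
are only illustrative) of what per-task-per-cpu buffers look like from
userspace: the same task is opened once per CPU and each event gets its
own mmap()ed buffer, so whichever CPU the task runs on only writes into
that CPU's buffer.

#include <linux/perf_event.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdlib.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(int argc, char **argv)
{
	struct perf_event_attr attr = {
		.type		= PERF_TYPE_SOFTWARE,
		.size		= sizeof(attr),
		.config		= PERF_COUNT_SW_TASK_CLOCK,
		.sample_period	= 100000,
		.sample_type	= PERF_SAMPLE_TID | PERF_SAMPLE_TIME,
		.inherit	= 1,	/* children land in these buffers too */
		.disabled	= 1,
	};
	pid_t pid = argc > 1 ? atoi(argv[1]) : getpid();
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	long pgsz = sysconf(_SC_PAGESIZE);

	for (int cpu = 0; cpu < ncpus; cpu++) {
		/* one event + one buffer per CPU for the same task */
		int fd = perf_event_open(&attr, pid, cpu, -1, 0);
		if (fd < 0)
			continue;
		/* 1 control page + 2^3 data pages */
		mmap(NULL, (1 + 8) * pgsz, PROT_READ | PROT_WRITE,
		     MAP_SHARED, fd, 0);
	}
	/* ... enable the events and read the buffers as perf record does ... */
	return 0;
}
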
Let me try and figure out where this is going wrong.