Message-ID: <20170111160358.GA3081@twins.programming.kicks-ass.net>
Date: Wed, 11 Jan 2017 17:03:58 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Mark Rutland <mark.rutland@....com>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
jeremy.linton@....com, Will Deacon <will.deacon@....com>
Subject: Re: Perf hotplug lockup in v4.9-rc8
On Wed, Jan 11, 2017 at 02:59:20PM +0000, Mark Rutland wrote:
> Hi Peter,
>
> Sorry for the delay; this fell into my backlog over the holiday.
>
> On Fri, Dec 09, 2016 at 02:59:00PM +0100, Peter Zijlstra wrote:
> > So while I went back and forth trying to make that less ugly, I figured
> > there was another problem.
> >
> > Imagine the cpu_function_call() hitting the 'right' cpu, but not finding
> > the task current. It will then continue to install the event in the
> > context. However, that doesn't stop another CPU from pulling the task in
> > question from our rq and scheduling it elsewhere.
> >
> > This all led me to the below patch... Now it has a rather large comment,
> > and while it represents my current thinking on the matter, I'm not at
> > all sure it's entirely correct. I got my brain in a fair twist while
> > writing it.
> >
> > Please do carefully think about it.
>
> FWIW, I've given the below a spin on a few systems, and with it applied
> my reproducer no longer triggers the issue.
>
> Unfortunately, most of the ordering concerns have gone over my head. :/
>
> > @@ -2331,13 +2330,36 @@ perf_install_in_context(struct perf_event_context *ctx,
> > /*
> > * Installing events is tricky because we cannot rely on ctx->is_active
> > * to be set in case this is the nr_events 0 -> 1 transition.
> > + *
> > + * Instead we use task_curr(), which tells us if the task is running.
> > + * However, since we use task_curr() outside of rq::lock, we can race
> > + * against the actual state. This means the result can be wrong.
> > + *
> > + * If we get a false positive, we retry; this is harmless.
> > + *
> > + * If we get a false negative, things are complicated. If we are after
> > + * perf_event_context_sched_in() ctx::lock will serialize us, and the
> > + * value must be correct. If we're before, it doesn't matter since
> > + * perf_event_context_sched_in() will program the counter.
> > + *
> > + * However, this hinges on the remote context switch having observed
> > + * our task->perf_event_ctxp[] store, such that it will in fact take
> > + * ctx::lock in perf_event_context_sched_in().
>
> Sorry if I'm being thick here, but which store are we describing above?
> i.e. which function, how does that relate to perf_install_in_context()?
The only store to perf_event_ctxp[] of interest is the initial one in
find_get_context().
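Roughly, that path does something like the below (a simplified sketch, not
the literal kernel code; the PF_EXITING / -EAGAIN handling is omitted):

    /*
     * find_get_context(), simplified: publish the freshly allocated
     * context into the task. This is the store the remote context
     * switch must be able to observe.
     */
    mutex_lock(&task->perf_event_mutex);
    if (!task->perf_event_ctxp[ctxn])
        rcu_assign_pointer(task->perf_event_ctxp[ctxn], ctx);
    mutex_unlock(&task->perf_event_mutex);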
> I haven't managed to wrap my head around why this matters. :/
See the scenario from:
https://lkml.kernel.org/r/20161212124228.GE3124@twins.programming.kicks-ass.net
It's installing the first event on 't', which, concurrently with the
install, gets migrated to a third CPU.
  CPU0                          CPU1                          CPU2

                                (current == t)

  t->perf_event_ctxp[] = ctx;
  smp_mb();
  cpu = task_cpu(t);

                                switch(t, n);
                                                              migrate(t, 2);
                                switch(p, t);

                                                              ctx = t->perf_event_ctxp[]; // must not be NULL

  smp_function_call(cpu, ..);

                                generic_exec_single()
                                  func();
                                    spin_lock(ctx->lock);
                                    if (task_curr(t)) // false
                                      add_event_to_ctx();
                                    spin_unlock(ctx->lock);

                                                              perf_event_context_sched_in();
                                                                spin_lock(ctx->lock);
                                                                // sees event
So it's CPU0's store to t->perf_event_ctxp[] that must not go 'missing'.
Because if CPU2's load of that variable were to observe NULL, it would
not try to schedule the ctx, and we'd have a task running without its
counter, which would be 'bad'.
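The consuming side looks roughly like this (a condensed sketch of the
__perf_event_task_sched_in() path, not the exact code):

    /*
     * Sched-in side, simplified: if the load of perf_event_ctxp[] sees
     * NULL, perf_event_context_sched_in() is never called, we never take
     * ctx->lock, and nothing serializes against the installer.
     */
    for_each_task_context_nr(ctxn) {
        ctx = task->perf_event_ctxp[ctxn];
        if (!ctx)
            continue;                           /* no events, nothing to do */

        perf_event_context_sched_in(ctx, task); /* takes ctx->lock */
    }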
As long as we observe !NULL, we'll acquire ctx->lock. If we acquire it
first and do not yet see the event, then CPU0 must observe task_curr()
and retry. If the install happens first, then we must see the event on
sched-in and all is well.
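In other words, the installer ends up doing something like this (a condensed
sketch of the retry loop in perf_install_in_context(); error handling and the
TASK_TOMBSTONE checks are left out):

    /*
     * Installer side, simplified. The smp_mb() corresponds to the one on
     * CPU0 in the diagram above, after the perf_event_ctxp[] publish.
     */
    smp_mb();
    again:
    if (!task_function_call(task, __perf_install_in_context, event))
        return;         /* the IPI found the task current and installed it */

    raw_spin_lock_irq(&ctx->lock);
    if (task_curr(ctx->task)) {
        /* false negative raced with sched-in; chase the task and retry */
        raw_spin_unlock_irq(&ctx->lock);
        goto again;
    }
    /* task is not running; ctx->lock keeps it that way, install directly */
    add_event_to_ctx(event, ctx);
    raw_spin_unlock_irq(&ctx->lock);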
In any case, I'll try and write a proper Changelog for this...