Message-ID: <20110330171331.GB6038@redhat.com>
Date: Wed, 30 Mar 2011 19:13:31 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Jiri Olsa <jolsa@...hat.com>, Paul Mackerras <paulus@...ba.org>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH,RFC] perf: panic due to invalid cpu context task_ctx value

On 03/30, Peter Zijlstra wrote:
>
> On Wed, 2011-03-30 at 17:32 +0200, Oleg Nesterov wrote:
> > probably smp_mb__after_atomic_inc() needs a comment...
> >
> > It is needed to avoid the race between perf_sched_events_dec() and
> > perf_sched_events_inc().
> >
> > Suppose that we have a single event, both counters == 1. We create
> > another event and call perf_sched_events_inc(). Without the barrier
> > we could increment the counters in reverse order,
> >
> > jump_label_inc(&perf_sched_events_in);
> > /* ---- WINDOW ---- */
> > jump_label_inc(&perf_sched_events_out);
> >
> > Now, if perf_sched_events_dec() is called in between, it can disable
> > _out but not _in. This means we can leak ->task_ctx again.
>
> But in that case we need an mb in perf_sched_events_dec() too, because
> for the !JUMP_LABEL case that's a simple atomic_dec(), and combined
> with synchronize_sched() being a nop for num_online_cpus() == 1 there
> is no ordering there either.

I think you are right... afaics we only need barrier() in this case.
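
Just to spell out the ordering, this is roughly what I have in mind
(only a sketch; perf_sched_events_inc/_dec and the two counters are
the names used in this thread, not actual kernel code):

static atomic_t perf_sched_events_in;
static atomic_t perf_sched_events_out;

static void perf_sched_events_inc(void)
{
        jump_label_inc(&perf_sched_events_out);
        /*
         * Make sure _out is visible before _in.  Otherwise a
         * concurrent perf_sched_events_dec() can observe _in != 0
         * while _out == 0 and we can leak ->task_ctx.
         */
        smp_mb__after_atomic_inc();
        jump_label_inc(&perf_sched_events_in);
}

static void perf_sched_events_dec(void)
{
        jump_label_dec(&perf_sched_events_in);
        /*
         * Pairs with the barrier in perf_sched_events_inc().  On UP
         * this degrades to barrier(), which should be all we need.
         */
        smp_mb__after_atomic_dec();
        jump_label_dec(&perf_sched_events_out);
}
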
> Also, wouldn't this then require an smp_rmb() in the
> perf_event_task_sched_{in,out} COND_STMT/JUMP_LABEL read side?

Oh, I don't think so, but I can't prove it. We don't need it in the
UP case. And if synchronize_sched() worked (see another email), it
should ensure that perf_sched_events_in == 0 is visible after it
completes.
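
IOW, on the read side the !JUMP_LABEL fallback should be nothing more
than a plain atomic_read() (again only a sketch, modulo the exact
COND_STMT details):

static inline void perf_event_task_sched_in(struct task_struct *task)
{
        /*
         * No smp_rmb() needed: once the updater's synchronize_sched()
         * completes, perf_sched_events_in == 0 must be visible to
         * everybody, so the plain read is enough.
         */
        if (atomic_read(&perf_sched_events_in))
                __perf_event_task_sched_in(task);
}
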
Oleg.