Message-ID: <20160107162604.GP3818@linux.vnet.ibm.com>
Date: Thu, 7 Jan 2016 08:26:04 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Eric Dumazet <edumazet@...gle.com>,
syzkaller <syzkaller@...glegroups.com>,
LKML <linux-kernel@...r.kernel.org>, acme@...hat.com,
Linus Torvalds <torvalds@...ux-foundation.org>,
Alexander Potapenko <glider@...gle.com>,
Kostya Serebryany <kcc@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Sasha Levin <sasha.levin@...cle.com>,
"H. Peter Anvin" <hpa@...or.com>, jolsa@...hat.com,
Ingo Molnar <mingo@...nel.org>, vincent.weaver@...ne.edu,
Thomas Gleixner <tglx@...utronix.de>, acme@...nel.org,
Stephane Eranian <eranian@...gle.com>,
linux-tip-commits@...r.kernel.org
Subject: Re: [tip:perf/core] perf: Fix race in perf_event_exec()
On Thu, Jan 07, 2016 at 02:40:08PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 06, 2016 at 01:56:56PM -0500, Eric Dumazet wrote:
> > On Wed, Jan 6, 2016 at 1:46 PM, tip-bot for Peter Zijlstra
> > <tipbot@...or.com> wrote:
> >
> > >
> > > This is because context switches can swap the task_struct::perf_event_ctxp[]
> > > pointer around. Therefore you have to either disable preemption when looking
> > > at current, or hold ctx->lock.
> > >
> >
> >
> > >
> > > void perf_event_exec(void)
> > > {
> > > - struct perf_event_context *ctx;
> > > int ctxn;
> > >
> > > rcu_read_lock();
> >
> > Do we still need this rcu_read_lock(), if perf_event_enable_on_exec()
> > uses local_irq_save()?
>
> Strictly speaking we should not rely on the fact that RCU grace periods
> do not progress with IRQs disabled, so yes.
What Peter said!
The current implementation would let you get away with IRQs disabled
(aside from lockdep splats), but it is easy to imagine implementations
that do not, hence the restriction.
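Concretely, the portable pattern keeps the explicit rcu_read_lock()/
rcu_read_unlock() pair even in regions where IRQs happen to be disabled;
the markers, not the IRQ state, are what every RCU implementation is
guaranteed to honor. A minimal sketch (the function name is illustrative,
not the actual perf code; the perf_event_ctxp[] access mirrors the
swapped pointer discussed above):

```c
/* Sketch: do not rely on local_irq_save() alone to hold off RCU.
 * The explicit read-side markers are what the RCU implementation
 * is guaranteed to respect, current behavior notwithstanding.
 */
static void example_walk_ctx(struct task_struct *task, int ctxn)
{
	struct perf_event_context *ctx;

	rcu_read_lock();	/* required even with IRQs disabled */
	ctx = rcu_dereference(task->perf_event_ctxp[ctxn]);
	if (ctx) {
		/* ... inspect ctx within the read-side critical section ... */
	}
	rcu_read_unlock();
}
```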
Is the extra rcu_read_lock() and rcu_read_unlock() causing a performance
problem? If so, one option would be to use synchronize_sched() and
call_rcu_sched(), which explicitly recognize disabled IRQs as read-side
critical sections. Of course, this might well put you in a position
where you would like to use IRQ disabling in some cases, BH disabling
(or softirq context) in others, and rcu_read_lock() in yet other cases.
If so, the good news is that RCU actually now supports this efficiently.
There is the shiny new synchronize_rcu_mult() macro that can be used
as follows to wait for all three types of grace periods:
synchronize_rcu_mult(call_rcu, call_rcu_bh, call_rcu_sched);
This would wait concurrently for all three grace periods to elapse.
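For the update side, a hedged sketch of how this might look (struct foo
and example_remove() are hypothetical; the point is that a single
synchronize_rcu_mult() call covers rcu_read_lock() readers, BH-disabled
readers, and preempt/IRQ-disabled readers at once, rather than three
back-to-back grace-period waits):

```c
/* Updater sketch: unlink an element that may be traversed under any
 * of the three read-side flavors, wait for all three grace periods
 * concurrently, then free it.
 */
static void example_remove(struct foo *p)
{
	list_del_rcu(&p->list);
	synchronize_rcu_mult(call_rcu, call_rcu_bh, call_rcu_sched);
	kfree(p);
}
```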
Thanx, Paul