Message-ID: <20140811033714.GU5821@linux.vnet.ibm.com>
Date: Sun, 10 Aug 2014 20:37:14 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
laijs@...fujitsu.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com
Subject: Re: [PATCH v3 tip/core/rcu 1/9] rcu: Add call_rcu_tasks()
On Sun, Aug 10, 2014 at 05:00:05PM +0200, Peter Zijlstra wrote:
> On Sat, Aug 09, 2014 at 06:38:29PM -0700, Paul E. McKenney wrote:
> > On Sat, Aug 09, 2014 at 08:33:55PM +0200, Peter Zijlstra wrote:
> > > On Fri, Aug 08, 2014 at 01:58:26PM -0700, Paul E. McKenney wrote:
> > > >
> > > > > And on that, you probably should change rcu_sched_rq() to read:
> > > > >
> > > > > this_cpu_inc(rcu_sched_data.passed_quiesce);
> > > > >
> > > > > That avoids touching the per-cpu data offset.
> > > >
> > > > Hmmm... Interrupts are disabled,
> > >
> > > No they are not, __schedule()->rcu_note_context_switch()->rcu_sched_qs()
> > > is only called with preemption disabled.
> > >
> > > We only disable IRQs later, where we take the rq->lock.
> >
> > You want me not to disable irqs before invoking rcu_preempt_qs() from
> > rcu_preempt_note_context_switch(), I get that. But right now, they
> > really are disabled courtesy of the local_irq_save() before the call
> > to rcu_preempt_qs() from rcu_preempt_note_context_switch().
>
> Ah, confusion there, I said rcu_sched_qs(), you're talking about
> rcu_preempt_qs().
>
> Yes the call to rcu_preempt_qs() is unconditionally wrapped in IRQ
> disable.
Apologies for my confusion! The rcu_sched_qs() call doesn't need
to interact directly with the scheduling-clock interrupt using
read-modify-write variables, so it gets a pass.
> > > void rcu_sched_qs(int cpu)
> > > {
> > >         if (trace_rcu_grace_period_enabled()) {
> > >                 if (!__this_cpu_read(rcu_sched_data.passed_quiesce))
> > >                         trace_rcu_grace_period(...);
> > >         }
> > >         __this_cpu_write(rcu_sched_data.passed_quiesce, 1);
> > > }
> > >
> > > Would further avoid emitting the conditional in the normal case where
> > > the tracepoint is inactive.
> >
> > It might be better to avoid storing to rcu_sched_data.passed_quiesce when
> > it is already 1, though the difference would be quite hard to measure.
> > In that case, this would work nicely:
> >
> > static void rcu_preempt_qs(int cpu)
> > {
> >         if (rdp->passed_quiesce == 0) {
> >                 trace_rcu_grace_period(TPS("rcu_preempt"), rdp->gpnum, TPS("cpuqs"));
> >                 __this_cpu_write(rcu_sched_data.passed_quiesce, 1);
> >         }
> >         current->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_NEED_QS;
> > }
>
> Yes, that's a consideration, fair enough. Again note the confusion
> between sched/preempt. But yes, both can use this 'cleanup'.
OK, it is on my list.
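
For concreteness, roughly what I have in mind for the preempt flavor,
with the trace and the store done only on the 0 -> 1 transition
(untested sketch against the current kernel/rcu/tree_plugin.h, so take
the exact field and function names with a grain of salt):

        static void rcu_preempt_qs(int cpu)
        {
                struct rcu_data *rdp = &per_cpu(rcu_preempt_data, cpu);

                if (!rdp->passed_quiesce) {
                        trace_rcu_grace_period(TPS("rcu_preempt"),
                                               rdp->gpnum, TPS("cpuqs"));
                        rdp->passed_quiesce = 1; /* Store only on transition. */
                }
                current->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_NEED_QS;
        }

The rcu_sched flavor would get the same treatment.
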
Thanx, Paul