Message-ID: <alpine.LFD.2.02.1206121701350.3086@ionos>
Date: Tue, 12 Jun 2012 17:07:37 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
cc: Sasha Levin <levinsasha928@...il.com>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Dave Jones <davej@...hat.com>
Subject: Re: rcu,sched: spinlock recursion on 3.5-rc2
On Tue, 12 Jun 2012, Paul E. McKenney wrote:
> On Tue, Jun 12, 2012 at 03:40:13PM +0200, Thomas Gleixner wrote:
> > The torture thread got preempted. rcu_preempt_note_context_switch()
> > tries to unlock the boosting rt mutex.
> >
> > However, rcu_preempt_note_context_switch() is called with the rq lock
> > held, so it's no surprise that the code deadlocks.
> >
> > My brain hurts already from looking, so Paul to the rescue!
>
> My brain hurts from beating my head on my desk. It seems that attempts
> to enhance PREEMPT_RCU's read-side performance require even more paranoia
> than I normally bring to bear. :-/
>
> Please see below for what I expect is the relevant revert.
>
> Thanx, Paul
>
> ------------------------------------------------------------------------
>
> Revert "rcu: Move PREEMPT_RCU preemption to switch_to() invocation"
>
> This reverts commit 616c310e83b872024271c915c1b9ab505b9efad9
> ("Move PREEMPT_RCU preemption to switch_to() invocation"), which can
> result in a runqueue deadlock.
Hmm, not sure. The deadlock was not triggered in switch_to(). It was
triggered right at the beginning of __schedule():
need_resched:
	preempt_disable();
	cpu = smp_processor_id();
	rq = cpu_rq(cpu);
	rcu_note_context_switch(cpu);
rcu_note_context_switch() ends up in rcu_read_unlock_special(), which
tries to unlock the rtmutex.
So that code is still there ....
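
For illustration, a minimal userspace sketch of that recursion. This is
not kernel code: a non-recursive pthread spinlock stands in for rq->lock
and the *_sketch() names are invented; running it hangs by design,
which is exactly the bug.

/* Sketch of: unlock a boost rt_mutex while already holding rq->lock. */
#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t rq_lock;	/* stands in for this CPU's rq->lock */

/* The wakeup side of an rt_mutex unlock needs the runqueue lock. */
static void try_to_wake_up_sketch(void)
{
	pthread_spin_lock(&rq_lock);	/* spins forever: we already hold it */
	/* ... put the woken waiter on the runqueue ... */
	pthread_spin_unlock(&rq_lock);
}

/* Unlocking a boosted rt_mutex wakes the next waiter. */
static void rt_mutex_unlock_sketch(void)
{
	try_to_wake_up_sketch();
}

int main(void)
{
	pthread_spin_init(&rq_lock, PTHREAD_PROCESS_PRIVATE);

	pthread_spin_lock(&rq_lock);	/* scheduler path: rq->lock held */
	rt_mutex_unlock_sketch();	/* rcu_read_unlock_special() path: deadlock */

	printf("never reached\n");
	return 0;
}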
Thanks,
tglx