Message-ID: <1310664742.27864.45.camel@gandalf.stny.rr.com>
Date: Thu, 14 Jul 2011 13:32:22 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: paulmck@...ux.vnet.ibm.com
Cc: Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Dipankar Sarma <dipankar@...ibm.com>,
linux-kernel@...r.kernel.org
Subject: Re: INFO: possible circular locking dependency detected
On Thu, 2011-07-14 at 10:05 -0700, Paul E. McKenney wrote:
> On Thu, Jul 14, 2011 at 01:02:09PM -0400, Steven Rostedt wrote:
> > On Thu, 2011-07-14 at 12:58 -0400, Steven Rostedt wrote:
> >
> > > void __rcu_read_unlock(void)
> > > {
> > > 	struct task_struct *t = current;
> > >
> > > 	barrier();  /* needed if we ever invoke rcu_read_unlock in rcutree.c */
> > > 	--t->rcu_read_lock_nesting;
> > > 	barrier();  /* decrement before load of ->rcu_read_unlock_special */
> > > 	if (t->rcu_read_lock_nesting == 0 &&
> > > 	    unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
> > > 		rcu_read_unlock_special(t);
> > >
> > > Thus the question is, how did we get rcu_read_unlock_special set here?
> >
> > Looks like another process could set this with:
> >
> > static int rcu_boost(struct rcu_node *rnp)
> > {
> > 	[...]
> > 	t = container_of(tb, struct task_struct, rcu_node_entry);
> > 	rt_mutex_init_proxy_locked(&mtx, t);
> > 	t->rcu_boost_mutex = &mtx;
> > 	t->rcu_read_unlock_special |= RCU_READ_UNLOCK_BOOSTED;
>
> But only if that task was preempted while in the RCU read-side critical
> section that resulted in the call to rcu_read_unlock_special(), which
> should not happen if the task has irqs disabled for the duration of that
> RCU read-side critical section, right?
>
static void rcu_read_unlock_special(struct task_struct *t)
{
	[...]
	special = t->rcu_read_unlock_special;			(A)
	[...]
	for (;;) {						(B)
		rnp = t->rcu_blocked_node;
		raw_spin_lock(&rnp->lock);  /* irqs already disabled. */
		if (rnp == t->rcu_blocked_node)
			break;
		raw_spin_unlock(&rnp->lock);  /* irqs remain disabled. */
	}
	[...]
	list_del_init(&t->rcu_node_entry);
	[...]
	if (empty)
		raw_spin_unlock_irqrestore(&rnp->lock, flags);
	else
		rcu_report_unblock_qs_rnp(rnp, flags);

	/* Unboost if we were boosted. */
	if (special & RCU_READ_UNLOCK_BOOSTED) {
		t->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_BOOSTED;
		rt_mutex_unlock(t->rcu_boost_mutex);
		t->rcu_boost_mutex = NULL;
	}
Now what happens if, between (A) and (B), the boost kthread wakes up and
calls rcu_boost()?
static int rcu_boost(struct rcu_node *rnp)
{
	[...]
	raw_spin_lock_irqsave(&rnp->lock, flags);
	[...]
	t = container_of(tb, struct task_struct, rcu_node_entry);
	[...]
	t->rcu_read_unlock_special |= RCU_READ_UNLOCK_BOOSTED;
	raw_spin_unlock_irqrestore(&rnp->lock, flags);
It seems that RCU_READ_UNLOCK_BOOSTED could get set and never be
cleared, because the unboost test in rcu_read_unlock_special() looks at
the local snapshot "special" taken at (A), not at the live
t->rcu_read_unlock_special word. The next rcu_read_unlock() will then
see this flag still set!
-- Steve