Message-ID: <1310748957.27864.62.camel@gandalf.stny.rr.com>
Date: Fri, 15 Jul 2011 12:55:57 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: paulmck@...ux.vnet.ibm.com, Ed Tomlinson <edt@....ca>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Dipankar Sarma <dipankar@...ibm.com>,
linux-kernel@...r.kernel.org
Subject: Re: INFO: possible circular locking dependency detected
On Fri, 2011-07-15 at 15:07 +0200, Peter Zijlstra wrote:
> OK, so the latter case cannot happen (rcu_preempt_check_callbacks only
> sets NEED_QS when rcu_read_lock_nesting), we need two interrupts for
> this to happen.
>
> rcu_read_lock()
>
> <IRQ>
>   |= RCU_READ_UNLOCK_NEED_QS
>
> rcu_read_unlock()
>   __rcu_read_unlock()
>     --rcu_read_lock_nesting;
>     <IRQ>
>       ttwu()
>         rcu_read_lock()
>         rcu_read_unlock()
>           rcu_read_unlock_special()
>             *BANG*
>     rcu_read_unlock_special()
>
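
To make that window concrete, here is the relevant part of the current
__rcu_read_unlock() (condensed from the unchanged and removed lines of the
patch below; the rest of the function is omitted), annotated with where the
second interrupt lands:

void __rcu_read_unlock(void)
{
	struct task_struct *t = current;

	barrier();
	--t->rcu_read_lock_nesting;	/* nesting is now 0 ... */
	barrier();
	/*
	 * ... but ->rcu_read_unlock_special is still set.  An interrupt
	 * here whose handler does rcu_read_lock()/rcu_read_unlock()
	 * (e.g. via ttwu()) will itself reach rcu_read_unlock_special()
	 * from IRQ context: the *BANG* in the trace above.
	 */
	if (t->rcu_read_lock_nesting == 0 &&
	    unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
		rcu_read_unlock_special(t);
}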

What about this patch? Not even compile tested.
-- Steve
diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
index 14dc7dd..e3545fa 100644
--- a/kernel/rcutree_plugin.h
+++ b/kernel/rcutree_plugin.h
@@ -284,18 +284,17 @@ static struct list_head *rcu_next_node_entry(struct task_struct *t,
  * notify RCU core processing or task having blocked during the RCU
  * read-side critical section.
  */
-static void rcu_read_unlock_special(struct task_struct *t)
+static int rcu_read_unlock_special(struct task_struct *t, int special)
 {
 	int empty;
 	int empty_exp;
 	unsigned long flags;
 	struct list_head *np;
 	struct rcu_node *rnp;
-	int special;
 
 	/* NMI handlers cannot block and cannot safely manipulate state. */
 	if (in_nmi())
-		return;
+		return special;
 
 	local_irq_save(flags);
 
@@ -303,7 +302,6 @@ static void rcu_read_unlock_special(struct task_struct *t)
 	 * If RCU core is waiting for this CPU to exit critical section,
 	 * let it know that we have done so.
 	 */
-	special = t->rcu_read_unlock_special;
 	if (special & RCU_READ_UNLOCK_NEED_QS) {
 		rcu_preempt_qs(smp_processor_id());
 	}
@@ -311,7 +309,7 @@ static void rcu_read_unlock_special(struct task_struct *t)
 	/* Hardware IRQ handlers cannot block. */
 	if (in_irq()) {
 		local_irq_restore(flags);
-		return;
+		return special;
 	}
 
 	/* Clean up if blocked during RCU read-side critical section. */
@@ -373,6 +371,7 @@ static void rcu_read_unlock_special(struct task_struct *t)
 	} else {
 		local_irq_restore(flags);
 	}
+	return special;
 }
 
 /*
@@ -385,13 +384,21 @@
 void __rcu_read_unlock(void)
 {
 	struct task_struct *t = current;
+	int special;
+	special = ACCESS_ONCE(t->rcu_read_unlock_special);
+	/*
+	 * Clear special here to prevent interrupts from seeing it
+	 * enabled after decrementing lock_nesting and calling
+	 * rcu_read_unlock_special().
+	 */
+	t->rcu_read_unlock_special = 0;
 
 	barrier(); /* needed if we ever invoke rcu_read_unlock in rcutree.c */
 	--t->rcu_read_lock_nesting;
 	barrier(); /* decrement before load of ->rcu_read_unlock_special */
-	if (t->rcu_read_lock_nesting == 0 &&
-	    unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
-		rcu_read_unlock_special(t);
+	if (t->rcu_read_lock_nesting == 0 && special)
+		special = rcu_read_unlock_special(t, special);
+	t->rcu_read_unlock_special = special;
 #ifdef CONFIG_PROVE_LOCKING
 	WARN_ON_ONCE(ACCESS_ONCE(t->rcu_read_lock_nesting) < 0);
 #endif /* #ifdef CONFIG_PROVE_LOCKING */
--
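
For comparison, the same path with the change applied (condensed from the
hunks above, other details omitted): an interrupt arriving after the
decrement now reads ->rcu_read_unlock_special as 0, so its rcu_read_unlock()
does not call rcu_read_unlock_special() from interrupt context.

void __rcu_read_unlock(void)
{
	struct task_struct *t = current;
	int special;

	special = ACCESS_ONCE(t->rcu_read_unlock_special);
	t->rcu_read_unlock_special = 0;	/* interrupts from here on see 0 */

	barrier();
	--t->rcu_read_lock_nesting;
	barrier();
	/*
	 * An interrupt landing here that does rcu_read_lock()/
	 * rcu_read_unlock() reads ->rcu_read_unlock_special == 0 and
	 * skips rcu_read_unlock_special() entirely.
	 */
	if (t->rcu_read_lock_nesting == 0 && special)
		special = rcu_read_unlock_special(t, special);
	t->rcu_read_unlock_special = special;
}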