Message-ID: <20110715183319.GG2327@linux.vnet.ibm.com>
Date: Fri, 15 Jul 2011 11:33:19 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>, Ed Tomlinson <edt@....ca>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Dipankar Sarma <dipankar@...ibm.com>,
linux-kernel@...r.kernel.org
Subject: Re: INFO: possible circular locking dependency detected
On Fri, Jul 15, 2011 at 01:42:31PM -0400, Steven Rostedt wrote:
> On Fri, 2011-07-15 at 10:24 -0700, Paul E. McKenney wrote:
>
> > But the rcu_read_unlock() called from within the irq handler would
> > take a second snapshot of ->special. It could then enter
> > rcu_read_unlock_special().
>
> You agree that an interrupt preempting rcu_read_unlock() is causing
> the issues, correct? But it is also contained within rcu_read_unlock().
> That is, we just don't want interrupts or softirqs to call the
> special function when they preempt rcu_read_unlock().
>
> How about this patch? (again totally untested and not even compiled)
I really dislike the added overhead, especially the implied
preempt_disable() and preempt_enable() calls. I am actually trying to
-reduce- its overhead, for example, by removing the function call...
But as a short-term hack-around, it could be OK. It does seem to
cover all the possible conditions, at least all the ones I can see at
the moment.
Longer term, enclosing the rq/pi lock critical sections with
rcu_read_lock() and rcu_read_unlock() seems more reasonable.
Hmmm... Does just setting CONFIG_IRQ_FORCED_THREADING suffice to test
this stuff? Or is "threadirqs" also required on the kernel command line?
Thanx, Paul
> -- Steve
>
> diff --git a/kernel/rcupdate.c b/kernel/rcupdate.c
> index 7784bd2..0bdf0ea 100644
> --- a/kernel/rcupdate.c
> +++ b/kernel/rcupdate.c
> @@ -46,6 +46,8 @@
> #include <linux/module.h>
> #include <linux/hardirq.h>
>
> +DEFINE_PER_CPU(int, in_rcu_read_unlock);
> +
> #ifdef CONFIG_DEBUG_LOCK_ALLOC
> static struct lock_class_key rcu_lock_key;
> struct lockdep_map rcu_lock_map =
> diff --git a/kernel/rcutree_plugin.h b/kernel/rcutree_plugin.h
> index 14dc7dd..a4adbb7 100644
> --- a/kernel/rcutree_plugin.h
> +++ b/kernel/rcutree_plugin.h
> @@ -375,6 +375,8 @@ static void rcu_read_unlock_special(struct task_struct *t)
> }
> }
>
> +DECLARE_PER_CPU(int, in_rcu_read_unlock);
> +
> /*
> * Tree-preemptible RCU implementation for rcu_read_unlock().
> * Decrement ->rcu_read_lock_nesting. If the result is zero (outermost
> @@ -386,12 +388,16 @@ void __rcu_read_unlock(void)
> {
> struct task_struct *t = current;
>
> + get_cpu_var(in_rcu_read_unlock)++;
> barrier(); /* needed if we ever invoke rcu_read_unlock in rcutree.c */
> --t->rcu_read_lock_nesting;
> barrier(); /* decrement before load of ->rcu_read_unlock_special */
> if (t->rcu_read_lock_nesting == 0 &&
> + __get_cpu_var(in_rcu_read_unlock) == 1 &&
> unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
> rcu_read_unlock_special(t);
> + __get_cpu_var(in_rcu_read_unlock)--;
> + put_cpu_var(in_rcu_read_unlock);
> #ifdef CONFIG_PROVE_LOCKING
> WARN_ON_ONCE(ACCESS_ONCE(t->rcu_read_lock_nesting) < 0);
> #endif /* #ifdef CONFIG_PROVE_LOCKING */
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/