Message-ID: <20151104140133.GA32021@linux.vnet.ibm.com>
Date: Wed, 4 Nov 2015 06:01:33 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Dave Jones <davej@...emonkey.org.uk>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Stephane Eranian <eranian@...il.com>,
Andi Kleen <andi@...stfloor.org>
Subject: Re: perf related lockdep bug
On Wed, Nov 04, 2015 at 11:28:00AM +0100, Peter Zijlstra wrote:
> On Wed, Nov 04, 2015 at 11:21:51AM +0100, Peter Zijlstra wrote:
>
> > The problem appears to be due to the new RCU expedited grace period
> > stuff, with rcu_read_unlock() now randomly trying to acquire locks it
> > previously didn't.
> >
> > Lemme go look at those rcu bits again..
>
> Paul, I think this is because of:
>
> 8203d6d0ee78 ("rcu: Use single-stage IPI algorithm for RCU expedited grace period")
>
> What happens is that the IPI comes in and tags any random
> rcu_read_unlock() with the special bit, which then goes on and takes
> locks.
>
> Now the problem is that we have scheduler activity inside this lock;
> the one lockdep reported seems easy enough to fix, see below.
>
> I'll go and see if there are more sites that can cause this.
This one only happens during boot time, but it would be good hygiene
in any case. May I have your SOB on this?
Thanx, Paul
> ---
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index f07343b54fe5..a9c57b386258 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -4333,8 +4333,8 @@ static int __init rcu_spawn_gp_kthread(void)
> sp.sched_priority = kthread_prio;
> sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
> }
> - wake_up_process(t);
> raw_spin_unlock_irqrestore(&rnp->lock, flags);
> + wake_up_process(t);
> }
> rcu_spawn_nocb_kthreads();
> rcu_spawn_boost_kthreads();
>