Message-ID: <20151104102800.GZ11639@twins.programming.kicks-ass.net>
Date: Wed, 4 Nov 2015 11:28:00 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Dave Jones <davej@...emonkey.org.uk>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Stephane Eranian <eranian@...il.com>
Cc: Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Andi Kleen <andi@...stfloor.org>
Subject: Re: perf related lockdep bug
On Wed, Nov 04, 2015 at 11:21:51AM +0100, Peter Zijlstra wrote:
> The problem appears to be due to the new RCU expedited grace period
> stuff, with rcu_read_unlock() now randomly trying to acquire locks it
> previously didn't.
>
> Lemme go look at those rcu bits again..
Paul, I think this is because of:
8203d6d0ee78 ("rcu: Use single-stage IPI algorithm for RCU expedited grace period")
What happens is that the IPI comes in and tags any random
rcu_read_unlock() with the special bit, which then goes on and takes
locks.
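For reference, the unlock path I mean looks roughly like this (a heavily
simplified sketch, not the verbatim kernel/rcu/update.c code):
void __rcu_read_unlock(void)
{
	struct task_struct *t = current;
	if (--t->rcu_read_lock_nesting == 0) {
		barrier();	/* critical section before special work */
		/* set asynchronously, e.g. by the expedited-GP IPI handler: */
		if (unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
			rcu_read_unlock_special(t);	/* takes rnp->lock and
							 * may boost/wake -- the
							 * lock-taking above */
	}
}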
Now the problem is that we have scheduler activity inside this lock;
the one lockdep reported seems easy enough to fix, see below.
I'll go and see if there are more sites that can cause this.
---
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f07343b54fe5..a9c57b386258 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4333,8 +4333,8 @@ static int __init rcu_spawn_gp_kthread(void)
sp.sched_priority = kthread_prio;
sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
}
- wake_up_process(t);
raw_spin_unlock_irqrestore(&rnp->lock, flags);
+ wake_up_process(t);
}
rcu_spawn_nocb_kthreads();
rcu_spawn_boost_kthreads();
--
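I think the inversion is roughly: wake_up_process() -> try_to_wake_up()
takes p->pi_lock and then rq->lock, while with the expedited IPI an
rcu_read_unlock() run under scheduler locks can now take rnp->lock, so
doing the wakeup while still holding rnp->lock would close the cycle.
In sketch form (made-up helper name, just to illustrate the ordering,
not actual kernel code):
static void wake_gp_kthread(struct rcu_node *rnp, struct task_struct *t)
{
	unsigned long flags;
	raw_spin_lock_irqsave(&rnp->lock, flags);
	/* ... whatever needs publishing under rnp->lock ... */
	raw_spin_unlock_irqrestore(&rnp->lock, flags);
	wake_up_process(t);	/* scheduler locks taken without rnp->lock held */
}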