Message-Id: <20180607201143.247775-1-joel@joelfernandes.org>
Date: Thu, 7 Jun 2018 13:11:43 -0700
From: Joel Fernandes <joelaf@...gle.com>
To: linux-kernel@...r.kernel.org
Cc: "Joel Fernandes (Google)" <joel@...lfernandes.org>,
Steven Rostedt <rostedt@...dmis.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Tom Zanussi <tom.zanussi@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Boqun Feng <boqun.feng@...il.com>,
Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Todd Kjos <tkjos@...gle.com>,
Erick Reyes <erickreyes@...gle.com>,
Julia Cartwright <julia@...com>,
Byungchul Park <byungchul.park@....com>, stable@...r.kernel.org
Subject: [PATCH RESEND] softirq: reorder trace_softirqs_on to prevent lockdep splat
From: "Joel Fernandes (Google)" <joel@...lfernandes.org>
I'm able to reproduce a lockdep splat with the following config options enabled:
CONFIG_PROVE_LOCKING=y,
CONFIG_DEBUG_LOCK_ALLOC=y and
CONFIG_PREEMPTIRQ_EVENTS=y
$ echo 1 > /d/tracing/events/preemptirq/preempt_enable/enable
[ 26.112609] DEBUG_LOCKS_WARN_ON(current->softirqs_enabled)
[ 26.112636] WARNING: CPU: 0 PID: 118 at kernel/locking/lockdep.c:3854
[...]
[ 26.144229] Call Trace:
[ 26.144926] <IRQ>
[ 26.145506] lock_acquire+0x55/0x1b0
[ 26.146499] ? __do_softirq+0x46f/0x4d9
[ 26.147571] ? __do_softirq+0x46f/0x4d9
[ 26.148646] trace_preempt_on+0x8f/0x240
[ 26.149744] ? trace_preempt_on+0x4d/0x240
[ 26.150862] ? __do_softirq+0x46f/0x4d9
[ 26.151930] preempt_count_sub+0x18a/0x1a0
[ 26.152985] __do_softirq+0x46f/0x4d9
[ 26.153937] irq_exit+0x68/0xe0
[ 26.154755] smp_apic_timer_interrupt+0x271/0x280
[ 26.156056] apic_timer_interrupt+0xf/0x20
[ 26.157105] </IRQ>
The issue was this:
Start with: preempt_count = 1 << SOFTIRQ_SHIFT

	__local_bh_enable(cnt = 1 << SOFTIRQ_SHIFT) {
		if (softirq_count() == (cnt & SOFTIRQ_MASK)) {
			trace_softirqs_on() {
				current->softirqs_enabled = 1;
			}
		}
		preempt_count_sub(cnt) {
			trace_preempt_on() {
				tracepoint() {
					rcu_read_lock_sched() {
						// jumps into lockdep

At this point preempt_count still shows softirqs disabled, but
current->softirqs_enabled is already true, so lockdep's consistency
check fires and we get the splat.
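To make the ordering problem concrete, here is a minimal user-space model
of it (purely illustrative; the helper names, constants and the check are
hypothetical stand-ins for the kernel/lockdep code, not the real
implementation). It mimics lockdep's "softirq bits set in preempt_count
but current->softirqs_enabled true" consistency check, and shows that
tracing "preempt on" after trace_softirqs_on() but before the preempt
count drop trips the check, while the reordered sequence does not:

/* Hypothetical user-space model of the ordering bug (not kernel code). */
#include <assert.h>
#include <stdio.h>

#define SOFTIRQ_SHIFT	8
#define SOFTIRQ_MASK	(0xff << SOFTIRQ_SHIFT)

static unsigned int preempt_count = 1 << SOFTIRQ_SHIFT;
static int softirqs_enabled;		/* models current->softirqs_enabled */

static unsigned int softirq_count(void)
{
	return preempt_count & SOFTIRQ_MASK;
}

/* Stand-in for the lockdep consistency check that produced the splat. */
static void lockdep_check(void)
{
	if (softirq_count())
		assert(!softirqs_enabled);	/* DEBUG_LOCKS_WARN_ON() */
}

/* Models trace_preempt_on() -> tracepoint -> rcu_read_lock_sched() -> lockdep. */
static void model_trace_preempt_on(void)
{
	lockdep_check();
}

/* Old ordering: mark softirqs enabled, then the count drop does the tracing. */
static void buggy_local_bh_enable(unsigned int cnt)
{
	softirqs_enabled = 1;			/* trace_softirqs_on() */
	if (preempt_count == cnt)
		model_trace_preempt_on();	/* softirq bits still set: splat */
	preempt_count -= cnt;
}

/* New ordering: trace "preempt on" first, then mark softirqs enabled. */
static void fixed_local_bh_enable(unsigned int cnt)
{
	if (preempt_count == cnt)
		model_trace_preempt_on();	/* softirqs still marked off: OK */
	softirqs_enabled = 1;			/* trace_softirqs_on() */
	preempt_count -= cnt;			/* __preempt_count_sub(): no tracing */
}

int main(void)
{
	fixed_local_bh_enable(1 << SOFTIRQ_SHIFT);
	puts("fixed ordering: no warning");

	preempt_count = 1 << SOFTIRQ_SHIFT;
	softirqs_enabled = 0;
	buggy_local_bh_enable(1 << SOFTIRQ_SHIFT);	/* assert() fires here */
	return 0;
}

Running the model completes the fixed path and then aborts inside
buggy_local_bh_enable(), mirroring the warning above.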
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc: Tom Zanussi <tom.zanussi@...ux.intel.com>
Cc: Namhyung Kim <namhyung@...nel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Boqun Feng <boqun.feng@...il.com>
Cc: Paul McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Masami Hiramatsu <mhiramat@...nel.org>
Cc: Todd Kjos <tkjos@...gle.com>
Cc: Erick Reyes <erickreyes@...gle.com>
Cc: Julia Cartwright <julia@...com>
Cc: Byungchul Park <byungchul.park@....com>
Cc: stable@...r.kernel.org
Reviewed-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
Fixes: d59158162e032 ("tracing: Add support for preempt and irq enable/disable events")
Signed-off-by: Joel Fernandes (Google) <joel@...lfernandes.org>
---
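Note: preempt_count_sub() is what ends up calling trace_preempt_on() while
the softirq bits are still in preempt_count, via its preempt_latency_stop()
helper. At the time of this patch that path looks roughly like the sketch
below (paraphrased from kernel/sched/core.c and trimmed; details vary by
config and kernel version), which is why the fix open-codes the same
preempt_count() == cnt check ahead of trace_softirqs_on() and then drops
the count with the non-tracing __preempt_count_sub():

	/* Paraphrased sketch of the relevant path, not the exact source. */
	static void preempt_latency_stop(int val)
	{
		/* Trace "preempt enabled" only on the final decrement. */
		if (preempt_count() == val)
			trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
	}

	void preempt_count_sub(int val)
	{
		...
		preempt_latency_stop(val);	/* traces before the count actually drops */
		__preempt_count_sub(val);
	}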
kernel/softirq.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 177de3640c78..8a040bcaa033 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -139,9 +139,13 @@ static void __local_bh_enable(unsigned int cnt)
 {
 	lockdep_assert_irqs_disabled();
 
+	if (preempt_count() == cnt)
+		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
+
 	if (softirq_count() == (cnt & SOFTIRQ_MASK))
 		trace_softirqs_on(_RET_IP_);
-	preempt_count_sub(cnt);
+
+	__preempt_count_sub(cnt);
 }
 
 /*
--
2.17.1.1185.g55be947832-goog