Message-Id: <20190627181112.GY26519@linux.ibm.com>
Date: Thu, 27 Jun 2019 11:11:12 -0700
From: "Paul E. McKenney" <paulmck@...ux.ibm.com>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
rcu <rcu@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Josh Triplett <josh@...htriplett.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>
Subject: Re: [RFC] Deadlock via recursive wakeup via RCU with threadirqs
On Thu, Jun 27, 2019 at 01:46:27PM -0400, Joel Fernandes wrote:
> On Thu, Jun 27, 2019 at 1:43 PM Joel Fernandes <joel@...lfernandes.org> wrote:
> >
> > On Thu, Jun 27, 2019 at 11:40 AM Sebastian Andrzej Siewior
> > <bigeasy@...utronix.de> wrote:
> > >
> > > On 2019-06-27 11:37:10 [-0400], Joel Fernandes wrote:
> > > > Sebastian it would be nice if possible to trace where the
> > > > t->rcu_read_unlock_special is set for this scenario of calling
> > > > rcu_read_unlock_special, to give a clear idea about whether it was
> > > > really because of an IPI. I guess we could also add additional RCU
> > > > debug fields to task_struct (just for debugging) to see where the
> > > > unlock_special is set.
> > > >
> > > > Is there a test to reproduce this, or do I just boot an intel x86_64
> > > > machine with "threadirqs" and run into it?
> > >
> > > Do you want to send me a patch or should I send you my kvm image which
> > > triggers the bug on boot?
> >
> > I could reproduce this as well by just booting Linus's tree with the
> > threadirqs command-line option and running rcutorture. In 15 seconds
> > or so it locks up... gdb backtrace shows the recursive lock:
>
> Sorry that got badly wrapped, so I pasted it here:
> https://hastebin.com/ajivofomik.shell
Which rcutorture scenario would that be? TREE03 is thus far refusing
to fail for me when run this way:
$ tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 8 --duration 5 --trust-make --configs "TREE03" --bootargs "threadirqs"
If it had failed, I would have tried the patch shown below. I know that
Sebastian has some concerns about the bug happening anyway, but we have
to start somewhere! ;-)
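As an aside, the task_struct debug-field idea mentioned above could be as
simple as recording the caller at each site that sets bits in
->rcu_read_unlock_special, along these lines.  This is only a rough sketch:
the helper name and the trace_printk() format are made up for illustration
and nothing like it exists in the tree.

/*
 * Hypothetical debug helper: call from each place that sets bits in
 * ->rcu_read_unlock_special so the trace shows which path (IPI handler,
 * scheduler hook, ...) set the flag before the problematic
 * rcu_read_unlock().
 */
static inline void rcu_debug_note_unlock_special(struct task_struct *t)
{
	/* %pS prints the caller symbolically; .s is the whole flag word. */
	trace_printk("rcu_read_unlock_special 0x%x set by %pS\n",
		     t->rcu_read_unlock_special.s,
		     __builtin_return_address(0));
}

The same information could instead be stashed in a (debug-only) field added
to task_struct and dumped when the deadlock hits, if tracing is not usable
at that point.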
Thanx, Paul
------------------------------------------------------------------------
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 82c925df1d92..be7bafc2c0a0 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -624,25 +624,16 @@ static void rcu_read_unlock_special(struct task_struct *t)
 		      (rdp->grpmask & rnp->expmask) ||
 		      tick_nohz_full_cpu(rdp->cpu);
 		// Need to defer quiescent state until everything is enabled.
-		if ((exp || in_irq()) && irqs_were_disabled && use_softirq &&
-		    (in_irq() || !t->rcu_read_unlock_special.b.deferred_qs)) {
-			// Using softirq, safe to awaken, and we get
-			// no help from enabling irqs, unlike bh/preempt.
-			raise_softirq_irqoff(RCU_SOFTIRQ);
-		} else {
-			// Enabling BH or preempt does reschedule, so...
-			// Also if no expediting or NO_HZ_FULL, slow is OK.
-			set_tsk_need_resched(current);
-			set_preempt_need_resched();
-			if (IS_ENABLED(CONFIG_IRQ_WORK) && irqs_were_disabled &&
-			    !rdp->defer_qs_iw_pending && exp) {
-				// Get scheduler to re-evaluate and call hooks.
-				// If !IRQ_WORK, FQS scan will eventually IPI.
-				init_irq_work(&rdp->defer_qs_iw,
-					      rcu_preempt_deferred_qs_handler);
-				rdp->defer_qs_iw_pending = true;
-				irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
-			}
+		set_tsk_need_resched(current);
+		set_preempt_need_resched();
+		if (IS_ENABLED(CONFIG_IRQ_WORK) && irqs_were_disabled &&
+		    !rdp->defer_qs_iw_pending && exp) {
+			// Get scheduler to re-evaluate and call hooks.
+			// If !IRQ_WORK, FQS scan will eventually IPI.
+			init_irq_work(&rdp->defer_qs_iw,
+				      rcu_preempt_deferred_qs_handler);
+			rdp->defer_qs_iw_pending = true;
+			irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
 		}
 		t->rcu_read_unlock_special.b.deferred_qs = true;
 		local_irq_restore(flags);