Message-Id: <20190619011908.25026-4-swood@redhat.com>
Date: Tue, 18 Jun 2019 20:19:07 -0500
From: Scott Wood <swood@...hat.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: "Paul E. McKenney" <paulmck@...ux.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
Juri Lelli <juri.lelli@...hat.com>,
Clark Williams <williams@...hat.com>,
linux-rt-users@...r.kernel.org, linux-kernel@...r.kernel.org,
Scott Wood <swood@...hat.com>
Subject: [RFC PATCH RT 3/4] rcu: unlock special: Treat irq and preempt disabled the same

[Note: Just before posting this I noticed that the invoke_rcu_core stuff
is part of the latest RCU pull request, and it has a patch that
addresses this in a more complicated way that appears to deal with the
bare irq-disabled sequence as well.

Assuming we need/want to support such sequences, is the
invoke_rcu_core() call actually going to result in scheduling any
sooner? resched_curr() just does the same setting of need_resched
when it's the same cpu.
]

Since the special flags should never be set inside an irqs-disabled
critical section, this is safe as long as there are no sequences of
rcu_read_lock()/local_irq_disable()/rcu_read_unlock()/local_irq_enable()
(without preempt_disable() wrapped around the IRQ disabling, as spinlocks
do). If such sequences do exist, the grace period may be delayed until
the next time need_resched is checked.

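A rough sketch of the kind of bare irq-disabled sequence in question
(illustrative only; do_something() is a placeholder, not real kernel code):

```c
rcu_read_lock();
local_irq_disable();    /* bare: no enclosing preempt_disable() */
do_something();
rcu_read_unlock();      /* QS must be deferred: irqs are disabled here */
local_irq_enable();     /* enabling irqs does not recheck need_resched,
                         * so the deferred quiescent state waits until
                         * the next need_resched check */
```
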
This is needed because otherwise, in a sequence such as:
1. rcu_read_lock()
2. *preempt*, set rcu_read_unlock_special.b.blocked
3. preempt_disable()
4. rcu_read_unlock()
5. preempt_enable()
...rcu_read_unlock_special.b.blocked will not be cleared during
step 4, because of the disabled preemption. If an interrupt is then
taken between steps 4 and 5, and that interrupt enters scheduler code
that takes pi/rq locks, and an rcu read lock inside that, then when
dropping that rcu read lock we will end up in rcu_read_unlock_special()
again -- but this time, since irqs are disabled, it will call
invoke_rcu_core() in the RT tree (regardless of PREEMPT_RT_FULL), which
calls wake_up_process(). This can cause a pi/rq lock deadlock. An
example of interrupt code that does this is scheduler_tick().

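Put together, the interleaving described above looks roughly like this
(an illustration of the commit-message steps, with the interrupt shown
as a comment; not a verbatim trace from the kernel):

```c
rcu_read_lock();                /* step 1 */
/* step 2: preempted here; rcu_read_unlock_special.b.blocked is set */
preempt_disable();              /* step 3 */
rcu_read_unlock();              /* step 4: blocked is NOT cleared,
                                 * because preemption is disabled */
/*
 * Interrupt taken here (e.g. scheduler_tick()): it acquires pi/rq
 * locks, enters an RCU read-side section, and its rcu_read_unlock()
 * runs rcu_read_unlock_special() with irqs disabled.  In the RT tree
 * that path calls invoke_rcu_core() -> wake_up_process(), which tries
 * to take the pi/rq locks already held -> deadlock.
 */
preempt_enable();               /* step 5 */
```
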
The above sequence can be found in (at least) __lock_task_sighand() (for
!PREEMPT_RT_FULL) and d_alloc_parallel().

It's potentially an issue on non-RT as well. While
raise_softirq_irqoff() doesn't call wake_up_process() when in_interrupt()
is true, if code between steps 4 and 5 directly calls into scheduler
code, and that code uses RCU with pi/rq lock held, wake_up_process() can
still be called.

On RT, migrate_enable() is such a codepath, so an in_interrupt() check
alone would not work on RT. Instead, keep track of whether we've already
had an rcu_read_unlock_special() with preemption disabled but haven't yet
scheduled, and rely on the preempt_enable() yet to come instead of
calling invoke_rcu_core().

Signed-off-by: Scott Wood <swood@...hat.com>
---
kernel/rcu/tree_plugin.h | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 5d63914b3687..d7ddbcc7231c 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -630,14 +630,8 @@ static void rcu_read_unlock_special(struct task_struct *t)
if (preempt_bh_were_disabled || irqs_were_disabled) {
WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, false);
/* Need to defer quiescent state until everything is enabled. */
- if (irqs_were_disabled) {
- /* Enabling irqs does not reschedule, so... */
- invoke_rcu_core();
- } else {
- /* Enabling BH or preempt does reschedule, so... */
- set_tsk_need_resched(current);
- set_preempt_need_resched();
- }
+ set_tsk_need_resched(current);
+ set_preempt_need_resched();
local_irq_restore(flags);
return;
}
--
1.8.3.1