Message-Id: <20220204225507.4193113-3-paulmck@kernel.org>
Date: Fri, 4 Feb 2022 14:55:07 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: rcu@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, kernel-team@...com,
rostedt@...dmis.org, "Paul E. McKenney" <paulmck@...nel.org>,
Mukesh Ojha <quic_mojha@...cinc.com>, Tejun Heo <tj@...nel.org>
Subject: [PATCH rcu 3/3] rcu: Allow expedited RCU grace periods on incoming CPUs
Although it is usually safe to invoke synchronize_rcu_expedited() from a
preemption-enabled CPU-hotplug notifier, if it is invoked from a notifier
between CPUHP_AP_RCUTREE_ONLINE and CPUHP_AP_ACTIVE, its attempts to
invoke a workqueue handler will hang due to RCU waiting on a CPU that
the scheduler is not paying attention to. This commit therefore expands
use of the existing workqueue-independent code path in
synchronize_rcu_expedited() from early boot to also cover incoming CPUs
during hotplug.
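
For illustration only (not part of this patch), here is a minimal
sketch of the kind of notifier that can trigger the hang. It assumes a
hypothetical module whose CPUHP_AP_ONLINE_DYN startup callback (the
dynamic AP online states sit between CPUHP_AP_RCUTREE_ONLINE and
CPUHP_AP_ACTIVE) runs on the incoming CPU and calls
synchronize_rcu_expedited(); the example_* names are made up for this
sketch.

/* Hypothetical illustration, not part of this patch. */
#include <linux/cpuhotplug.h>
#include <linux/module.h>
#include <linux/rcupdate.h>

/*
 * Startup callback: runs on the incoming CPU after
 * CPUHP_AP_RCUTREE_ONLINE but before CPUHP_AP_ACTIVE, so the CPU is
 * not yet in cpu_active_mask.
 */
static int example_cpu_online(unsigned int cpu)
{
	/*
	 * Before this patch, the expedited grace period was driven
	 * from a workqueue here, and RCU could end up waiting on a CPU
	 * that the scheduler was not yet paying attention to.
	 */
	synchronize_rcu_expedited();
	return 0;
}

static int __init example_init(void)
{
	int ret;

	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "example:online",
				example_cpu_online, NULL);
	return ret < 0 ? ret : 0;
}
module_init(example_init);

MODULE_LICENSE("GPL");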
Link: https://lore.kernel.org/lkml/7359f994-8aaf-3cea-f5cf-c0d3929689d6@quicinc.com/
Reported-by: Mukesh Ojha <quic_mojha@...cinc.com>
Cc: Tejun Heo <tj@...nel.org>
Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
---
kernel/rcu/tree_exp.h | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 60197ea24ceb9..1a45667402260 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -816,7 +816,7 @@ static int rcu_print_task_exp_stall(struct rcu_node *rnp)
  */
 void synchronize_rcu_expedited(void)
 {
-	bool boottime = (rcu_scheduler_active == RCU_SCHEDULER_INIT);
+	bool no_wq;
 	struct rcu_exp_work rew;
 	struct rcu_node *rnp;
 	unsigned long s;
@@ -841,9 +841,15 @@ void synchronize_rcu_expedited(void)
 	if (exp_funnel_lock(s))
 		return; /* Someone else did our work for us. */
 
+	/* Don't use workqueue during boot or from an incoming CPU. */
+	preempt_disable();
+	no_wq = rcu_scheduler_active == RCU_SCHEDULER_INIT ||
+		!cpumask_test_cpu(smp_processor_id(), cpu_active_mask);
+	preempt_enable();
+
 	/* Ensure that load happens before action based on it. */
-	if (unlikely(boottime)) {
-		/* Direct call during scheduler init and early_initcalls(). */
+	if (unlikely(no_wq)) {
+		/* Direct call for scheduler init, early_initcall()s, and incoming CPUs. */
 		rcu_exp_sel_wait_wake(s);
 	} else {
 		/* Marshall arguments & schedule the expedited grace period. */
@@ -861,7 +867,7 @@ void synchronize_rcu_expedited(void)
 	/* Let the next expedited grace period start. */
 	mutex_unlock(&rcu_state.exp_mutex);
 
-	if (likely(!boottime))
+	if (likely(!no_wq))
 		destroy_work_on_stack(&rew.rew_work);
 }
 EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);
--
2.31.1.189.g2e36527f23