Message-Id: <20191003013903.13079-6-paulmck@kernel.org>
Date: Wed, 2 Oct 2019 18:38:57 -0700
From: paulmck@...nel.org
To: rcu@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org,
"Paul E. McKenney" <paulmck@...ux.ibm.com>
Subject: [PATCH tip/core/rcu 06/12] rcu: Make CPU-hotplug removal operations enable tick

From: "Paul E. McKenney" <paulmck@...ux.ibm.com>

CPU-hotplug removal operations run the multi_cpu_stop() function, which
relies on the scheduler to gain control from whatever is running on the
various online CPUs, including any nohz_full CPUs running long loops in
kernel-mode code. Lack of the scheduler-clock interrupt on such CPUs
can delay multi_cpu_stop() for several minutes and can also result in
RCU CPU stall warnings. This commit therefore causes CPU-hotplug removal
operations to enable the scheduler-clock interrupt on all online CPUs.

[ paulmck: Apply Joel Fernandes TICK_DEP_MASK_RCU->TICK_DEP_BIT_RCU fix. ]
Signed-off-by: Paul E. McKenney <paulmck@...ux.ibm.com>
---
 kernel/rcu/tree.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

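Reviewer note (not part of the patch): the hunks below use the
tick-dependency API to keep the scheduler-clock tick running on all
online CPUs while a CPU-hotplug removal is in flight, setting the
dependency in rcutree_offline_cpu() and clearing it again in
rcutree_dead_cpu() and rcutree_online_cpu().  The sketch below shows
the same set/clear pairing in isolation; it assumes TICK_DEP_BIT_RCU
from earlier in this series, and the example_* helper names are
hypothetical, used only for illustration.

#include <linux/cpumask.h>
#include <linux/tick.h>

/* Hypothetical helper: force the tick on so that nohz_full CPUs
 * running in the kernel can be preempted promptly by stop-machine. */
static void example_rcu_tick_dep_set_all(void)
{
	int cpu;

	for_each_online_cpu(cpu)
		tick_dep_set_cpu(cpu, TICK_DEP_BIT_RCU);
}

/* Hypothetical helper: stop-machine has completed, so allow nohz_full
 * CPUs to turn their tick back off. */
static void example_rcu_tick_dep_clear_all(void)
{
	int cpu;

	for_each_online_cpu(cpu)
		tick_dep_clear_cpu(cpu, TICK_DEP_BIT_RCU);
}
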
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index f708d54..74bf5c65 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2091,6 +2091,7 @@ static void rcu_cleanup_dead_rnp(struct rcu_node *rnp_leaf)
  */
 int rcutree_dead_cpu(unsigned int cpu)
 {
+	int c;
 	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 	struct rcu_node *rnp = rdp->mynode;  /* Outgoing CPU's rdp & rnp. */
 
@@ -2101,6 +2102,10 @@ int rcutree_dead_cpu(unsigned int cpu)
 	rcu_boost_kthread_setaffinity(rnp, -1);
 	/* Do any needed no-CB deferred wakeups from this CPU. */
 	do_nocb_deferred_wakeup(per_cpu_ptr(&rcu_data, cpu));
+
+	// Stop-machine done, so allow nohz_full to disable tick.
+	for_each_online_cpu(c)
+		tick_dep_clear_cpu(c, TICK_DEP_BIT_RCU);
 	return 0;
 }
 
@@ -3074,6 +3079,7 @@ static void rcutree_affinity_setting(unsigned int cpu, int outgoing)
  */
 int rcutree_online_cpu(unsigned int cpu)
 {
+	int c;
 	unsigned long flags;
 	struct rcu_data *rdp;
 	struct rcu_node *rnp;
@@ -3087,6 +3093,10 @@ int rcutree_online_cpu(unsigned int cpu)
 		return 0; /* Too early in boot for scheduler work. */
 	sync_sched_exp_online_cleanup(cpu);
 	rcutree_affinity_setting(cpu, -1);
+
+	// Stop-machine done, so allow nohz_full to disable tick.
+	for_each_online_cpu(c)
+		tick_dep_clear_cpu(c, TICK_DEP_BIT_RCU);
 	return 0;
 }
 
@@ -3096,6 +3106,7 @@ int rcutree_online_cpu(unsigned int cpu)
  */
 int rcutree_offline_cpu(unsigned int cpu)
 {
+	int c;
 	unsigned long flags;
 	struct rcu_data *rdp;
 	struct rcu_node *rnp;
@@ -3107,6 +3118,10 @@ int rcutree_offline_cpu(unsigned int cpu)
 	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 
 	rcutree_affinity_setting(cpu, cpu);
+
+	// nohz_full CPUs need the tick for stop-machine to work quickly
+	for_each_online_cpu(c)
+		tick_dep_set_cpu(c, TICK_DEP_BIT_RCU);
 	return 0;
 }
 
--
2.9.5