Message-ID: <157252291178.29376.6993443685603243726.tip-bot2@tip-bot2>
Date: Thu, 31 Oct 2019 11:55:11 -0000
From: "tip-bot2 for Paul E. McKenney" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: "Paul E. McKenney" <paulmck@...nel.org>,
Ingo Molnar <mingo@...nel.org>, Borislav Petkov <bp@...en8.de>,
linux-kernel@...r.kernel.org
Subject: [tip: core/rcu] rcu: Force on tick when invoking lots of callbacks

The following commit has been merged into the core/rcu branch of tip:

Commit-ID: 6a949b7af82db7eb1e52caaed122eab1cf63acee
Gitweb: https://git.kernel.org/tip/6a949b7af82db7eb1e52caaed122eab1cf63acee
Author: Paul E. McKenney <paulmck@...nel.org>
AuthorDate: Sun, 28 Jul 2019 11:50:56 -07:00
Committer: Paul E. McKenney <paulmck@...nel.org>
CommitterDate: Sat, 05 Oct 2019 10:46:04 -07:00

rcu: Force on tick when invoking lots of callbacks

Callback invocation can run for a significant time period, and within
CONFIG_NO_HZ_FULL=y kernels, this period will be devoid of scheduler-clock
interrupts. In-kernel execution without such interrupts can cause all
manner of malfunction, with RCU CPU stall warnings being but one result.
This commit therefore forces scheduling-clock interrupts on whenever more
than a few RCU callbacks are invoked. Because offloaded callback invocation
can be preempted, this forcing is withdrawn on each context switch. This
in turn requires that the loop invoking RCU callbacks reinstate the forcing
periodically.

[ paulmck: Apply Joel Fernandes TICK_DEP_MASK_RCU->TICK_DEP_BIT_RCU fix. ]
[ paulmck: Remove NO_HZ_FULL check per Frederic Weisbecker feedback. ]

Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
---
 kernel/rcu/tree.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 8110514..238f93b 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2151,6 +2151,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	rcu_nocb_unlock_irqrestore(rdp, flags);
 
 	/* Invoke callbacks. */
+	tick_dep_set_task(current, TICK_DEP_BIT_RCU);
 	rhp = rcu_cblist_dequeue(&rcl);
 	for (; rhp; rhp = rcu_cblist_dequeue(&rcl)) {
 		debug_rcu_head_unqueue(rhp);
@@ -2217,6 +2218,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	/* Re-invoke RCU core processing if there are callbacks remaining. */
 	if (!offloaded && rcu_segcblist_ready_cbs(&rdp->cblist))
 		invoke_rcu_core();
+	tick_dep_clear_task(current, TICK_DEP_BIT_RCU);
 }
 
 /*
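
A minimal, self-contained sketch of the tick-dependency pattern the patch
relies on follows. It is illustrative only: the demo_cb structure, the
callback-list handling, and the every-16-callbacks re-set threshold are
invented for the example (the real logic lives in rcu_do_batch() in
kernel/rcu/tree.c); only tick_dep_set_task(), tick_dep_clear_task(),
TICK_DEP_BIT_RCU, and current are actual kernel interfaces.

/* Illustrative sketch only -- not the code from this patch. */
#include <linux/tick.h>		/* tick_dep_set_task(), tick_dep_clear_task() */
#include <linux/sched.h>	/* struct task_struct, current */

/* Hypothetical stand-in for a list of pending callbacks. */
struct demo_cb {
	struct demo_cb *next;
	void (*func)(struct demo_cb *cb);
};

static void demo_invoke_callbacks(struct demo_cb *head)
{
	struct demo_cb *cb;
	int count = 0;

	/* Keep scheduling-clock interrupts on while this task runs callbacks. */
	tick_dep_set_task(current, TICK_DEP_BIT_RCU);

	for (cb = head; cb; cb = cb->next) {
		cb->func(cb);

		/*
		 * Per the changelog, the forcing is withdrawn on each
		 * context switch, so a long-running loop re-sets it
		 * periodically (the threshold here is arbitrary).
		 */
		if (++count % 16 == 0)
			tick_dep_set_task(current, TICK_DEP_BIT_RCU);
	}

	/* Callback invocation is done; allow the tick to be turned off again. */
	tick_dep_clear_task(current, TICK_DEP_BIT_RCU);
}

The periodic re-set is the point of the changelog's last sentence: because a
context switch withdraws the forcing, a long callback-invocation loop cannot
rely on a single tick_dep_set_task() call at the top.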