Message-ID: <20250705203918.4149863-3-joelagnelf@nvidia.com>
Date: Sat, 5 Jul 2025 16:39:17 -0400
From: Joel Fernandes <joelagnelf@...dia.com>
To: linux-kernel@...r.kernel.org,
"Paul E. McKenney" <paulmck@...nel.org>,
Frederic Weisbecker <frederic@...nel.org>,
Neeraj Upadhyay <neeraj.upadhyay@...nel.org>,
Joel Fernandes <joelagnelf@...dia.com>,
Josh Triplett <josh@...htriplett.org>,
Boqun Feng <boqun.feng@...il.com>,
Uladzislau Rezki <urezki@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Zqiang <qiang.zhang@...ux.dev>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Clark Williams <clrkwllms@...nel.org>
Cc: rcu@...r.kernel.org,
linux-rt-devel@...ts.linux.dev
Subject: [PATCH RFC 3/3] rcu: Remove redundant check for irq state during unlock
The check for irqs_were_disabled in rcu_unlock_needs_exp_handling() is
redundant, as the caller already performs it; this includes the boost
case as well. Remove the redundant check.

This is a first win for refactoring the needs_exp (formerly expboost)
condition into the new rcu_unlock_needs_exp_handling() function, as the
conditions become easier to read.
Signed-off-by: Joel Fernandes <joelagnelf@...dia.com>
---
kernel/rcu/tree_plugin.h | 22 ++++++++++------------
1 file changed, 10 insertions(+), 12 deletions(-)
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 8504d95bb35b..112973ecebb8 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -659,14 +659,12 @@ static void rcu_preempt_deferred_qs_handler(struct irq_work *iwp)
* @t: The task being checked
* @rdp: The per-CPU RCU data
* @rnp: The RCU node for this CPU
- * @irqs_were_disabled: Whether interrupts were disabled before rcu_read_unlock()
*
* Returns true if expedited processing of the rcu_read_unlock() is needed.
*/
static bool rcu_unlock_needs_exp_handling(struct task_struct *t,
struct rcu_data *rdp,
- struct rcu_node *rnp,
- bool irqs_were_disabled)
+ struct rcu_node *rnp)
{
/*
* Check if this task is blocking an expedited grace period.
@@ -692,7 +690,7 @@ static bool rcu_unlock_needs_exp_handling(struct task_struct *t,
* disturbing the system more. Check if either:
* - This CPU has not yet reported a quiescent state, or
* - This task was preempted within an RCU critical section
- * In either case, requird expedited handling for strict GP mode.
+ * In either case, require expedited handling for strict GP mode.
*/
if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) &&
((rdp->grpmask & READ_ONCE(rnp->qsmask)) || t->rcu_blocked_node))
@@ -700,14 +698,14 @@ static bool rcu_unlock_needs_exp_handling(struct task_struct *t,
/*
* RCU priority boosting case: If a task is subject to RCU priority
- * boosting and exits an RCU read-side critical section with interrupts
- * disabled, we need expedited handling to ensure timely deboosting.
- * Without this, a low-priority task could incorrectly run at high
- * real-time priority for an extended period effecting real-time
- * responsiveness. This applies to all CONFIG_RCU_BOOST=y kernels,
- * not just PREEMPT_RT.
+ * boosting and exits an RCU read-side critical section, we need
+ * expedited handling to ensure timely deboosting. Without this,
+ * a low-priority task could incorrectly run at high real-time
+ * priority for an extended period affecting real-time
+ * responsiveness. This applies to all RCU_BOOST=y kernels,
+ * not just to PREEMPT_RT.
*/
- if (IS_ENABLED(CONFIG_RCU_BOOST) && irqs_were_disabled && t->rcu_blocked_node)
+ if (IS_ENABLED(CONFIG_RCU_BOOST) && t->rcu_blocked_node)
return true;
return false;
@@ -736,7 +734,7 @@ static void rcu_read_unlock_special(struct task_struct *t)
struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
struct rcu_node *rnp = rdp->mynode;
- needs_exp = rcu_unlock_needs_exp_handling(t, rdp, rnp, irqs_were_disabled);
+ needs_exp = rcu_unlock_needs_exp_handling(t, rdp, rnp);
// Need to defer quiescent state until everything is enabled.
if (use_softirq && (in_hardirq() || (needs_exp && !irqs_were_disabled))) {
--
2.43.0