Message-Id: <20260103002343.6599-4-joelagnelf@nvidia.com>
Date: Fri, 2 Jan 2026 19:23:32 -0500
From: Joel Fernandes <joelagnelf@...dia.com>
To: linux-kernel@...r.kernel.org
Cc: "Paul E . McKenney" <paulmck@...nel.org>,
Frederic Weisbecker <frederic@...nel.org>,
Neeraj Upadhyay <neeraj.upadhyay@...nel.org>,
Joel Fernandes <joelagnelf@...dia.com>,
Josh Triplett <josh@...htriplett.org>,
Boqun Feng <boqun.feng@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Zqiang <qiang.zhang@...ux.dev>,
Uladzislau Rezki <urezki@...il.com>,
joel@...lfernandes.org,
rcu@...r.kernel.org
Subject: [PATCH RFC 03/14] rcu: Early return during unlock for tasks only on per-CPU blocked list

Add a check for t->rcu_blocked_node being NULL after removing the task
from the per-CPU blocked list. If it is NULL, the task was only on the
per-CPU list and not on the rcu_node's blkd_tasks list, so the rnp lock
acquisition and quiescent-state reporting can be skipped entirely.

Currently this path is never taken, since tasks are always added to both
lists. It prepares for a future optimization in which tasks are initially
added only to the per-CPU list and are promoted to the rnp list only when
a grace period needs to wait for them.
Signed-off-by: Joel Fernandes <joelagnelf@...dia.com>
---
kernel/rcu/tree_plugin.h | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 5d2bde19131a..ee26e87c72f8 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -549,6 +549,22 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
list_del_init(&t->rcu_rdp_entry);
t->rcu_blocked_cpu = -1;
raw_spin_unlock(&blocked_rdp->blkd_lock);
+ /*
+ * TODO: After the later patches, this should simply be
+ * "WARN_ON_ONCE(rnp); return;", since a task can then be on either
+ * the rdp list or the rnp list, never both. A task that was on the
+ * per-CPU list cannot be on the rnp list, so we realize the benefit
+ * of this series by removing it from the rdp list and returning early.
+ */
+ if (!rnp) {
+ /*
+ * Task was only on per-CPU list, not on rnp list.
+ * This can happen in the future, when tasks are added
+ * only to the rdp initially and promoted to the rnp later.
+ */
+ local_irq_restore(flags);
+ return;
+ }
}
#endif
raw_spin_lock_rcu_node(rnp); /* irqs already disabled. */
--
2.34.1