Open Source and information security mailing list archives
 
Message-Id: <20260103002343.6599-13-joelagnelf@nvidia.com>
Date: Fri,  2 Jan 2026 19:23:41 -0500
From: Joel Fernandes <joelagnelf@...dia.com>
To: linux-kernel@...r.kernel.org
Cc: "Paul E . McKenney" <paulmck@...nel.org>,
	Frederic Weisbecker <frederic@...nel.org>,
	Neeraj Upadhyay <neeraj.upadhyay@...nel.org>,
	Joel Fernandes <joelagnelf@...dia.com>,
	Josh Triplett <josh@...htriplett.org>,
	Boqun Feng <boqun.feng@...il.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	Lai Jiangshan <jiangshanlai@...il.com>,
	Zqiang <qiang.zhang@...ux.dev>,
	Uladzislau Rezki <urezki@...il.com>,
	joel@...lfernandes.org,
	rcu@...r.kernel.org
Subject: [PATCH RFC 12/14] rcu: Skip per-CPU list addition when GP already started

When a grace period is already started or waiting on this CPU, skip
adding the blocked task to the per-CPU rdp->blkd_list. The task goes
directly to rnp->blkd_tasks via rcu_preempt_ctxt_queue(), which is the
same behavior as before per-CPU lists were added.

When no GP is waiting, add the task to both lists, as before this patch.
This preserves the existing behavior while preparing for the next patch,
which will skip the rnp blocked-list addition when no GP is waiting.

Because the rnp->blkd_tasks handling remains unchanged (tasks still go
through rcu_preempt_ctxt_queue() in all cases), this works the same as
before this patch.

Signed-off-by: Joel Fernandes <joelagnelf@...dia.com>
---
 kernel/rcu/tree_plugin.h | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 8622e79660ed..d43dd153c152 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -339,10 +339,18 @@ void rcu_note_context_switch(bool preempt)
 		t->rcu_read_unlock_special.b.blocked = true;
 		t->rcu_blocked_node = rnp;
 #ifdef CONFIG_RCU_PER_CPU_BLOCKED_LISTS
-		t->rcu_blocked_cpu = rdp->cpu;
-		raw_spin_lock(&rdp->blkd_lock);
-		list_add(&t->rcu_rdp_entry, &rdp->blkd_list);
-		raw_spin_unlock(&rdp->blkd_lock);
+		/*
+		 * If no GP is waiting on this CPU, add to per-CPU list as well
+		 * so promotion can find it if a GP starts later. If GP waiting,
+		 * skip per-CPU list - task goes only to rnp->blkd_tasks (same
+		 * behavior as before per-CPU lists were added).
+		 */
+		if (!rcu_gp_in_progress() && !rdp->cpu_no_qs.b.norm && !rdp->cpu_no_qs.b.exp) {
+			t->rcu_blocked_cpu = rdp->cpu;
+			raw_spin_lock(&rdp->blkd_lock);
+			list_add(&t->rcu_rdp_entry, &rdp->blkd_list);
+			raw_spin_unlock(&rdp->blkd_lock);
+		}
 #endif
 
 		/*
-- 
2.34.1

