Message-ID: <20240517152303.19689-3-frederic@kernel.org>
Date: Fri, 17 May 2024 17:23:03 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Frederic Weisbecker <frederic@...nel.org>,
	Boqun Feng <boqun.feng@...il.com>,
	Joel Fernandes <joel@...lfernandes.org>,
	Neeraj Upadhyay <neeraj.upadhyay@....com>,
	Uladzislau Rezki <urezki@...il.com>,
	Zqiang <qiang.zhang1211@...il.com>,
	rcu <rcu@...r.kernel.org>,
	"Paul E . McKenney" <paulmck@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>
Subject: [PATCH 2/2] rcu/tasks: Further comment ordering around current task snapshot on TASK-TRACE

Comment the current understanding of the role that barriers and locking
play around the current task snapshot.

Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
---
 kernel/rcu/tasks.h | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 6a9ee35a282e..05413b37dd6e 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -1738,9 +1738,21 @@ static void rcu_tasks_trace_pregp_step(struct list_head *hop)
 	for_each_online_cpu(cpu) {
 		rcu_read_lock();
 		/*
-		 * RQ must be locked because no ordering exists/can be relied upon
-		 * between rq->curr write and subsequent read sides. This ensures that
-		 * further context switching tasks will see update side pre-GP accesses.
+		 * The RQ lock + smp_mb__after_spinlock() before reading rq->curr serve
+		 * three purposes:
+		 *
+		 * 1) Ordering against previous tasks' accesses (though this is already
+		 *    enforced by the upcoming IPIs and the post-GP synchronize_rcu()).
+		 *
+		 * 2) Make sure not to miss the latest context switch, because no ordering
+		 *    exists, nor can any be relied upon, between the rq->curr write and
+		 *    the subsequent read sides.
+		 *
+		 * 3) Make sure that tasks context switching afterwards will see the
+		 *    update side's pre-GP accesses.
+		 *
+		 * The smp_mb() after reading rq->curr doesn't play a significant role and
+		 * might be considered for removal in the future.
 		 */
 		t = cpu_curr_snapshot(cpu);
 		if (rcu_tasks_trace_pertask_prep(t, true))
-- 
2.44.0
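
[Editor's note, not part of the patch] The three-purpose comment added above can
be modelled outside the kernel. Below is a minimal userspace sketch, not kernel
code: the names context_switch(), pregp_snapshot(), rq_lock, curr_task and
pre_gp_flag are illustrative stand-ins I introduce here, and a pthread mutex only
approximates the rq lock + smp_mb__after_spinlock() pairing that cpu_curr_snapshot()
relies on in the real code.

/*
 * Userspace model (an assumption-laden sketch, not kernel code) of the
 * ordering argument: rq_lock stands in for the runqueue lock, curr_task
 * for rq->curr, and pre_gp_flag for an update-side pre-GP store.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;
static int curr_task   = 0;	/* stand-in for rq->curr */
static int pre_gp_flag = 0;	/* stand-in for an update-side pre-GP store */

/* Models a context switch: install the new "current" task under the lock.
 * If this critical section runs after the snapshot below, the lock's
 * acquire/release ordering guarantees that everything the incoming task
 * does afterwards is ordered after the pre_gp_flag store (purpose 3). */
static void *context_switch(void *arg)
{
	pthread_mutex_lock(&rq_lock);
	curr_task = 1;
	pthread_mutex_unlock(&rq_lock);
	return NULL;
}

/* Models the pre-GP snapshot: publish the pre-GP state, then read the
 * current task under the lock. Because both sides serialize on rq_lock,
 * the reader either observes the latest "context switch" (purpose 2) or
 * runs before it, in which case the switching side inherits the ordering
 * against pre_gp_flag via the lock handoff (purpose 3). */
static void *pregp_snapshot(void *arg)
{
	int snap;

	pre_gp_flag = 1;		/* update-side pre-GP access */
	pthread_mutex_lock(&rq_lock);	/* role of rq lock + smp_mb__after_spinlock() */
	snap = curr_task;		/* role of cpu_curr_snapshot(cpu) */
	pthread_mutex_unlock(&rq_lock);
	printf("snapshotted task %d\n", snap);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, context_switch, NULL);
	pthread_create(&b, NULL, pregp_snapshot, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Built with gcc -pthread, whichever thread wins the lock first, the snapshot
either sees the latest "context switch" or the switching side is ordered after
the pre-GP store; the sketch does not attempt to model purpose 1 (the IPIs and
post-GP synchronize_rcu()).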

