Message-Id: <20241110144747.21379-1-zilinguan811@gmail.com>
Date: Sun, 10 Nov 2024 14:47:47 +0000
From: Zilin Guan <zilinguan811@...il.com>
To: paulmck@...nel.org
Cc: boqun.feng@...il.com,
frederic@...nel.org,
jiangshanlai@...il.com,
joel@...lfernandes.org,
josh@...htriplett.org,
linux-kernel@...r.kernel.org,
mathieu.desnoyers@...icios.com,
neeraj.upadhyay@...nel.org,
qiang.zhang1211@...il.com,
rcu@...r.kernel.org,
rostedt@...dmis.org,
urezki@...il.com,
zilinguan811@...il.com,
xujianhao01@...il.com
Subject: [PATCH v2] rcu: Remove READ_ONCE() for rdp->gpwrap access in __note_gp_changes()
There is one access to rdp->gpwrap in the __note_gp_changes() function
that does not use READ_ONCE(), while the other accesses to rdp->gpwrap
in that function do. When testing with 8*TREE03 and CONFIG_NR_CPUS=8,
KCSAN detected no data races at this unmarked access. This is because
callers of __note_gp_changes() hold rnp->lock, which excludes writes to
the rdp->gpwrap fields of all CPUs associated with that leaf rcu_node
structure.

Therefore, the READ_ONCE() wrappers around the rdp->gpwrap accesses in
__note_gp_changes() are unnecessary, so remove them.
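
For reference, below is a lightly paraphrased sketch of one caller,
note_gp_changes() in kernel/rcu/tree.c, showing that rnp->lock is taken
before __note_gp_changes() is invoked. This excerpt is illustrative
only (not part of this patch) and may differ in detail across kernel
versions:

static void note_gp_changes(struct rcu_data *rdp)
{
	unsigned long flags;
	bool needwake;
	struct rcu_node *rnp;

	local_irq_save(flags);
	rnp = rdp->mynode;
	/* Lockless fastpath check; this read still needs READ_ONCE(). */
	if ((rdp->gp_seq == rcu_seq_current(&rnp->gp_seq) &&
	     !unlikely(READ_ONCE(rdp->gpwrap))) ||
	    !raw_spin_trylock_rcu_node(rnp)) {
		local_irq_restore(flags);
		return;
	}
	/* rnp->lock is held here, excluding writes to rdp->gpwrap. */
	needwake = __note_gp_changes(rnp, rdp);
	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
	if (needwake)
		rcu_gp_kthread_wake();
}

The READ_ONCE() in the lockless check above is retained precisely
because that read happens before rnp->lock is acquired; only the reads
performed under the lock, inside __note_gp_changes(), can drop it.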
Signed-off-by: Zilin Guan <zilinguan811@...il.com>
---
v1 -> v2: Remove READ_ONCE() from accesses to rdp->gpwrap.
kernel/rcu/tree.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 77b5b39e19a8..68eb0f746575 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1275,7 +1275,7 @@ static bool __note_gp_changes(struct rcu_node *rnp, struct rcu_data *rdp)
 
 	/* Handle the ends of any preceding grace periods first. */
 	if (rcu_seq_completed_gp(rdp->gp_seq, rnp->gp_seq) ||
-	    unlikely(READ_ONCE(rdp->gpwrap))) {
+	    unlikely(rdp->gpwrap)) {
 		if (!offloaded)
 			ret = rcu_advance_cbs(rnp, rdp); /* Advance CBs. */
 		rdp->core_needs_qs = false;
@@ -1289,7 +1289,7 @@ static bool __note_gp_changes(struct rcu_node *rnp, struct rcu_data *rdp)
 
 	/* Now handle the beginnings of any new-to-this-CPU grace periods. */
 	if (rcu_seq_new_gp(rdp->gp_seq, rnp->gp_seq) ||
-	    unlikely(READ_ONCE(rdp->gpwrap))) {
+	    unlikely(rdp->gpwrap)) {
 		/*
 		 * If the current grace period is waiting for this CPU,
 		 * set up to detect a quiescent state, otherwise don't
@@ -1304,7 +1304,7 @@ static bool __note_gp_changes(struct rcu_node *rnp, struct rcu_data *rdp)
 	rdp->gp_seq = rnp->gp_seq; /* Remember new grace-period state. */
 	if (ULONG_CMP_LT(rdp->gp_seq_needed, rnp->gp_seq_needed) || rdp->gpwrap)
 		WRITE_ONCE(rdp->gp_seq_needed, rnp->gp_seq_needed);
-	if (IS_ENABLED(CONFIG_PROVE_RCU) && READ_ONCE(rdp->gpwrap))
+	if (IS_ENABLED(CONFIG_PROVE_RCU) && rdp->gpwrap)
 		WRITE_ONCE(rdp->last_sched_clock, jiffies);
 	WRITE_ONCE(rdp->gpwrap, false);
 	rcu_gpnum_ovf(rnp, rdp);
--
2.34.1