Message-ID: <0a2518eb-5547-4609-9254-fd98a317b5d8@paulmck-laptop>
Date: Tue, 10 Dec 2024 12:56:17 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Zilin Guan <zilinguan811@...il.com>
Cc: boqun.feng@...il.com, frederic@...nel.org, jiangshanlai@...il.com,
joel@...lfernandes.org, josh@...htriplett.org,
linux-kernel@...r.kernel.org, mathieu.desnoyers@...icios.com,
neeraj.upadhyay@...nel.org, qiang.zhang1211@...il.com,
rcu@...r.kernel.org, rostedt@...dmis.org, urezki@...il.com,
xujianhao01@...il.com
Subject: Re: [PATCH v2] rcu: Remove READ_ONCE() for rdp->gpwrap access in
__note_gp_changes()
On Sun, Nov 10, 2024 at 02:47:47PM +0000, Zilin Guan wrote:
> There is one access to rdp->gpwrap in the __note_gp_changes() function
> that does not use READ_ONCE() for protection, while other accesses to
> rdp->gpwrap do use READ_ONCE(). When using the 8*TREE03 and
> CONFIG_NR_CPUS=8 configuration, KCSAN found no data races at that point.
> This is because other functions should hold rnp->lock when calling
> __note_gp_changes(), which ensures proper exclusion of writes to the
> rdp->gpwrap fields for all CPUs associated with that leaf rcu_node
> structure.
>
> Therefore, using READ_ONCE() to protect rdp->gpwrap within the
> __note_gp_changes() function is unnecessary.
>
> Signed-off-by: Zilin Guan <zilinguan811@...il.com>
> ---
I applied this unofficially to -rcu for further review and testing.
The delay was in part due to your "should hold" above, which suggested
that you had not actually verified this yourself. So this waited until
I had a chance to take a look.
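
For readers following along: a "should hold" can be turned into a "does
hold, or lockdep complains" by asserting the precondition in the callee
(IIRC, __note_gp_changes() already does this via
raw_lockdep_assert_held_rcu_node()). Here is a minimal kernel-style
sketch of that pattern; the toy_* names are made up for illustration and
are not the actual tree.c code:

#include <linux/spinlock.h>
#include <linux/lockdep.h>

struct toy_node {
	raw_spinlock_t lock;
	bool gpwrap;			/* stand-in for rdp->gpwrap */
};

/* Callers must hold tn->lock; lockdep reports any caller that does not. */
static bool toy_note_changes(struct toy_node *tn)
{
	lockdep_assert_held(&tn->lock);
	return tn->gpwrap;		/* plain read: all writers hold tn->lock */
}

/* Typical caller: acquire the lock, then invoke the helper. */
static bool toy_caller(struct toy_node *tn)
{
	unsigned long flags;
	bool wrapped;

	raw_spin_lock_irqsave(&tn->lock, flags);
	wrapped = toy_note_changes(tn);
	raw_spin_unlock_irqrestore(&tn->lock, flags);
	return wrapped;
}
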
I did the usual editing of the commit log as shown below. Please let
me know if I messed anything up.
Thanx, Paul
------------------------------------------------------------------------
commit 58b186eb8049230c475262f8e9eab34299677b8a
Author: Zilin Guan <zilinguan811@...il.com>
Date: Sun Nov 10 14:47:47 2024 +0000

    rcu: Remove READ_ONCE() for rdp->gpwrap access in __note_gp_changes()

    There is one access to the per-CPU rdp->gpwrap field in the
    __note_gp_changes() function that does not use READ_ONCE(), but all other
    accesses do use READ_ONCE(). When using the 8*TREE03 and CONFIG_NR_CPUS=8
    configuration, KCSAN found no data races at that point. This is because
    all calls to __note_gp_changes() hold rnp->lock, which excludes writes
    to the rdp->gpwrap fields for all CPUs associated with that same leaf
    rcu_node structure.

    This commit therefore removes READ_ONCE() from rdp->gpwrap accesses
    within the __note_gp_changes() function.

    Signed-off-by: Zilin Guan <zilinguan811@...il.com>
    Signed-off-by: Paul E. McKenney <paulmck@...nel.org>

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 6103778828085..6d91dbe64cf0e 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1295,7 +1295,7 @@ static bool __note_gp_changes(struct rcu_node *rnp, struct rcu_data *rdp)
 
 	/* Handle the ends of any preceding grace periods first. */
 	if (rcu_seq_completed_gp(rdp->gp_seq, rnp->gp_seq) ||
-	    unlikely(READ_ONCE(rdp->gpwrap))) {
+	    unlikely(rdp->gpwrap)) {
 		if (!offloaded)
 			ret = rcu_advance_cbs(rnp, rdp); /* Advance CBs. */
 		rdp->core_needs_qs = false;
@@ -1309,7 +1309,7 @@ static bool __note_gp_changes(struct rcu_node *rnp, struct rcu_data *rdp)
 
 	/* Now handle the beginnings of any new-to-this-CPU grace periods. */
 	if (rcu_seq_new_gp(rdp->gp_seq, rnp->gp_seq) ||
-	    unlikely(READ_ONCE(rdp->gpwrap))) {
+	    unlikely(rdp->gpwrap)) {
 		/*
 		 * If the current grace period is waiting for this CPU,
 		 * set up to detect a quiescent state, otherwise don't
@@ -1324,7 +1324,7 @@ static bool __note_gp_changes(struct rcu_node *rnp, struct rcu_data *rdp)
 	rdp->gp_seq = rnp->gp_seq; /* Remember new grace-period state. */
 	if (ULONG_CMP_LT(rdp->gp_seq_needed, rnp->gp_seq_needed) || rdp->gpwrap)
 		WRITE_ONCE(rdp->gp_seq_needed, rnp->gp_seq_needed);
-	if (IS_ENABLED(CONFIG_PROVE_RCU) && READ_ONCE(rdp->gpwrap))
+	if (IS_ENABLED(CONFIG_PROVE_RCU) && rdp->gpwrap)
 		WRITE_ONCE(rdp->last_sched_clock, jiffies);
 	WRITE_ONCE(rdp->gpwrap, false);
 	rcu_gpnum_ovf(rnp, rdp);
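
A note on why the WRITE_ONCE() calls stay even though the READ_ONCE()
calls go: the writes to ->gpwrap remain marked because other accesses
elsewhere still use READ_ONCE(), that is, they can run without rnp->lock
held, whereas a reader holding the same lock as every writer may use a
plain load. Below is a minimal kernel-style sketch of that asymmetry,
again with made-up toy_* names rather than the actual RCU code:

#include <linux/spinlock.h>
#include <linux/lockdep.h>
#include <linux/compiler.h>

struct toy_data {
	raw_spinlock_t lock;		/* held by every writer of ->wrapped */
	bool wrapped;
};

/* Writer: always runs under ->lock, but lockless readers also exist. */
static void toy_set_wrapped(struct toy_data *td, bool val)
{
	lockdep_assert_held(&td->lock);
	WRITE_ONCE(td->wrapped, val);	/* marked: pairs with lockless reads */
}

/* Lockless reader, for example a quick pre-check before taking ->lock. */
static bool toy_peek_wrapped(struct toy_data *td)
{
	return READ_ONCE(td->wrapped);	/* marked: no lock held here */
}

/* Locked reader, the __note_gp_changes() analogue. */
static bool toy_check_wrapped(struct toy_data *td)
{
	lockdep_assert_held(&td->lock);
	return td->wrapped;		/* plain: cannot race with the writers */
}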