Message-Id: <20210929221012.228270-7-frederic@kernel.org>
Date: Thu, 30 Sep 2021 00:10:07 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: "Paul E . McKenney" <paulmck@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Frederic Weisbecker <frederic@...nel.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Uladzislau Rezki <urezki@...il.com>,
Valentin Schneider <valentin.schneider@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Boqun Feng <boqun.feng@...il.com>,
Neeraj Upadhyay <neeraju@...eaurora.org>,
Josh Triplett <josh@...htriplett.org>,
Joel Fernandes <joel@...lfernandes.org>, rcu@...r.kernel.org
Subject: [RFC PATCH 06/11] rcu/nocb: Check a stable offloaded state to manipulate qlen_last_fqs_check

It's unclear why rdp->qlen_last_fqs_check is updated only for offloaded
rdp. Also, since rcu_core() and the NOCB kthreads can run concurrently,
should the update be performed in both cases? Probably not, because one
could then overwrite the value just initialized by the other.

For now, silence the RT warning triggered by checking the volatile
offloaded state, and leave the wider question open for debate on top of
this half-baked patch.

Reported-by: Valentin Schneider <valentin.schneider@....com>
Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
Cc: Valentin Schneider <valentin.schneider@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Josh Triplett <josh@...htriplett.org>
Cc: Joel Fernandes <joel@...lfernandes.org>
Cc: Boqun Feng <boqun.feng@...il.com>
Cc: Neeraj Upadhyay <neeraju@...eaurora.org>
Cc: Uladzislau Rezki <urezki@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
---
kernel/rcu/tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
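
Not for application, only a reviewer aid: a trimmed sketch of the
rcu_do_batch() region this hunk touches (paraphrased, not the actual
tree.c code), showing why re-reading the offloaded state under the nocb
lock is preferable to the snapshot taken at function entry:

/*
 * Simplified sketch only, not the actual kernel code: a trimmed view of
 * rcu_do_batch() around the hunk below.
 */
static void rcu_do_batch_sketch(struct rcu_data *rdp)
{
	unsigned long flags;

	/*
	 * Before this patch: the state was snapshotted once near function
	 * entry, where (de-)offloading may still be in progress once
	 * rcu_core() becomes preemptible on RT, hence the lockdep warning:
	 *
	 *	const bool offloaded = rcu_rdp_is_offloaded(rdp);
	 */

	rcu_nocb_lock_irqsave(rdp, flags);

	/*
	 * After this patch: re-read the state here, while ->cblist is
	 * protected by the nocb lock, so the answer cannot change under
	 * the update below.
	 */
	if (rcu_rdp_is_offloaded(rdp))
		rdp->qlen_last_fqs_check = rcu_segcblist_n_cbs(&rdp->cblist);

	rcu_nocb_unlock_irqrestore(rdp, flags);
}

Whether the non-offloaded case should also update qlen_last_fqs_check is
the open question raised in the changelog above.
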
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 73971b8024d8..b1fc6e498d90 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2508,7 +2508,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 	trace_rcu_batch_start(rcu_state.name,
 			      rcu_segcblist_n_cbs(&rdp->cblist), bl);
 	rcu_segcblist_extract_done_cbs(&rdp->cblist, &rcl);
-	if (offloaded)
+	if (rcu_rdp_is_offloaded(rdp))
 		rdp->qlen_last_fqs_check = rcu_segcblist_n_cbs(&rdp->cblist);
 
 	trace_rcu_segcb_stats(&rdp->cblist, TPS("SegCbDequeued"));
--
2.25.1