Message-Id: <20221010223956.1041247-3-frederic@kernel.org>
Date: Tue, 11 Oct 2022 00:39:56 +0200
From: Frederic Weisbecker <frederic@kernel.org>
To: "Paul E . McKenney" <paulmck@kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>,
	Frederic Weisbecker <frederic@kernel.org>,
	Joel Fernandes <joel@joelfernandes.org>
Subject: [PATCH 2/2] rcu/nocb: Spare bypass locking upon normal enqueue
When a callback is to be enqueued to the normal queue rather than the bypass
one, a flush of the bypass queue is always attempted anyway. This attempt
involves taking the bypass lock unconditionally. Although the lock is
guaranteed not to be contended at this point, because only call_rcu() can
take the bypass lock without holding the nocb lock, it is still not free,
and the operation can easily be spared most of the time by first checking
whether the bypass list is empty. The check is safe because nobody can
queue to or flush the bypass list concurrently at this point.
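
For illustration only, here is a minimal userspace sketch of the
"check the count before taking the lock to flush" pattern described
above. The names (bypass_list, maybe_flush_bypass, ...) are hypothetical
stand-ins, not the kernel's rcu_data / rcu_nocb_flush_bypass() code, and
the sketch assumes, as in the nocb case, that no other context can
enqueue to or flush the list concurrently at this point.

/* Build with: gcc -Wall sketch.c -o sketch -lpthread */
#include <assert.h>
#include <pthread.h>
#include <stdio.h>

struct bypass_list {
	pthread_mutex_t lock;	/* stands in for the bypass lock */
	unsigned long ncbs;	/* number of queued callbacks */
};

/* Drain the list; caller must hold bl->lock. */
static void flush_bypass_locked(struct bypass_list *bl)
{
	bl->ncbs = 0;
}

/* Flush only when needed: skip the lock when the list is empty. */
static void maybe_flush_bypass(struct bypass_list *bl)
{
	if (!bl->ncbs)		/* common case: nothing queued, lock spared */
		return;
	pthread_mutex_lock(&bl->lock);
	flush_bypass_locked(bl);
	pthread_mutex_unlock(&bl->lock);
	assert(bl->ncbs == 0);	/* mirrors the WARN_ON_ONCE() checks */
}

int main(void)
{
	struct bypass_list bl = { PTHREAD_MUTEX_INITIALIZER, 3 };

	maybe_flush_bypass(&bl);	/* takes the lock, flushes 3 entries */
	maybe_flush_bypass(&bl);	/* list now empty: no locking at all */
	printf("remaining callbacks: %lu\n", bl.ncbs);
	return 0;
}

The emptiness check is what lets the common path avoid the otherwise
unconditional lock/flush, which is the point of the patch below.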
Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
---
kernel/rcu/tree_nocb.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 094fd454b6c3..30c3d473ffd8 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -423,8 +423,10 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
 		if (*was_alldone)
 			trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
 					    TPS("FirstQ"));
-		WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, j));
-		WARN_ON_ONCE(rcu_cblist_n_cbs(&rdp->nocb_bypass));
+		if (rcu_cblist_n_cbs(&rdp->nocb_bypass)) {
+			WARN_ON_ONCE(!rcu_nocb_flush_bypass(rdp, NULL, j));
+			WARN_ON_ONCE(rcu_cblist_n_cbs(&rdp->nocb_bypass));
+		}
 		return false; // Caller must enqueue the callback.
 	}
--
2.25.1