Message-Id: <20180626000859.23572-35-paulmck@linux.vnet.ibm.com>
Date: Mon, 25 Jun 2018 17:08:54 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org
Cc: mingo@...nel.org, jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, peterz@...radead.org,
rostedt@...dmis.org, dhowells@...hat.com, edumazet@...gle.com,
fweisbec@...il.com, oleg@...hat.com, joel@...lfernandes.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: [PATCH tip/core/rcu 35/40] rcu: Make rcu_start_this_gp() check for grace period already started

In the old days of ->gpnum and ->completed, the code requesting a new
grace period checked to see if that grace period had already started,
bailing early if so. The new-age ->gp_seq approach instead checks
whether the grace period has already finished. A compensating change
pushed the requested grace period down to the bottom of the tree, thus
reducing lock contention and even eliminating it in some cases. But why
not further reduce contention, especially on large systems, by doing
both, given that the cost of doing so is extremely small?

This commit therefore adds a new rcu_seq_started() function that checks
whether a specified grace period has already started. It then uses
this new function in place of rcu_seq_done() in the rcu_start_this_gp()
function's funnel locking code.
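
To make the new check concrete, here is a minimal userspace sketch
(illustrative only, not buildable kernel code) of the ->gp_seq
arithmetic. The helper bodies follow kernel/rcu/rcu.h, but READ_ONCE()
and the memory barriers are dropped, ULONG_CMP_LT() is reduced to its
signed-difference form, and the scenario in main() is made up:

#include <stdbool.h>
#include <stdio.h>

#define RCU_SEQ_CTR_SHIFT	2
#define RCU_SEQ_STATE_MASK	((1UL << RCU_SEQ_CTR_SHIFT) - 1)

/* Wraparound-safe comparisons, as in the kernel's ULONG_CMP_*(). */
#define ULONG_CMP_LT(a, b)	((long)((a) - (b)) < 0)
#define ULONG_CMP_GE(a, b)	(!ULONG_CMP_LT(a, b))

/* Snapshot: the ->gp_seq value at the end of a full grace period
 * beginning after this call.  The low RCU_SEQ_CTR_SHIFT bits of
 * ->gp_seq hold grace-period state, the upper bits count grace
 * periods. */
static unsigned long rcu_seq_snap(unsigned long *sp)
{
	return (*sp + 2 * RCU_SEQ_STATE_MASK + 1) & ~RCU_SEQ_STATE_MASK;
}

/* New in this patch: has the grace period for snapshot s started? */
static bool rcu_seq_started(unsigned long *sp, unsigned long s)
{
	return ULONG_CMP_LT((s - 1) & ~RCU_SEQ_STATE_MASK, *sp);
}

/* Pre-existing check: has the grace period for snapshot s finished? */
static bool rcu_seq_done(unsigned long *sp, unsigned long s)
{
	return ULONG_CMP_GE(*sp, s);
}

int main(void)
{
	unsigned long gp_seq = 0;			/* idle */
	unsigned long c = rcu_seq_snap(&gp_seq);	/* c == 4 */

	printf("idle:    started=%d done=%d\n",		/* 0 0 */
	       rcu_seq_started(&gp_seq, c), rcu_seq_done(&gp_seq, c));
	gp_seq = 1;		/* state bits set: needed GP starts */
	printf("running: started=%d done=%d\n",		/* 1 0 */
	       rcu_seq_started(&gp_seq, c), rcu_seq_done(&gp_seq, c));
	gp_seq = 4;		/* counter advances: needed GP ends */
	printf("done:    started=%d done=%d\n",		/* 1 1 */
	       rcu_seq_started(&gp_seq, c), rcu_seq_done(&gp_seq, c));
	return 0;
}

The middle step is the point of the change: once the needed grace
period is merely underway, rcu_seq_started() already lets the funnel
locking in rcu_start_this_gp() bail out, whereas the rcu_seq_done()
check alone would not permit this until that grace period had
completed.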

Reported-by: Joel Fernandes <joel@...lfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
---
 kernel/rcu/rcu.h  | 9 +++++++++
 kernel/rcu/tree.c | 2 +-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index 003671825d62..1c5cbd9d7c97 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -107,6 +107,15 @@ static inline unsigned long rcu_seq_current(unsigned long *sp)
 	return READ_ONCE(*sp);
 }
 
+/*
+ * Given a snapshot from rcu_seq_snap(), determine whether or not the
+ * corresponding update-side operation has started.
+ */
+static inline bool rcu_seq_started(unsigned long *sp, unsigned long s)
+{
+	return ULONG_CMP_LT((s - 1) & ~RCU_SEQ_STATE_MASK, READ_ONCE(*sp));
+}
+
 /*
  * Given a snapshot from rcu_seq_snap(), determine whether or not a
  * full update-side operation has occurred.
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 5c6f59b4fc9c..446163b6feba 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1583,7 +1583,7 @@ static bool rcu_start_this_gp(struct rcu_node *rnp, struct rcu_data *rdp,
 	if (rnp_root != rnp)
 		raw_spin_lock_rcu_node(rnp_root);
 	if (ULONG_CMP_GE(rnp_root->gp_seq_needed, c) ||
-	    rcu_seq_done(&rnp_root->gp_seq, c) ||
+	    rcu_seq_started(&rnp_root->gp_seq, c) ||
 	    (rnp != rnp_root &&
 	     rcu_seq_state(rcu_seq_current(&rnp_root->gp_seq)))) {
 		trace_rcu_this_gp(rnp_root, rdp, c, TPS("Prestarted"));
--
2.17.1