Message-Id: <1365809557-22575-5-git-send-email-paulmck@linux.vnet.ibm.com>
Date: Fri, 12 Apr 2013 16:32:34 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org
Cc: mingo@...e.hu, laijs@...fujitsu.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...ymtl.ca,
josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
peterz@...radead.org, rostedt@...dmis.org, Valdis.Kletnieks@...edu,
dhowells@...hat.com, edumazet@...gle.com, darren@...art.com,
fweisbec@...il.com, sbw@....edu,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: [PATCH tip/core/rcu 5/8] rcu: Merge __rcu_process_gp_end() into __note_gp_changes()
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
This commit eliminates some duplicated code by merging
__rcu_process_gp_end() into __note_gp_changes(), so that a single
function now records both the beginnings and the ends of grace periods.
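For illustration, the merged control flow looks roughly like the toy
model below. This is a minimal, self-contained C sketch, not kernel
code: the struct fields are trimmed to just what the flow reads, the
callback helpers are printf() stubs, and the tracing and
quiescent-state bookkeeping are omitted.

#include <stdio.h>

/* Toy stand-ins for the kernel's types; only the fields that the
 * control flow below reads are modeled. */
struct rcu_node { unsigned long gpnum, completed; };
struct rcu_data { unsigned long gpnum, completed; };

/* Stubs standing in for the real callback-management helpers. */
static void rcu_accelerate_cbs(struct rcu_node *rnp, struct rcu_data *rdp)
{
	(void)rnp; (void)rdp;
	printf("accelerate recent callbacks\n");
}

static void rcu_advance_cbs(struct rcu_node *rnp, struct rcu_data *rdp)
{
	(void)rnp; (void)rdp;
	printf("advance callbacks past completed grace period\n");
}

/*
 * Shape of the merged function: the body of the old
 * __rcu_process_gp_end() (grace-period-end handling) now sits inline
 * ahead of the existing grace-period-begin handling.
 */
static void __note_gp_changes(struct rcu_node *rnp, struct rcu_data *rdp)
{
	/* Handle the ends of any preceding grace periods first. */
	if (rdp->completed == rnp->completed) {
		rcu_accelerate_cbs(rnp, rdp);
	} else {
		rcu_advance_cbs(rnp, rdp);
		rdp->completed = rnp->completed;
	}

	/* Then note any newly started grace period. */
	if (rdp->gpnum != rnp->gpnum)
		rdp->gpnum = rnp->gpnum;
}

int main(void)
{
	struct rcu_node rnp = { .gpnum = 2, .completed = 1 };
	struct rcu_data rdp = { .gpnum = 1, .completed = 1 };

	__note_gp_changes(&rnp, &rdp);	/* same ->completed: accelerates */
	return 0;
}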
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
---
kernel/rcutree.c | 48 ++++++------------------------------------------
1 file changed, 6 insertions(+), 42 deletions(-)
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 9040e0f..ca07f2d 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -1244,18 +1244,16 @@ static void rcu_advance_cbs(struct rcu_state *rsp, struct rcu_node *rnp,
 }
 
 /*
- * Advance this CPU's callbacks, but only if the current grace period
- * has ended. This may be called only from the CPU to whom the rdp
- * belongs. In addition, the corresponding leaf rcu_node structure's
- * ->lock must be held by the caller, with irqs disabled.
+ * Update CPU-local rcu_data state to record the beginnings and ends of
+ * grace periods. The caller must hold the ->lock of the leaf rcu_node
+ * structure corresponding to the current CPU, and must have irqs disabled.
  */
-static void
-__rcu_process_gp_end(struct rcu_state *rsp, struct rcu_node *rnp, struct rcu_data *rdp)
+static void __note_gp_changes(struct rcu_state *rsp, struct rcu_node *rnp, struct rcu_data *rdp)
 {
-	/* Did another grace period end? */
+	/* Handle the ends of any preceding grace periods first. */
 	if (rdp->completed == rnp->completed) {
 
-		/* No, so just accelerate recent callbacks. */
+		/* No grace period end, so just accelerate recent callbacks. */
		rcu_accelerate_cbs(rsp, rnp, rdp);
 
 	} else {
@@ -1266,41 +1264,7 @@ __rcu_process_gp_end(struct rcu_state *rsp, struct rcu_node *rnp, struct rcu_dat
 		/* Remember that we saw this grace-period completion. */
 		rdp->completed = rnp->completed;
 		trace_rcu_grace_period(rsp->name, rdp->gpnum, "cpuend");
-
-		/*
-		 * If we were in an extended quiescent state, we may have
-		 * missed some grace periods that others CPUs handled on
-		 * our behalf. Catch up with this state to avoid noting
-		 * spurious new grace periods. If another grace period
-		 * has started, then rnp->gpnum will have advanced, so
-		 * we will detect this later on. Of course, any quiescent
-		 * states we found for the old GP are now invalid.
-		 */
-		if (ULONG_CMP_LT(rdp->gpnum, rdp->completed)) {
-			rdp->gpnum = rdp->completed;
-			rdp->passed_quiesce = 0;
-		}
-
-		/*
-		 * If RCU does not need a quiescent state from this CPU,
-		 * then make sure that this CPU doesn't go looking for one.
-		 */
-		if ((rnp->qsmask & rdp->grpmask) == 0)
-			rdp->qs_pending = 0;
 	}
-}
-
-/*
- * Update CPU-local rcu_data state to record the newly noticed grace period.
- * This is used both when we started the grace period and when we notice
- * that someone else started the grace period. The caller must hold the
- * ->lock of the leaf rcu_node structure corresponding to the current CPU,
- * and must have irqs disabled.
- */
-static void __note_gp_changes(struct rcu_state *rsp, struct rcu_node *rnp, struct rcu_data *rdp)
-{
-	/* Handle the ends of any preceding grace periods first. */
-	__rcu_process_gp_end(rsp, rnp, rdp);
 
 	if (rdp->gpnum != rnp->gpnum) {
 
 		/*
--
1.8.1.5