Message-Id: <1348166900-18716-5-git-send-email-paulmck@linux.vnet.ibm.com>
Date: Thu, 20 Sep 2012 11:48:01 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org
Cc: mingo@...e.hu, laijs@...fujitsu.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...ymtl.ca,
josh@...htriplett.org, niv@...ibm.com, tglx@...utronix.de,
peterz@...radead.org, rostedt@...dmis.org, Valdis.Kletnieks@...edu,
dhowells@...hat.com, eric.dumazet@...il.com, darren@...art.com,
fweisbec@...il.com, sbw@....edu, patches@...aro.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: [PATCH tip/core/rcu 05/23] rcu: Allow RCU grace-period cleanup to be preempted
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
RCU grace-period cleanup is currently carried out with interrupts
disabled, which can result in excessive latency spikes on large systems
(many hundreds or thousands of CPUs).  This patch therefore makes RCU
grace-period cleanup preemptible and adds voluntary preemption points,
which should eliminate those latency spikes.  Similar spikes caused by
forcing of quiescent states will be addressed by later patches.

Updated to replace uses of raw_spin_lock_irqsave() with
raw_spin_lock_irq(), as suggested by Peter Zijlstra.
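
As an aside, the sketch below is a minimal userspace analogue of the
pattern the cleanup loop moves to: each node's lock is taken and dropped
on its own, with a voluntary preemption point between nodes, instead of
holding everything with interrupts disabled.  This is not kernel code;
the names (fake_rcu_node, propagate_completed, NNODES) are made up for
illustration, a pthread mutex stands in for rnp->lock, and sched_yield()
stands in for cond_resched().  Builds with "gcc -pthread".

/*
 * Userspace analogue of the preemptible cleanup loop: take and drop
 * each node's lock in turn, yielding between nodes so that a long
 * scan over many nodes cannot monopolize the CPU.
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NNODES 8

struct fake_rcu_node {
	pthread_mutex_t lock;
	unsigned long completed;
};

static struct fake_rcu_node nodes[NNODES];

/* Propagate a new ->completed value node by node, yielding in between. */
static void propagate_completed(unsigned long gpnum)
{
	for (int i = 0; i < NNODES; i++) {
		pthread_mutex_lock(&nodes[i].lock);  /* per-node critical section */
		nodes[i].completed = gpnum;
		pthread_mutex_unlock(&nodes[i].lock);
		sched_yield();  /* voluntary preemption point, like cond_resched() */
	}
}

int main(void)
{
	for (int i = 0; i < NNODES; i++)
		pthread_mutex_init(&nodes[i].lock, NULL);
	propagate_completed(42);
	printf("node 0 completed = %lu\n", nodes[0].completed);
	return 0;
}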
Reported-by: Mike Galbraith <mgalbraith@...e.de>
Reported-by: Dimitri Sivanich <sivanich@....com>
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@...htriplett.org>
---
kernel/rcutree.c | 15 +++++++--------
1 files changed, 7 insertions(+), 8 deletions(-)
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 3cd18ea..fa11e54 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -1128,7 +1128,7 @@ static int __noreturn rcu_gp_kthread(void *arg)
flush_signals(current);
}
- raw_spin_lock_irqsave(&rnp->lock, flags);
+ raw_spin_lock_irq(&rnp->lock);
gp_duration = jiffies - rsp->gp_start;
if (gp_duration > rsp->gp_max)
rsp->gp_max = gp_duration;
@@ -1149,7 +1149,7 @@ static int __noreturn rcu_gp_kthread(void *arg)
* completed.
*/
if (*rdp->nxttail[RCU_WAIT_TAIL] == NULL) {
- raw_spin_unlock(&rnp->lock); /* irqs remain disabled. */
+ raw_spin_unlock_irq(&rnp->lock);
/*
* Propagate new ->completed value to rcu_node
@@ -1158,14 +1158,13 @@ static int __noreturn rcu_gp_kthread(void *arg)
* to process their callbacks.
*/
rcu_for_each_node_breadth_first(rsp, rnp) {
- /* irqs already disabled. */
- raw_spin_lock(&rnp->lock);
+ raw_spin_lock_irq(&rnp->lock);
rnp->completed = rsp->gpnum;
- /* irqs remain disabled. */
- raw_spin_unlock(&rnp->lock);
+ raw_spin_unlock_irq(&rnp->lock);
+ cond_resched();
}
rnp = rcu_get_root(rsp);
- raw_spin_lock(&rnp->lock); /* irqs already disabled. */
+ raw_spin_lock_irq(&rnp->lock);
}
rsp->completed = rsp->gpnum; /* Declare grace period done. */
@@ -1173,7 +1172,7 @@ static int __noreturn rcu_gp_kthread(void *arg)
rsp->fqs_state = RCU_GP_IDLE;
if (cpu_needs_another_gp(rsp, rdp))
rsp->gp_flags = 1;
- raw_spin_unlock_irqrestore(&rnp->lock, flags);
+ raw_spin_unlock_irq(&rnp->lock);
}
}
--
1.7.8