Message-Id: <1335830115-14335-35-git-send-email-fweisbec@gmail.com>
Date: Tue, 1 May 2012 01:55:08 +0200
From: Frederic Weisbecker <fweisbec@...il.com>
To: LKML <linux-kernel@...r.kernel.org>,
linaro-sched-sig@...ts.linaro.org
Cc: Frederic Weisbecker <fweisbec@...il.com>,
Alessio Igor Bogani <abogani@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Avi Kivity <avi@...hat.com>,
Chris Metcalf <cmetcalf@...era.com>,
Christoph Lameter <cl@...ux.com>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
Geoff Levand <geoff@...radead.org>,
Gilad Ben Yossef <gilad@...yossef.com>,
Hakan Akkan <hakanakkan@...il.com>,
Ingo Molnar <mingo@...nel.org>, Kevin Hilman <khilman@...com>,
Max Krasnyansky <maxk@...lcomm.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Stephen Hemminger <shemminger@...tta.com>,
Steven Rostedt <rostedt@...dmis.org>,
Sven-Thorsten Dietrich <thebigcorporation@...il.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH 34/41] sched: Update clock of nohz busiest rq before balancing
move_tasks() and active_load_balance_cpu_stop() both need
the busiest rq's clock to be up to date because they may end
up calling can_migrate_task(), which uses rq->clock_task
to determine whether the task running on the busiest runqueue
is cache hot.
Hence, if the busiest runqueue is tickless, update its clock
before reading it.
Signed-off-by: Frederic Weisbecker <fweisbec@...il.com>
Cc: Alessio Igor Bogani <abogani@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Avi Kivity <avi@...hat.com>
Cc: Chris Metcalf <cmetcalf@...era.com>
Cc: Christoph Lameter <cl@...ux.com>
Cc: Daniel Lezcano <daniel.lezcano@...aro.org>
Cc: Geoff Levand <geoff@...radead.org>
Cc: Gilad Ben Yossef <gilad@...yossef.com>
Cc: Hakan Akkan <hakanakkan@...il.com>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Kevin Hilman <khilman@...com>
Cc: Max Krasnyansky <maxk@...lcomm.com>
Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Stephen Hemminger <shemminger@...tta.com>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Sven-Thorsten Dietrich <thebigcorporation@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
---
kernel/sched/fair.c | 15 +++++++++++++++
1 files changed, 15 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 42a87d7..eff80e0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4455,6 +4455,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
int *balance)
{
int ld_moved, lb_flags = 0, active_balance = 0;
+ int clock_updated;
struct sched_group *group;
unsigned long imbalance;
struct rq *busiest;
@@ -4488,6 +4489,7 @@ redo:
schedstat_add(sd, lb_imbalance[idle], imbalance);
ld_moved = 0;
+ clock_updated = 0;
if (busiest->nr_running > 1) {
/*
* Attempt to move tasks. If find_busiest_group has found
@@ -4498,6 +4500,12 @@ redo:
lb_flags |= LBF_ALL_PINNED;
local_irq_save(flags);
double_rq_lock(this_rq, busiest);
+ /*
+ * move_tasks() may end up calling can_migrate_task(), which
+ * requires an up-to-date value of the rq clock.
+ */
+ update_nohz_rq_clock(busiest);
+ clock_updated = 1;
ld_moved = move_tasks(this_rq, this_cpu, busiest,
imbalance, sd, idle, &lb_flags);
double_rq_unlock(this_rq, busiest);
@@ -4563,6 +4571,13 @@ redo:
busiest->active_balance = 1;
busiest->push_cpu = this_cpu;
active_balance = 1;
+ /*
+ * active_load_balance_cpu_stop() may end up calling
+ * can_migrate_task(), which requires an up-to-date
+ * value of the rq clock.
+ */
+ if (!clock_updated)
+ update_nohz_rq_clock(busiest);
}
raw_spin_unlock_irqrestore(&busiest->lock, flags);
--
1.7.5.4