Message-ID: <tip-ae154be1f34a674e6cbb43ccf6e442f56acd7a70@git.kernel.org>
Date: Wed, 16 Sep 2009 10:21:04 GMT
From: tip-bot for Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, hpa@...or.com, mingo@...hat.com,
a.p.zijlstra@...llo.nl, tglx@...utronix.de, mingo@...e.hu
Subject: [tip:sched/core] sched: Weaken SD_POWERSAVINGS_BALANCE
Commit-ID: ae154be1f34a674e6cbb43ccf6e442f56acd7a70
Gitweb: http://git.kernel.org/tip/ae154be1f34a674e6cbb43ccf6e442f56acd7a70
Author: Peter Zijlstra <a.p.zijlstra@...llo.nl>
AuthorDate: Thu, 10 Sep 2009 14:40:57 +0200
Committer: Ingo Molnar <mingo@...e.hu>
CommitDate: Tue, 15 Sep 2009 16:01:06 +0200
sched: Weaken SD_POWERSAVINGS_BALANCE
One of the problems with power-savings balancing is that under certain
scenarios it reacts too slowly and allows tons of real work to pile up.
Avoid this by ignoring the powersave logic when there is real work to
be done.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@...e.hu>
---
kernel/sched.c | 40 ++++++++++++++++++++--------------------
kernel/sched_fair.c | 21 ++++++++++++++++++---
2 files changed, 38 insertions(+), 23 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 6c819f3..f0ccb8b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1538,6 +1538,26 @@ static unsigned long target_load(int cpu, int type)
return max(rq->cpu_load[type-1], total);
}
+static struct sched_group *group_of(int cpu)
+{
+ struct sched_domain *sd = rcu_dereference(cpu_rq(cpu)->sd);
+
+ if (!sd)
+ return NULL;
+
+ return sd->groups;
+}
+
+static unsigned long power_of(int cpu)
+{
+ struct sched_group *group = group_of(cpu);
+
+ if (!group)
+ return SCHED_LOAD_SCALE;
+
+ return group->cpu_power;
+}
+
static int task_hot(struct task_struct *p, u64 now, struct sched_domain *sd);
static unsigned long cpu_avg_load_per_task(int cpu)
@@ -3982,26 +4002,6 @@ ret:
return NULL;
}
-static struct sched_group *group_of(int cpu)
-{
- struct sched_domain *sd = rcu_dereference(cpu_rq(cpu)->sd);
-
- if (!sd)
- return NULL;
-
- return sd->groups;
-}
-
-static unsigned long power_of(int cpu)
-{
- struct sched_group *group = group_of(cpu);
-
- if (!group)
- return SCHED_LOAD_SCALE;
-
- return group->cpu_power;
-}
-
/*
* find_busiest_queue - find the busiest runqueue among the cpus in group.
*/
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 09d19f7..eaa0001 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1333,10 +1333,25 @@ static int select_task_rq_fair(struct task_struct *p, int flag, int sync)
for_each_domain(cpu, tmp) {
/*
- * If power savings logic is enabled for a domain, stop there.
+ * If power savings logic is enabled for a domain, see if we
+ * are not overloaded, if so, don't balance wider.
*/
- if (tmp->flags & SD_POWERSAVINGS_BALANCE)
- break;
+ if (tmp->flags & SD_POWERSAVINGS_BALANCE) {
+ unsigned long power = 0;
+ unsigned long nr_running = 0;
+ unsigned long capacity;
+ int i;
+
+ for_each_cpu(i, sched_domain_span(tmp)) {
+ power += power_of(i);
+ nr_running += cpu_rq(i)->cfs.nr_running;
+ }
+
+ capacity = DIV_ROUND_CLOSEST(power, SCHED_LOAD_SCALE);
+
+ if (nr_running/2 < capacity)
+ break;
+ }
switch (flag) {
case SD_BALANCE_WAKE:
--
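The functional change is the sched_fair.c hunk: before deciding whether to stop
at a power-savings domain, the sum of the group's cpu_power is converted into a
task capacity and the break is only taken while nr_running/2 stays below that
capacity. Below is a minimal user-space sketch of just that arithmetic, not
kernel code: the per-CPU power and nr_running snapshots, the NR_DOMAIN_CPUS
constant and the powersave_not_overloaded() helper are made up for
illustration and are not part of the patch.

	/*
	 * Standalone sketch of the overload check added to
	 * select_task_rq_fair() above.  Only the arithmetic mirrors the
	 * patch; the data below is a hypothetical snapshot of one
	 * power-savings domain.
	 */
	#include <stdio.h>

	#define SCHED_LOAD_SCALE	1024UL
	#define DIV_ROUND_CLOSEST(x, d)	(((x) + ((d) / 2)) / (d))

	/* made-up per-CPU power and runnable-task counts */
	static const unsigned long cpu_power[]   = { 1024, 1024, 589, 589 };
	static const unsigned long cfs_running[] = {    3,    2,   1,   1 };
	#define NR_DOMAIN_CPUS	4

	/* return 1 if the power-savings placement may still be honoured */
	static int powersave_not_overloaded(void)
	{
		unsigned long power = 0, nr_running = 0, capacity;
		int i;

		for (i = 0; i < NR_DOMAIN_CPUS; i++) {
			power += cpu_power[i];
			nr_running += cfs_running[i];
		}

		/* capacity: how many "full" tasks the group can carry */
		capacity = DIV_ROUND_CLOSEST(power, SCHED_LOAD_SCALE);

		/* patch condition: stay in the domain only while not overloaded */
		return nr_running / 2 < capacity;
	}

	int main(void)
	{
		printf("honour SD_POWERSAVINGS_BALANCE: %s\n",
		       powersave_not_overloaded() ? "yes" : "no");
		return 0;
	}

With the example numbers (total power 3226, i.e. a capacity of 3, against 7
runnable tasks) the check fails, so select_task_rq_fair() would keep walking up
the domain hierarchy instead of stopping at the power-savings domain.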
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/