Message-ID: <tip-f93e65c186ab3c05ce2068733ca10e34fd00125e@git.kernel.org>
Date: Fri, 4 Sep 2009 08:54:58 GMT
From: tip-bot for Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, ego@...ibm.com, hpa@...or.com,
mingo@...hat.com, andreas.herrmann3@....com,
a.p.zijlstra@...llo.nl, balbir@...ibm.com, tglx@...utronix.de,
mingo@...e.hu
Subject: [tip:sched/balancing] sched: Restore __cpu_power to a straight sum of power
Commit-ID: f93e65c186ab3c05ce2068733ca10e34fd00125e
Gitweb: http://git.kernel.org/tip/f93e65c186ab3c05ce2068733ca10e34fd00125e
Author: Peter Zijlstra <a.p.zijlstra@...llo.nl>
AuthorDate: Tue, 1 Sep 2009 10:34:32 +0200
Committer: Ingo Molnar <mingo@...e.hu>
CommitDate: Fri, 4 Sep 2009 10:09:53 +0200

sched: Restore __cpu_power to a straight sum of power

cpu_power is supposed to be a representation of the processing
capacity of the cpu, not a value to randomly tweak in order to
affect placement.

Remove the placement hacks.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Tested-by: Andreas Herrmann <andreas.herrmann3@....com>
Acked-by: Andreas Herrmann <andreas.herrmann3@....com>
Acked-by: Gautham R Shenoy <ego@...ibm.com>
Cc: Balbir Singh <balbir@...ibm.com>
LKML-Reference: <20090901083825.810860576@...llo.nl>
Signed-off-by: Ingo Molnar <mingo@...e.hu>
---
kernel/sched.c | 28 ++++++++++++----------------
1 files changed, 12 insertions(+), 16 deletions(-)
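
For a concrete sense of what "a straight sum of power" means under the
new code (illustrative numbers only, assuming the usual SCHED_LOAD_SCALE
of 1024 and a 4-core package with 2-way SMT):

  SMT leaf domain, 2 siblings:  1024 / 2   = 512 per sibling group
  MC level, one core's group:   512 + 512  = 1024
  CPU level, 4 such cores:      4 * 1024   = 4096

Previously, a group whose child domain had SD_SHARE_CPUPOWER or
SD_SHARE_PKG_RESOURCES set was bumped by exactly one SCHED_LOAD_SCALE and
the function returned early, so __cpu_power could stop being the sum of
its parts.
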
diff --git a/kernel/sched.c b/kernel/sched.c
index da1edc8..584a122 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -8464,15 +8464,13 @@ static void free_sched_groups(const struct cpumask *cpu_map,
  * there are asymmetries in the topology. If there are asymmetries, group
  * having more cpu_power will pickup more load compared to the group having
  * less cpu_power.
- *
- * cpu_power will be a multiple of SCHED_LOAD_SCALE. This multiple represents
- * the maximum number of tasks a group can handle in the presence of other idle
- * or lightly loaded groups in the same sched domain.
  */
 static void init_sched_groups_power(int cpu, struct sched_domain *sd)
 {
 	struct sched_domain *child;
 	struct sched_group *group;
+	long power;
+	int weight;
 
 	WARN_ON(!sd || !sd->groups);
 
@@ -8483,22 +8481,20 @@ static void init_sched_groups_power(int cpu, struct sched_domain *sd)
 
 	sd->groups->__cpu_power = 0;
 
-	/*
-	 * For perf policy, if the groups in child domain share resources
-	 * (for example cores sharing some portions of the cache hierarchy
-	 * or SMT), then set this domain groups cpu_power such that each group
-	 * can handle only one task, when there are other idle groups in the
-	 * same sched domain.
-	 */
-	if (!child || (!(sd->flags & SD_POWERSAVINGS_BALANCE) &&
-		       (child->flags &
-			(SD_SHARE_CPUPOWER | SD_SHARE_PKG_RESOURCES)))) {
-		sg_inc_cpu_power(sd->groups, SCHED_LOAD_SCALE);
+	if (!child) {
+		power = SCHED_LOAD_SCALE;
+		weight = cpumask_weight(sched_domain_span(sd));
+		/*
+		 * SMT siblings share the power of a single core.
+		 */
+		if ((sd->flags & SD_SHARE_CPUPOWER) && weight > 1)
+			power /= weight;
+		sg_inc_cpu_power(sd->groups, power);
 		return;
 	}
 
 	/*
-	 * add cpu_power of each child group to this groups cpu_power
+	 * Add cpu_power of each child group to this groups cpu_power.
 	 */
 	group = child->groups;
 	do {
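
As a stand-alone illustration of the resulting rule (a minimal user-space
sketch, not kernel code; the toy_group struct and the two-level topology
in main() are assumptions made for this example): leaf groups split
SCHED_LOAD_SCALE among SMT siblings, and every level above them takes a
plain sum of its children.

	/* Toy model of the post-patch rule; builds with any C99 compiler. */
	#include <stdio.h>

	#define SCHED_LOAD_SCALE 1024L

	struct toy_group {
		long cpu_power;			/* analogue of __cpu_power */
		int nr_children;		/* 0 for a leaf group */
		struct toy_group *children;
		int smt_siblings;		/* leaf only: CPUs sharing one core */
	};

	static long init_toy_group_power(struct toy_group *g)
	{
		long power = 0;
		int i;

		if (!g->nr_children) {
			/* leaf: SMT siblings share the power of a single core */
			power = SCHED_LOAD_SCALE;
			if (g->smt_siblings > 1)
				power /= g->smt_siblings;
		} else {
			/* non-leaf: straight sum of the children's power */
			for (i = 0; i < g->nr_children; i++)
				power += init_toy_group_power(&g->children[i]);
		}

		g->cpu_power = power;
		return power;
	}

	int main(void)
	{
		/* one physical core with 2-way SMT: two leaf groups of 512 each */
		struct toy_group siblings[2] = {
			{ .smt_siblings = 2 },
			{ .smt_siblings = 2 },
		};
		struct toy_group core = { .nr_children = 2, .children = siblings };

		init_toy_group_power(&core);
		printf("sibling: %ld, core: %ld\n",
		       siblings[0].cpu_power, core.cpu_power);	/* 512, 1024 */
		return 0;
	}

In the kernel itself the leaf weight comes from
cpumask_weight(sched_domain_span(sd)) and the division is applied only
when SD_SHARE_CPUPOWER is set, as in the hunk above.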