Message-ID: <tip-a52bfd73589eaf88d9c95ad2c1de0b38a6b27972@git.kernel.org>
Date: Fri, 4 Sep 2009 08:55:36 GMT
From: tip-bot for Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: linux-tip-commits@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, ego@...ibm.com, hpa@...or.com,
mingo@...hat.com, andreas.herrmann3@....com,
a.p.zijlstra@...llo.nl, balbir@...ibm.com, tglx@...utronix.de,
mingo@...e.hu
Subject: [tip:sched/balancing] sched: Add smt_gain
Commit-ID: a52bfd73589eaf88d9c95ad2c1de0b38a6b27972
Gitweb: http://git.kernel.org/tip/a52bfd73589eaf88d9c95ad2c1de0b38a6b27972
Author: Peter Zijlstra <a.p.zijlstra@...llo.nl>
AuthorDate: Tue, 1 Sep 2009 10:34:35 +0200
Committer: Ingo Molnar <mingo@...e.hu>
CommitDate: Fri, 4 Sep 2009 10:09:54 +0200
sched: Add smt_gain
The idea is that multi-threading a core yields more work
capacity than a single thread; provide a way to express a
static gain for the SMT threads.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Tested-by: Andreas Herrmann <andreas.herrmann3@....com>
Acked-by: Andreas Herrmann <andreas.herrmann3@....com>
Acked-by: Gautham R Shenoy <ego@...ibm.com>
Cc: Balbir Singh <balbir@...ibm.com>
LKML-Reference: <20090901083826.073345955@...llo.nl>
Signed-off-by: Ingo Molnar <mingo@...e.hu>
---
include/linux/sched.h | 1 +
include/linux/topology.h | 1 +
kernel/sched.c | 8 +++++++-
3 files changed, 9 insertions(+), 1 deletions(-)
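For context (not part of the patch): smt_gain is expressed in the
same fixed-point units as cpu power, where SCHED_LOAD_SCALE ==
1 << SCHED_LOAD_SHIFT == 1024 stands for one full thread of
capacity. The default of 1178 therefore encodes 1178 / 1024 ~= 1.15,
i.e. the siblings of a core are together credited with roughly 15%
more capacity than a single thread.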
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 651dded..9c81c92 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -921,6 +921,7 @@ struct sched_domain {
unsigned int newidle_idx;
unsigned int wake_idx;
unsigned int forkexec_idx;
+ unsigned int smt_gain;
int flags; /* See SD_* */
enum sched_domain_level level;
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 7402c1a..6203ae5 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -99,6 +99,7 @@ int arch_update_cpu_topology(void);
| SD_SHARE_CPUPOWER, \
.last_balance = jiffies, \
.balance_interval = 1, \
+ .smt_gain = 1178, /* 15% */ \
}
#endif
#endif /* CONFIG_SCHED_SMT */
diff --git a/kernel/sched.c b/kernel/sched.c
index ecb4a47..5511226 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -8523,9 +8523,15 @@ static void init_sched_groups_power(int cpu, struct sched_domain *sd)
weight = cpumask_weight(sched_domain_span(sd));
/*
* SMT siblings share the power of a single core.
+ * Usually multiple threads get a better yield out of
+ * that one core than a single thread would have,
+ * reflect that in sd->smt_gain.
*/
- if ((sd->flags & SD_SHARE_CPUPOWER) && weight > 1)
+ if ((sd->flags & SD_SHARE_CPUPOWER) && weight > 1) {
+ power *= sd->smt_gain;
power /= weight;
+ power >>= SCHED_LOAD_SHIFT;
+ }
sg_inc_cpu_power(sd->groups, power);
return;
}
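For illustration only, a minimal user-space sketch of the scaling done
in the hunk above, assuming SCHED_LOAD_SHIFT == 10 and a core with two
SMT siblings (the names below are local to the sketch, not kernel API):

#include <stdio.h>

#define SCHED_LOAD_SHIFT	10
#define SCHED_LOAD_SCALE	(1UL << SCHED_LOAD_SHIFT)	/* 1024 */

int main(void)
{
	unsigned long power = SCHED_LOAD_SCALE;	/* one core's worth of power */
	unsigned long smt_gain = 1178;		/* default: ~15% over 1024 */
	unsigned long weight = 2;		/* number of SMT siblings */

	/*
	 * Same steps as init_sched_groups_power() above: scale the core's
	 * power by the SMT gain, split it across the siblings, then shift
	 * back into the 1024-based fixed-point range.
	 */
	power *= smt_gain;
	power /= weight;
	power >>= SCHED_LOAD_SHIFT;

	printf("per-sibling power: %lu\n", power);	/* 589 for two siblings */
	return 0;
}

Two siblings at 589 each add up to 1178, so the pair is credited with
about 15% more capacity than the 1024 a single thread would contribute.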
--