Message-Id: <1364654108-16307-22-git-send-email-alex.shi@intel.com>
Date: Sat, 30 Mar 2013 22:35:08 +0800
From: Alex Shi <alex.shi@...el.com>
To: mingo@...hat.com, peterz@...radead.org, tglx@...utronix.de,
akpm@...ux-foundation.org, arjan@...ux.intel.com, bp@...en8.de,
pjt@...gle.com, namhyung@...nel.org, efault@....de
Cc: vincent.guittot@...aro.org, gregkh@...uxfoundation.org,
preeti@...ux.vnet.ibm.com, viresh.kumar@...aro.org,
linux-kernel@...r.kernel.org, alex.shi@...el.com
Subject: [patch v6 21/21] sched: don't do power balance on share cpu power domain
Packing tasks into a domain whose CPUs share hardware power
(SD_SHARE_CPUPOWER, e.g. SMT siblings) can't save any power; it only
costs performance. So don't do power balancing on such domains.
Signed-off-by: Alex Shi <alex.shi@...el.com>
---
kernel/sched/fair.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 430904b..88c8bd6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3513,7 +3513,7 @@ static int get_cpu_for_power_policy(struct sched_domain *sd, int cpu,
 	policy = get_sd_sched_balance_policy(sd, cpu, p, sds);
 	if (policy != SCHED_POLICY_PERFORMANCE && sds->group_leader) {
-		if (wakeup)
+		if (wakeup && !(sd->flags & SD_SHARE_CPUPOWER))
 			new_cpu = find_leader_cpu(sds->group_leader,
 							p, cpu, policy);
 		/* for fork balancing and a little busy task */
@@ -4420,8 +4420,9 @@ static unsigned long task_h_load(struct task_struct *p)
 static inline void init_sd_lb_power_stats(struct lb_env *env,
 					struct sd_lb_stats *sds)
 {
-	if (sched_balance_policy == SCHED_POLICY_PERFORMANCE ||
-		env->idle == CPU_NOT_IDLE) {
+	if (sched_balance_policy == SCHED_POLICY_PERFORMANCE
+			|| env->sd->flags & SD_SHARE_CPUPOWER
+			|| env->idle == CPU_NOT_IDLE) {
 		env->flags &= ~LBF_POWER_BAL;
 		env->flags |= LBF_PERF_BAL;
 		return;
--
1.7.12