Message-Id: <1393293054-11378-9-git-send-email-alex.shi@linaro.org>
Date: Tue, 25 Feb 2014 09:50:51 +0800
From: Alex Shi <alex.shi@...aro.org>
To: mingo@...hat.com, peterz@...radead.org, morten.rasmussen@....com
Cc: vincent.guittot@...aro.org, daniel.lezcano@...aro.org,
fweisbec@...il.com, linux@....linux.org.uk, tony.luck@...el.com,
fenghua.yu@...el.com, james.hogan@...tec.com, alex.shi@...aro.org,
jason.low2@...com, viresh.kumar@...aro.org, hanjun.guo@...aro.org,
linux-kernel@...r.kernel.org, tglx@...utronix.de,
akpm@...ux-foundation.org, arjan@...ux.intel.com, pjt@...gle.com,
fengguang.wu@...el.com, linaro-kernel@...ts.linaro.org,
wangyun@...ux.vnet.ibm.com, mgorman@...e.de
Subject: [PATCH 08/11] sched: replace target_load by biased_load
Now that source_load() is gone, it is better to rename target_load()
to reflect what the function actually does: biased_load().
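For readers outside the thread: biased_load() scales a cpu's weighted
load by the sched domain's imbalance_pct, so that non-local cpus look
somewhat more loaded and task placement is biased toward the local
group. A minimal sketch of that idea follows; the body shown is an
assumption based on earlier patches in this series, since this rename
patch does not touch the function body itself:

	/* Sketch only: bias a cpu's load by imbalance_pct (e.g. 125 => +25%) */
	static unsigned long biased_load(int cpu, int imbalance_pct)
	{
		unsigned long total = weighted_cpuload(cpu);

		return total * imbalance_pct / 100;
	}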
Suggested-by: Morten Rasmussen <morten.rasmussen@....com>
Signed-off-by: Alex Shi <alex.shi@...aro.org>
---
kernel/sched/fair.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5feb51b..b8423dc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1015,7 +1015,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
}
static unsigned long weighted_cpuload(const int cpu);
-static unsigned long target_load(int cpu, int imbalance_pct);
+static unsigned long biased_load(int cpu, int imbalance_pct);
static unsigned long power_of(int cpu);
static long effective_load(struct task_group *tg, int cpu, long wl, long wg);
@@ -3958,7 +3958,7 @@ static unsigned long weighted_cpuload(const int cpu)
* Return a high guess at the load of a migration-target cpu weighted
* according to the runnable time and "nice" value.
*/
-static unsigned long target_load(int cpu, int imbalance_pct)
+static unsigned long biased_load(int cpu, int imbalance_pct)
{
unsigned long total = weighted_cpuload(cpu);
@@ -4286,7 +4286,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
if (local_group)
load = weighted_cpuload(i);
else
- load = target_load(i, imbalance);
+ load = biased_load(i, imbalance);
avg_load += load;
}
@@ -5737,7 +5737,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
/* Bias balancing toward cpus of our domain */
if (local_group && env->idle != CPU_IDLE)
- load = target_load(i, bias);
+ load = biased_load(i, bias);
else
load = weighted_cpuload(i);
--
1.8.1.2