Message-ID: <51BF1D3E.6050407@intel.com>
Date: Mon, 17 Jun 2013 22:29:18 +0800
From: Alex Shi <alex.shi@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: mingo@...hat.com, tglx@...utronix.de, akpm@...ux-foundation.org,
bp@...en8.de, pjt@...gle.com, namhyung@...nel.org, efault@....de,
morten.rasmussen@....com, vincent.guittot@...aro.org,
preeti@...ux.vnet.ibm.com, viresh.kumar@...aro.org,
linux-kernel@...r.kernel.org, mgorman@...e.de, riel@...hat.com,
wangyun@...ux.vnet.ibm.com, Jason Low <jason.low2@...com>,
Changlong Xie <changlongx.xie@...el.com>, sgruszka@...hat.com,
fweisbec@...il.com
Subject: Re: [patch v8 8/9] sched: consider runnable load average in move_tasks
On 06/17/2013 09:59 PM, Peter Zijlstra wrote:
>> > #else
>> > static inline void update_blocked_averages(int cpu)
> Should we not also change the !FAIR_GROUP_SCHED version of task_h_load()
> for this?
Yes, it is needed. Sorry for omitting this.
---
From dc6a4b052c05d793938f6fbf40190d8cec959ca4 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@...el.com>
Date: Mon, 3 Dec 2012 23:00:53 +0800
Subject: [PATCH 1/2] sched: consider runnable load average in move_tasks
Aside from the background updates that already use the runnable load
average, move_tasks is also a key function in load balancing. It needs
to use the runnable load average as well, so that the load comparison
is apples to apples.

Morten caught a div u64 bug on ARM, thanks!
Signed-off-by: Alex Shi <alex.shi@...el.com>
---
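For reference, here is a minimal standalone sketch of the math the
hunks below implement. The types and values are toy stand-ins (struct
grp, se_contrib, the numbers are hypothetical, not the kernel's
structures); it only shows the shape of the h_load propagation:

#include <stdio.h>

/* toy stand-ins for the fields touched in the hunks below */
struct grp {
	unsigned long long se_contrib;        /* tg->se[cpu]->avg.load_avg_contrib */
	unsigned long long runnable_load_avg; /* tg->cfs_rq[cpu]->runnable_load_avg */
	unsigned long long h_load;
};

int main(void)
{
	/* root group: takes the rq-wide average contribution directly */
	struct grp root = { .runnable_load_avg = 2048, .h_load = 2048 };
	struct grp child = { .se_contrib = 512, .runnable_load_avg = 512 };

	/* child group: scale the parent's h_load by this group's share
	 * of the parent's runnable load; the +1 avoids a divide by zero */
	child.h_load = root.h_load * child.se_contrib
		       / (root.runnable_load_avg + 1);

	/* task: the same scaling one level further down, as in
	 * task_h_load() */
	unsigned long long task_contrib = 128;
	unsigned long long th_load = task_contrib * child.h_load
				     / (child.runnable_load_avg + 1);

	printf("child h_load=%llu task_h_load=%llu\n",
	       child.h_load, th_load);
	return 0;
}

In the kernel itself the task-level divide uses div64_ul(), since the
dividend is 64-bit and a plain '/' would not link on 32-bit ARM;
likely the div u64 bug mentioned above.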
kernel/sched/fair.c | 18 +++++++++---------
1 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4b68f9f..0ca9cc3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4179,11 +4179,14 @@ static int tg_load_down(struct task_group *tg, void *data)
long cpu = (long)data;
if (!tg->parent) {
- load = cpu_rq(cpu)->load.weight;
+ load = cpu_rq(cpu)->avg.load_avg_contrib;
} else {
+ unsigned long tmp_rla;
+ tmp_rla = tg->parent->cfs_rq[cpu]->runnable_load_avg + 1;
+
load = tg->parent->cfs_rq[cpu]->h_load;
- load *= tg->se[cpu]->load.weight;
- load /= tg->parent->cfs_rq[cpu]->load.weight + 1;
+ load *= tg->se[cpu]->avg.load_avg_contrib;
+ load /= tmp_rla;
}
tg->cfs_rq[cpu]->h_load = load;
@@ -4209,12 +4212,9 @@ static void update_h_load(long cpu)
static unsigned long task_h_load(struct task_struct *p)
{
struct cfs_rq *cfs_rq = task_cfs_rq(p);
- unsigned long load;
-
- load = p->se.load.weight;
- load = div_u64(load * cfs_rq->h_load, cfs_rq->load.weight + 1);
- return load;
+ return div64_ul(p->se.avg.load_avg_contrib * cfs_rq->h_load,
+ cfs_rq->runnable_load_avg + 1);
}
#else
static inline void update_blocked_averages(int cpu)
@@ -4227,7 +4227,7 @@ static inline void update_h_load(long cpu)
static unsigned long task_h_load(struct task_struct *p)
{
- return p->se.load.weight;
+ return p->se.avg.load_avg_contrib;
}
#endif
--
1.7.5.4