Date: Tue, 07 May 2013 13:17:59 +0800
From: Alex Shi <alex.shi@...el.com>
To: Paul Turner <pjt@...gle.com>
CC: Peter Zijlstra <a.p.zijlstra@...llo.nl>, Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>, Andrew Morton <akpm@...ux-foundation.org>,
	Borislav Petkov <bp@...en8.de>, Namhyung Kim <namhyung@...nel.org>,
	Mike Galbraith <efault@....de>, Morten Rasmussen <morten.rasmussen@....com>,
	Vincent Guittot <vincent.guittot@...aro.org>, Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Viresh Kumar <viresh.kumar@...aro.org>, LKML <linux-kernel@...r.kernel.org>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	Michael Wang <wangyun@...ux.vnet.ibm.com>
Subject: Re: [PATCH v5 6/7] sched: consider runnable load average in move_tasks

On 05/07/2013 04:59 AM, Paul Turner wrote:
>>> Similarly, I think you also want to at least include blocked_load_avg here.
>> >
>> > I'm puzzled, this is an entity weight. Entities don't have blocked_load_avg.
>> >
>> > The purpose here is to compute the amount of weight that's being moved by this
>> > task; to subtract from the imbalance.
>
> Sorry, what I meant to say here is:
> If we're going to be using a runnable average based load here the
> fraction we take (currently instantaneous) in tg_load_down should be
> consistent.

Yes, I think so. So here is the patch; could you please take a look?

---
>From 8a98af9578154ce5d755b2c6ea7da0109cd6efa8 Mon Sep 17 00:00:00 2001
From: Alex Shi <alex.shi@...el.com>
Date: Mon, 3 Dec 2012 23:00:53 +0800
Subject: [PATCH 6/7] sched: consider runnable load average in move_tasks

Besides using the runnable load average in the background, move_tasks() is
also a key function in load balancing. We need to consider the runnable load
average in it as well, so that the load comparison is apples to apples.

Signed-off-by: Alex Shi <alex.shi@...el.com>
---
 kernel/sched/fair.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 407ef61..ca0e051 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4121,11 +4121,12 @@ static int tg_load_down(struct task_group *tg, void *data)
 	long cpu = (long)data;
 
 	if (!tg->parent) {
-		load = cpu_rq(cpu)->load.weight;
+		load = cpu_rq(cpu)->avg.load_avg_contrib;
 	} else {
 		load = tg->parent->cfs_rq[cpu]->h_load;
-		load *= tg->se[cpu]->load.weight;
-		load /= tg->parent->cfs_rq[cpu]->load.weight + 1;
+		load *= tg->se[cpu]->avg.load_avg_contrib;
+		load /= tg->parent->cfs_rq[cpu]->runnable_load_avg +
+			tg->parent->cfs_rq[cpu]->blocked_load_avg + 1;
 	}
 
 	tg->cfs_rq[cpu]->h_load = load;
@@ -4153,8 +4154,9 @@ static unsigned long task_h_load(struct task_struct *p)
 	struct cfs_rq *cfs_rq = task_cfs_rq(p);
 	unsigned long load;
 
-	load = p->se.load.weight;
-	load = div_u64(load * cfs_rq->h_load, cfs_rq->load.weight + 1);
+	load = p->se.avg.load_avg_contrib;
+	load = div_u64(load * cfs_rq->h_load,
+			cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg + 1);
 
 	return load;
 }
-- 
1.7.12

-- 
Thanks
    Alex
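
[Editor's note] To make the fraction discussed above concrete: the patch changes the
hierarchical load calculation so that both the numerator (the entity's decayed
load_avg_contrib) and the denominator (the cfs_rq's runnable_load_avg plus
blocked_load_avg) use the same runnable-average basis, instead of the instantaneous
load.weight. Below is a minimal user-space sketch of that fraction; the struct
layout, helper name, and sample numbers are invented for illustration and are not
the kernel's actual types.

    #include <stdio.h>

    /*
     * Sketch (NOT kernel code) of the h_load fraction after the patch:
     * an entity's share of its group's hierarchical load is its decayed
     * load contribution divided by the group's runnable + blocked load
     * averages, with +1 to avoid dividing by zero on an empty queue.
     */
    struct sketch_cfs_rq {
            unsigned long h_load;            /* hierarchical load of the group on this cpu */
            unsigned long runnable_load_avg; /* decayed load of runnable entities */
            unsigned long blocked_load_avg;  /* decayed load of blocked entities */
    };

    static unsigned long h_load_share(unsigned long load_avg_contrib,
                                      const struct sketch_cfs_rq *grp)
    {
            return load_avg_contrib * grp->h_load /
                   (grp->runnable_load_avg + grp->blocked_load_avg + 1);
    }

    int main(void)
    {
            struct sketch_cfs_rq grp = {
                    .h_load = 1024,
                    .runnable_load_avg = 512,
                    .blocked_load_avg = 256,
            };

            /* a task whose decayed load contribution is 256 */
            printf("task h_load share = %lu\n", h_load_share(256, &grp));
            return 0;
    }

With the pre-patch code the denominator would have been the queue's instantaneous
load.weight, which is why Paul asked for the fraction taken in tg_load_down() to be
made consistent with the runnable-average numerator.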