Date:	Mon, 17 Jun 2013 15:59:09 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Alex Shi <alex.shi@...el.com>
Cc:	mingo@...hat.com, tglx@...utronix.de, akpm@...ux-foundation.org,
	bp@...en8.de, pjt@...gle.com, namhyung@...nel.org, efault@....de,
	morten.rasmussen@....com, vincent.guittot@...aro.org,
	preeti@...ux.vnet.ibm.com, viresh.kumar@...aro.org,
	linux-kernel@...r.kernel.org, mgorman@...e.de, riel@...hat.com,
	wangyun@...ux.vnet.ibm.com, Jason Low <jason.low2@...com>,
	Changlong Xie <changlongx.xie@...el.com>, sgruszka@...hat.com,
	fweisbec@...il.com
Subject: Re: [patch v8 8/9] sched: consider runnable load average in
 move_tasks

On Fri, Jun 07, 2013 at 03:20:51PM +0800, Alex Shi wrote:
> Besides using the runnable load average in the background, move_tasks
> is also a key function in load balancing. We need to consider the
> runnable load average there as well, so that the load comparison is
> apples to apples.
> 
> Morten caught a u64 division bug on ARM, thanks!
> 
> Signed-off-by: Alex Shi <alex.shi@...el.com>
> ---
>  kernel/sched/fair.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index eadd2e7..3aa1dc0 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4178,11 +4178,14 @@ static int tg_load_down(struct task_group *tg, void *data)
>  	long cpu = (long)data;
>  
>  	if (!tg->parent) {
> -		load = cpu_rq(cpu)->load.weight;
> +		load = cpu_rq(cpu)->avg.load_avg_contrib;
>  	} else {
> +		unsigned long tmp_rla;
> +		tmp_rla = tg->parent->cfs_rq[cpu]->runnable_load_avg + 1;
> +
>  		load = tg->parent->cfs_rq[cpu]->h_load;
> -		load *= tg->se[cpu]->load.weight;
> -		load /= tg->parent->cfs_rq[cpu]->load.weight + 1;
> +		load *= tg->se[cpu]->avg.load_avg_contrib;
> +		load /= tmp_rla;
>  	}
>  
>  	tg->cfs_rq[cpu]->h_load = load;
> @@ -4208,12 +4211,9 @@ static void update_h_load(long cpu)
>  static unsigned long task_h_load(struct task_struct *p)
>  {
>  	struct cfs_rq *cfs_rq = task_cfs_rq(p);
> -	unsigned long load;
> -
> -	load = p->se.load.weight;
> -	load = div_u64(load * cfs_rq->h_load, cfs_rq->load.weight + 1);
>  
> -	return load;
> +	return div64_ul(p->se.avg.load_avg_contrib * cfs_rq->h_load,
> +			cfs_rq->runnable_load_avg + 1);
>  }
>  #else
>  static inline void update_blocked_averages(int cpu)
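
(For illustration, not part of the posted patch: with the new code, if the
parent cfs_rq has h_load = 1024 and runnable_load_avg = 2047, a group entity
whose load_avg_contrib is 511 ends up with h_load = 1024 * 511 / 2048 = 255,
i.e. roughly its 25% share of the parent's runnable load. The +1 on the
divisor only guards against dividing by zero.)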

Should we not also change the !FAIR_GROUP_SCHED version of task_h_load()
for this?
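
For reference, a rough and untested sketch of what that could look like,
assuming the !CONFIG_FAIR_GROUP_SCHED stub currently just returns
p->se.load.weight (this is not part of the posted patch):

	static unsigned long task_h_load(struct task_struct *p)
	{
		/*
		 * No task groups, hence no hierarchy to scale through:
		 * use the task's own runnable load average contribution
		 * instead of the instantaneous load.weight.
		 */
		return p->se.avg.load_avg_contrib;
	}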
