Date:	Fri, 13 May 2016 05:48:11 +0800
From:	Yuyang Du <yuyang.du@...el.com>
To:	peterz@...radead.org, linux-kernel@...r.kernel.org, efault@....de,
	mingo@...nel.org, morten.rasmussen@....com,
	dietmar.eggemann@....com, vincent.guittot@...aro.org
Subject: Re: [tip:sched/core] sched/fair: Correct unit of load_above_capacity

On Thu, May 12, 2016 at 03:31:51AM -0700, tip-bot for Morten Rasmussen wrote:
> Commit-ID:  cfa10334318d8212d007da8c771187643c9cef35
> Gitweb:     http://git.kernel.org/tip/cfa10334318d8212d007da8c771187643c9cef35
> Author:     Morten Rasmussen <morten.rasmussen@....com>
> AuthorDate: Fri, 29 Apr 2016 20:32:40 +0100
> Committer:  Ingo Molnar <mingo@...nel.org>
> CommitDate: Thu, 12 May 2016 09:55:33 +0200
> 
> sched/fair: Correct unit of load_above_capacity
> 
> In calculate_imbalance(), load_above_capacity currently has the unit
> [capacity] while it is used as being [load/capacity]. Not only is it
> wrong, it also makes it unlikely that load_above_capacity is ever used,
> as the subsequent code picks the smaller of load_above_capacity and
> the avg_load.
> 
> This patch ensures that load_above_capacity has the right unit
> [load/capacity].
> 
> Signed-off-by: Morten Rasmussen <morten.rasmussen@....com>
> [ Changed changelog to note it was in capacity unit; +rebase. ]
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@....com>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Mike Galbraith <efault@....de>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: linux-kernel@...r.kernel.org
> Link: http://lkml.kernel.org/r/1461958364-675-4-git-send-email-dietmar.eggemann@arm.com
> Signed-off-by: Ingo Molnar <mingo@...nel.org>
> ---
>  kernel/sched/fair.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2338105..218f8e8 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7067,9 +7067,11 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  	if (busiest->group_type == group_overloaded &&
>  	    local->group_type   == group_overloaded) {
>  		load_above_capacity = busiest->sum_nr_running * SCHED_CAPACITY_SCALE;
> -		if (load_above_capacity > busiest->group_capacity)
> +		if (load_above_capacity > busiest->group_capacity) {
>  			load_above_capacity -= busiest->group_capacity;
> -		else
> +			load_above_capacity *= NICE_0_LOAD;
> +			load_above_capacity /= busiest->group_capacity;
> +		} else
>  			load_above_capacity = ~0UL;
>  	}
  
Hi Morten,

I have the feeling this might be wrong: the NICE_0_LOAD here should be
scaled down (a rough sketch of what I mean is below). But I hope I am wrong.

Vincent, could you take a look?
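
To illustrate, a small userspace sketch (not kernel code; it assumes
SCHED_FIXEDPOINT_SHIFT == 10 and the CONFIG_64BIT increased load
resolution, so NICE_0_LOAD is 1 << 20, while the load_avg sums behind
busiest->avg_load are, if I read the load tracking code right, kept in
scale_load_down() units where a runnable nice-0 task contributes about
1024; the group below is made up):

	#include <stdio.h>

	#define SCHED_FIXEDPOINT_SHIFT	10
	#define SCHED_CAPACITY_SCALE	(1UL << SCHED_FIXEDPOINT_SHIFT)
	/* 64-bit load resolution: NICE_0_LOAD == 1 << 20 */
	#define NICE_0_LOAD		(1UL << (2 * SCHED_FIXEDPOINT_SHIFT))
	#define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)

	int main(void)
	{
		/* Hypothetical busiest group: 3 running tasks, one CPU of capacity. */
		unsigned long sum_nr_running = 3;
		unsigned long group_capacity = SCHED_CAPACITY_SCALE;

		unsigned long above = sum_nr_running * SCHED_CAPACITY_SCALE
					- group_capacity;

		/* As merged: multiply by the full (unscaled) NICE_0_LOAD. */
		unsigned long merged = above * NICE_0_LOAD / group_capacity;

		/* What I would have expected: scale NICE_0_LOAD down first. */
		unsigned long scaled = above * scale_load_down(NICE_0_LOAD)
					/ group_capacity;

		/*
		 * avg_load is built from load_avg sums where a runnable nice-0
		 * task contributes roughly SCHED_CAPACITY_SCALE (1024), so two
		 * "extra" tasks would show up there as something on the order
		 * of 2048.
		 */
		printf("merged variant: %lu\n", merged);	/* 2097152 */
		printf("scaled variant: %lu\n", scaled);	/* 2048 */
		return 0;
	}

If that reading is right, the merged variant ends up about
SCHED_CAPACITY_SCALE times larger than the avg_load it is later
min()'ed against, so the min() would almost always pick avg_load.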
