Date:	Mon, 15 Feb 2010 23:29:33 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Suresh Siddha <suresh.b.siddha@...el.com>
Cc:	Ingo Molnar <mingo@...e.hu>, LKML <linux-kernel@...r.kernel.org>,
	"Ma, Ling" <ling.ma@...el.com>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>, ego@...ibm.com,
	svaidy@...ux.vnet.ibm.com
Subject: Re: [patch] sched: fix SMT scheduler regression in
 find_busiest_queue()

On Fri, 2010-02-12 at 17:14 -0800, Suresh Siddha wrote:

> From: Suresh Siddha <suresh.b.siddha@...el.com>
> Subject: sched: fix SMT scheduler regression in find_busiest_queue()
> 
> Fix an SMT scheduler performance regression that leads to a scenario
> where the SMT threads in one core are completely idle while both SMT
> threads in another core (on the same socket) are busy.
> 
> This is caused by this commit (with the problematic code highlighted)
> 
>    commit bdb94aa5dbd8b55e75f5a50b61312fe589e2c2d1
>    Author: Peter Zijlstra <a.p.zijlstra@...llo.nl>
>    Date:   Tue Sep 1 10:34:38 2009 +0200
> 
>    sched: Try to deal with low capacity
> 
>    @@ -4203,15 +4223,18 @@ find_busiest_queue()
>    ...
> 	for_each_cpu(i, sched_group_cpus(group)) {
>    +	unsigned long power = power_of(i);
> 
>    ...
> 
>    -	wl = weighted_cpuload(i);
>    +	wl = weighted_cpuload(i) * SCHED_LOAD_SCALE;
>    +	wl /= power;
> 
>    -	if (rq->nr_running == 1 && wl > imbalance)
>    +	if (capacity && rq->nr_running == 1 && wl > imbalance)
> 		continue;
> 
> On an SMT system, the power of an HT logical cpu will be 589, and
> the scheduler load imbalance (for scenarios like the one mentioned
> above) can be approximately 1024 (SCHED_LOAD_SCALE). The above change
> of scaling the weighted load by the cpu power results in
> "wl > imbalance", ultimately making find_busiest_queue() return NULL
> and causing load_balance() to think that the load is well balanced.
> But in fact one of the tasks can be moved to the idle core for optimal
> performance.
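
To put numbers on the above (a quick user-space sketch of the
arithmetic, not kernel code; I'm assuming the single nice-0 task
contributes a weighted load equal to SCHED_LOAD_SCALE):

#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL

int main(void)
{
	unsigned long power     = 589;  /* power_of() of an HT logical cpu */
	unsigned long imbalance = 1024; /* ~SCHED_LOAD_SCALE, as above */
	unsigned long wl        = 1024; /* weighted load of one nice-0 task */

	wl = wl * SCHED_LOAD_SCALE / power; /* the problematic scaling */
	printf("scaled wl = %lu vs imbalance = %lu\n", wl, imbalance);
	/* Prints 1780 vs 1024: "rq->nr_running == 1 && wl > imbalance" now
	 * skips every such cpu, so find_busiest_queue() returns NULL even
	 * though an idle core is available. */
	return 0;
}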
> 
> We don't need to use the weighted load (wl) scaled by the cpu power to
> compare with imbalance. In that condition we already know there is only
> a single task ("rq->nr_running == 1"), and the comparison between
> imbalance and wl is to make sure that we select the correctly
> prioritized thread that matches the imbalance. So we really need to
> compare the imbalance with the original weighted load of the cpu and
> not the scaled load.
> 
> But in other conditions, where we want the most hammered (busiest) cpu,
> we can use the scaled load to ensure that we consider the cpu power in
> addition to the actual load on that cpu, so that we can move the load
> away from the cpu that is getting hammered the most relative to its
> actual capacity, as compared with the rest of the cpus in that busiest
> group.
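
Same exercise for the busiest-cpu selection (again just a user-space
sketch; the 589 is the HT number from above, and 1024 for a
full-capacity cpu is an assumption on my part):

#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL

int main(void)
{
	unsigned long wl = 2048;         /* same raw load on both cpus */
	unsigned long power_full = 1024; /* cpu running at full capacity */
	unsigned long power_ht   = 589;  /* HT sibling sharing a core */

	printf("full-capacity cpu: scaled wl = %lu\n",
	       wl * SCHED_LOAD_SCALE / power_full); /* 2048 */
	printf("HT sibling:        scaled wl = %lu\n",
	       wl * SCHED_LOAD_SCALE / power_ht);   /* 3560 */
	/* Equal raw loads, but the lower-capacity cpu ends up with the
	 * larger scaled load, so it is the one picked as busiest and load
	 * gets moved away from it. */
	return 0;
}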
> 
> Fix it.
> 
> Reported-by: Ma Ling <ling.ma@...el.com>
> Initial-Analysis-by: Zhang, Yanmin <yanmin_zhang@...ux.intel.com>
> Signed-off-by: Suresh Siddha <suresh.b.siddha@...el.com>
> Cc: stable@...nel.org [2.6.32.x]

A reproduction case would have been nice; I've been playing with busy
loops and plotting the cpus on paper, but I didn't manage to reproduce
it.

Still, I went through the logic and it seems to make sense, so:

Acked-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>

Ingo, sed -e 's/sched\.c/sched_fair.c/g' makes it apply to tip/master
and should provide a means of resolving the rebase/merge conflict.
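
Something like the following should do it (the patch filename is made
up for illustration):

  sed -e 's/sched\.c/sched_fair.c/g' suresh.patch | patch -p1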

> ---
> 
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 3a8fb30..bef5369 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -4119,12 +4119,23 @@ find_busiest_queue(struct sched_group *group, enum cpu_idle_type idle,
>  			continue;
>  
>  		rq = cpu_rq(i);
> -		wl = weighted_cpuload(i) * SCHED_LOAD_SCALE;
> -		wl /= power;
> +		wl = weighted_cpuload(i);
>  
> +		/*
> +		 * When comparing with imbalance, use weighted_cpuload()
> +		 * which is not scaled with the cpu power.
> +		 */
>  		if (capacity && rq->nr_running == 1 && wl > imbalance)
>  			continue;
>  
> +		/*
> +		 * For the load comparisons with the other cpus, consider
> +		 * the weighted_cpuload() scaled with the cpu power, so that
> +		 * the load can be moved away from the cpu that is potentially
> +		 * running at a lower capacity.
> +		 */
> +		wl = (wl * SCHED_LOAD_SCALE) / power;
> +
>  		if (wl > max_load) {
>  			max_load = wl;
>  			busiest = rq;
> 
> 


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
