Date:	Mon, 27 Aug 2007 12:19:05 -0700
From:	"Siddha, Suresh B" <suresh.b.siddha@...el.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	"Siddha, Suresh B" <suresh.b.siddha@...el.com>,
	nickpiggin@...oo.com.au, linux-kernel@...r.kernel.org,
	akpm@...ux-foundation.org
Subject: Re: [patch] sched: fix broken smt/mc optimizations with CFS

On Thu, Aug 23, 2007 at 02:13:41PM +0200, Ingo Molnar wrote:
> 
> * Ingo Molnar <mingo@...e.hu> wrote:
> 
> > [...] So how about the patch below instead?
> 
> the right patch attached.
> 
> -------------------------------->
> Subject: sched: fix broken SMT/MC optimizations
> From: "Siddha, Suresh B" <suresh.b.siddha@...el.com>
> 
> On a four-package system with HT, the HT load-balancing optimizations
> were broken.  For example, if two tasks end up running on two logical
> threads of one of the packages, the scheduler is not able to pull one
> of the tasks over to a completely idle package.
> 
> In this scenario, for nice-0 tasks, the imbalance calculated by the
> scheduler will be 512, and find_busiest_queue() will return 0 (as each
> cpu's load of 1024 exceeds the imbalance and each cpu has only one task
> running).
> 
> Similarly, the MC scheduler optimizations are also fixed by this patch.
> 
> [ mingo@...e.hu: restored fair balancing by increasing the fuzz and
>                  adding it back to the power decision, without the /2
>                  factor. ]
> 
> Signed-off-by: Suresh Siddha <suresh.b.siddha@...el.com>
> Signed-off-by: Ingo Molnar <mingo@...e.hu>
> ---
> 
>  include/linux/sched.h |    2 +-
>  kernel/sched.c        |    2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> Index: linux/include/linux/sched.h
> ===================================================================
> --- linux.orig/include/linux/sched.h
> +++ linux/include/linux/sched.h
> @@ -681,7 +681,7 @@ enum cpu_idle_type {
>  #define SCHED_LOAD_SHIFT	10
>  #define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)
>  
> -#define SCHED_LOAD_SCALE_FUZZ	(SCHED_LOAD_SCALE >> 1)
> +#define SCHED_LOAD_SCALE_FUZZ	SCHED_LOAD_SCALE
>  
>  #ifdef CONFIG_SMP
>  #define SD_LOAD_BALANCE		1	/* Do load balancing on this domain. */
> Index: linux/kernel/sched.c
> ===================================================================
> --- linux.orig/kernel/sched.c
> +++ linux/kernel/sched.c
> @@ -2517,7 +2517,7 @@ group_next:
>  	 * a think about bumping its value to force at least one task to be
>  	 * moved
>  	 */
> -	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task/2) {
> +	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {

Ingo, this is still broken. This condition is always false for nice-0 tasks.
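
To spell out the arithmetic: for nice-0 tasks busiest_load_per_task is
SCHED_LOAD_SCALE, i.e. 1024, and with this patch SCHED_LOAD_SCALE_FUZZ is
also 1024.  Since *imbalance is never negative, *imbalance + 1024 < 1024
can never hold.  A minimal user-space sketch (not kernel code, just the
patched test restated with those values):

#include <stdio.h>

/* Values as defined in include/linux/sched.h after this patch. */
#define SCHED_LOAD_SHIFT	10
#define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)	/* 1024 */
#define SCHED_LOAD_SCALE_FUZZ	SCHED_LOAD_SCALE		/* 1024 */

int main(void)
{
	/* A nice-0 task contributes SCHED_LOAD_SCALE of load, so
	 * busiest_load_per_task is 1024 on the busiest queue. */
	long busiest_load_per_task = SCHED_LOAD_SCALE;
	long imbalance;

	/* The patched test quoted above guards the code that bumps
	 * *imbalance to force at least one task to be moved.  With
	 * FUZZ == 1024 the left-hand side is at least 1024, so the
	 * test never fires for any non-negative imbalance. */
	for (imbalance = 0; imbalance <= 1024; imbalance += 256)
		printf("imbalance=%4ld -> %s\n", imbalance,
		       imbalance + SCHED_LOAD_SCALE_FUZZ <
				busiest_load_per_task ? "true" : "false");

	return 0;
}

Every line prints "false", so the bump never happens and the two tasks
stay stacked on one package.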

thanks,
suresh
