Message-Id: <1185387605.14095.16.camel@tongli.jf.intel.com>
Date:	Wed, 25 Jul 2007 11:20:05 -0700
From:	"Li, Tong N" <tong.n.li@...el.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	linux-kernel@...r.kernel.org, Chris Snook <csnook@...hat.com>
Subject: Re: [RFC] scheduler: improve SMP fairness in CFS

On Wed, 2007-07-25 at 14:03 +0200, Ingo Molnar wrote:

> Signed-off-by: Ingo Molnar <mingo@...e.hu>
> ---
>  include/linux/sched.h |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> Index: linux/include/linux/sched.h
> ===================================================================
> --- linux.orig/include/linux/sched.h
> +++ linux/include/linux/sched.h
> @@ -681,7 +681,7 @@ enum cpu_idle_type {
>  #define SCHED_LOAD_SHIFT	10
>  #define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)
>  
> -#define SCHED_LOAD_SCALE_FUZZ	(SCHED_LOAD_SCALE >> 5)
> +#define SCHED_LOAD_SCALE_FUZZ	(SCHED_LOAD_SCALE >> 1)
>  
>  #ifdef CONFIG_SMP
>  #define SD_LOAD_BALANCE		1	/* Do load balancing on this domain. */

Hi Ingo,

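As a side note on the numbers in the hunk above: with SCHED_LOAD_SHIFT = 10,
SCHED_LOAD_SCALE is 1 << 10 = 1024, so this change raises
SCHED_LOAD_SCALE_FUZZ from 1024 >> 5 = 32 to 1024 >> 1 = 512, i.e. from
about 3% of SCHED_LOAD_SCALE to half of it.
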
The problem I see is that the current load-balancing code is quite
heuristic and can be quite inaccurate at times. Its goal is to maintain
roughly equal load on each core, where load is defined as the sum of
the weights of all tasks on a core. If all tasks have the same weight,
this is a simple problem. If the weights differ, it becomes an NP-hard
problem, and the current heuristic can perform badly under various
workloads. As a simple example, suppose we have four tasks on two cores
with weights 1, 5, 7, and 7. The most balanced partition puts 1 and 7 on
core 1 and 5 and 7 on core 2. But you can see the load isn't evenly
distributed; in fact, it's not possible to distribute the load evenly in
this case. Thus, my opinion is that balancing the load as we do now is,
by itself, not enough to achieve SMP fairness. My patch was intended to
address this problem.
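
To make the example concrete, here is a small user-space sketch (my own
toy program, not kernel code and not part of either patch) that
brute-forces every assignment of the four example weights to two cores
and reports the best achievable split:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/*
	 * Toy illustration of the partitioning problem above: four tasks
	 * with weights 1, 5, 7, 7 to be placed on two cores.  Brute-force
	 * all 2^4 assignments and report the smallest possible difference
	 * in per-core load (sum of task weights).
	 */
	const int weight[] = { 1, 5, 7, 7 };
	const int ntasks = 4;
	int best_diff = 1 << 30, best0 = 0, best1 = 0;
	unsigned int mask;

	for (mask = 0; mask < (1u << ntasks); mask++) {
		int load0 = 0, load1 = 0, i;

		for (i = 0; i < ntasks; i++) {
			if (mask & (1u << i))
				load1 += weight[i];	/* task i on core 1 */
			else
				load0 += weight[i];	/* task i on core 0 */
		}
		if (abs(load0 - load1) < best_diff) {
			best_diff = abs(load0 - load1);
			best0 = load0;
			best1 = load1;
		}
	}

	/* For weights {1, 5, 7, 7} the best achievable difference is 4. */
	printf("best split: core0=%d core1=%d (diff=%d)\n",
	       best0, best1, best_diff);
	return 0;
}

It reports an 8/12 split: since the weights sum to 20 and no subset of
them sums to 10, any partition leaves at least a 4-unit imbalance, which
is why balancing the load alone cannot give exact fairness here.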

  tong

PS. I now have a lockless implementation of my patch. Will post later
today.
