Date:	Sun, 25 Oct 2009 09:01:26 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Arjan van de Ven <arjan@...radead.org>
Cc:	mingo@...e.hu, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] sched: Add aggressive load balancing for certain
 situations

On Sat, 2009-10-24 at 13:04 -0700, Arjan van de Ven wrote:
> Subject: sched: Add aggressive load balancing for certain situations
> From: Arjan van de Ven <arjan@...ux.intel.com>
> 
> The scheduler, in its "find idlest group" function, currently applies an unconditional
> imbalance threshold before it will consider moving a task.
> 
> However, there are situations where this is undesirable, and we want to opt in to a
> more aggressive load balancing algorithm to minimize latencies.
> 
> This patch adds the infrastructure for this and also adds two cases in which
> we select the aggressive approach:
> 1) From interrupt context. Events that happen in irq context are very likely,
>    as a heuristic, to show latency sensitive behavior
> 2) When doing a wake_up() and the scheduler domain we're investigating has the
>    flag set that opts in to load balancing during wake_up()
>    (for example the SMT/HT domain)
> 
> 
> Signed-off-by: Arjan van de Ven <arjan@...ux.intel.com>



> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 4e777b4..fe9b95b 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -1246,7 +1246,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
>   */
>  static struct sched_group *
>  find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> -		  int this_cpu, int load_idx)
> +		  int this_cpu, int load_idx, int agressive)
>  {

Can't we fold that into load_idx? Like -1 or something?

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/