Date:	Sat, 24 Oct 2009 13:04:32 -0700
From:	Arjan van de Ven <arjan@...radead.org>
To:	Arjan van de Ven <arjan@...radead.org>,
	Peter Zijlstra <peterz@...radead.org>
Cc:	mingo@...e.hu, linux-kernel@...r.kernel.org
Subject: [PATCH 2/3] sched: Add aggressive load balancing for certain
 situations

Subject: sched: Add aggressive load balancing for certain situations
From: Arjan van de Ven <arjan@...ux.intel.com>

The scheduler, in its "find idlest group" function, currently applies an unconditional
imbalance threshold before it will consider moving a task.

However, there are situations where this is undesirable, and we want to opt in to a
more aggressive load balancing algorithm to minimize latencies.
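
As a point of reference, a minimal sketch of what the existing threshold
check amounts to (illustrative only, not the kernel code itself; the
helper name and the example imbalance value are made up):

/*
 * Sketch only: the current code keeps the task local (find_idlest_group()
 * returns NULL) unless this group's load exceeds the idlest group's load
 * by the domain's imbalance percentage.
 */
static int worth_moving(unsigned long this_load, unsigned long min_load,
			unsigned int imbalance)
{
	/* e.g. imbalance == 125 requires this_load >= 1.25 * min_load */
	return 100 * this_load >= imbalance * min_load;
}

The aggressive mode added below simply skips this comparison and takes
the idlest group whenever one was found.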

This patch adds the infrastructure for this, and also adds two cases in which
we select the aggressive approach:
1) From interrupt context. Events that happen in irq context are, as a heuristic,
   very likely to show latency-sensitive behavior (see the sketch after this list)
2) When doing a wake_up() and the scheduler domain we're investigating has the
   flag set that opts in to load balancing during wake_up()
   (for example the SMT/HT domain)
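
For context on case (1), a rough sketch of what "interrupt context" means
here, assuming the usual definition of in_irq() in terms of the preempt
count (background only, not part of the patch):

/*
 * Background sketch: in_irq() is true while the hardirq portion of the
 * preempt count is non-zero, i.e. we are currently running a hardware
 * interrupt handler.  Wakeups issued from such a handler are the ones
 * case (1) treats as latency sensitive.
 */
#define in_irq_like()	(preempt_count() & HARDIRQ_MASK)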


Signed-off-by: Arjan van de Ven <arjan@...ux.intel.com>


diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 4e777b4..fe9b95b 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1246,7 +1246,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
  */
 static struct sched_group *
 find_idlest_group(struct sched_domain *sd, struct task_struct *p,
-		  int this_cpu, int load_idx)
+		  int this_cpu, int load_idx, int aggressive)
 {
 	struct sched_group *idlest = NULL, *this = NULL, *group = sd->groups;
 	unsigned long min_load = ULONG_MAX, this_load = 0;
@@ -1290,7 +1290,9 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		}
 	} while (group = group->next, group != sd->groups);
 
-	if (!idlest || 100*this_load < imbalance*min_load)
+	if (!idlest)
+		return NULL;
+	if (!aggressive && 100*this_load < imbalance*min_load)
 		return NULL;
 	return idlest;
 }
@@ -1412,6 +1414,7 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
 		int load_idx = sd->forkexec_idx;
 		struct sched_group *group;
 		int weight;
+		int aggressive;
 
 		if (!(sd->flags & sd_flag)) {
 			sd = sd->child;
@@ -1421,7 +1424,13 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
 		if (sd_flag & SD_BALANCE_WAKE)
 			load_idx = sd->wake_idx;
 
-		group = find_idlest_group(sd, p, cpu, load_idx);
+		aggressive = 0;
+		if (in_irq())
+			aggressive = 1;
+		if (sd_flag & SD_BALANCE_WAKE)
+			aggressive = 1;
+
+		group = find_idlest_group(sd, p, cpu, load_idx, aggressive);
 		if (!group) {
 			sd = sd->child;
 			continue;



-- 
Arjan van de Ven 	Intel Open Source Technology Centre
For development, discussion and tips for power savings, 
visit http://www.lesswatts.org
