Message-ID: <20140722144559.382c5243@annuminas.surriel.com>
Date:	Tue, 22 Jul 2014 14:45:59 -0400
From:	Rik van Riel <riel@...hat.com>
To:	linux-kernel@...r.kernel.org
Cc:	peterz@...radead.org, mikey@...ling.org, mingo@...nel.org,
	pjt@...gle.com, jhladky@...hat.com, ktkhai@...allels.com,
	tim.c.chen@...ux.intel.com, nicolas.pitre@...aro.org
Subject: [PATCH] sched: make update_sd_pick_busiest return true on a busier
 sd

Currently update_sd_pick_busiest only returns true when an sd
is overloaded, or, in the SD_ASYM_PACKING case, when a domain is
busier than average and is a higher numbered domain than the target.

This breaks load balancing between domains that are not overloaded
in the !SD_ASYM_PACKING case. This patch makes update_sd_pick_busiest
return true whenever the busiest sd seen so far is encountered.

On a 4 node system, this seems to result in the load balancer finally
putting 1 thread of a 4 thread test run of "perf bench numa mem" on
each node, where before the load was generally not spread across all
nodes.

The SD_ASYM_PACKING behaviour does not seem to match the comment,
in that groups with below-average load are ignored, but I have no
hardware to test that, so I have left the behaviour of that code
unchanged.

Cc: mikey@...ling.org
Cc: peterz@...radead.org
Signed-off-by: Rik van Riel <riel@...hat.com>
---
 kernel/sched/fair.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fea7d33..ff4ddba 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5942,16 +5942,20 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	 * numbered CPUs in the group, therefore mark all groups
 	 * higher than ourself as busy.
 	 */
-	if ((env->sd->flags & SD_ASYM_PACKING) && sgs->sum_nr_running &&
-	    env->dst_cpu < group_first_cpu(sg)) {
-		if (!sds->busiest)
-			return true;
+	if (env->sd->flags & SD_ASYM_PACKING) {
+		if (sgs->sum_nr_running && env->dst_cpu < group_first_cpu(sg)) {
+			if (!sds->busiest)
+				return true;
 
-		if (group_first_cpu(sds->busiest) > group_first_cpu(sg))
-			return true;
+			if (group_first_cpu(sds->busiest) > group_first_cpu(sg))
+				return true;
+		}
+
+		return false;
 	}
 
-	return false;
+	/* See above: sgs->avg_load > sds->busiest_stat.avg_load */
+	return true;
 }
 
 #ifdef CONFIG_NUMA_BALANCING

--
