Message-ID: <20140623183011.28555a7c@annuminas.surriel.com>
Date:	Mon, 23 Jun 2014 18:30:11 -0400
From:	Rik van Riel <riel@...hat.com>
To:	linux-kernel@...r.kernel.org
Cc:	chegu_vinod@...com, peterz@...radead.org, mgorman@...e.de,
	mingo@...nel.org
Subject: [PATCH 8/7] sched,numa: do not let a move increase the imbalance

The HP DL980 system has a different NUMA topology from the 8-node
system I am testing on, and it showed some bad behaviour that I have
not managed to reproduce. This patch makes sure workloads converge.

When both a task swap and a task move are possible, do not let the
task move cause an increase in the load imbalance. Forcing task swaps
can help untangle workloads that have gotten stuck fighting over the
same nodes, like this run of "perf bench numa -m -0 -P 1000 -p 16 -t 15":

Per-node process memory usage (in MBs); the columns are memory on
nodes 0 through 7, followed by the per-process total:
38035 (process 0      2      0      0      1   1000      0      0      0  1003
38036 (process 1      2      0      0      1      0   1000      0      0  1003
38037 (process 2    230    772      0      1      0      0      0      0  1003
38038 (process 3      1      0      0   1003      0      0      0      0  1004
38039 (process 4      2      0      0      1      0      0    994      6  1003
38040 (process 5      2      0      0      1    994      0      0      6  1003
38041 (process 6      2      0   1000      1      0      0      0      0  1003
38042 (process 7   1003      0      0      1      0      0      0      0  1004
38043 (process 8      2      0      0      1      0   1000      0      0  1003
38044 (process 9      2      0      0      1      0      0      0   1000  1003
38045 (process 1   1002      0      0      1      0      0      0      0  1003
38046 (process 1      3      0    954      1      0      0      0     46  1004
38047 (process 1      2   1000      0      1      0      0      0      0  1003
38048 (process 1      2      0      0      1      0      0   1000      0  1003
38049 (process 1      2      0      0   1001      0      0      0      0  1003
38050 (process 1      2    934      0     67      0      0      0      0  1003

Allowing task moves to increase the imbalance even slightly causes
tasks to move towards node 1 rather than towards node 7, which prevents
the workload from converging once the above scenario has been reached.
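
For reference, here is a minimal standalone sketch of the imbalance_pct
convention this relies on. It is not the kernel's load_too_imbalanced()
helper; the function and the numbers below are made up purely for
illustration. With the usual percentage a small increase in imbalance is
tolerated, while forcing the percentage to 100 rejects any move that
would leave the destination more loaded than the source, so only the
swap path remains:

	#include <stdbool.h>
	#include <stdio.h>

	/*
	 * Toy imbalance check, loosely following the scheduler convention
	 * that imbalance_pct == 125 tolerates roughly a 25% load difference.
	 * Hypothetical helper for illustration only.
	 */
	static bool too_imbalanced(long src_load, long dst_load, int imbalance_pct)
	{
		/* Reject when the destination would exceed the source by more
		 * than the slack imbalance_pct allows (100 means no slack). */
		return dst_load * 100 > src_load * imbalance_pct;
	}

	int main(void)
	{
		/* A move that leaves the destination slightly more loaded
		 * than the source. */
		long src_load = 1000, dst_load = 1010;

		/* With ~25% slack the move is still allowed... */
		printf("pct=125: %s\n",
		       too_imbalanced(src_load, dst_load, 125) ? "reject" : "allow");

		/* ...but with the percentage forced to 100 any increase in
		 * the imbalance is rejected, leaving only the task swap path. */
		printf("pct=100: %s\n",
		       too_imbalanced(src_load, dst_load, 100) ? "reject" : "allow");
		return 0;
	}

That is the effect of temporarily setting env->imbalance_pct = 100
around the move check in the patch below.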

Reported-and-tested-by: Vinod Chegu <chegu_vinod@...com>
Signed-off-by: Rik van Riel <riel@...hat.com>
---
 kernel/sched/fair.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4723234..e98d290 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1314,6 +1314,12 @@ static void task_numa_compare(struct task_numa_env *env,
 
 	if (moveimp > imp && moveimp > env->best_imp) {
 		/*
+		 * A task swap is possible, do not let a task move
+		 * increase the imbalance.
+		 */
+		int imbalance_pct = env->imbalance_pct;
+		env->imbalance_pct = 100;
+		/*
 		 * If the improvement from just moving env->p direction is
 		 * better than swapping tasks around, check if a move is
 		 * possible. Store a slightly smaller score than moveimp,
@@ -1324,6 +1330,8 @@ static void task_numa_compare(struct task_numa_env *env,
 			cur = NULL;
 			goto assign;
 		}
+
+		env->imbalance_pct = imbalance_pct;
 	}
 
 	if (imp <= env->best_imp)

