Message-ID: <20140624143820.GA28774@twins.programming.kicks-ass.net>
Date:	Tue, 24 Jun 2014 16:38:20 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Rik van Riel <riel@...hat.com>
Cc:	linux-kernel@...r.kernel.org, chegu_vinod@...com, mgorman@...e.de,
	mingo@...nel.org
Subject: Re: [PATCH 8/7] sched,numa: do not let a move increase the imbalance

On Mon, Jun 23, 2014 at 06:30:11PM -0400, Rik van Riel wrote:
> The HP DL980 system has a different NUMA topology from the 8-node
> system I am testing on, and showed some bad behaviour that I have not
> managed to reproduce. This patch makes sure workloads converge.
> 
> When both a task swap and a task move are possible, do not let the
> task move cause an increase in the load imbalance. Forcing task swaps
> can help untangle workloads that have gotten stuck fighting over the
> same nodes, like this run of "perf bench numa -m -0 -P 1000 -p 16 -t 15":
> 
> Per-node process memory usage (in MBs); columns are nodes 0-7, then the total:
> 38035 (process 0      2      0      0      1   1000      0      0      0  1003
> 38036 (process 1      2      0      0      1      0   1000      0      0  1003
> 38037 (process 2    230    772      0      1      0      0      0      0  1003
> 38038 (process 3      1      0      0   1003      0      0      0      0  1004
> 38039 (process 4      2      0      0      1      0      0    994      6  1003
> 38040 (process 5      2      0      0      1    994      0      0      6  1003
> 38041 (process 6      2      0   1000      1      0      0      0      0  1003
> 38042 (process 7   1003      0      0      1      0      0      0      0  1004
> 38043 (process 8      2      0      0      1      0   1000      0      0  1003
> 38044 (process 9      2      0      0      1      0      0      0   1000  1003
> 38045 (process 1   1002      0      0      1      0      0      0      0  1003
> 38046 (process 1      3      0    954      1      0      0      0     46  1004
> 38047 (process 1      2   1000      0      1      0      0      0      0  1003
> 38048 (process 1      2      0      0      1      0      0   1000      0  1003
> 38049 (process 1      2      0      0   1001      0      0      0      0  1003
> 38050 (process 1      2    934      0     67      0      0      0      0  1003
> 
> Allowing task moves to increase the imbalance even slightly causes
> tasks to move towards node 1, and not towards node 7, which prevents
> the workload from converging once the above scenario has been reached.
> 
> Reported-and-tested-by: Vinod Chegu <chegu_vinod@...com>
> Signed-off-by: Rik van Riel <riel@...hat.com>
> ---
>  kernel/sched/fair.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 4723234..e98d290 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1314,6 +1314,12 @@ static void task_numa_compare(struct task_numa_env *env,
>  
>  	if (moveimp > imp && moveimp > env->best_imp) {
>  		/*
> +		 * A task swap is possible, do not let a task move
> +		 * increase the imbalance.
> +		 */
> +		int imbalance_pct = env->imbalance_pct;
> +		env->imbalance_pct = 100;
> +		/*
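
To see mechanically why this works: the load check scales the source
load by imbalance_pct, so the default of 125 tolerates a 25% imbalance
while 100 tolerates none. Below is a minimal standalone sketch of that
comparison (a simplified, hypothetical stand-in for fair.c's
load_too_imbalanced(), not the kernel's exact code):

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical simplified load check, for illustration only:
 * a move is rejected when the busier side's load exceeds the
 * other side's by more than the slack that imbalance_pct
 * allows (125 => 25% slack, 100 => no slack at all).
 */
static bool load_too_imbalanced(long src_load, long dst_load,
				int imbalance_pct)
{
	if (dst_load < src_load) {
		long tmp = dst_load;
		dst_load = src_load;
		src_load = tmp;
	}
	return dst_load * 100 > src_load * imbalance_pct;
}

int main(void)
{
	/* Default 25% slack: a 1200-vs-1000 move is tolerated. */
	printf("pct=125: too imbalanced? %d\n",
	       load_too_imbalanced(1000, 1200, 125));

	/* Forced to 100: the same move is rejected, so the swap wins. */
	printf("pct=100: too imbalanced? %d\n",
	       load_too_imbalanced(1000, 1200, 100));
	return 0;
}

With imbalance_pct forced to 100, any move that would increase the
imbalance fails the check, leaving the task swap as the only option.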

I would feel so much better if we could say _why_ this is so.

