Message-ID: <5550A446.2050005@redhat.com>
Date:	Mon, 11 May 2015 14:44:54 +0200
From:	Jirka Hladky <jhladky@...hat.com>
To:	Rik van Riel <riel@...hat.com>,
	Artem Bityutskiy <dedekind1@...il.com>
CC:	linux-kernel@...r.kernel.org, mgorman@...e.de, peterz@...radead.org
Subject: Re: [PATCH] numa,sched: only consider less busy nodes as numa balancing
 destination

Hi Rik,

We have results for the SPECjbb2005, Linpack, and Stream benchmarks with the
following kernels:

4.1.0-0.rc1.git0.1.el7.x86_64 (without patch)
4.1.0-0.rc2.git0.3.el7.x86_64 with your patch
4.1.0-0.rc2.git0.3.el7.x86_64 with your patch and AUTONUMA disabled

The tests were conducted on 3 different systems, each with 4 NUMA nodes,
different generations of Intel processors, and different amounts of RAM.


Across the benchmarks we see:
- with your latest proposed patch applied:
   * gains of 7-15% for single-instance SPECjbb (tested on a variety of
systems; the biggest gains were on the Brickland system, and the gains
grow with the number of threads)
   * no change in results for the multi-instance SPECjbb run (4 parallel
jobs on a 4-NUMA-node system)
   * no change for Linpack
   * slight improvements for Stream (but very close to the error margin)
- with AUTONUMA disabled:
   * SPECjbb performance (both single-instance and 4 parallel jobs)
drops to 1/2 of the performance with AUTONUMA enabled
   * Linpack and Stream performance drops by 30% compared with AUTONUMA
enabled

In summary:
* the proposed patch improves performance for the single-instance SPECjbb
benchmark without hurting anything else
* with AUTONUMA disabled, the performance drop is huge
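
(For anyone reproducing this: automatic NUMA balancing can be queried and
toggled at runtime through the kernel.numa_balancing sysctl, or disabled at
boot with the numa_balancing=disable kernel parameter. Below is a minimal C
sketch of the runtime toggle, assuming a kernel built with
CONFIG_NUMA_BALANCING; it is only an illustration of the standard procfs
interface, not the exact procedure we used.)

/* Minimal sketch: query and disable automatic NUMA balancing via
 * /proc/sys/kernel/numa_balancing (0 = disabled, 1 = enabled).
 * Writing requires root. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/numa_balancing", "r+");
	int enabled;

	if (!f) {
		perror("numa_balancing");
		return 1;
	}
	if (fscanf(f, "%d", &enabled) == 1)
		printf("numa_balancing was %d\n", enabled);

	rewind(f);
	fputs("0\n", f);	/* disable for the benchmark run */
	fclose(f);
	return 0;
}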

Please let me know if you need more details.

Thanks
Jirka

On 05/06/2015 05:41 PM, Rik van Riel wrote:
> On Wed, 06 May 2015 13:35:30 +0300
> Artem Bityutskiy <dedekind1@...il.com> wrote:
>
>> we observe a tremendous regression between kernel version 3.16 and 3.17
>> (and up), and I've bisected it to this commit:
>>
>> a43455a sched/numa: Ensure task_numa_migrate() checks the preferred node
> Artem, Jirka, does this patch fix (or at least improve) the issues you
> have been seeing?  Does it introduce any new regressions?
>
> Peter, Mel, I think it may be time to stop waiting for the impedance
> mismatch between the load balancer and NUMA balancing to be resolved,
> and try to just avoid the issue in the NUMA balancing code...
>
> ----8<----
>
> Subject: numa,sched: only consider less busy nodes as numa balancing destination
>
> Changeset a43455a1 ("sched/numa: Ensure task_numa_migrate() checks the
> preferred node") fixes an issue where workloads would never converge
> on a fully loaded (or overloaded) system.
>
> However, it introduces a regression on less than fully loaded systems,
> where workloads converge on a few NUMA nodes, instead of properly staying
> spread out across the whole system. This leads to a reduction in available
> memory bandwidth, and usable CPU cache, with predictable performance problems.
>
> The root cause appears to be an interaction between the load balancer and
> NUMA balancing, where the short term load represented by the load balancer
> differs from the long term load the NUMA balancing code would like to base
> its decisions on.
>
> Simply reverting a43455a1 would re-introduce the non-convergence of
> workloads on fully loaded systems, so that is not a good option. As
> an aside, the check done before a43455a1 only applied to a task's
> preferred node, not to other candidate nodes in the system, so the
> converge-on-too-few-nodes problem still happens, just to a lesser
> degree.
>
> Instead, try to compensate for the impedance mismatch between the
> load balancer and NUMA balancing by only ever considering a lesser
> loaded node as a destination for NUMA balancing, regardless of
> whether the task is trying to move to the preferred node, or to another
> node.
>
> Signed-off-by: Rik van Riel <riel@...hat.com>
> Reported-by: Artem Bityutskiy <dedekind1@...il.com>
> Reported-by: Jirka Hladky <jhladky@...hat.com>
> ---
>   kernel/sched/fair.c | 30 ++++++++++++++++++++++++++++--
>   1 file changed, 28 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ffeaa4105e48..480e6a35ab35 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1409,6 +1409,30 @@ static void task_numa_find_cpu(struct task_numa_env *env,
>   	}
>   }
>   
> +/* Only move tasks to a NUMA node less busy than the current node. */
> +static bool numa_has_capacity(struct task_numa_env *env)
> +{
> +	struct numa_stats *src = &env->src_stats;
> +	struct numa_stats *dst = &env->dst_stats;
> +
> +	if (src->has_free_capacity && !dst->has_free_capacity)
> +		return false;
> +
> +	/*
> +	 * Only consider a task move if the source has a higher load
> +	 * than the destination, corrected for CPU capacity on each node.
> +	 *
> +	 *      src->load                dst->load
> +	 * --------------------- vs ---------------------
> +	 * src->compute_capacity    dst->compute_capacity
> +	 */
> +	if (src->load * dst->compute_capacity >
> +	    dst->load * src->compute_capacity)
> +		return true;
> +
> +	return false;
> +}
> +
>   static int task_numa_migrate(struct task_struct *p)
>   {
>   	struct task_numa_env env = {
> @@ -1463,7 +1487,8 @@ static int task_numa_migrate(struct task_struct *p)
>   	update_numa_stats(&env.dst_stats, env.dst_nid);
>   
>   	/* Try to find a spot on the preferred nid. */
> -	task_numa_find_cpu(&env, taskimp, groupimp);
> +	if (numa_has_capacity(&env))
> +		task_numa_find_cpu(&env, taskimp, groupimp);
>   
>   	/*
>   	 * Look at other nodes in these cases:
> @@ -1494,7 +1519,8 @@ static int task_numa_migrate(struct task_struct *p)
>   			env.dist = dist;
>   			env.dst_nid = nid;
>   			update_numa_stats(&env.dst_stats, env.dst_nid);
> -			task_numa_find_cpu(&env, taskimp, groupimp);
> +			if (numa_has_capacity(&env))
> +				task_numa_find_cpu(&env, taskimp, groupimp);
>   		}
>   	}
>   
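A side note on the comparison in numa_has_capacity(): rather than dividing
load by compute_capacity on each side, which would truncate in integer
arithmetic, the patch cross-multiplies, keeping the check exact. Here is a
standalone sketch of the same idiom; the struct mirrors the two fields used
above, but the values are illustrative and not taken from the kernel:

/* Standalone sketch of the division-free comparison used in
 * numa_has_capacity():
 *   src->load / src->compute_capacity > dst->load / dst->compute_capacity
 * rewritten as a cross multiplication to avoid truncation. */
#include <stdbool.h>
#include <stdio.h>

struct numa_stats {
	unsigned long load;		/* summed runnable load on the node */
	unsigned long compute_capacity;	/* summed CPU capacity of the node */
};

static bool src_more_loaded(const struct numa_stats *src,
			    const struct numa_stats *dst)
{
	/* true if the source node is busier per unit of capacity */
	return src->load * dst->compute_capacity >
	       dst->load * src->compute_capacity;
}

int main(void)
{
	struct numa_stats src = { .load = 900, .compute_capacity = 1024 };
	struct numa_stats dst = { .load = 700, .compute_capacity = 1024 };

	printf("move allowed: %s\n",
	       src_more_loaded(&src, &dst) ? "yes" : "no");
	return 0;
}

With equal capacities the check reduces to a plain load comparison; the
cross multiplication only matters when the nodes' CPU capacities differ.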

