Message-ID: <20150506170038.GB23123@twins.programming.kicks-ass.net>
Date:	Wed, 6 May 2015 19:00:38 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Rik van Riel <riel@...hat.com>
Cc:	Artem Bityutskiy <dedekind1@...il.com>,
	linux-kernel@...r.kernel.org, mgorman@...e.de, jhladky@...hat.com
Subject: Re: [PATCH] numa,sched: only consider less busy nodes as numa
 balancing destination

On Wed, May 06, 2015 at 11:41:28AM -0400, Rik van Riel wrote:

> Peter, Mel, I think it may be time to stop waiting for the impedance
> mismatch between the load balancer and NUMA balancing to be resolved,
> and try to just avoid the issue in the NUMA balancing code...

That's a wee bit unfair since we 'all' decided to let the numa thing
rest for a while. So obviously that issue didn't get resolved.

>  kernel/sched/fair.c | 30 ++++++++++++++++++++++++++++--
>  1 file changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ffeaa4105e48..480e6a35ab35 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1409,6 +1409,30 @@ static void task_numa_find_cpu(struct task_numa_env *env,
>  	}
>  }
>  
> +/* Only move tasks to a NUMA node less busy than the current node. */
> +static bool numa_has_capacity(struct task_numa_env *env)
> +{
> +	struct numa_stats *src = &env->src_stats;
> +	struct numa_stats *dst = &env->dst_stats;
> +
> +	if (src->has_free_capacity && !dst->has_free_capacity)
> +		return false;
> +
> +	/*
> +	 * Only consider a task move if the source has a higher load
> +	 * than the destination, corrected for CPU capacity on each node.
> +	 *
> +	 *      src->load                dst->load
> +	 * --------------------- vs ---------------------
> +	 * src->compute_capacity    dst->compute_capacity
> +	 */
> +	if (src->load * dst->compute_capacity >
> +	    dst->load * src->compute_capacity)
> +		return true;
> +
> +	return false;
> +}
> +
>  static int task_numa_migrate(struct task_struct *p)
>  {
>  	struct task_numa_env env = {
> @@ -1463,7 +1487,8 @@ static int task_numa_migrate(struct task_struct *p)
>  	update_numa_stats(&env.dst_stats, env.dst_nid);
>  
>  	/* Try to find a spot on the preferred nid. */
> -	task_numa_find_cpu(&env, taskimp, groupimp);
> +	if (numa_has_capacity(&env))
> +		task_numa_find_cpu(&env, taskimp, groupimp);
>  
>  	/*
>  	 * Look at other nodes in these cases:
> @@ -1494,7 +1519,8 @@ static int task_numa_migrate(struct task_struct *p)
>  			env.dist = dist;
>  			env.dst_nid = nid;
>  			update_numa_stats(&env.dst_stats, env.dst_nid);
> -			task_numa_find_cpu(&env, taskimp, groupimp);
> +			if (numa_has_capacity(&env))
> +				task_numa_find_cpu(&env, taskimp, groupimp);
>  		}
>  	}

Does this not 'duplicate' the logic we tried for in the
task_numa_compare():balance section? That is where we try to avoid
making a decision that the regular load-balancer will dislike and undo.

Alternatively, you can view that as a cpu guard and the proposed check
as a node guard, in which case should it not live inside
task_numa_find_cpu(), instead of guarding all the call sites?
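
Something like the below, say; untested, the loop body is just
paraphrased from the current task_numa_find_cpu(), only the early
return (using the numa_has_capacity() from your patch) is new:

static void task_numa_find_cpu(struct task_numa_env *env,
				long taskimp, long groupimp)
{
	int cpu;

	/* Node guard: the destination node is already busier, give up. */
	if (!numa_has_capacity(env))
		return;

	for_each_cpu(cpu, cpumask_of_node(env->dst_nid)) {
		/* Skip this CPU if the source task cannot migrate there. */
		if (!cpumask_test_cpu(cpu, tsk_cpus_allowed(env->p)))
			continue;

		env->dst_cpu = cpu;
		task_numa_compare(env, taskimp, groupimp);
	}
}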

In any case, should we mix a bit of imbalance_pct in there?
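
Rough sketch of that, reusing the imbalance_pct that task_numa_migrate()
already stashes in the env (a percentage > 100, like the regular load
balancer's sd->imbalance_pct). Scaling the source side up makes the
guard a bit more permissive, so we only refuse the node when the
destination is clearly busier; untested, and the direction of the bias
is debatable:

static bool numa_has_capacity(struct task_numa_env *env)
{
	struct numa_stats *src = &env->src_stats;
	struct numa_stats *dst = &env->dst_stats;

	if (src->has_free_capacity && !dst->has_free_capacity)
		return false;

	/*
	 * Same capacity-corrected load comparison as in the patch,
	 * but with a bit of slack on the source side:
	 *
	 * src->load * imbalance_pct      dst->load * 100
	 * ------------------------- vs ---------------------
	 *   src->compute_capacity       dst->compute_capacity
	 */
	if (src->load * dst->compute_capacity * env->imbalance_pct >
	    dst->load * src->compute_capacity * 100)
		return true;

	return false;
}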

/me goes ponder this a bit further..
