Message-ID: <554D3E1B.6010501@redhat.com>
Date: Fri, 08 May 2015 18:52:11 -0400
From: Rik van Riel <riel@...hat.com>
To: dedekind1@...il.com
CC: linux-kernel@...r.kernel.org, mgorman@...e.de,
peterz@...radead.org, jhladky@...hat.com
Subject: Re: [PATCH] numa,sched: only consider less busy nodes as numa balancing destination

On 05/08/2015 04:03 PM, Rik van Riel wrote:
> If the normal scheduler load balancer is moving tasks in
> the opposite direction from the NUMA balancer, things will
> not converge, and tasks will end up with worse memory
> locality than if no NUMA balancing were done at all.
>
> Currently the load balancer has a preference for moving
> tasks to their preferred nodes (NUMA_FAVOUR_HIGHER, true),
> but there is no resistance to moving tasks away from their
> preferred nodes (NUMA_RESIST_LOWER, false). That setting
> was arrived at after a fair amount of experimenting, and
> is probably correct.
Never mind that. After reading the code several more times since
that earlier post, it looks like having NUMA_FAVOUR_HIGHER
enabled does absolutely nothing without NUMA_RESIST_LOWER also
being enabled, at least not for idle balancing.
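For reference, the defaults in kernel/sched/features.h look
something like this (quoting from memory, so the comments are
paraphrased):

#ifdef CONFIG_NUMA_BALANCING
/* Favour moving tasks towards nodes with more NUMA hinting faults. */
SCHED_FEAT(NUMA_FAVOUR_HIGHER, true)

/*
 * Resist moving tasks towards nodes with fewer hinting faults.
 * Disabled by default, because it can prevent an overloaded
 * node from ever shedding tasks.
 */
SCHED_FEAT(NUMA_RESIST_LOWER, false)
#endif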
At first glance, the migration decision code in can_migrate_task()
looks correct, and even useful:
        /*
         * Aggressive migration if:
         * 1) destination numa is preferred
         * 2) task is cache cold, or
         * 3) too many balance attempts have failed.
         */
        tsk_cache_hot = task_hot(p, env);
        if (!tsk_cache_hot)
                tsk_cache_hot = migrate_degrades_locality(p, env);

        if (migrate_improves_locality(p, env) || !tsk_cache_hot ||
            env->sd->nr_balance_failed > env->sd->cache_nice_tries) {
                if (tsk_cache_hot) {
                        schedstat_inc(env->sd, lb_hot_gained[env->idle]);
                        schedstat_inc(p, se.statistics.nr_forced_migrations);
                }
                return 1;
        }
However, with NUMA_RESIST_LOWER disabled (default),
migrate_degrades_locality always returns 0.
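The reason is the feature check at the top of
migrate_degrades_locality(). From memory it is roughly this (there
may be additional early-exit conditions I am forgetting):

static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
{
        ...
        /*
         * With NUMA_RESIST_LOWER disabled, we never get as far as
         * comparing the NUMA fault statistics of the two nodes.
         */
        if (!sched_feat(NUMA_RESIST_LOWER))
                return false;
        ...
}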
Furthermore, sched_migration_cost_ns, which influences task_hot,
defaults to such a small value (0.5 ms) that task_hot is likely to
always return false for workloads with frequent sleeps and network
latencies, like a web workload...
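For reference, task_hot() essentially checks how long ago the task
last ran on the source runqueue. Roughly (paraphrased from
kernel/sched/fair.c):

static int task_hot(struct task_struct *p, struct lb_env *env)
{
        s64 delta;
        ...
        /*
         * A task that has not run within sysctl_sched_migration_cost
         * (the sched_migration_cost_ns knob) is considered cache cold.
         * A task that just slept on a network request will easily
         * exceed that threshold.
         */
        delta = rq_clock_task(env->src_rq) - p->se.exec_start;

        return delta < (s64)sysctl_sched_migration_cost;
}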
In other words, the idle balancing code will treat tasks moving
towards their preferred NUMA node the same as tasks moving away
from their preferred NUMA node. It will move tasks regardless of
NUMA affinity, and can end up in a big fight with the NUMA
balancing code, as you have observed.
I am not sure what to do about this.
Peter?
--
All rights reversed