Message-ID: <5550BA9D.3030104@redhat.com>
Date: Mon, 11 May 2015 10:20:13 -0400
From: Rik van Riel <riel@...hat.com>
To: dedekind1@...il.com
CC: linux-kernel@...r.kernel.org, mgorman@...e.de,
peterz@...radead.org, jhladky@...hat.com
Subject: Re: [PATCH] numa,sched: only consider less busy nodes as numa balancing destination
On 05/11/2015 07:11 AM, Artem Bityutskiy wrote:
> On Fri, 2015-05-08 at 16:03 -0400, Rik van Riel wrote:
>> This works well when dealing with tasks that are constantly
>> running, but fails catastrophically when dealing with tasks
>> that go to sleep, wake back up, go back to sleep, wake back
>> up, and generally mess up the load statistics that the NUMA
>> balancing code uses in a random way.
>
> Sleeping happens a lot in this workload, I believe: the processes do
> a lot of network I/O, file I/O, and IPC.
>
> Would you please expand on this a bit more: why would this scenario
> "mess up load statistics"?
>
>> If the normal scheduler load balancer is moving tasks the
>> other way the NUMA balancer is moving them, things will
>> not converge, and tasks will have worse memory locality
>> than not doing NUMA balancing at all.
>
> Are the regular and NUMA balancers independent?
>
> Are there mechanisms to detect ping-pong situations? I'd like to verify
> your theory, and these kinds of mechanisms would be helpful.
>
>> Currently the load balancer has a preference for moving
>> tasks to their preferred nodes (NUMA_FAVOUR_HIGHER, true),
>> but there is no resistance to moving tasks away from their
>> preferred nodes (NUMA_RESIST_LOWER, false). That setting
>> was arrived at after a fair amount of experimenting, and
>> is probably correct.
>
> I guess I can try setting NUMA_RESIST_LOWER to true and see what
> happens. But first I probably need to confirm that your theory (the
> balancers playing ping-pong) is correct. Any hints on how I would do
> this?
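For what it's worth, both of those settings are scheduler feature bits,
so if your kernel was built with CONFIG_SCHED_DEBUG you should be able
to flip NUMA_RESIST_LOWER at runtime through debugfs instead of
rebuilding. Roughly what the definitions look like, from memory (please
double-check kernel/sched/features.c in your tree), with the toggle
commands in the comment:

/*
 * Sketch of the NUMA feature bits in kernel/sched/features.c (from
 * memory; verify against your tree).  With CONFIG_SCHED_DEBUG=y and
 * debugfs mounted they can be flipped at runtime:
 *
 *     echo NUMA_RESIST_LOWER    > /sys/kernel/debug/sched_features
 *     echo NO_NUMA_RESIST_LOWER > /sys/kernel/debug/sched_features
 */
#ifdef CONFIG_NUMA_BALANCING
/* Prefer moving tasks towards their preferred node. */
SCHED_FEAT(NUMA_FAVOUR_HIGHER, true)

/* Resist moving tasks away from their preferred node. */
SCHED_FEAT(NUMA_RESIST_LOWER, false)
#endif
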
Funny thing: for your workload, the kernel only collects statistics
on forced migrations when NUMA_RESIST_LOWER is enabled. The reason is
that the tasks on your system probably sleep too long to hit the
task_hot() test most of the time.
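This is the relevant check in can_migrate_task() in kernel/sched/fair.c:
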
/*
 * Aggressive migration if:
 * 1) destination numa is preferred
 * 2) task is cache cold, or
 * 3) too many balance attempts have failed.
 */
tsk_cache_hot = task_hot(p, env);
if (!tsk_cache_hot)
        tsk_cache_hot = migrate_degrades_locality(p, env);

if (migrate_improves_locality(p, env) || !tsk_cache_hot ||
    env->sd->nr_balance_failed > env->sd->cache_nice_tries) {
        if (tsk_cache_hot) {
                schedstat_inc(env->sd, lb_hot_gained[env->idle]);
                schedstat_inc(p, se.statistics.nr_forced_migrations);
        }
        return 1;
}

schedstat_inc(p, se.statistics.nr_failed_migrations_hot);
return 0;
I am also not sure where the se.statistics.nr_forced_migrations
statistic is exported.
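My guess would be /proc/<pid>/sched, with CONFIG_SCHEDSTATS (and
CONFIG_SCHED_DEBUG) enabled, but I have not verified that. Below is an
untested sketch that just scans that file for the counter; treat the
exact field name and location as assumptions until someone checks a
live system:

/*
 * Untested sketch: look for the forced-migration counter in
 * /proc/<pid>/sched (assumes CONFIG_SCHEDSTATS exposes it there).
 * Expected output would be something like:
 *
 *     se.statistics.nr_forced_migrations    :    42
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
        char path[64], line[256];
        FILE *f;

        /* Default to the calling process; pass a pid to look at another task. */
        snprintf(path, sizeof(path), "/proc/%s/sched",
                 argc > 1 ? argv[1] : "self");

        f = fopen(path, "r");
        if (!f) {
                perror(path);
                return 1;
        }

        /* Print every line that mentions the counter. */
        while (fgets(line, sizeof(line), f))
                if (strstr(line, "nr_forced_migrations"))
                        fputs(line, stdout);

        fclose(f);
        return 0;
}
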
--
All rights reversed