Message-Id: <20180605035616.GD30328@linux.vnet.ibm.com>
Date: Mon, 4 Jun 2018 20:56:16 -0700
From: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
To: Rik van Riel <riel@...riel.com>
Cc: Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 16/19] sched/numa: Detect if node actively handling
migration
* Rik van Riel <riel@...riel.com> [2018-06-04 16:05:55]:
> On Mon, 2018-06-04 at 15:30 +0530, Srikar Dronamraju wrote:
>
> > @@ -1554,6 +1562,9 @@ static void task_numa_compare(struct
> > task_numa_env *env,
> > if (READ_ONCE(dst_rq->numa_migrate_on))
> > return;
> >
> > + if (*move && READ_ONCE(pgdat->active_node_migrate))
> > + *move = false;
>
> Why not do this check in task_numa_find_cpu?
>
> That way you won't have to pass this in as a
> pointer, and you also will not have to recalculate
> NODE_DATA(cpu_to_node(env->dst_cpu)) for every CPU.
>
I thought about this. Let's say we evaluated that the destination node can
allow movement. While we iterate through the list of cpus trying to find
the best cpu on the node, we find an idle cpu towards the end of the list.
However, if another task has already raced with us and moved a task to this
node, then we should bail out. Keeping the check in task_numa_compare
allows us to do this.
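
To illustrate the placement, here is a standalone simplified model (not the
kernel code; numa_find_cpu()/numa_compare() are just stand-ins for
task_numa_find_cpu()/task_numa_compare(), and the atomic int stands in for
pgdat->active_node_migrate, with the pgdat lookup elided):

/*
 * Simplified standalone model of the check placement (not kernel code).
 * The point: re-reading the "node is actively migrating" flag inside the
 * per-cpu loop lets a scan that started before a concurrent migration
 * notice it mid-way and fall back to swap-only, instead of racing a move.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 8

/* Stand-in for pgdat->active_node_migrate on the destination node. */
static atomic_int active_node_migrate;

/* Stand-in for task_numa_compare(): evaluate one destination cpu. */
static void numa_compare(int cpu, bool *move)
{
	/*
	 * Re-check on every candidate cpu: a migration to this node may
	 * have started after the scan began, so downgrade from "move"
	 * to "swap only" as soon as we observe it.
	 */
	if (*move && atomic_load(&active_node_migrate))
		*move = false;

	printf("cpu %d evaluated, move=%d\n", cpu, *move);
}

/* Stand-in for task_numa_find_cpu(): iterate the node's cpus. */
static void numa_find_cpu(void)
{
	bool move = true;	/* node looked free when we started */

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		/* Simulate a competing thread winning the race mid-scan. */
		if (cpu == 3)
			atomic_store(&active_node_migrate, 1);

		numa_compare(cpu, &move);
	}
}

int main(void)
{
	numa_find_cpu();
	return 0;
}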
> > /*
> > + * If the numa importance is less than SMALLIMP,
>
> ^^^ numa improvement
>
okay
> > + * task migration might only result in ping pong
> > + * of tasks and also hurt performance due to cache
> > + * misses.
> > + */
> > + if (imp < SMALLIMP || imp <= env->best_imp + SMALLIMP / 2)
> > + goto unlock;
>
> I can see a use for the first test, but why limit the
> search for the best score once you are past the
> threshold?
>
> I don't understand the use for that second test.
>
Let's say a few threads are racing with each other to find a cpu on the
node. The first thread has already found a task/cpu 'A' to swap and then
finds another task/cpu 'B' that is only slightly better than the current
best_cpu, which is 'A'. Currently we allow task/cpu 'B' to be set as
best_cpu. However, the second or subsequent threads cannot pick the first
task/cpu 'A' because it appears to be under active migration. By the time
they reach task/cpu 'B', even that may look to be under active migration,
and they may never learn that task/cpu 'A' was cleared. In this way, the
second and subsequent threads may not get any task/cpu on the preferred
node to swap with, just because the first thread kept hopping between
tasks/cpus as its choice of migration.

While we can't completely avoid this, the second check makes sure we
don't hop on/hop off for only a small incremental numa improvement.
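
As a rough illustration of the intent of that second test (standalone
sketch, not kernel code; accept_candidate() is a made-up helper, the imp
values are arbitrary, and the SMALLIMP value here is only a placeholder
for the constant in the patch):

/*
 * Simplified illustration of the SMALLIMP filtering (not kernel code).
 * A new candidate only replaces the current best if it clears an
 * absolute floor AND beats the current best by more than SMALLIMP/2,
 * so near-equal candidates don't keep displacing each other.
 */
#include <stdbool.h>
#include <stdio.h>

#define SMALLIMP	30	/* placeholder threshold for illustration */

static bool accept_candidate(long imp, long best_imp)
{
	if (imp < SMALLIMP || imp <= best_imp + SMALLIMP / 2)
		return false;	/* too small a gain to be worth hopping */
	return true;
}

int main(void)
{
	long best_imp = 40;	/* improvement of the current best, 'A' */

	/* 45 beats 40, but not by more than SMALLIMP/2: keep 'A'. */
	printf("imp=45: %s\n", accept_candidate(45, best_imp) ? "take" : "skip");

	/* 60 clearly beats 40 + SMALLIMP/2 = 55: switch to 'B'. */
	printf("imp=60: %s\n", accept_candidate(60, best_imp) ? "take" : "skip");

	return 0;
}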
> What workload benefits from it?
>
> --
> All Rights Reversed.