Message-ID: <20191202132204.GK2844@hirez.programming.kicks-ass.net>
Date: Mon, 2 Dec 2019 14:22:04 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: mingo@...hat.com, juri.lelli@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/cfs: fix spurious active migration
On Fri, Nov 29, 2019 at 03:04:47PM +0100, Vincent Guittot wrote:
> The load balance can fail to find a suitable task during the periodic
> check because the imbalance is smaller than half of the load of the
> waiting tasks. This increases the number of failed load balance
> attempts, which can end up triggering an active migration. Such an
> active migration is useless because the currently running task is not
> a better choice than the waiting ones. In fact, the current task was
> probably not running but waiting for the CPU during one of the
> previous attempts, and it had already not been selected then.
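>
> To put rough numbers on it (invented, equal-weight figures, not
> measurements from the patch): with 9 always-running nice-0 tasks on
> 8 CPUs, each task's load is ~1024, the busiest rq carries two of them
> while every other CPU carries one, and the computed imbalance ends up
> around half a task's load or less:
>
>	task load            ~1024 each (always runnable)
>	busiest rq           2 tasks -> ~2048
>	every other rq       1 task  -> ~1024
>	computed imbalance   ~512 or less
>	detach_tasks() test  load/2 = ~512 > imbalance -> goto next
>
> Every waiting task is therefore skipped, nr_balance_failed keeps
> growing, and active balance eventually kicks the running task, which
> is no better than the ones left waiting.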
>
> When load balance fails too many times to migrate a task, we should
> relax the constraint on the maximum load of the tasks that can be
> migrated, similarly to what is done with cache hotness.
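>
> For reference, a simplified sketch of that existing escape hatch in
> can_migrate_task() (trimmed from the actual code, details omitted):
>
>	/*
>	 * A cache-hot task is normally skipped, but once the domain
>	 * has failed to balance more than cache_nice_tries times,
>	 * hotness no longer blocks the migration.
>	 */
>	tsk_cache_hot = migrate_degrades_locality(p, env);
>	if (tsk_cache_hot == -1)
>		tsk_cache_hot = task_hot(p, env);
>
>	if (tsk_cache_hot <= 0 ||
>	    env->sd->nr_balance_failed > env->sd->cache_nice_tries)
>		return 1;	/* migrate despite (possible) hotness */
>
>	return 0;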
>
> Before the rework, load balance used to set the imbalance to the
> average load_per_task in order to mitigate such a situation. This
> increased the likelihood of migrating a task, but also of selecting a
> larger task than needed while more appropriate ones were in the list.
>
> Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> ---
>
> I haven't seen any noticeable performance changes on the benchmarks
> that I usually run, but the problem can easily be highlighted with a
> simple test with 9 always-running tasks on 8 cores.
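>
> A minimal sketch of that test, in case someone wants to reproduce it
> (my own construction, not part of the patch; assumes an 8-CPU box):
>
>	/* spin9.c - 9 always-running tasks on 8 cores.
>	 * Build: gcc -O2 -o spin9 spin9.c
>	 */
>	#include <stdio.h>
>	#include <stdlib.h>
>	#include <sys/types.h>
>	#include <unistd.h>
>
>	int main(void)
>	{
>		for (int i = 0; i < 9; i++) {
>			pid_t pid = fork();
>			if (pid < 0) {
>				perror("fork");
>				exit(1);
>			}
>			if (pid == 0) {
>				/* child: burn CPU forever */
>				volatile unsigned long x = 0;
>				for (;;)
>					x++;
>			}
>		}
>		pause();	/* parent sleeps; kill the group to stop */
>		return 0;
>	}
>
> Watching the per-domain lb_failed counters in /proc/schedstat while
> this runs should show the failed attempts piling up.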
>
> kernel/sched/fair.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e0d662a..d1b4fa7 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7433,7 +7433,14 @@ static int detach_tasks(struct lb_env *env)
>  		    load < 16 && !env->sd->nr_balance_failed)
>  			goto next;
>  
> -		if (load/2 > env->imbalance)
> +		/*
> +		 * Make sure that we don't migrate too much load.
> +		 * Nevertheless, let's relax the constraint if the
> +		 * scheduler fails to find a good waiting task to
> +		 * migrate.
> +		 */
> +		if (load/2 > env->imbalance &&
> +		    env->sd->nr_balance_failed <= env->sd->cache_nice_tries)
>  			goto next;
>  
>  		env->imbalance -= load;
The alternative is carrying a flag that inhibits incrementing
nr_balance_failed.
Not migrating anything, when doing so would make the imbalance worse,
is not a failure after all.
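
Something like this (a hypothetical sketch; the LBF_IMBALANCE_SKIPPED
name is made up here, but lb_env already carries a flags word for
LBF_* bits):

	/* In detach_tasks(): a task skipped only because moving it
	 * would overshoot the imbalance is recorded as such, rather
	 * than letting the attempt count as a failure.
	 */
	if (load/2 > env->imbalance) {
		env->flags |= LBF_IMBALANCE_SKIPPED;	/* made-up flag */
		goto next;
	}

	/* In load_balance(), on the !ld_moved path: */
	if (!(env.flags & LBF_IMBALANCE_SKIPPED))
		sd->nr_balance_failed++;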