Message-ID: <CAKfTPtBgZZWUonqdkOMJCyJSxSkGtbiWji=bR4LaZZJ=mVW-zQ@mail.gmail.com>
Date: Mon, 2 Dec 2019 14:51:43 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched/cfs: fix spurious active migration
On Mon, 2 Dec 2019 at 14:22, Peter Zijlstra <peterz@...radead.org> wrote:
>
> On Fri, Nov 29, 2019 at 03:04:47PM +0100, Vincent Guittot wrote:
> > The load balance can fail to find a suitable task during the periodic
> > check because the imbalance is smaller than half of the load of the
> > waiting tasks. This increases the number of failed load balance
> > attempts, which can end up triggering an active migration. This active
> > migration is useless because the currently running task is not a better
> > choice than the waiting ones. In fact, the current task was probably not
> > running but waiting for the CPU during one of the previous attempts, and
> > it was already not selected then.
> >
> > When the load balance fails too many times to migrate a task, we should
> > relax the constraint on the maximum load of the tasks that can be
> > migrated, similarly to what is done with cache hotness.
> >
> > Before the rework, the load balance used to set the imbalance to the
> > average load_per_task in order to mitigate such a situation. This
> > increased the likelihood of migrating a task, but also of selecting a
> > larger task than needed while more appropriate ones were in the list.
> >
> > Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> > ---
> >
> > I haven't seen any noticeable performance changes on the benchmarks
> > that I usually run, but the problem can easily be highlighted with a
> > simple test with 9 always-running tasks on 8 cores.
> >
> > kernel/sched/fair.c | 9 ++++++++-
> > 1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e0d662a..d1b4fa7 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7433,7 +7433,14 @@ static int detach_tasks(struct lb_env *env)
> > load < 16 && !env->sd->nr_balance_failed)
> > goto next;
> >
> > - if (load/2 > env->imbalance)
> > + /*
> > + * Make sure that we don't migrate too much load.
> > + * Nevertheless, let's relax the constraint if the
> > + * scheduler fails to find a good waiting task to
> > + * migrate.
> > + */
> > + if (load/2 > env->imbalance &&
> > + env->sd->nr_balance_failed <= env->sd->cache_nice_tries)
> > goto next;
> >
> > env->imbalance -= load;
>
> The alternative is carrying a flag that inhibits incrementing
> nr_balance_failed.
>
> Not migrating anything when doing so would make the imbalance worse is
> not a failure after all.
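
If I read the suggestion correctly, it would be something along these
lines (a rough sketch only, not compile-tested; LBF_IMB_HOLD and its
bit value are made up here, they don't exist in fair.c):

    #define LBF_IMB_HOLD    0x40    /* only skipped tasks to avoid overshooting the imbalance */

    /* in detach_tasks(), instead of relaxing the load check: */
    if (load/2 > env->imbalance) {
            /*
             * Migrating this task would make the imbalance
             * worse; record that we skipped it for that
             * reason only.
             */
            env->flags |= LBF_IMB_HOLD;
            goto next;
    }

    /* and in load_balance(), where the failed attempt is accounted: */
    if (!(env.flags & LBF_IMB_HOLD))
            sd->nr_balance_failed++;
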
Yeah, I thought about this possibility, but that behavior would be a
big difference compared to the legacy load balance, and I'm not sure
about the impact on performance: we can generate significant
unfairness, with 2 tasks sharing a CPU while the others each have a
full CPU, as in the example that I mentioned above.
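
For reference, that example can be reproduced with a throwaway test
like the one below (my own sketch, not part of the patch): start
nr_cpus + 1 always-running tasks, e.g. 9 on 8 cores, and compare their
CPU time with top or /proc/<pid>/schedstat:

    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            /* default to 9 spinners, i.e. nr_cpus + 1 on an 8-CPU machine */
            int i, n = argc > 1 ? atoi(argv[1]) : 9;

            for (i = 0; i < n; i++) {
                    if (fork() == 0)
                            for (;;)        /* always-running task */
                                    ;
            }

            pause();        /* parent idles; kill the process group to stop */
            return 0;
    }

With 9 such tasks on 8 CPUs, one runqueue always holds 2 of them, so
the imbalance stays smaller than half the load of the waiting task and
the load/2 > env->imbalance check keeps rejecting it unless the
constraint is relaxed.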