Message-ID: <CAKfTPtCVn+Ep4iMNZZi_MKfO9vnXt3G+LKR70LcLnyEAvfhoiw@mail.gmail.com>
Date:   Mon, 2 Dec 2019 16:36:39 +0100
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Dietmar Eggemann <dietmar.eggemann@....com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched/cfs: fix spurious active migration

On Mon, 2 Dec 2019 at 16:13, Dietmar Eggemann <dietmar.eggemann@....com> wrote:
>
> On 29/11/2019 15:04, Vincent Guittot wrote:
> > The load balancer can fail to find a suitable task during the periodic
> > check because the imbalance is smaller than half of the load of the
> > waiting tasks. This increases the number of failed load balance attempts,
> > which can end up triggering an active migration. This active migration is
> > useless because the currently running task is not a better choice than the
> > waiting ones. In fact, the current task was probably not running but
> > waiting for the CPU during one of the previous attempts, and it had
> > already not been selected.
> >
> > When load balancing fails too many times to migrate a task, we should
> > relax the constraint on the maximum load of the tasks that can be
> > migrated, similarly to what is done for cache hotness.
> >
> > Before the rework, load balancing used to set the imbalance to the
> > average load_per_task in order to mitigate such a situation. This
> > increased the likelihood of migrating a task, but also of selecting a
> > larger task than needed while more appropriate ones were in the list.
>
> Why not use '&& !env->sd->nr_balance_failed' then? Too aggressive? But

Yeah, I have thought about and tried !env->sd->nr_balance_failed. In
fact, I have tried both and I haven't seen any visible difference in my
tests. Nevertheless, the !env->sd->nr_balance_failed condition doesn't
give the currently running task, which may have a suitable load, a
chance to become a waiting task during the next load balance and to be
selected instead of a larger task, which would create another imbalance.
Also, cache_nice_tries increases with the domain span, which gives the
balancer more attempts to select different rqs before migrating a large
task. Then, env->sd->nr_balance_failed <= env->sd->cache_nice_tries is
used in can_migrate_task() to prevent migration because of cache
hotness, so we are aligned with that condition. In contrast,
!env->sd->nr_balance_failed is used with LB_MIN, which is disabled by
default.

TBH, I don't have a strong opinion between !env->sd->nr_balance_failed
and env->sd->nr_balance_failed <= env->sd->cache_nice_tries.
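
For illustration, here is a standalone toy model (not the kernel code;
the struct, the values and the printout are made up for this example) of
how both conditions relax together once nr_balance_failed exceeds
cache_nice_tries:

/*
 * Toy model: paraphrase of the cache-hotness gate in can_migrate_task()
 * and of the patched load/2 gate in detach_tasks(). All numbers are
 * arbitrary; only the shape of the two conditions matters.
 */
#include <stdbool.h>
#include <stdio.h>

struct sd_stats {
	unsigned int nr_balance_failed;
	unsigned int cache_nice_tries;
};

/* Paraphrase of the hotness side of can_migrate_task(). */
static bool hotness_allows_migration(bool task_is_cache_hot,
				     const struct sd_stats *sd)
{
	return !task_is_cache_hot ||
	       sd->nr_balance_failed > sd->cache_nice_tries;
}

/* Paraphrase of the patched load check in detach_tasks(). */
static bool load_allows_migration(unsigned long load, unsigned long imbalance,
				  const struct sd_stats *sd)
{
	return load / 2 <= imbalance ||
	       sd->nr_balance_failed > sd->cache_nice_tries;
}

int main(void)
{
	struct sd_stats sd = { .nr_balance_failed = 0, .cache_nice_tries = 1 };

	for (; sd.nr_balance_failed <= 3; sd.nr_balance_failed++)
		printf("failed=%u  hot task ok=%d  big task ok=%d\n",
		       sd.nr_balance_failed,
		       hotness_allows_migration(true, &sd),
		       load_allows_migration(1024, 128, &sd));
	return 0;
}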


> the average load_per_task was calculated at each load balance attempt,
> so it would have led to a migration at the first load balance.
>
> This would be in sync with the LB_MIN check in the same switch case
> (migrate_load). Although LB_MIN is false by default.
>
> > Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
> > ---
> >
> > I haven't seen any noticeable performance changes on the benchmarks that
> > I usually run, but the problem can easily be highlighted with a simple
> > test with 9 always running tasks on 8 cores.
> >
> >  kernel/sched/fair.c | 9 ++++++++-
> >  1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e0d662a..d1b4fa7 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7433,7 +7433,14 @@ static int detach_tasks(struct lb_env *env)
> >                           load < 16 && !env->sd->nr_balance_failed)
> >                               goto next;
> >
> > -                     if (load/2 > env->imbalance)
> > +                     /*
> > +                      * Make sure that we don't migrate too much load.
> > +                      * Nevertheless, let's relax the constraint if the
> > +                      * scheduler fails to find a good waiting task to
> > +                      * migrate.
> > +                      */
> > +                     if (load/2 > env->imbalance &&
> > +                         env->sd->nr_balance_failed <= env->sd->cache_nice_tries)
> >                               goto next;
> >
> >                       env->imbalance -= load;
> >
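
For completeness, a minimal sketch of the "simple test with 9 always
running tasks on 8 cores" mentioned in the patch description above. The
file name, thread count and duration are just illustrative assumptions,
and it assumes an 8-CPU machine; the resulting active migrations can
then be watched e.g. via the sched:sched_migrate_task tracepoint.

/*
 * busy9.c - spawn 9 always-running threads to oversubscribe 8 CPUs.
 * Build: gcc -O2 -pthread busy9.c -o busy9
 */
#include <pthread.h>
#include <unistd.h>

#define NR_TASKS 9

static volatile unsigned long sink;

static void *busy_loop(void *arg)
{
	(void)arg;
	for (;;)	/* always running: pure CPU burn */
		sink++;
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_TASKS];
	int i;

	for (i = 0; i < NR_TASKS; i++)
		pthread_create(&tid[i], NULL, busy_loop, NULL);

	sleep(60);	/* let the periodic load balancer run for a while */
	return 0;
}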
