Message-ID: <CAKfTPtBZgvTBYR+kYjj9dHq8_25mG19CZmYzY5s33ijSHdLGyQ@mail.gmail.com>
Date:   Fri, 13 Mar 2020 15:26:20 +0100
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Valentin Schneider <valentin.schneider@....com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched/fair: improve spreading of utilization

On Fri, 13 Mar 2020 at 13:55, Vincent Guittot
<vincent.guittot@...aro.org> wrote:
>
> On Fri, 13 Mar 2020 at 13:42, Valentin Schneider
> <valentin.schneider@....com> wrote:
> >
> >
> > On Fri, Mar 13 2020, Valentin Schneider wrote:
> > > On Fri, Mar 13 2020, Vincent Guittot wrote:
> > >>> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > >>> > index 3c8a379c357e..97a0307312d9 100644
> > >>> > --- a/kernel/sched/fair.c
> > >>> > +++ b/kernel/sched/fair.c
> > >>> > @@ -9025,6 +9025,14 @@ static struct rq *find_busiest_queue(struct lb_env *env,
> > >>> >               case migrate_util:
> > >>> >                       util = cpu_util(cpu_of(rq));
> > >>> >
> > >>> > +                     /*
> > >>> > +                      * Don't try to pull utilization from a CPU with one
> > >>> > +                      * running task. Whatever its utilization, we will fail
> > >>> > +                      * to detach the task.
> > >>> > +                      */
> > >>> > +                     if (nr_running <= 1)
> > >>> > +                             continue;
> > >>> > +
> > >>>
> > >>> Doesn't this break misfit? If the busiest group is group_misfit_task, it
> > >>> is totally valid for the runqueues to have a single running task -
> > >>> that's the CPU-bound task we want to upmigrate.
> > >>
> > >>  group_misfit_task has its dedicated migrate_misfit case
> > >>
> > >
> > > Doh, yes, sorry. I think my rambling on ASYM_PACKING / reduced capacity
> > > migration is still relevant, though.
> > >
> >
> > And with more coffee that's another Doh, ASYM_PACKING would end up as
> > migrate_task. So this only affects the reduced capacity migration, which
>
> Yes, ASYM_PACKING uses migrate_task, and the case of reduced capacity
> would use it too and would not be impacted by this patch. I say
> "would" because the original rework of load balance got rid of this
> case. I'm going to prepare a separate fix for this.

After more thought, I think we are safe for reduced capacity too,
because that is handled in the migrate_load case. In my previous
reply, I was thinking of the case where the rq is not overloaded but
the CPU has reduced capacity, which is not handled. But in that case,
we don't have to force the migration of the task, because there is
still enough capacity; otherwise the rq would be overloaded and we
would be back to the case that is already handled.

>
> > might be hard to notice in benchmarks.
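To wrap the thread up with a sketch: the following is a rough
userspace model of the per-migration-type dispatch discussed above,
not the mainline find_busiest_queue() (all names below are invented).
It only encodes the conclusion of the thread: the proposed
nr_running <= 1 filter lives in the migrate_util branch alone, so
group_misfit_task (served by its own migrate_misfit case) and
ASYM_PACKING (classified as migrate_task) keep their single-task
candidates.

/* Rough model only; the real dispatch is the switch (env->migration_type)
 * in kernel/sched/fair.c:find_busiest_queue(). */
#include <stdbool.h>

enum toy_migration_type {
	toy_migrate_load,
	toy_migrate_util,
	toy_migrate_task,
	toy_migrate_misfit,
};

struct toy_candidate {
	unsigned int nr_running;
	unsigned long misfit_task_load;
};

/* Returns true when the rq should stay a candidate for pulling. */
bool keep_candidate(enum toy_migration_type type, const struct toy_candidate *rq)
{
	switch (type) {
	case toy_migrate_util:
		/* The proposed check: with one running task there is nothing
		 * we could detach, whatever its utilization. */
		return rq->nr_running > 1;
	case toy_migrate_misfit:
		/* Dedicated misfit case: a single task that is too big for
		 * its CPU is exactly what we want to pull. */
		return rq->misfit_task_load > 0;
	case toy_migrate_task:
		/* ASYM_PACKING and similar end up here, untouched by the
		 * migrate_util change. */
		return rq->nr_running > 0;
	case toy_migrate_load:
	default:
		/* Load balancing proper; the capacity handling shown in the
		 * previous sketch is omitted here. */
		return rq->nr_running > 0;
	}
}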
