lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [day] [month] [year] [list]
Message-ID: <CAKfTPtD1ES2-Jd1cW2XctRmhuJ_2Nh+oJeA8jF9UYgBW8+8-Yg@mail.gmail.com>
Date:   Mon, 27 Apr 2020 11:03:58 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Hillf Danton <hdanton@...a.com>
Cc:     Xing Zhengjun <zhengjun.xing@...ux.intel.com>,
        kernel test robot <rong.a.chen@...el.com>,
        Tao Zhou <ouwen210@...mail.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Mel Gorman <mgorman@...e.de>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [LKP] [sched/fair] 6c8116c914: stress-ng.mmapfork.ops_per_sec
 -38.0% regression

On Sun, 26 Apr 2020 at 14:42, Hillf Danton <hdanton@...a.com> wrote:
>
>
> On 4/21/2020 8:47 AM, kernel test robot wrote:
> >
> > Greeting,
> >
> > FYI, we noticed a 56.4% improvement of stress-ng.fifo.ops_per_sec due to commit:
> >
> >
> > commit: 6c8116c914b65be5e4d6f66d69c8142eb0648c22 ("sched/fair: Fix condition of avg_load calculation")
> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> >
> > in testcase: stress-ng
> > on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
> > with following parameters:
> >
> >     nr_threads: 100%
> >     disk: 1HDD
> >     testtime: 1s
> >     class: scheduler
> >     cpufreq_governor: performance
> >     ucode: 0xb000038
> >     sc_pid_max: 4194304
> >
>
> We need to handle group_fully_busy in a different way from
> group_overloaded as task push does not help grow load balance
> in the former case.

Have you tested this patch for the UC above ? Do you have figures ?

>
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8744,30 +8744,20 @@ find_idlest_group(struct sched_domain *s
>
>         switch (local_sgs.group_type) {
>         case group_overloaded:
> -       case group_fully_busy:
> -               /*
> -                * When comparing groups across NUMA domains, it's possible for
> -                * the local domain to be very lightly loaded relative to the
> -                * remote domains but "imbalance" skews the comparison making
> -                * remote CPUs look much more favourable. When considering
> -                * cross-domain, add imbalance to the load on the remote node
> -                * and consider staying local.
> -                */
> -
> -               if ((sd->flags & SD_NUMA) &&
> -                   ((idlest_sgs.avg_load + imbalance) >= local_sgs.avg_load))
> +               if (100 * local_sgs.avg_load <= sd->imbalance_pct * (idlest_sgs.avg_load + imbalance))
> +                       return idlest;

So you have completely removed the NUMA special case without explaining why.

And you have also removed the tests for small load.

Could you explain the rationale behind all these changes ?

Also keep in mind that the current version provide +58% improvement
for  stress-ng.fifo

> +               if (local_sgs.avg_load > idlest_sgs.avg_load + imbalance)
> +                       return idlest;
> +               else
>                         return NULL;
>
> +       case group_fully_busy:
>                 /*
> -                * If the local group is less loaded than the selected
> -                * idlest group don't try and push any tasks.
> +                * Pushing task to the idlest group will make the target group
> +                * overloaded, leaving the local group that is overloaded fully busy,
> +                * thus we earn nothing except for the exchange of group types.

For this case both local and idlest are fully busy and in this case
one will become overloaded so you must compare the load to be fair in
the spread of load

>                  */
> -               if (idlest_sgs.avg_load >= (local_sgs.avg_load + imbalance))
> -                       return NULL;
> -
> -               if (100 * local_sgs.avg_load <= sd->imbalance_pct * idlest_sgs.avg_load)
> -                       return NULL;
> -               break;
> +               return NULL;
>
>         case group_imbalanced:
>         case group_asym_packing:
>

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ