Date:   Mon, 21 Oct 2019 09:50:38 +0200
From:   Ingo Molnar <mingo@...nel.org>
To:     Vincent Guittot <vincent.guittot@...aro.org>,
        Mel Gorman <mgorman@...hsingularity.net>
Cc:     linux-kernel@...r.kernel.org, mingo@...hat.com,
        peterz@...radead.org, pauld@...hat.com, valentin.schneider@....com,
        srikar@...ux.vnet.ibm.com, quentin.perret@....com,
        dietmar.eggemann@....com, Morten.Rasmussen@....com,
        hdanton@...a.com, parth@...ux.ibm.com, riel@...riel.com
Subject: Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance


* Vincent Guittot <vincent.guittot@...aro.org> wrote:

> Several cases of wrong task placement have been raised with the current
> load balance algorithm, but their fixes are not always straightforward and
> end up using biased values to force migrations. A cleanup and rework of
> the load balancer will help handle such use cases and enable fine-grained
> tuning of the scheduler's behavior for other cases.
> 
> Patch 1 has already been sent separately; it only consolidates the asym
> policy in one place and helps the review of the changes in load_balance().
> 
> Patch 2 renames sum_nr_running to sum_h_nr_running in the group stats.
> 
> Patch 3 removes a meaningless imbalance computation to make the review of
> patch 4 easier.
> 
> Patch 4 reworks the load_balance() algorithm and fixes some wrong task
> placements, while trying to stay conservative.
> 
> Patch 5 adds the sum of nr_running to monitor non-CFS tasks and takes that
> into account when pulling tasks.
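
For context, the two counts differ in which scheduling classes they cover -
roughly (a simplified sketch of the group stats, not the actual layout):

  /*
   * rq->nr_running        - every runnable task on the CPU, regardless of
   *                         scheduling class (CFS, RT, DL, ...)
   * cfs_rq->h_nr_running  - CFS tasks only, accumulated through the
   *                         task-group hierarchy
   */
  struct sg_lb_stats_sketch {
          unsigned int sum_h_nr_running;  /* CFS tasks in the group */
          unsigned int sum_nr_running;    /* tasks of all classes in the group */
  };

This lets the stats tell apart a CPU that is busy with CFS work from one
kept busy by RT/DL tasks before more work is pulled onto it.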
> 
> Patch 6 replaces runnable_load with load, now that the signal is only used
> when overloaded.
> 
> Patch 7 improves the spread of tasks at the 1st scheduling level.
> 
> Patch 8 uses utilization instead of load in all steps of the misfit-task
> path.
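
The fit test behind the misfit path is a utilization-vs-capacity check with
some headroom - roughly (a sketch; the ~20% margin mirrors mainline's
fits_capacity(), but treat the exact constants here as an assumption):

  /* A task stops fitting a CPU once its utilization, inflated by a ~20%
   * margin, exceeds the CPU's capacity. */
  static inline int fits_capacity_sketch(unsigned long util, unsigned long capacity)
  {
          return util * 1280 < capacity * 1024;   /* util * 1.25 < capacity */
  }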
> 
> Patch 9 replaces runnable_load_avg with load_avg in the wake-up path.
> 
> Patch 10 optimizes find_idlest_group(), which was using both runnable_load
> and load. This has not been squashed with the previous patch to ease the
> review.
> 
> Patch 11 reworks find_idlest_group() to follow the same steps as
> find_busiest_group().
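
The common shape both functions now follow is a per-group classification,
ordered from least to most loaded, plus an explicit statement of what to
migrate - roughly (a sketch of the types the series introduces; see the
patches themselves for the authoritative definitions):

  /* Group classification, ordered so types can be compared directly: the
   * busiest/idlest decision looks at the type first, then at the numbers
   * within that type. */
  enum group_type {
          group_has_spare = 0,
          group_fully_busy,
          group_misfit_task,
          group_asym_packing,
          group_imbalanced,
          group_overloaded
  };

  /* What load_balance() decides to move once the busiest group is known. */
  enum migration_type {
          migrate_load = 0,
          migrate_util,
          migrate_task,
          migrate_misfit
  };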
> 
> Some benchmark results, based on 8 iterations of each test:
> - small arm64 dual quad-core system
> 
>            tip/sched/core        w/ this patchset    improvement
> schedpipe      53125 +/-0.18%        53443 +/-0.52%   (+0.60%)
> 
> hackbench -l (2560/#grp) -g #grp
>  1 groups      1.579 +/-29.16%       1.410 +/-13.46% (+10.70%)
>  4 groups      1.269 +/-9.69%        1.205 +/-3.27%   (+5.00%)
>  8 groups      1.117 +/-1.51%        1.123 +/-1.27%   (+4.57%)
> 16 groups      1.176 +/-1.76%        1.164 +/-2.42%   (+1.07%)
> 
> Unixbench shell8
>   1 test     1963.48 +/-0.36%       1902.88 +/-0.73%    (-3.09%)
> 224 tests    2427.60 +/-0.20%       2469.80 +/-0.42%  (+1.74%)
> 
> - large arm64 2-node / 224-core system
> 
>            tip/sched/core        w/ this patchset    improvement
> schedpipe     124084 +/-1.36%       124445 +/-0.67%   (+0.29%)
> 
> hackbench -l (256000/#grp) -g #grp
>   1 groups    15.305 +/-1.50%       14.001 +/-1.99%   (+8.52%)
>   4 groups     5.959 +/-0.70%        5.542 +/-3.76%   (+6.99%)
>  16 groups     3.120 +/-1.72%        3.253 +/-0.61%   (-4.92%)
>  32 groups     2.911 +/-0.88%        2.837 +/-1.16%   (+2.54%)
>  64 groups     2.805 +/-1.90%        2.716 +/-1.18%   (+3.17%)
> 128 groups     3.166 +/-7.71%        3.891 +/-6.77%   (+5.82%)
> 256 groups     3.655 +/-10.09%       3.185 +/-6.65%  (+12.87%)
> 
> dbench
>   1 groups   328.176 +/-0.29%      330.217 +/-0.32%   (+0.62%)
>   4 groups   930.739 +/-0.50%      957.173 +/-0.66%   (+2.84%)
>  16 groups  1928.292 +/-0.36%     1978.234 +/-0.88%   (+0.92%)
>  32 groups  2369.348 +/-1.72%     2454.020 +/-0.90%   (+3.57%)
>  64 groups  2583.880 +/-3.39%     2618.860 +/-0.84%   (+1.35%)
> 128 groups  2256.406 +/-10.67%    2392.498 +/-2.13%   (+6.03%)
> 256 groups  1257.546 +/-3.81%     1674.684 +/-4.97%  (+33.17%)
> 
> Unixbench shell8
>   1 test     6944.16 +/-0.02%     6605.82 +/-0.11%     (-4.87%)
> 224 tests   13499.02 +/-0.14%    13637.94 +/-0.47%     (+1.03%)
> lkp reported a -10% regression on shell8 (1 test) for v3, which seems to
> be partially recovered on my platform with v4.
> 
> tip/sched/core sha1:
>   commit 563c4f85f9f0 ("Merge branch 'sched/rt' into sched/core, to pick up -rt changes")
>   
> Changes since v3:
> - small typo and variable ordering fixes
> - add some acked/reviewed tags
> - set 1 instead of load for migrate_misfit
> - use h_nr_running instead of load for asym_packing
> - update the optimization of find_idlest_group() and put back some
>  conditions when comparing load
> - rework find_idlest_group() to match find_busiest_group() behavior
> 
> Changes since v2:
> - fix typo and reorder code
> - some minor code fixes
> - optimize find_idlest_group()
> 
> Not covered in this patchset:
> - Better detection of overloaded and fully busy state, especially for cases
>   when nr_running > nr CPUs.
> 
> Vincent Guittot (11):
>   sched/fair: clean up asym packing
>   sched/fair: rename sum_nr_running to sum_h_nr_running
>   sched/fair: remove meaningless imbalance calculation
>   sched/fair: rework load_balance
>   sched/fair: use rq->nr_running when balancing load
>   sched/fair: use load instead of runnable load in load_balance
>   sched/fair: evenly spread tasks when not overloaded
>   sched/fair: use utilization to select misfit task
>   sched/fair: use load instead of runnable load in wakeup path
>   sched/fair: optimize find_idlest_group
>   sched/fair: rework find_idlest_group
> 
>  kernel/sched/fair.c | 1181 +++++++++++++++++++++++++++++----------------------
>  1 file changed, 682 insertions(+), 499 deletions(-)

Thanks, that's an excellent series!

I've queued it up in sched/core with a handful of readability edits to 
comments and changelogs.

There are some upstreaming caveats though - I expect this series to be a
performance regression magnet:

 - load_balance() and wake-up changes invariably are such: some workloads 
   only work/scale well by accident, and if we touch the logic it might 
   flip over into a less advantageous scheduling pattern.

 - In particular, the changes from balancing and waking on runnable load 
   to full load that includes blocked load *will* shift IO-intensive 
   workloads in ways that your tests don't fully capture, I believe. You 
   also made idle balancing more aggressive in essence - which might 
   reduce cache locality for some workloads.
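
To spell out the IO angle: runnable load only sums the tasks currently on
the runqueue, while the full load signal keeps a decaying contribution from
tasks that are blocked or sleeping. A rough sketch of the two signals (field
names approximate, PELT decay omitted):

  struct pelt_sketch {
          unsigned long runnable_load_avg; /* contribution of on_rq tasks only */
          unsigned long load_avg;          /* also keeps a decaying contribution
                                            * from blocked/sleeping tasks */
  };

  /* A task that sleeps on IO 90% of the time barely registers in
   * runnable_load_avg, but keeps load_avg noticeably higher - so switching
   * the balancer to load_avg changes which CPU looks busiest/idlest for
   * IO-heavy workloads. */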

A full run on Mel Gorman's magic scalability test-suite would be super 
useful ...

Anyway, please be on the lookout for such performance regression reports.

Also, we seem to have grown a fair amount of these TODO entries:

  kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
  kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
  kernel/sched/fair.c:     * XXX illustrate
  kernel/sched/fair.c:    } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
  kernel/sched/fair.c: * can also include other factors [XXX].
  kernel/sched/fair.c: * [XXX expand on:
  kernel/sched/fair.c: * [XXX more?]
  kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
  kernel/sched/fair.c:             * XXX for now avg_load is not computed and always 0 so we
  kernel/sched/fair.c:            /* XXX broken for overlapping NUMA groups */

:-)

Thanks,

	Ingo
