Date:   Wed, 20 May 2020 15:04:48 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     kernel test robot <oliver.sang@...el.com>
Cc:     Ingo Molnar <mingo@...nel.org>, Ben Segall <bsegall@...gle.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Mel Gorman <mgorman@...e.de>, Mike Galbraith <efault@....de>,
        Peter Zijlstra <peterz@...radead.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
        OTC LSE PnP <otc.lse.pnp@...el.com>
Subject: Re: [sched/fair] 0b0695f2b3: phoronix-test-suite.compress-gzip.0.seconds
 19.8% regression

On Thu, 14 May 2020 at 19:09, Vincent Guittot
<vincent.guittot@...aro.org> wrote:
>
> Hi Oliver,
>
> On Thu, 14 May 2020 at 16:05, kernel test robot <oliver.sang@...el.com> wrote:
> >
> > Hi Vincent Guittot,
> >
> > Below report FYI.
> > Last year, we actually reported an improvement "[sched/fair] 0b0695f2b3:
> > vm-scalability.median 3.1% improvement" on link [1],
> > but now we have found a regression on pts.compress-gzip.
> > This seems to align with what was shown in "[v4,00/10] sched/fair: rework the CFS
> > load balance" (link [2]), which showed that the reworked load balance could have
> > both positive and negative effects on different test suites.
>
> We have tried to run all possible use cases, but it's impossible to
> cover them all, so there was always a possibility that a case we had
> not covered would regress.
>
> > And also from link [3], the patch set risks regressions.
> >
> > We also confirmed this regression on another platform
> > (Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz with 8G memory),
> > below is the data (lower is better).
> > v5.4    4.1
> > fcf0553db6f4c79387864f6e4ab4a891601f395e    4.01
> > 0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912    4.89
> > v5.5    5.18
> > v5.6    4.62
> > v5.7-rc2    4.53
> > v5.7-rc3    4.59
> >
> > It seems there is some recovery on the latest kernels, but performance is not fully back.
> > We were just wondering whether you could shed some light on the further work
> > on the load balance after patch set [2] that could cause this performance
> > change?
> > And do you plan to refine the load balance algorithm further?
>
> I'm going to have a look at your regression to understand what is
> going wrong and how it can be fixed

I have run the benchmark on my local setups to try to reproduce the
regression, but I don't see it. My setups are different from yours,
though, so it might be a problem specific to your configuration.

Analysing the benchmark shows that it doesn't overload the system: it
is mainly based on one main gzip thread, with a few other threads
waking up and sleeping around it.
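
For reference, one way to check this kind of thread structure while the test runs (the "gzip" process name is an assumption, and pidstat comes from sysstat; any per-thread view works):

```shell
# Per-thread CPU usage and context switches, sampled once per second
# while the benchmark runs (process name "gzip" is an assumption):
pidstat -t -u -w -C gzip 1
# One-shot view of the thread list and the CPU each thread last ran on:
ps -L -o pid,tid,psr,pcpu,comm -C gzip
```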

I thought that the scheduler could be too aggressive when trying to
balance the threads on your system, which could generate more task
migrations and impact performance. But this doesn't seem to be the
case, because perf-stat.i.cpu-migrations is -8%. On the other hand,
context switches are +16% and, more interestingly, idle state C1E and
C6 usage increases by more than 50%. I don't know whether we can rely
on this value or not, but I wonder if it could be that threads are now
spread across different CPUs, which generates idle time on the busy
CPUs, while the added time to enter/leave these idle states hurts the
performance.
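
For what it's worth, the deltas above are plain relative changes between the base and patched columns in the report below; a small helper is enough to recompute them:

```shell
# Relative change between a base value and a patched value, matching
# the "%change" column in the report (values taken from the tables below):
pct() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%+.1f%%\n", (b - a) / a * 100 }'; }
pct 1041078 1610786   # cpuidle.C1E.usage -> +54.7%
pct 817897 1227607    # cpuidle.C6.usage  -> +50.1%
pct 6.01 7.20         # compress-gzip seconds -> +19.8%
```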

Could you capture traces on both kernels? Tracing sched events
should be enough to understand the behavior.
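
For example, something like this would do (a sketch assuming trace-cmd is available; the raw ftrace interface or perf sched work too, and the file names are placeholders):

```shell
# Record all sched events on each kernel while the gzip test runs:
trace-cmd record -e 'sched:*' -o trace-<kernel>.dat -- <benchmark command>
# Then generate a text report per kernel to compare the two:
trace-cmd report trace-<kernel>.dat > trace-<kernel>.txt
```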

Regards,
Vincent

>
> Thanks
> Vincent
>
> > thanks
> >
> > [1] https://lists.01.org/hyperkitty/list/lkp@lists.01.org/thread/SANC7QLYZKUNMM6O7UNR3OAQAKS5BESE/
> > [2] https://lore.kernel.org/patchwork/cover/1141687/
> > [3] https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.5-Scheduler
> >
> >
> >
> > Below is the detailed regression report, FYI.
> >
> > Greeting,
> >
> > FYI, we noticed a 19.8% regression of phoronix-test-suite.compress-gzip.0.seconds due to commit:
> >
> >
> > commit: 0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912 ("sched/fair: Rework load_balance()")
> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> >
> > in testcase: phoronix-test-suite
> > on test machine: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
> > with following parameters:
> >
> >         test: compress-gzip-1.2.0
> >         cpufreq_governor: performance
> >         ucode: 0xca
> >
> > test-description: The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available that provides an extensible framework for which new tests can be easily added.
> > test-url: http://www.phoronix-test-suite.com/
> >
> > In addition to that, the commit also has significant impact on the following tests:
> >
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | phoronix-test-suite:                                                  |
> > | test machine     | 12 threads Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz with 8G memory     |
> > | test parameters  | cpufreq_governor=performance                                          |
> > |                  | test=compress-gzip-1.2.0                                              |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | vm-scalability: vm-scalability.median 3.1% improvement                |
> > | test machine     | 104 threads Skylake with 192G memory                                  |
> > | test parameters  | cpufreq_governor=performance                                          |
> > |                  | runtime=300s                                                          |
> > |                  | size=8T                                                               |
> > |                  | test=anon-cow-seq                                                     |
> > |                  | ucode=0x2000064                                                       |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.fault.ops_per_sec -23.1% regression              |
> > | test machine     | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
> > | test parameters  | class=scheduler                                                       |
> > |                  | cpufreq_governor=performance                                          |
> > |                  | disk=1HDD                                                             |
> > |                  | nr_threads=100%                                                       |
> > |                  | sc_pid_max=4194304                                                    |
> > |                  | testtime=1s                                                           |
> > |                  | ucode=0xb000038                                                       |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec -33.3% regression        |
> > | test machine     | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory  |
> > | test parameters  | class=interrupt                                                       |
> > |                  | cpufreq_governor=performance                                          |
> > |                  | disk=1HDD                                                             |
> > |                  | nr_threads=100%                                                       |
> > |                  | testtime=1s                                                           |
> > |                  | ucode=0x500002c                                                       |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 42.3% improvement        |
> > | test machine     | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
> > | test parameters  | class=interrupt                                                       |
> > |                  | cpufreq_governor=performance                                          |
> > |                  | disk=1HDD                                                             |
> > |                  | nr_threads=100%                                                       |
> > |                  | testtime=30s                                                          |
> > |                  | ucode=0xb000038                                                       |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 35.1% improvement        |
> > | test machine     | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
> > | test parameters  | class=interrupt                                                       |
> > |                  | cpufreq_governor=performance                                          |
> > |                  | disk=1HDD                                                             |
> > |                  | nr_threads=100%                                                       |
> > |                  | testtime=1s                                                           |
> > |                  | ucode=0xb000038                                                       |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.ioprio.ops_per_sec -20.7% regression             |
> > | test machine     | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory  |
> > | test parameters  | class=os                                                              |
> > |                  | cpufreq_governor=performance                                          |
> > |                  | disk=1HDD                                                             |
> > |                  | fs=ext4                                                               |
> > |                  | nr_threads=100%                                                       |
> > |                  | testtime=1s                                                           |
> > |                  | ucode=0x500002b                                                       |
> > +------------------+-----------------------------------------------------------------------+
> > | testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 43.0% improvement        |
> > | test machine     | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
> > | test parameters  | class=interrupt                                                       |
> > |                  | cpufreq_governor=performance                                          |
> > |                  | disk=1HDD                                                             |
> > |                  | nr_threads=100%                                                       |
> > |                  | testtime=30s                                                          |
> > |                  | ucode=0xb000038                                                       |
> > +------------------+-----------------------------------------------------------------------+
> >
> >
> > If you fix the issue, kindly add following tag
> > Reported-by: kernel test robot <oliver.sang@...el.com>
> >
> >
> > Details are as below:
> > -------------------------------------------------------------------------------------------------->
> >
> >
> > To reproduce:
> >
> >         git clone https://github.com/intel/lkp-tests.git
> >         cd lkp-tests
> >         bin/lkp install job.yaml  # job file is attached in this email
> >         bin/lkp run     job.yaml
> >
> > =========================================================================================
> > compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/ucode:
> >   gcc-7/performance/x86_64-lck-7983/clear-x86_64-phoronix-30140/lkp-cfl-e1/compress-gzip-1.2.0/phoronix-test-suite/0xca
> >
> > commit:
> >   fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> >   0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> >        fail:runs  %reproduction    fail:runs
> >            |             |             |
> >            :4            4%           0:7     perf-profile.children.cycles-pp.error_entry
> >          %stddev     %change         %stddev
> >              \          |                \
> >       6.01           +19.8%       7.20        phoronix-test-suite.compress-gzip.0.seconds
> >     147.57 ±  8%     +25.1%     184.54        phoronix-test-suite.time.elapsed_time
> >     147.57 ±  8%     +25.1%     184.54        phoronix-test-suite.time.elapsed_time.max
> >      52926 ±  8%     -23.8%      40312        meminfo.max_used_kB
> >       0.11 ±  7%      -0.0        0.09 ±  3%  mpstat.cpu.all.soft%
> >     242384            -1.4%     238931        proc-vmstat.nr_inactive_anon
> >     242384            -1.4%     238931        proc-vmstat.nr_zone_inactive_anon
> >  1.052e+08 ± 27%     +56.5%  1.647e+08 ± 10%  cpuidle.C1E.time
> >    1041078 ± 22%     +54.7%    1610786 ±  7%  cpuidle.C1E.usage
> >  3.414e+08 ±  6%     +57.6%  5.381e+08 ± 28%  cpuidle.C6.time
> >     817897 ±  3%     +50.1%    1227607 ± 11%  cpuidle.C6.usage
> >       2884            -4.2%       2762        turbostat.Avg_MHz
> >    1041024 ± 22%     +54.7%    1610657 ±  7%  turbostat.C1E
> >     817802 ±  3%     +50.1%    1227380 ± 11%  turbostat.C6
> >      66.75            -2.0%      65.42        turbostat.CorWatt
> >      67.28            -2.0%      65.94        turbostat.PkgWatt
> >      32.50            +6.2%      34.50        vmstat.cpu.id
> >      62.50            -2.4%      61.00        vmstat.cpu.us
> >       2443 ±  2%     -28.9%       1738 ±  2%  vmstat.io.bi
> >      23765 ±  4%     +16.5%      27685        vmstat.system.cs
> >      37860            -7.1%      35180 ±  2%  vmstat.system.in
> >  3.474e+09 ±  3%     -12.7%  3.032e+09        perf-stat.i.branch-instructions
> >  1.344e+08 ±  2%     -11.6%  1.188e+08        perf-stat.i.branch-misses
> >   13033225 ±  4%     -19.0%   10561032        perf-stat.i.cache-misses
> >  5.105e+08 ±  3%     -15.3%  4.322e+08        perf-stat.i.cache-references
> >      24205 ±  4%     +16.3%      28161        perf-stat.i.context-switches
> >      30.25 ±  2%     +39.7%      42.27 ±  2%  perf-stat.i.cpi
> >   4.63e+10            -4.7%  4.412e+10        perf-stat.i.cpu-cycles
> >       3147 ±  4%      -8.4%       2882 ±  2%  perf-stat.i.cpu-migrations
> >      16724 ±  2%     +45.9%      24406 ±  5%  perf-stat.i.cycles-between-cache-misses
> >       0.18 ± 13%      -0.1        0.12 ±  4%  perf-stat.i.dTLB-load-miss-rate%
> >  4.822e+09 ±  3%     -11.9%  4.248e+09        perf-stat.i.dTLB-loads
> >       0.07 ±  8%      -0.0        0.05 ± 16%  perf-stat.i.dTLB-store-miss-rate%
> >  1.623e+09 ±  2%     -11.5%  1.436e+09        perf-stat.i.dTLB-stores
> >    1007120 ±  3%      -8.9%     917854 ±  2%  perf-stat.i.iTLB-load-misses
> >  1.816e+10 ±  3%     -12.2%  1.594e+10        perf-stat.i.instructions
> >       2.06 ± 54%     -66.0%       0.70        perf-stat.i.major-faults
> >      29896 ± 13%     -35.2%      19362 ±  8%  perf-stat.i.minor-faults
> >       0.00 ±  9%      -0.0        0.00 ±  6%  perf-stat.i.node-load-miss-rate%
> >    1295134 ±  3%     -14.2%    1111173        perf-stat.i.node-loads
> >    3064949 ±  4%     -18.7%    2491063 ±  2%  perf-stat.i.node-stores
> >      29898 ± 13%     -35.2%      19363 ±  8%  perf-stat.i.page-faults
> >      28.10            -3.5%      27.12        perf-stat.overall.MPKI
> >       2.55            -0.1        2.44 ±  2%  perf-stat.overall.cache-miss-rate%
> >       2.56 ±  3%      +8.5%       2.77        perf-stat.overall.cpi
> >       3567 ±  5%     +17.3%       4186        perf-stat.overall.cycles-between-cache-misses
> >       0.02 ±  3%      +0.0        0.02 ±  3%  perf-stat.overall.dTLB-load-miss-rate%
> >      18031            -3.6%      17375 ±  2%  perf-stat.overall.instructions-per-iTLB-miss
> >       0.39 ±  3%      -7.9%       0.36        perf-stat.overall.ipc
> >  3.446e+09 ±  3%     -12.6%  3.011e+09        perf-stat.ps.branch-instructions
> >  1.333e+08 ±  2%     -11.5%   1.18e+08        perf-stat.ps.branch-misses
> >   12927998 ±  4%     -18.8%   10491818        perf-stat.ps.cache-misses
> >  5.064e+08 ±  3%     -15.2%  4.293e+08        perf-stat.ps.cache-references
> >      24011 ±  4%     +16.5%      27973        perf-stat.ps.context-switches
> >  4.601e+10            -4.6%  4.391e+10        perf-stat.ps.cpu-cycles
> >       3121 ±  4%      -8.3%       2863 ±  2%  perf-stat.ps.cpu-migrations
> >  4.783e+09 ±  3%     -11.8%  4.219e+09        perf-stat.ps.dTLB-loads
> >   1.61e+09 ±  2%     -11.4%  1.426e+09        perf-stat.ps.dTLB-stores
> >     999100 ±  3%      -8.7%     911974 ±  2%  perf-stat.ps.iTLB-load-misses
> >  1.802e+10 ±  3%     -12.1%  1.584e+10        perf-stat.ps.instructions
> >       2.04 ± 54%     -65.9%       0.70        perf-stat.ps.major-faults
> >      29656 ± 13%     -35.1%      19237 ±  8%  perf-stat.ps.minor-faults
> >    1284601 ±  3%     -14.1%    1103823        perf-stat.ps.node-loads
> >    3039931 ±  4%     -18.6%    2474451 ±  2%  perf-stat.ps.node-stores
> >      29658 ± 13%     -35.1%      19238 ±  8%  perf-stat.ps.page-faults
> >      50384 ±  2%     +16.5%      58713 ±  4%  softirqs.CPU0.RCU
> >      33143 ±  2%     +19.9%      39731 ±  2%  softirqs.CPU0.SCHED
> >      72672           +24.0%      90109        softirqs.CPU0.TIMER
> >      22182 ±  4%     +26.3%      28008 ±  4%  softirqs.CPU1.SCHED
> >      74465 ±  4%     +26.3%      94027 ±  3%  softirqs.CPU1.TIMER
> >      18680 ±  7%     +29.2%      24135 ±  3%  softirqs.CPU10.SCHED
> >      75941 ±  2%     +21.8%      92486 ±  7%  softirqs.CPU10.TIMER
> >      48991 ±  4%     +22.7%      60105 ±  5%  softirqs.CPU11.RCU
> >      18666 ±  6%     +28.4%      23976 ±  4%  softirqs.CPU11.SCHED
> >      74896 ±  6%     +24.4%      93173 ±  3%  softirqs.CPU11.TIMER
> >      49490           +20.5%      59659 ±  2%  softirqs.CPU12.RCU
> >      18973 ±  7%     +26.0%      23909 ±  3%  softirqs.CPU12.SCHED
> >      50620           +19.9%      60677 ±  6%  softirqs.CPU13.RCU
> >      19136 ±  6%     +23.2%      23577 ±  4%  softirqs.CPU13.SCHED
> >      74812           +33.3%      99756 ±  7%  softirqs.CPU13.TIMER
> >      50824           +15.9%      58881 ±  3%  softirqs.CPU14.RCU
> >      19550 ±  5%     +24.1%      24270 ±  4%  softirqs.CPU14.SCHED
> >      76801           +22.8%      94309 ±  4%  softirqs.CPU14.TIMER
> >      51844           +11.5%      57795 ±  3%  softirqs.CPU15.RCU
> >      19204 ±  8%     +28.4%      24662 ±  2%  softirqs.CPU15.SCHED
> >      74751           +29.9%      97127 ±  3%  softirqs.CPU15.TIMER
> >      50307           +17.4%      59062 ±  4%  softirqs.CPU2.RCU
> >      22150           +12.2%      24848        softirqs.CPU2.SCHED
> >      79653 ±  2%     +21.6%      96829 ± 10%  softirqs.CPU2.TIMER
> >      50833           +21.1%      61534 ±  4%  softirqs.CPU3.RCU
> >      18935 ±  2%     +32.0%      25002 ±  3%  softirqs.CPU3.SCHED
> >      50569           +15.8%      58570 ±  4%  softirqs.CPU4.RCU
> >      20509 ±  5%     +18.3%      24271        softirqs.CPU4.SCHED
> >      80942 ±  2%     +15.3%      93304 ±  3%  softirqs.CPU4.TIMER
> >      50692           +16.5%      59067 ±  4%  softirqs.CPU5.RCU
> >      20237 ±  3%     +18.2%      23914 ±  3%  softirqs.CPU5.SCHED
> >      78963           +21.8%      96151 ±  2%  softirqs.CPU5.TIMER
> >      19709 ±  7%     +20.1%      23663        softirqs.CPU6.SCHED
> >      81250           +15.9%      94185        softirqs.CPU6.TIMER
> >      51379           +15.0%      59108        softirqs.CPU7.RCU
> >      19642 ±  5%     +28.4%      25227 ±  3%  softirqs.CPU7.SCHED
> >      78299 ±  2%     +30.3%     102021 ±  4%  softirqs.CPU7.TIMER
> >      49723           +19.0%      59169 ±  4%  softirqs.CPU8.RCU
> >      20138 ±  6%     +21.7%      24501 ±  2%  softirqs.CPU8.SCHED
> >      75256 ±  3%     +22.8%      92419 ±  2%  softirqs.CPU8.TIMER
> >      50406 ±  2%     +17.4%      59178 ±  4%  softirqs.CPU9.RCU
> >      19182 ±  9%     +24.2%      23831 ±  6%  softirqs.CPU9.SCHED
> >      73572 ±  5%     +30.4%      95951 ±  8%  softirqs.CPU9.TIMER
> >     812363           +16.6%     946858 ±  3%  softirqs.RCU
> >     330042 ±  4%     +23.5%     407533        softirqs.SCHED
> >    1240046           +22.5%    1519539        softirqs.TIMER
> >     251015 ± 21%     -84.2%      39587 ±106%  sched_debug.cfs_rq:/.MIN_vruntime.avg
> >     537847 ±  4%     -44.8%     297100 ± 66%  sched_debug.cfs_rq:/.MIN_vruntime.max
> >     257990 ±  5%     -63.4%      94515 ± 82%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
> >      38935           +47.9%      57601        sched_debug.cfs_rq:/.exec_clock.avg
> >      44119           +40.6%      62013        sched_debug.cfs_rq:/.exec_clock.max
> >      37622           +49.9%      56404        sched_debug.cfs_rq:/.exec_clock.min
> >      47287 ±  7%     -70.3%      14036 ± 88%  sched_debug.cfs_rq:/.load.min
> >      67.17           -52.9%      31.62 ± 31%  sched_debug.cfs_rq:/.load_avg.min
> >     251015 ± 21%     -84.2%      39588 ±106%  sched_debug.cfs_rq:/.max_vruntime.avg
> >     537847 ±  4%     -44.8%     297103 ± 66%  sched_debug.cfs_rq:/.max_vruntime.max
> >     257991 ±  5%     -63.4%      94516 ± 82%  sched_debug.cfs_rq:/.max_vruntime.stddev
> >     529078 ±  3%     +45.2%     768398        sched_debug.cfs_rq:/.min_vruntime.avg
> >     547175 ±  2%     +44.1%     788582        sched_debug.cfs_rq:/.min_vruntime.max
> >     496420           +48.3%     736148 ±  2%  sched_debug.cfs_rq:/.min_vruntime.min
> >       1.33 ± 15%     -44.0%       0.75 ± 32%  sched_debug.cfs_rq:/.nr_running.avg
> >       0.83 ± 20%     -70.0%       0.25 ± 70%  sched_debug.cfs_rq:/.nr_running.min
> >       0.54 ±  8%     -15.9%       0.45 ±  7%  sched_debug.cfs_rq:/.nr_running.stddev
> >       0.33           +62.9%       0.54 ±  8%  sched_debug.cfs_rq:/.nr_spread_over.avg
> >       1.33           +54.7%       2.06 ± 17%  sched_debug.cfs_rq:/.nr_spread_over.max
> >       0.44 ±  7%     +56.4%       0.69 ±  6%  sched_debug.cfs_rq:/.nr_spread_over.stddev
> >     130.83 ± 14%     -25.6%      97.37 ± 15%  sched_debug.cfs_rq:/.runnable_load_avg.avg
> >      45.33 ±  6%     -79.3%       9.38 ± 70%  sched_debug.cfs_rq:/.runnable_load_avg.min
> >      47283 ±  7%     -70.9%      13741 ± 89%  sched_debug.cfs_rq:/.runnable_weight.min
> >       1098 ±  8%     -27.6%     795.02 ± 24%  sched_debug.cfs_rq:/.util_avg.avg
> >     757.50 ±  9%     -51.3%     369.25 ± 10%  sched_debug.cfs_rq:/.util_avg.min
> >     762.39 ± 11%     -44.4%     424.04 ± 46%  sched_debug.cfs_rq:/.util_est_enqueued.avg
> >     314.00 ± 18%     -78.5%      67.38 ±100%  sched_debug.cfs_rq:/.util_est_enqueued.min
> >     142951 ±  5%     +22.8%     175502 ±  3%  sched_debug.cpu.avg_idle.avg
> >      72112           -18.3%      58937 ± 13%  sched_debug.cpu.avg_idle.stddev
> >     127638 ±  7%     +39.3%     177858 ±  5%  sched_debug.cpu.clock.avg
> >     127643 ±  7%     +39.3%     177862 ±  5%  sched_debug.cpu.clock.max
> >     127633 ±  7%     +39.3%     177855 ±  5%  sched_debug.cpu.clock.min
> >     126720 ±  7%     +39.4%     176681 ±  5%  sched_debug.cpu.clock_task.avg
> >     127168 ±  7%     +39.3%     177179 ±  5%  sched_debug.cpu.clock_task.max
> >     125240 ±  7%     +39.5%     174767 ±  5%  sched_debug.cpu.clock_task.min
> >     563.60 ±  2%     +25.9%     709.78 ±  9%  sched_debug.cpu.clock_task.stddev
> >       1.66 ± 18%     -37.5%       1.04 ± 32%  sched_debug.cpu.nr_running.avg
> >       0.83 ± 20%     -62.5%       0.31 ± 87%  sched_debug.cpu.nr_running.min
> >     127617 ±  3%     +52.9%     195080        sched_debug.cpu.nr_switches.avg
> >     149901 ±  6%     +45.2%     217652        sched_debug.cpu.nr_switches.max
> >     108182 ±  5%     +61.6%     174808        sched_debug.cpu.nr_switches.min
> >       0.20 ±  5%     -62.5%       0.07 ± 67%  sched_debug.cpu.nr_uninterruptible.avg
> >     -29.33           -13.5%     -25.38        sched_debug.cpu.nr_uninterruptible.min
> >      92666 ą  8%     +66.8%     154559        sched_debug.cpu.sched_count.avg
> >     104565 ą 11%     +57.2%     164374        sched_debug.cpu.sched_count.max
> >      80272 ą 10%     +77.2%     142238        sched_debug.cpu.sched_count.min
> >      38029 ą 10%     +80.4%      68608        sched_debug.cpu.sched_goidle.avg
> >      43413 ą 11%     +68.5%      73149        sched_debug.cpu.sched_goidle.max
> >      32420 ą 11%     +94.5%      63069        sched_debug.cpu.sched_goidle.min
> >      51567 ą  8%     +60.7%      82878        sched_debug.cpu.ttwu_count.avg
> >      79015 ą  9%     +45.2%     114717 ą  4%  sched_debug.cpu.ttwu_count.max
> >      42919 ą  9%     +63.3%      70086        sched_debug.cpu.ttwu_count.min
> >     127632 ą  7%     +39.3%     177854 ą  5%  sched_debug.cpu_clk
> >     125087 ą  7%     +40.1%     175285 ą  5%  sched_debug.ktime
> >     127882 ą  6%     +39.3%     178163 ą  5%  sched_debug.sched_clk
> >     146.00 ą 13%    +902.9%       1464 ą143%  interrupts.133:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
> >       3375 ą 93%     -94.8%     174.75 ą 26%  interrupts.134:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
> >     297595 ą  8%     +22.8%     365351 ą  2%  interrupts.CPU0.LOC:Local_timer_interrupts
> >       8402           -21.7%       6577 ą 25%  interrupts.CPU0.NMI:Non-maskable_interrupts
> >       8402           -21.7%       6577 ą 25%  interrupts.CPU0.PMI:Performance_monitoring_interrupts
> >     937.00 ą  2%     +18.1%       1106 ą 11%  interrupts.CPU0.RES:Rescheduling_interrupts
> >     146.00 ą 13%    +902.9%       1464 ą143%  interrupts.CPU1.133:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
> >     297695 ą  8%     +22.7%     365189 ą  2%  interrupts.CPU1.LOC:Local_timer_interrupts
> >       8412           -20.9%       6655 ą 25%  interrupts.CPU1.NMI:Non-maskable_interrupts
> >       8412           -20.9%       6655 ą 25%  interrupts.CPU1.PMI:Performance_monitoring_interrupts
> >     297641 ą  8%     +22.7%     365268 ą  2%  interrupts.CPU10.LOC:Local_timer_interrupts
> >       8365           -10.9%       7455 ą  3%  interrupts.CPU10.NMI:Non-maskable_interrupts
> >       8365           -10.9%       7455 ą  3%  interrupts.CPU10.PMI:Performance_monitoring_interrupts
> >     297729 ą  8%     +22.7%     365238 ą  2%  interrupts.CPU11.LOC:Local_timer_interrupts
> >       8376           -21.8%       6554 ą 26%  interrupts.CPU11.NMI:Non-maskable_interrupts
> >       8376           -21.8%       6554 ą 26%  interrupts.CPU11.PMI:Performance_monitoring_interrupts
> >     297394 ą  8%     +22.8%     365274 ą  2%  interrupts.CPU12.LOC:Local_timer_interrupts
> >       8393           -10.5%       7512 ą  3%  interrupts.CPU12.NMI:Non-maskable_interrupts
> >       8393           -10.5%       7512 ą  3%  interrupts.CPU12.PMI:Performance_monitoring_interrupts
> >     297744 ą  8%     +22.7%     365243 ą  2%  interrupts.CPU13.LOC:Local_timer_interrupts
> >       8353           -10.6%       7469 ą  3%  interrupts.CPU13.NMI:Non-maskable_interrupts
> >       8353           -10.6%       7469 ą  3%  interrupts.CPU13.PMI:Performance_monitoring_interrupts
> >     148.50 ą 17%     -24.2%     112.50 ą  8%  interrupts.CPU13.TLB:TLB_shootdowns
> >     297692 ą  8%     +22.7%     365311 ą  2%  interrupts.CPU14.LOC:Local_timer_interrupts
> >       8374           -10.4%       7501 ą  4%  interrupts.CPU14.NMI:Non-maskable_interrupts
> >       8374           -10.4%       7501 ą  4%  interrupts.CPU14.PMI:Performance_monitoring_interrupts
> >     297453 ą  8%     +22.8%     365311 ą  2%  interrupts.CPU15.LOC:Local_timer_interrupts
> >       8336           -22.8%       6433 ą 26%  interrupts.CPU15.NMI:Non-maskable_interrupts
> >       8336           -22.8%       6433 ą 26%  interrupts.CPU15.PMI:Performance_monitoring_interrupts
> >     699.50 ą 21%     +51.3%       1058 ą 10%  interrupts.CPU15.RES:Rescheduling_interrupts
> >       3375 ą 93%     -94.8%     174.75 ą 26%  interrupts.CPU2.134:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
> >     297685 ą  8%     +22.7%     365273 ą  2%  interrupts.CPU2.LOC:Local_timer_interrupts
> >       8357           -21.2%       6584 ą 25%  interrupts.CPU2.NMI:Non-maskable_interrupts
> >       8357           -21.2%       6584 ą 25%  interrupts.CPU2.PMI:Performance_monitoring_interrupts
> >     164.00 ą 30%     -23.0%     126.25 ą 32%  interrupts.CPU2.TLB:TLB_shootdowns
> >     297352 ą  8%     +22.9%     365371 ą  2%  interrupts.CPU3.LOC:Local_timer_interrupts
> >       8383           -10.6%       7493 ą  4%  interrupts.CPU3.NMI:Non-maskable_interrupts
> >       8383           -10.6%       7493 ą  4%  interrupts.CPU3.PMI:Performance_monitoring_interrupts
> >     780.50 ą  3%     +32.7%       1035 ą  6%  interrupts.CPU3.RES:Rescheduling_interrupts
> >     297595 ą  8%     +22.8%     365415 ą  2%  interrupts.CPU4.LOC:Local_timer_interrupts
> >       8382           -21.4%       6584 ± 25%  interrupts.CPU4.NMI:Non-maskable_interrupts
> >       8382           -21.4%       6584 ± 25%  interrupts.CPU4.PMI:Performance_monitoring_interrupts
> >     297720 ±  8%     +22.7%     365347 ±  2%  interrupts.CPU5.LOC:Local_timer_interrupts
> >       8353           -32.0%       5679 ± 34%  interrupts.CPU5.NMI:Non-maskable_interrupts
> >       8353           -32.0%       5679 ± 34%  interrupts.CPU5.PMI:Performance_monitoring_interrupts
> >     727.00 ± 16%     +98.3%       1442 ± 47%  interrupts.CPU5.RES:Rescheduling_interrupts
> >     297620 ±  8%     +22.8%     365343 ±  2%  interrupts.CPU6.LOC:Local_timer_interrupts
> >       8388           -10.3%       7526 ±  4%  interrupts.CPU6.NMI:Non-maskable_interrupts
> >       8388           -10.3%       7526 ±  4%  interrupts.CPU6.PMI:Performance_monitoring_interrupts
> >     156.50 ±  3%     -27.6%     113.25 ± 16%  interrupts.CPU6.TLB:TLB_shootdowns
> >     297690 ±  8%     +22.7%     365363 ±  2%  interrupts.CPU7.LOC:Local_timer_interrupts
> >       8390           -23.1%       6449 ± 25%  interrupts.CPU7.NMI:Non-maskable_interrupts
> >       8390           -23.1%       6449 ± 25%  interrupts.CPU7.PMI:Performance_monitoring_interrupts
> >     918.00 ± 16%     +49.4%       1371 ±  7%  interrupts.CPU7.RES:Rescheduling_interrupts
> >     120.00 ± 35%     +70.8%     205.00 ± 17%  interrupts.CPU7.TLB:TLB_shootdowns
> >     297731 ±  8%     +22.7%     365368 ±  2%  interrupts.CPU8.LOC:Local_timer_interrupts
> >       8393           -32.5%       5668 ± 35%  interrupts.CPU8.NMI:Non-maskable_interrupts
> >       8393           -32.5%       5668 ± 35%  interrupts.CPU8.PMI:Performance_monitoring_interrupts
> >     297779 ±  8%     +22.7%     365399 ±  2%  interrupts.CPU9.LOC:Local_timer_interrupts
> >       8430           -10.8%       7517 ±  2%  interrupts.CPU9.NMI:Non-maskable_interrupts
> >       8430           -10.8%       7517 ±  2%  interrupts.CPU9.PMI:Performance_monitoring_interrupts
> >     956.50           +13.5%       1085 ±  4%  interrupts.CPU9.RES:Rescheduling_interrupts
> >    4762118 ±  8%     +22.7%    5845069 ±  2%  interrupts.LOC:Local_timer_interrupts
> >     134093           -18.2%     109662 ± 11%  interrupts.NMI:Non-maskable_interrupts
> >     134093           -18.2%     109662 ± 11%  interrupts.PMI:Performance_monitoring_interrupts
> >      66.97 ±  9%     -29.9       37.12 ± 49%  perf-profile.calltrace.cycles-pp.deflate
> >      66.67 ±  9%     -29.7       36.97 ± 50%  perf-profile.calltrace.cycles-pp.deflate_medium.deflate
> >      43.24 ±  9%     -18.6       24.61 ± 49%  perf-profile.calltrace.cycles-pp.longest_match.deflate_medium.deflate
> >       2.29 ± 14%      -1.2        1.05 ± 58%  perf-profile.calltrace.cycles-pp.deflateSetDictionary
> >       0.74 ±  6%      -0.5        0.27 ±100%  perf-profile.calltrace.cycles-pp.read.__libc_start_main
> >       0.74 ±  7%      -0.5        0.27 ±100%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read.__libc_start_main
> >       0.73 ±  7%      -0.5        0.27 ±100%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read.__libc_start_main
> >       0.73 ±  7%      -0.5        0.27 ±100%  perf-profile.calltrace.cycles-pp.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read.__libc_start_main
> >       0.73 ±  7%      -0.5        0.27 ±100%  perf-profile.calltrace.cycles-pp.ksys_read.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
> >       0.26 ±100%      +0.6        0.88 ± 45%  perf-profile.calltrace.cycles-pp.vfs_statx.__do_sys_newfstatat.__x64_sys_newfstatat.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >       0.34 ±100%      +0.7        1.02 ± 42%  perf-profile.calltrace.cycles-pp.do_sys_open.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >       0.28 ±100%      +0.7        0.96 ± 44%  perf-profile.calltrace.cycles-pp.__do_sys_newfstatat.__x64_sys_newfstatat.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >       0.28 ±100%      +0.7        0.96 ± 44%  perf-profile.calltrace.cycles-pp.__x64_sys_newfstatat.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >       0.34 ±100%      +0.7        1.03 ± 42%  perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >       0.00            +0.8        0.77 ± 35%  perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
> >       0.56 ±  9%      +0.8        1.40 ± 45%  perf-profile.calltrace.cycles-pp.__schedule.schedule.futex_wait_queue_me.futex_wait.do_futex
> >       0.58 ± 10%      +0.9        1.43 ± 45%  perf-profile.calltrace.cycles-pp.schedule.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
> >       0.33 ±100%      +0.9        1.21 ± 28%  perf-profile.calltrace.cycles-pp.menu_select.cpuidle_select.do_idle.cpu_startup_entry.start_secondary
> >       0.34 ± 99%      +0.9        1.27 ± 30%  perf-profile.calltrace.cycles-pp.cpuidle_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
> >       0.00            +1.0        0.96 ± 62%  perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
> >       0.62 ±  9%      +1.0        1.60 ± 52%  perf-profile.calltrace.cycles-pp.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex.do_syscall_64
> >       0.68 ± 10%      +1.0        1.73 ± 51%  perf-profile.calltrace.cycles-pp.futex_wait.do_futex.__x64_sys_futex.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >       0.46 ±100%      +1.1        1.60 ± 43%  perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
> >       0.47 ±100%      +1.2        1.62 ± 43%  perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
> >      17.73 ± 21%     +19.1       36.84 ± 25%  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
> >      17.75 ± 20%     +19.9       37.63 ± 26%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
> >      17.84 ± 20%     +20.0       37.82 ± 26%  perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
> >      18.83 ± 20%     +21.2       40.05 ± 27%  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
> >      18.89 ± 20%     +21.2       40.11 ± 27%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
> >      18.89 ± 20%     +21.2       40.12 ± 27%  perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
> >      20.14 ± 20%     +22.5       42.66 ± 27%  perf-profile.calltrace.cycles-pp.secondary_startup_64
> >      66.97 ±  9%     -29.9       37.12 ± 49%  perf-profile.children.cycles-pp.deflate
> >      66.83 ±  9%     -29.8       37.06 ± 49%  perf-profile.children.cycles-pp.deflate_medium
> >      43.58 ±  9%     -18.8       24.80 ± 49%  perf-profile.children.cycles-pp.longest_match
> >       2.29 ± 14%      -1.2        1.12 ± 43%  perf-profile.children.cycles-pp.deflateSetDictionary
> >       0.84            -0.3        0.58 ± 19%  perf-profile.children.cycles-pp.read
> >       0.52 ± 13%      -0.2        0.27 ± 43%  perf-profile.children.cycles-pp.fill_window
> >       0.06            +0.0        0.08 ± 13%  perf-profile.children.cycles-pp.update_stack_state
> >       0.07 ± 14%      +0.0        0.11 ± 24%  perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
> >       0.03 ±100%      +0.1        0.09 ± 19%  perf-profile.children.cycles-pp.find_next_and_bit
> >       0.00            +0.1        0.06 ± 15%  perf-profile.children.cycles-pp.refcount_inc_not_zero_checked
> >       0.03 ±100%      +0.1        0.08 ± 33%  perf-profile.children.cycles-pp.free_pcppages_bulk
> >       0.07 ±  7%      +0.1        0.12 ± 36%  perf-profile.children.cycles-pp.syscall_return_via_sysret
> >       0.00            +0.1        0.06 ± 28%  perf-profile.children.cycles-pp.rb_erase
> >       0.03 ±100%      +0.1        0.09 ± 24%  perf-profile.children.cycles-pp.shmem_undo_range
> >       0.03 ±100%      +0.1        0.09 ± 28%  perf-profile.children.cycles-pp.unlinkat
> >       0.03 ±100%      +0.1        0.09 ± 28%  perf-profile.children.cycles-pp.__x64_sys_unlinkat
> >       0.03 ±100%      +0.1        0.09 ± 28%  perf-profile.children.cycles-pp.do_unlinkat
> >       0.03 ±100%      +0.1        0.09 ± 28%  perf-profile.children.cycles-pp.ovl_destroy_inode
> >       0.03 ±100%      +0.1        0.09 ± 28%  perf-profile.children.cycles-pp.shmem_evict_inode
> >       0.03 ±100%      +0.1        0.09 ± 28%  perf-profile.children.cycles-pp.shmem_truncate_range
> >       0.05            +0.1        0.12 ± 38%  perf-profile.children.cycles-pp.unmap_vmas
> >       0.00            +0.1        0.07 ± 30%  perf-profile.children.cycles-pp.timerqueue_del
> >       0.00            +0.1        0.07 ± 26%  perf-profile.children.cycles-pp.idle_cpu
> >       0.09 ± 17%      +0.1        0.15 ± 19%  perf-profile.children.cycles-pp.__update_load_avg_se
> >       0.00            +0.1        0.07 ± 33%  perf-profile.children.cycles-pp.unmap_region
> >       0.00            +0.1        0.07 ± 33%  perf-profile.children.cycles-pp.__alloc_fd
> >       0.03 ±100%      +0.1        0.10 ± 31%  perf-profile.children.cycles-pp.destroy_inode
> >       0.03 ±100%      +0.1        0.10 ± 30%  perf-profile.children.cycles-pp.evict
> >       0.06 ± 16%      +0.1        0.13 ± 35%  perf-profile.children.cycles-pp.ovl_override_creds
> >       0.00            +0.1        0.07 ± 26%  perf-profile.children.cycles-pp.kernel_text_address
> >       0.00            +0.1        0.07 ± 41%  perf-profile.children.cycles-pp.file_remove_privs
> >       0.07 ± 23%      +0.1        0.14 ± 47%  perf-profile.children.cycles-pp.hrtimer_next_event_without
> >       0.03 ±100%      +0.1        0.10 ± 24%  perf-profile.children.cycles-pp.__dentry_kill
> >       0.03 ±100%      +0.1        0.10 ± 29%  perf-profile.children.cycles-pp.dentry_unlink_inode
> >       0.03 ±100%      +0.1        0.10 ± 29%  perf-profile.children.cycles-pp.iput
> >       0.03 ±100%      +0.1        0.10 ± 54%  perf-profile.children.cycles-pp.__close_fd
> >       0.08 ± 25%      +0.1        0.15 ± 35%  perf-profile.children.cycles-pp.__switch_to
> >       0.00            +0.1        0.07 ± 29%  perf-profile.children.cycles-pp.__switch_to_asm
> >       0.00            +0.1        0.08 ± 24%  perf-profile.children.cycles-pp.__kernel_text_address
> >       0.03 ±100%      +0.1        0.11 ± 51%  perf-profile.children.cycles-pp.enqueue_hrtimer
> >       0.03 ±100%      +0.1        0.11 ± 33%  perf-profile.children.cycles-pp.rcu_gp_kthread_wake
> >       0.03 ±100%      +0.1        0.11 ± 33%  perf-profile.children.cycles-pp.swake_up_one
> >       0.03 ±100%      +0.1        0.11 ± 33%  perf-profile.children.cycles-pp.swake_up_locked
> >       0.10 ± 30%      +0.1        0.18 ± 17%  perf-profile.children.cycles-pp.irqtime_account_irq
> >       0.03 ±100%      +0.1        0.11 ± 38%  perf-profile.children.cycles-pp.unmap_page_range
> >       0.00            +0.1        0.09 ± 37%  perf-profile.children.cycles-pp.putname
> >       0.03 ±100%      +0.1        0.11 ± 28%  perf-profile.children.cycles-pp.filemap_map_pages
> >       0.07 ± 28%      +0.1        0.16 ± 35%  perf-profile.children.cycles-pp.getname
> >       0.03 ±100%      +0.1        0.11 ± 40%  perf-profile.children.cycles-pp.unmap_single_vma
> >       0.08 ± 29%      +0.1        0.17 ± 38%  perf-profile.children.cycles-pp.queued_spin_lock_slowpath
> >       0.03 ±100%      +0.1        0.12 ± 54%  perf-profile.children.cycles-pp.setlocale
> >       0.03 ±100%      +0.1        0.12 ± 60%  perf-profile.children.cycles-pp.__open64_nocancel
> >       0.00            +0.1        0.09 ± 31%  perf-profile.children.cycles-pp.__hrtimer_get_next_event
> >       0.00            +0.1        0.10 ± 28%  perf-profile.children.cycles-pp.get_unused_fd_flags
> >       0.00            +0.1        0.10 ± 65%  perf-profile.children.cycles-pp.timerqueue_add
> >       0.07 ± 28%      +0.1        0.17 ± 42%  perf-profile.children.cycles-pp.__hrtimer_next_event_base
> >       0.03 ±100%      +0.1        0.12 ± 51%  perf-profile.children.cycles-pp.__x64_sys_close
> >       0.00            +0.1        0.10 ± 38%  perf-profile.children.cycles-pp.do_lookup_x
> >       0.03 ±100%      +0.1        0.12 ± 23%  perf-profile.children.cycles-pp.kmem_cache_free
> >       0.04 ±100%      +0.1        0.14 ± 53%  perf-profile.children.cycles-pp.__do_munmap
> >       0.00            +0.1        0.10 ± 35%  perf-profile.children.cycles-pp.unwind_get_return_address
> >       0.00            +0.1        0.10 ± 49%  perf-profile.children.cycles-pp.shmem_add_to_page_cache
> >       0.07 ± 20%      +0.1        0.18 ± 25%  perf-profile.children.cycles-pp.find_next_bit
> >       0.08 ± 25%      +0.1        0.18 ± 34%  perf-profile.children.cycles-pp.dput
> >       0.11 ± 33%      +0.1        0.21 ± 37%  perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
> >       0.08 ±  5%      +0.1        0.19 ± 27%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> >       0.00            +0.1        0.11 ± 52%  perf-profile.children.cycles-pp.rcu_idle_exit
> >       0.03 ±100%      +0.1        0.14 ± 18%  perf-profile.children.cycles-pp.entry_SYSCALL_64
> >       0.08            +0.1        0.19 ± 43%  perf-profile.children.cycles-pp.exit_mmap
> >       0.09 ± 22%      +0.1        0.20 ± 57%  perf-profile.children.cycles-pp.set_next_entity
> >       0.07 ±  7%      +0.1        0.18 ± 60%  perf-profile.children.cycles-pp.switch_mm_irqs_off
> >       0.10 ± 26%      +0.1        0.21 ± 32%  perf-profile.children.cycles-pp.sched_clock
> >       0.12 ± 25%      +0.1        0.23 ± 39%  perf-profile.children.cycles-pp.update_cfs_group
> >       0.07 ± 14%      +0.1        0.18 ± 45%  perf-profile.children.cycles-pp.lapic_next_deadline
> >       0.08 ±  5%      +0.1        0.20 ± 44%  perf-profile.children.cycles-pp.mmput
> >       0.11 ± 18%      +0.1        0.23 ± 41%  perf-profile.children.cycles-pp.clockevents_program_event
> >       0.07 ±  7%      +0.1        0.18 ± 40%  perf-profile.children.cycles-pp.strncpy_from_user
> >       0.00            +0.1        0.12 ± 48%  perf-profile.children.cycles-pp.flush_old_exec
> >       0.11 ± 18%      +0.1        0.23 ± 37%  perf-profile.children.cycles-pp.native_sched_clock
> >       0.09 ± 17%      +0.1        0.21 ± 46%  perf-profile.children.cycles-pp._dl_sysdep_start
> >       0.12 ± 19%      +0.1        0.26 ± 37%  perf-profile.children.cycles-pp.tick_program_event
> >       0.09 ± 33%      +0.1        0.23 ± 61%  perf-profile.children.cycles-pp.mmap_region
> >       0.14 ± 21%      +0.1        0.28 ± 39%  perf-profile.children.cycles-pp.sched_clock_cpu
> >       0.11 ± 27%      +0.1        0.25 ± 56%  perf-profile.children.cycles-pp.do_mmap
> >       0.11 ± 36%      +0.1        0.26 ± 57%  perf-profile.children.cycles-pp.ksys_mmap_pgoff
> >       0.09 ± 17%      +0.1        0.23 ± 48%  perf-profile.children.cycles-pp.lookup_fast
> >       0.04 ±100%      +0.2        0.19 ± 48%  perf-profile.children.cycles-pp.open_path
> >       0.11 ± 30%      +0.2        0.27 ± 58%  perf-profile.children.cycles-pp.vm_mmap_pgoff
> >       0.11 ± 27%      +0.2        0.28 ± 37%  perf-profile.children.cycles-pp.update_blocked_averages
> >       0.11            +0.2        0.29 ± 38%  perf-profile.children.cycles-pp.search_binary_handler
> >       0.11 ±  4%      +0.2        0.29 ± 39%  perf-profile.children.cycles-pp.load_elf_binary
> >       0.11 ± 30%      +0.2        0.30 ± 50%  perf-profile.children.cycles-pp.inode_permission
> >       0.04 ±100%      +0.2        0.24 ± 55%  perf-profile.children.cycles-pp.__GI___open64_nocancel
> >       0.15 ± 29%      +0.2        0.35 ± 34%  perf-profile.children.cycles-pp.getname_flags
> >       0.16 ± 25%      +0.2        0.38 ± 26%  perf-profile.children.cycles-pp.get_next_timer_interrupt
> >       0.18 ± 11%      +0.2        0.41 ± 39%  perf-profile.children.cycles-pp.execve
> >       0.19 ±  5%      +0.2        0.42 ± 37%  perf-profile.children.cycles-pp.__x64_sys_execve
> >       0.18 ±  2%      +0.2        0.42 ± 37%  perf-profile.children.cycles-pp.__do_execve_file
> >       0.32 ± 18%      +0.3        0.57 ± 33%  perf-profile.children.cycles-pp.__account_scheduler_latency
> >       0.21 ± 17%      +0.3        0.48 ± 47%  perf-profile.children.cycles-pp.schedule_idle
> >       0.20 ± 19%      +0.3        0.49 ± 33%  perf-profile.children.cycles-pp.tick_nohz_next_event
> >       0.21 ± 26%      +0.3        0.55 ± 41%  perf-profile.children.cycles-pp.find_busiest_group
> >       0.32 ± 26%      +0.3        0.67 ± 52%  perf-profile.children.cycles-pp.__handle_mm_fault
> >       0.22 ± 25%      +0.4        0.57 ± 49%  perf-profile.children.cycles-pp.filename_lookup
> >       0.34 ± 27%      +0.4        0.72 ± 50%  perf-profile.children.cycles-pp.handle_mm_fault
> >       0.42 ± 32%      +0.4        0.80 ± 43%  perf-profile.children.cycles-pp.shmem_getpage_gfp
> >       0.36 ± 23%      +0.4        0.77 ± 41%  perf-profile.children.cycles-pp.load_balance
> >       0.41 ± 30%      +0.4        0.82 ± 50%  perf-profile.children.cycles-pp.do_page_fault
> >       0.39 ± 30%      +0.4        0.80 ± 50%  perf-profile.children.cycles-pp.__do_page_fault
> >       0.28 ± 22%      +0.4        0.70 ± 37%  perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
> >       0.43 ± 31%      +0.4        0.86 ± 49%  perf-profile.children.cycles-pp.page_fault
> >       0.31 ± 25%      +0.5        0.77 ± 46%  perf-profile.children.cycles-pp.user_path_at_empty
> >       0.36 ± 20%      +0.5        0.84 ± 34%  perf-profile.children.cycles-pp.newidle_balance
> >       0.45 ± 21%      +0.5        0.95 ± 44%  perf-profile.children.cycles-pp.vfs_statx
> >       0.46 ± 20%      +0.5        0.97 ± 43%  perf-profile.children.cycles-pp.__do_sys_newfstatat
> >       0.47 ± 20%      +0.5        0.98 ± 44%  perf-profile.children.cycles-pp.__x64_sys_newfstatat
> >       0.29 ± 37%      +0.5        0.81 ± 32%  perf-profile.children.cycles-pp.io_serial_in
> >       0.53 ± 25%      +0.5        1.06 ± 49%  perf-profile.children.cycles-pp.path_openat
> >       0.55 ± 24%      +0.5        1.09 ± 50%  perf-profile.children.cycles-pp.do_filp_open
> >       0.35 ±  2%      +0.5        0.90 ± 60%  perf-profile.children.cycles-pp.uart_console_write
> >       0.35 ±  4%      +0.6        0.91 ± 60%  perf-profile.children.cycles-pp.console_unlock
> >       0.35 ±  4%      +0.6        0.91 ± 60%  perf-profile.children.cycles-pp.univ8250_console_write
> >       0.35 ±  4%      +0.6        0.91 ± 60%  perf-profile.children.cycles-pp.serial8250_console_write
> >       0.82 ± 23%      +0.6        1.42 ± 31%  perf-profile.children.cycles-pp.__hrtimer_run_queues
> >       0.47 ± 28%      +0.6        1.10 ± 39%  perf-profile.children.cycles-pp.irq_work_interrupt
> >       0.47 ± 28%      +0.6        1.10 ± 39%  perf-profile.children.cycles-pp.smp_irq_work_interrupt
> >       0.47 ± 28%      +0.6        1.10 ± 39%  perf-profile.children.cycles-pp.irq_work_run
> >       0.47 ± 28%      +0.6        1.10 ± 39%  perf-profile.children.cycles-pp.perf_duration_warn
> >       0.47 ± 28%      +0.6        1.10 ± 39%  perf-profile.children.cycles-pp.printk
> >       0.47 ± 28%      +0.6        1.10 ± 39%  perf-profile.children.cycles-pp.vprintk_func
> >       0.47 ± 28%      +0.6        1.10 ± 39%  perf-profile.children.cycles-pp.vprintk_default
> >       0.47 ± 28%      +0.6        1.11 ± 39%  perf-profile.children.cycles-pp.irq_work_run_list
> >       0.49 ± 31%      +0.6        1.13 ± 39%  perf-profile.children.cycles-pp.vprintk_emit
> >       0.54 ± 19%      +0.6        1.17 ± 38%  perf-profile.children.cycles-pp.pick_next_task_fair
> >       0.32 ±  7%      +0.7        1.02 ± 56%  perf-profile.children.cycles-pp.poll_idle
> >       0.60 ± 15%      +0.7        1.31 ± 29%  perf-profile.children.cycles-pp.menu_select
> >       0.65 ± 27%      +0.7        1.36 ± 45%  perf-profile.children.cycles-pp.do_sys_open
> >       0.62 ± 15%      +0.7        1.36 ± 31%  perf-profile.children.cycles-pp.cpuidle_select
> >       0.66 ± 26%      +0.7        1.39 ± 44%  perf-profile.children.cycles-pp.__x64_sys_openat
> >       1.11 ± 22%      +0.9        2.03 ± 31%  perf-profile.children.cycles-pp.hrtimer_interrupt
> >       0.89 ± 24%      +0.9        1.81 ± 42%  perf-profile.children.cycles-pp.futex_wait_queue_me
> >       1.16 ± 27%      +1.0        2.13 ± 36%  perf-profile.children.cycles-pp.schedule
> >       0.97 ± 23%      +1.0        1.97 ± 42%  perf-profile.children.cycles-pp.futex_wait
> >       1.33 ± 25%      +1.2        2.55 ± 39%  perf-profile.children.cycles-pp.__schedule
> >       1.84 ± 26%      +1.6        3.42 ± 36%  perf-profile.children.cycles-pp.smp_apic_timer_interrupt
> >       1.76 ± 22%      +1.6        3.41 ± 40%  perf-profile.children.cycles-pp.do_futex
> >       1.79 ± 22%      +1.7        3.49 ± 41%  perf-profile.children.cycles-pp.__x64_sys_futex
> >       2.23 ± 20%      +1.7        3.98 ± 37%  perf-profile.children.cycles-pp.apic_timer_interrupt
> >      17.73 ± 21%     +19.1       36.86 ± 25%  perf-profile.children.cycles-pp.intel_idle
> >      19.00 ± 21%     +21.1       40.13 ± 26%  perf-profile.children.cycles-pp.cpuidle_enter_state
> >      19.02 ± 21%     +21.2       40.19 ± 26%  perf-profile.children.cycles-pp.cpuidle_enter
> >      18.89 ± 20%     +21.2       40.12 ± 27%  perf-profile.children.cycles-pp.start_secondary
> >      20.14 ± 20%     +22.5       42.65 ± 27%  perf-profile.children.cycles-pp.cpu_startup_entry
> >      20.08 ± 20%     +22.5       42.59 ± 27%  perf-profile.children.cycles-pp.do_idle
> >      20.14 ± 20%     +22.5       42.66 ± 27%  perf-profile.children.cycles-pp.secondary_startup_64
> >      43.25 ±  9%     -18.6       24.63 ± 49%  perf-profile.self.cycles-pp.longest_match
> >      22.74 ± 11%     -10.8       11.97 ± 50%  perf-profile.self.cycles-pp.deflate_medium
> >       2.26 ± 14%      -1.2        1.11 ± 44%  perf-profile.self.cycles-pp.deflateSetDictionary
> >       0.51 ± 12%      -0.3        0.24 ± 57%  perf-profile.self.cycles-pp.fill_window
> >       0.07 ±  7%      +0.0        0.10 ± 24%  perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
> >       0.07 ±  7%      +0.1        0.12 ± 36%  perf-profile.self.cycles-pp.syscall_return_via_sysret
> >       0.08 ± 12%      +0.1        0.14 ± 15%  perf-profile.self.cycles-pp.__update_load_avg_se
> >       0.06            +0.1        0.13 ± 27%  perf-profile.self.cycles-pp._raw_spin_lock_irqsave
> >       0.08 ± 25%      +0.1        0.15 ± 37%  perf-profile.self.cycles-pp.__switch_to
> >       0.06 ± 16%      +0.1        0.13 ± 29%  perf-profile.self.cycles-pp.get_page_from_freelist
> >       0.00            +0.1        0.07 ± 29%  perf-profile.self.cycles-pp.__switch_to_asm
> >       0.05            +0.1        0.13 ± 57%  perf-profile.self.cycles-pp.switch_mm_irqs_off
> >       0.00            +0.1        0.08 ± 41%  perf-profile.self.cycles-pp.interrupt_entry
> >       0.00            +0.1        0.08 ± 61%  perf-profile.self.cycles-pp.run_timer_softirq
> >       0.07 ± 23%      +0.1        0.15 ± 43%  perf-profile.self.cycles-pp.__hrtimer_next_event_base
> >       0.03 ±100%      +0.1        0.12 ± 43%  perf-profile.self.cycles-pp.update_cfs_group
> >       0.08 ± 29%      +0.1        0.17 ± 38%  perf-profile.self.cycles-pp.queued_spin_lock_slowpath
> >       0.00            +0.1        0.09 ± 29%  perf-profile.self.cycles-pp.strncpy_from_user
> >       0.06 ± 16%      +0.1        0.15 ± 24%  perf-profile.self.cycles-pp.find_next_bit
> >       0.00            +0.1        0.10 ± 35%  perf-profile.self.cycles-pp.do_lookup_x
> >       0.00            +0.1        0.10 ± 13%  perf-profile.self.cycles-pp.kmem_cache_free
> >       0.06 ± 16%      +0.1        0.16 ± 30%  perf-profile.self.cycles-pp.do_idle
> >       0.03 ±100%      +0.1        0.13 ± 18%  perf-profile.self.cycles-pp.entry_SYSCALL_64
> >       0.03 ±100%      +0.1        0.14 ± 41%  perf-profile.self.cycles-pp.update_blocked_averages
> >       0.11 ± 18%      +0.1        0.22 ± 37%  perf-profile.self.cycles-pp.native_sched_clock
> >       0.07 ± 14%      +0.1        0.18 ± 46%  perf-profile.self.cycles-pp.lapic_next_deadline
> >       0.00            +0.1        0.12 ± 65%  perf-profile.self.cycles-pp.set_next_entity
> >       0.12 ± 28%      +0.1        0.27 ± 32%  perf-profile.self.cycles-pp.cpuidle_enter_state
> >       0.15 ±  3%      +0.2        0.32 ± 39%  perf-profile.self.cycles-pp.io_serial_out
> >       0.25 ±  4%      +0.2        0.48 ± 19%  perf-profile.self.cycles-pp.menu_select
> >       0.15 ± 22%      +0.3        0.42 ± 46%  perf-profile.self.cycles-pp.find_busiest_group
> >       0.29 ± 37%      +0.4        0.71 ± 42%  perf-profile.self.cycles-pp.io_serial_in
> >       0.32 ±  7%      +0.7        1.02 ± 56%  perf-profile.self.cycles-pp.poll_idle
> >      17.73 ± 21%     +19.1       36.79 ± 25%  perf-profile.self.cycles-pp.intel_idle
> >
> >
> >
> >                    phoronix-test-suite.compress-gzip.0.seconds
> >
> >   8 +-----------------------------------------------------------------------+
> >     |                       O   O    O   O                 O   O   O    O   |
> >   7 |-+ O  O   O   O    O              O     O    O   O                     |
> >   6 |-+      +                     +                    +                   |
> >     |   +    :   +   +             :       +    +   +   :                   |
> >   5 |-+ :    :   :   :            ::       :    :   :   :                   |
> >     |   ::  : :  :   ::           : :      :   ::   :  :                    |
> >   4 |:+: :  : : : : : :           : :     : :  : : : : :                    |
> >     |: : :  : : : : : :   +   +   : :  +  : :  : : : : :                    |
> >   3 |:+:  : : : : : :  :  :   :  :  :  :  : :  : : : : :                    |
> >   2 |:+:  : : : : : :  : : : : : :  : : : : : :  : : : :                    |
> >     |: :  : : : : : :  : : : : : :  : : : : : :  : : : :                    |
> >   1 |-:   ::   :   :   : : : : : :   :: ::   ::   :   :                     |
> >     | :    :   :   :    :   :   :    :   :   :    :   :                     |
> >   0 +-----------------------------------------------------------------------+
> >
> >
> > [*] bisect-good sample
> > [O] bisect-bad  sample
> >
> > ***************************************************************************************************
> > lkp-cfl-d1: 12 threads Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz with 8G memory
> >
> >
> > ***************************************************************************************************
> > lkp-skl-fpga01: 104 threads Skylake with 192G memory
> > =========================================================================================
> > compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
> >   gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-09-23.cgz/300s/8T/lkp-skl-fpga01/anon-cow-seq/vm-scalability/0x2000064
> >
> > commit:
> >   fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> >   0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> >          %stddev     %change         %stddev
> >              \          |                \
> >     413301            +3.1%     426103        vm-scalability.median
> >       0.04 ±  2%     -34.0%       0.03 ± 12%  vm-scalability.median_stddev
> >   43837589            +2.4%   44902458        vm-scalability.throughput
> >     181085           -18.7%     147221        vm-scalability.time.involuntary_context_switches
> >   12762365 ±  2%      +3.9%   13262025        vm-scalability.time.minor_page_faults
> >       7773            +2.9%       7997        vm-scalability.time.percent_of_cpu_this_job_got
> >      11449            +1.2%      11589        vm-scalability.time.system_time
> >      12024            +4.7%      12584        vm-scalability.time.user_time
> >     439194 ±  2%     +46.0%     641402 ±  2%  vm-scalability.time.voluntary_context_switches
> >  1.148e+10            +5.0%  1.206e+10        vm-scalability.workload
> >       0.00 ± 54%      +0.0        0.00 ± 17%  mpstat.cpu.all.iowait%
> >    4767597           +52.5%    7268430 ± 41%  numa-numastat.node1.local_node
> >    4781030           +52.3%    7280347 ± 41%  numa-numastat.node1.numa_hit
> >      24.75            -9.1%      22.50 ±  2%  vmstat.cpu.id
> >      37.50            +4.7%      39.25        vmstat.cpu.us
> >       6643 ±  3%     +15.1%       7647        vmstat.system.cs
> >   12220504           +33.4%   16298593 ±  4%  cpuidle.C1.time
> >     260215 ±  6%     +55.3%     404158 ±  3%  cpuidle.C1.usage
> >    4986034 ±  3%     +56.2%    7786811 ±  2%  cpuidle.POLL.time
> >     145941 ±  3%     +61.2%     235218 ±  2%  cpuidle.POLL.usage
> >       1990            +3.0%       2049        turbostat.Avg_MHz
> >     254633 ±  6%     +56.7%     398892 ±  4%  turbostat.C1
> >       0.04            +0.0        0.05        turbostat.C1%
> >     309.99            +1.5%     314.75        turbostat.RAMWatt
> >       1688 ± 11%     +17.4%       1983 ±  5%  slabinfo.UNIX.active_objs
> >       1688 ± 11%     +17.4%       1983 ±  5%  slabinfo.UNIX.num_objs
> >       2460 ±  3%     -15.8%       2072 ± 11%  slabinfo.dmaengine-unmap-16.active_objs
> >       2460 ±  3%     -15.8%       2072 ± 11%  slabinfo.dmaengine-unmap-16.num_objs
> >       2814 ±  9%     +14.6%       3225 ±  4%  slabinfo.sock_inode_cache.active_objs
> >       2814 ±  9%     +14.6%       3225 ±  4%  slabinfo.sock_inode_cache.num_objs
> >       0.67 ±  5%      +0.1        0.73 ±  3%  perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault
> >       0.68 ±  6%      +0.1        0.74 ±  2%  perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
> >       0.05            +0.0        0.07 ±  7%  perf-profile.children.cycles-pp.schedule
> >       0.06            +0.0        0.08 ±  6%  perf-profile.children.cycles-pp.__wake_up_common
> >       0.06 ±  7%      +0.0        0.08 ±  6%  perf-profile.children.cycles-pp.wake_up_page_bit
> >       0.23 ±  7%      +0.0        0.28 ±  5%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> >       0.00            +0.1        0.05        perf-profile.children.cycles-pp.drm_fb_helper_sys_imageblit
> >       0.00            +0.1        0.05        perf-profile.children.cycles-pp.sys_imageblit
> >      29026 ±  3%     -26.7%      21283 ± 44%  numa-vmstat.node0.nr_inactive_anon
> >      30069 ±  3%     -20.5%      23905 ± 26%  numa-vmstat.node0.nr_shmem
> >      12120 ±  2%     -15.5%      10241 ± 12%  numa-vmstat.node0.nr_slab_reclaimable
> >      29026 ±  3%     -26.7%      21283 ± 44%  numa-vmstat.node0.nr_zone_inactive_anon
> >    4010893           +16.1%    4655889 ±  9%  numa-vmstat.node1.nr_active_anon
> >    3982581           +16.3%    4632344 ±  9%  numa-vmstat.node1.nr_anon_pages
> >       6861           +16.1%       7964 ±  8%  numa-vmstat.node1.nr_anon_transparent_hugepages
> >       2317 ± 42%    +336.9%      10125 ± 93%  numa-vmstat.node1.nr_inactive_anon
> >       6596 ±  4%     +18.2%       7799 ± 14%  numa-vmstat.node1.nr_kernel_stack
> >       9629 ±  8%     +66.4%      16020 ± 41%  numa-vmstat.node1.nr_shmem
> >       7558 ±  3%     +26.5%       9561 ± 14%  numa-vmstat.node1.nr_slab_reclaimable
> >    4010227           +16.1%    4655056 ±  9%  numa-vmstat.node1.nr_zone_active_anon
> >       2317 ± 42%    +336.9%      10125 ± 93%  numa-vmstat.node1.nr_zone_inactive_anon
> >    2859663 ±  2%     +46.2%    4179500 ± 36%  numa-vmstat.node1.numa_hit
> >    2680260 ±  2%     +49.3%    4002218 ± 37%  numa-vmstat.node1.numa_local
> >     116661 ±  3%     -26.3%      86010 ± 44%  numa-meminfo.node0.Inactive
> >     116192 ±  3%     -26.7%      85146 ± 44%  numa-meminfo.node0.Inactive(anon)
> >      48486 ±  2%     -15.5%      40966 ± 12%  numa-meminfo.node0.KReclaimable
> >      48486 ±  2%     -15.5%      40966 ± 12%  numa-meminfo.node0.SReclaimable
> >     120367 ±  3%     -20.5%      95642 ± 26%  numa-meminfo.node0.Shmem
> >   16210528           +15.2%   18673368 ±  6%  numa-meminfo.node1.Active
> >   16210394           +15.2%   18673287 ±  6%  numa-meminfo.node1.Active(anon)
> >   14170064           +15.6%   16379835 ±  7%  numa-meminfo.node1.AnonHugePages
> >   16113351           +15.3%   18577254 ±  7%  numa-meminfo.node1.AnonPages
> >      10534 ± 33%    +293.8%      41480 ± 92%  numa-meminfo.node1.Inactive
> >       9262 ± 42%    +338.2%      40589 ± 93%  numa-meminfo.node1.Inactive(anon)
> >      30235 ±  3%     +26.5%      38242 ± 14%  numa-meminfo.node1.KReclaimable
> >       6594 ±  4%     +18.3%       7802 ± 14%  numa-meminfo.node1.KernelStack
> >   17083646           +15.1%   19656922 ±  7%  numa-meminfo.node1.MemUsed
> >      30235 ±  3%     +26.5%      38242 ± 14%  numa-meminfo.node1.SReclaimable
> >      38540 ±  8%     +66.4%      64117 ± 42%  numa-meminfo.node1.Shmem
> >     106342           +19.8%     127451 ± 11%  numa-meminfo.node1.Slab
> >    9479688            +4.5%    9905902        proc-vmstat.nr_active_anon
> >    9434298            +4.5%    9856978        proc-vmstat.nr_anon_pages
> >      16194            +4.3%      16895        proc-vmstat.nr_anon_transparent_hugepages
> >     276.75            +3.6%     286.75        proc-vmstat.nr_dirtied
> >    3888633            -1.1%    3845882        proc-vmstat.nr_dirty_background_threshold
> >    7786774            -1.1%    7701168        proc-vmstat.nr_dirty_threshold
> >   39168820            -1.1%   38741444        proc-vmstat.nr_free_pages
> >      50391            +1.0%      50904        proc-vmstat.nr_slab_unreclaimable
> >     257.50            +3.6%     266.75        proc-vmstat.nr_written
> >    9479678            +4.5%    9905895        proc-vmstat.nr_zone_active_anon
> >    1501517            -5.9%    1412958        proc-vmstat.numa_hint_faults
> >    1075936           -13.1%     934706        proc-vmstat.numa_hint_faults_local
> >   17306395            +4.8%   18141722        proc-vmstat.numa_hit
> >    5211079            +4.2%    5427541        proc-vmstat.numa_huge_pte_updates
> >   17272620            +4.8%   18107691        proc-vmstat.numa_local
> >      33774            +0.8%      34031        proc-vmstat.numa_other
> >     690793 ±  3%     -13.7%     596166 ±  2%  proc-vmstat.numa_pages_migrated
> >  2.669e+09            +4.2%   2.78e+09        proc-vmstat.numa_pte_updates
> >  2.755e+09            +5.6%  2.909e+09        proc-vmstat.pgalloc_normal
> >   13573227 ±  2%      +3.6%   14060842        proc-vmstat.pgfault
> >  2.752e+09            +5.6%  2.906e+09        proc-vmstat.pgfree
> >  1.723e+08 ±  2%     +14.3%   1.97e+08 ±  8%  proc-vmstat.pgmigrate_fail
> >     690793 ±  3%     -13.7%     596166 ±  2%  proc-vmstat.pgmigrate_success
> >    5015265            +5.0%    5266730        proc-vmstat.thp_deferred_split_page
> >    5019661            +5.0%    5271482        proc-vmstat.thp_fault_alloc
> >      18284 ± 62%     -79.9%       3681 ±172%  sched_debug.cfs_rq:/.MIN_vruntime.avg
> >    1901618 ± 62%     -89.9%     192494 ±172%  sched_debug.cfs_rq:/.MIN_vruntime.max
> >     185571 ± 62%     -85.8%      26313 ±172%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
> >      15241 ±  6%     -36.6%       9655 ±  6%  sched_debug.cfs_rq:/.exec_clock.stddev
> >      18284 ± 62%     -79.9%       3681 ±172%  sched_debug.cfs_rq:/.max_vruntime.avg
> >    1901618 ± 62%     -89.9%     192494 ±172%  sched_debug.cfs_rq:/.max_vruntime.max
> >     185571 ± 62%     -85.8%      26313 ±172%  sched_debug.cfs_rq:/.max_vruntime.stddev
> >     898812 ±  7%     -31.2%     618552 ±  5%  sched_debug.cfs_rq:/.min_vruntime.stddev
> >      10.30 ± 12%     +34.5%      13.86 ±  6%  sched_debug.cfs_rq:/.nr_spread_over.avg
> >      34.75 ±  8%     +95.9%      68.08 ±  4%  sched_debug.cfs_rq:/.nr_spread_over.max
> >       9.12 ± 11%     +82.3%      16.62 ±  9%  sched_debug.cfs_rq:/.nr_spread_over.stddev
> >   -1470498           -31.9%   -1000709        sched_debug.cfs_rq:/.spread0.min
> >     899820 ±  7%     -31.2%     618970 ±  5%  sched_debug.cfs_rq:/.spread0.stddev
> >       1589 ±  9%     -19.2%       1284 ±  9%  sched_debug.cfs_rq:/.util_avg.max
> >       0.54 ± 39%   +7484.6%      41.08 ± 92%  sched_debug.cfs_rq:/.util_est_enqueued.min
> >     238.84 ±  8%     -33.2%     159.61 ± 26%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
> >      10787 ±  2%     +13.8%      12274        sched_debug.cpu.nr_switches.avg
> >      35242 ±  9%     +32.3%      46641 ± 25%  sched_debug.cpu.nr_switches.max
> >       9139 ±  3%     +16.4%      10636        sched_debug.cpu.sched_count.avg
> >      32025 ± 10%     +34.6%      43091 ± 27%  sched_debug.cpu.sched_count.max
> >       4016 ±  2%     +14.7%       4606 ±  5%  sched_debug.cpu.sched_count.min
> >       2960           +38.3%       4093        sched_debug.cpu.sched_goidle.avg
> >      11201 ± 24%     +75.8%      19691 ± 26%  sched_debug.cpu.sched_goidle.max
> >       1099 ±  6%     +56.9%       1725 ±  6%  sched_debug.cpu.sched_goidle.min
> >       1877 ± 10%     +32.5%       2487 ± 17%  sched_debug.cpu.sched_goidle.stddev
> >       4348 ±  3%     +19.3%       5188        sched_debug.cpu.ttwu_count.avg
> >      17832 ± 11%     +78.6%      31852 ± 29%  sched_debug.cpu.ttwu_count.max
> >       1699 ±  6%     +28.2%       2178 ±  7%  sched_debug.cpu.ttwu_count.min
> >       1357 ± 10%     -22.6%       1050 ±  4%  sched_debug.cpu.ttwu_local.avg
> >      11483 ±  5%     -25.0%       8614 ± 15%  sched_debug.cpu.ttwu_local.max
> >       1979 ± 12%     -36.8%       1251 ± 10%  sched_debug.cpu.ttwu_local.stddev
> >  3.941e+10            +5.0%  4.137e+10        perf-stat.i.branch-instructions
> >       0.02 ± 50%      -0.0        0.02 ±  5%  perf-stat.i.branch-miss-rate%
> >      67.94            -3.9       63.99        perf-stat.i.cache-miss-rate%
> >  8.329e+08            -1.9%   8.17e+08        perf-stat.i.cache-misses
> >  1.224e+09            +4.5%   1.28e+09        perf-stat.i.cache-references
> >       6650 ±  3%     +15.5%       7678        perf-stat.i.context-switches
> >       1.64            -1.8%       1.61        perf-stat.i.cpi
> >  2.037e+11            +2.8%  2.095e+11        perf-stat.i.cpu-cycles
> >     257.56            -4.0%     247.13        perf-stat.i.cpu-migrations
> >     244.94            +4.5%     255.91        perf-stat.i.cycles-between-cache-misses
> >    1189446 ±  2%      +3.2%    1227527        perf-stat.i.dTLB-load-misses
> >  2.669e+10            +4.7%  2.794e+10        perf-stat.i.dTLB-loads
> >       0.00 ±  7%      -0.0        0.00        perf-stat.i.dTLB-store-miss-rate%
> >     337782            +4.5%     353044        perf-stat.i.dTLB-store-misses
> >  9.096e+09            +4.7%  9.526e+09        perf-stat.i.dTLB-stores
> >      39.50            +2.1       41.64        perf-stat.i.iTLB-load-miss-rate%
> >     296305 ±  2%      +9.0%     323020        perf-stat.i.iTLB-load-misses
> >  1.238e+11            +4.9%  1.299e+11        perf-stat.i.instructions
> >     428249 ±  2%      -4.4%     409553        perf-stat.i.instructions-per-iTLB-miss
> >       0.61            +1.6%       0.62        perf-stat.i.ipc
> >      44430            +3.8%      46121        perf-stat.i.minor-faults
> >      54.82            +3.9       58.73        perf-stat.i.node-load-miss-rate%
> >   68519419 ±  4%     -11.7%   60479057 ±  6%  perf-stat.i.node-load-misses
> >   49879161 ±  3%     -20.7%   39554915 ±  4%  perf-stat.i.node-loads
> >      44428            +3.8%      46119        perf-stat.i.page-faults
> >       0.02            -0.0        0.01 ±  5%  perf-stat.overall.branch-miss-rate%
> >      68.03            -4.2       63.83        perf-stat.overall.cache-miss-rate%
> >       1.65            -2.0%       1.61        perf-stat.overall.cpi
> >     244.61            +4.8%     256.41        perf-stat.overall.cycles-between-cache-misses
> >      30.21            +2.2       32.38        perf-stat.overall.iTLB-load-miss-rate%
> >     417920 ±  2%      -3.7%     402452        perf-stat.overall.instructions-per-iTLB-miss
> >       0.61            +2.1%       0.62        perf-stat.overall.ipc
> >      57.84            +2.6       60.44        perf-stat.overall.node-load-miss-rate%
> >  3.925e+10            +5.1%  4.124e+10        perf-stat.ps.branch-instructions
> >  8.295e+08            -1.8%  8.144e+08        perf-stat.ps.cache-misses
> >  1.219e+09            +4.6%  1.276e+09        perf-stat.ps.cache-references
> >       6625 ±  3%     +15.4%       7648        perf-stat.ps.context-switches
> >  2.029e+11            +2.9%  2.088e+11        perf-stat.ps.cpu-cycles
> >     256.82            -4.2%     246.09        perf-stat.ps.cpu-migrations
> >    1184763 ±  2%      +3.3%    1223366        perf-stat.ps.dTLB-load-misses
> >  2.658e+10            +4.8%  2.786e+10        perf-stat.ps.dTLB-loads
> >     336658            +4.5%     351710        perf-stat.ps.dTLB-store-misses
> >  9.059e+09            +4.8%  9.497e+09        perf-stat.ps.dTLB-stores
> >     295140 ±  2%      +9.0%     321824        perf-stat.ps.iTLB-load-misses
> >  1.233e+11            +5.0%  1.295e+11        perf-stat.ps.instructions
> >      44309            +3.7%      45933        perf-stat.ps.minor-faults
> >   68208972 ±  4%     -11.6%   60272675 ±  6%  perf-stat.ps.node-load-misses
> >   49689740 ±  3%     -20.7%   39401789 ±  4%  perf-stat.ps.node-loads
> >      44308            +3.7%      45932        perf-stat.ps.page-faults
> >  3.732e+13            +5.1%  3.922e+13        perf-stat.total.instructions
> >      14949 ±  2%     +14.5%      17124 ± 11%  softirqs.CPU0.SCHED
> >       9940           +37.8%      13700 ± 24%  softirqs.CPU1.SCHED
> >       9370 ±  2%     +28.2%      12014 ± 16%  softirqs.CPU10.SCHED
> >      17637 ±  2%     -16.5%      14733 ± 16%  softirqs.CPU101.SCHED
> >      17846 ±  3%     -17.4%      14745 ± 16%  softirqs.CPU103.SCHED
> >       9552           +24.7%      11916 ± 17%  softirqs.CPU11.SCHED
> >       9210 ±  5%     +27.9%      11784 ± 16%  softirqs.CPU12.SCHED
> >       9378 ±  3%     +27.7%      11974 ± 16%  softirqs.CPU13.SCHED
> >       9164 ±  2%     +29.4%      11856 ± 18%  softirqs.CPU14.SCHED
> >       9215           +21.2%      11170 ± 19%  softirqs.CPU15.SCHED
> >       9118 ±  2%     +29.1%      11772 ± 16%  softirqs.CPU16.SCHED
> >       9413           +29.2%      12165 ± 18%  softirqs.CPU17.SCHED
> >       9309 ±  2%     +29.9%      12097 ± 17%  softirqs.CPU18.SCHED
> >       9423           +26.1%      11880 ± 15%  softirqs.CPU19.SCHED
> >       9010 ±  7%     +37.8%      12420 ± 18%  softirqs.CPU2.SCHED
> >       9382 ±  3%     +27.0%      11916 ± 15%  softirqs.CPU20.SCHED
> >       9102 ±  4%     +30.0%      11830 ± 16%  softirqs.CPU21.SCHED
> >       9543 ±  3%     +23.4%      11780 ± 18%  softirqs.CPU22.SCHED
> >       8998 ±  5%     +29.2%      11630 ± 18%  softirqs.CPU24.SCHED
> >       9254 ±  2%     +23.9%      11462 ± 19%  softirqs.CPU25.SCHED
> >      18450 ±  4%     -16.9%      15341 ± 16%  softirqs.CPU26.SCHED
> >      17551 ±  4%     -14.8%      14956 ± 13%  softirqs.CPU27.SCHED
> >      17575 ±  4%     -14.6%      15010 ± 14%  softirqs.CPU28.SCHED
> >      17515 ±  5%     -14.2%      15021 ± 13%  softirqs.CPU29.SCHED
> >      17715 ±  2%     -16.1%      14856 ± 13%  softirqs.CPU30.SCHED
> >      17754 ±  4%     -16.1%      14904 ± 13%  softirqs.CPU31.SCHED
> >      17675 ±  2%     -17.0%      14679 ± 21%  softirqs.CPU32.SCHED
> >      17625 ±  2%     -16.0%      14813 ± 13%  softirqs.CPU34.SCHED
> >      17619 ±  2%     -14.7%      15024 ± 14%  softirqs.CPU35.SCHED
> >      17887 ±  3%     -17.0%      14841 ± 14%  softirqs.CPU36.SCHED
> >      17658 ±  3%     -16.3%      14771 ± 12%  softirqs.CPU38.SCHED
> >      17501 ±  2%     -15.3%      14816 ± 14%  softirqs.CPU39.SCHED
> >       9360 ±  2%     +25.4%      11740 ± 14%  softirqs.CPU4.SCHED
> >      17699 ±  4%     -16.2%      14827 ± 14%  softirqs.CPU42.SCHED
> >      17580 ±  3%     -16.5%      14679 ± 15%  softirqs.CPU43.SCHED
> >      17658 ±  3%     -17.1%      14644 ± 14%  softirqs.CPU44.SCHED
> >      17452 ±  4%     -14.0%      15001 ± 15%  softirqs.CPU46.SCHED
> >      17599 ±  4%     -17.4%      14544 ± 14%  softirqs.CPU47.SCHED
> >      17792 ±  3%     -16.5%      14864 ± 14%  softirqs.CPU48.SCHED
> >      17333 ±  2%     -16.7%      14445 ± 14%  softirqs.CPU49.SCHED
> >       9483           +32.3%      12547 ± 24%  softirqs.CPU5.SCHED
> >      17842 ±  3%     -15.9%      14997 ± 16%  softirqs.CPU51.SCHED
> >       9051 ±  2%     +23.3%      11160 ± 13%  softirqs.CPU52.SCHED
> >       9385 ±  3%     +25.2%      11752 ± 16%  softirqs.CPU53.SCHED
> >       9446 ±  6%     +24.9%      11798 ± 14%  softirqs.CPU54.SCHED
> >      10006 ±  6%     +22.4%      12249 ± 14%  softirqs.CPU55.SCHED
> >       9657           +22.0%      11780 ± 16%  softirqs.CPU57.SCHED
> >       9399           +27.5%      11980 ± 15%  softirqs.CPU58.SCHED
> >       9234 ±  3%     +27.7%      11795 ± 14%  softirqs.CPU59.SCHED
> >       9726 ±  6%     +24.0%      12062 ± 16%  softirqs.CPU6.SCHED
> >       9165 ±  2%     +23.7%      11342 ± 14%  softirqs.CPU60.SCHED
> >       9357 ±  2%     +25.8%      11774 ± 15%  softirqs.CPU61.SCHED
> >       9406 ±  3%     +25.2%      11780 ± 16%  softirqs.CPU62.SCHED
> >       9489           +23.2%      11688 ± 15%  softirqs.CPU63.SCHED
> >       9399 ±  2%     +23.5%      11604 ± 16%  softirqs.CPU65.SCHED
> >       8950 ±  2%     +31.6%      11774 ± 16%  softirqs.CPU66.SCHED
> >       9260           +21.7%      11267 ± 19%  softirqs.CPU67.SCHED
> >       9187           +27.1%      11672 ± 17%  softirqs.CPU68.SCHED
> >       9443 ±  2%     +25.5%      11847 ± 17%  softirqs.CPU69.SCHED
> >       9144 ±  3%     +28.0%      11706 ± 16%  softirqs.CPU7.SCHED
> >       9276 ±  2%     +28.0%      11871 ± 17%  softirqs.CPU70.SCHED
> >       9494           +21.4%      11526 ± 14%  softirqs.CPU71.SCHED
> >       9124 ±  3%     +27.8%      11657 ± 17%  softirqs.CPU72.SCHED
> >       9189 ±  3%     +25.9%      11568 ± 16%  softirqs.CPU73.SCHED
> >       9392 ±  2%     +23.7%      11619 ± 16%  softirqs.CPU74.SCHED
> >      17821 ±  3%     -14.7%      15197 ± 17%  softirqs.CPU78.SCHED
> >      17581 ±  2%     -15.7%      14827 ± 15%  softirqs.CPU79.SCHED
> >       9123           +28.2%      11695 ± 15%  softirqs.CPU8.SCHED
> >      17524 ±  2%     -16.7%      14601 ± 14%  softirqs.CPU80.SCHED
> >      17644 ±  3%     -16.2%      14782 ± 14%  softirqs.CPU81.SCHED
> >      17705 ±  3%     -18.6%      14414 ± 22%  softirqs.CPU84.SCHED
> >      17679 ±  2%     -14.1%      15185 ± 11%  softirqs.CPU85.SCHED
> >      17434 ±  3%     -15.5%      14724 ± 14%  softirqs.CPU86.SCHED
> >      17409 ±  2%     -15.0%      14794 ± 13%  softirqs.CPU87.SCHED
> >      17470 ±  3%     -15.7%      14730 ± 13%  softirqs.CPU88.SCHED
> >      17748 ±  4%     -17.1%      14721 ± 12%  softirqs.CPU89.SCHED
> >       9323           +28.0%      11929 ± 17%  softirqs.CPU9.SCHED
> >      17471 ±  2%     -16.9%      14525 ± 13%  softirqs.CPU90.SCHED
> >      17900 ±  3%     -17.0%      14850 ± 14%  softirqs.CPU94.SCHED
> >      17599 ±  4%     -17.4%      14544 ± 15%  softirqs.CPU95.SCHED
> >      17697 ±  4%     -17.7%      14569 ± 13%  softirqs.CPU96.SCHED
> >      17561 ±  3%     -15.1%      14901 ± 13%  softirqs.CPU97.SCHED
> >      17404 ±  3%     -16.1%      14601 ± 13%  softirqs.CPU98.SCHED
> >      17802 ±  3%     -19.4%      14344 ± 15%  softirqs.CPU99.SCHED
> >       1310 ± 10%     -17.0%       1088 ±  5%  interrupts.CPU1.RES:Rescheduling_interrupts
> >       3427           +13.3%       3883 ±  9%  interrupts.CPU10.CAL:Function_call_interrupts
> >     736.50 ± 20%     +34.4%     989.75 ± 17%  interrupts.CPU100.RES:Rescheduling_interrupts
> >       3421 ±  3%     +14.6%       3921 ±  9%  interrupts.CPU101.CAL:Function_call_interrupts
> >       4873 ±  8%     +16.2%       5662 ±  7%  interrupts.CPU101.NMI:Non-maskable_interrupts
> >       4873 ±  8%     +16.2%       5662 ±  7%  interrupts.CPU101.PMI:Performance_monitoring_interrupts
> >     629.50 ± 19%     +83.2%       1153 ± 46%  interrupts.CPU101.RES:Rescheduling_interrupts
> >     661.75 ± 14%     +25.7%     832.00 ± 13%  interrupts.CPU102.RES:Rescheduling_interrupts
> >       4695 ±  5%     +15.5%       5420 ±  9%  interrupts.CPU103.NMI:Non-maskable_interrupts
> >       4695 ±  5%     +15.5%       5420 ±  9%  interrupts.CPU103.PMI:Performance_monitoring_interrupts
> >       3460           +12.1%       3877 ±  9%  interrupts.CPU11.CAL:Function_call_interrupts
> >     691.50 ±  7%     +41.0%     975.00 ± 32%  interrupts.CPU19.RES:Rescheduling_interrupts
> >       3413 ±  2%     +13.4%       3870 ± 10%  interrupts.CPU20.CAL:Function_call_interrupts
> >       3413 ±  2%     +13.4%       3871 ± 10%  interrupts.CPU22.CAL:Function_call_interrupts
> >     863.00 ± 36%     +45.3%       1254 ± 24%  interrupts.CPU23.RES:Rescheduling_interrupts
> >     659.75 ± 12%     +83.4%       1209 ± 20%  interrupts.CPU26.RES:Rescheduling_interrupts
> >     615.00 ± 10%     +87.8%       1155 ± 14%  interrupts.CPU27.RES:Rescheduling_interrupts
> >     663.75 ±  5%     +67.9%       1114 ±  7%  interrupts.CPU28.RES:Rescheduling_interrupts
> >       3421 ±  4%     +13.4%       3879 ±  9%  interrupts.CPU29.CAL:Function_call_interrupts
> >     805.25 ± 16%     +33.0%       1071 ± 15%  interrupts.CPU29.RES:Rescheduling_interrupts
> >       3482 ±  3%     +11.0%       3864 ±  8%  interrupts.CPU3.CAL:Function_call_interrupts
> >     819.75 ± 19%     +48.4%       1216 ± 12%  interrupts.CPU30.RES:Rescheduling_interrupts
> >     777.25 ±  8%     +31.6%       1023 ±  6%  interrupts.CPU31.RES:Rescheduling_interrupts
> >     844.50 ± 25%     +41.7%       1196 ± 20%  interrupts.CPU32.RES:Rescheduling_interrupts
> >     722.75 ± 14%     +94.2%       1403 ± 26%  interrupts.CPU33.RES:Rescheduling_interrupts
> >       3944 ± 25%     +36.8%       5394 ±  9%  interrupts.CPU34.NMI:Non-maskable_interrupts
> >       3944 ± 25%     +36.8%       5394 ±  9%  interrupts.CPU34.PMI:Performance_monitoring_interrupts
> >     781.75 ±  9%     +45.3%       1136 ± 27%  interrupts.CPU34.RES:Rescheduling_interrupts
> >     735.50 ±  9%     +33.3%     980.75 ±  4%  interrupts.CPU35.RES:Rescheduling_interrupts
> >     691.75 ± 10%     +41.6%     979.50 ± 13%  interrupts.CPU36.RES:Rescheduling_interrupts
> >     727.00 ± 16%     +47.7%       1074 ± 15%  interrupts.CPU37.RES:Rescheduling_interrupts
> >       4413 ±  7%     +24.9%       5511 ±  9%  interrupts.CPU38.NMI:Non-maskable_interrupts
> >       4413 ±  7%     +24.9%       5511 ±  9%  interrupts.CPU38.PMI:Performance_monitoring_interrupts
> >     708.75 ± 25%     +62.6%       1152 ± 22%  interrupts.CPU38.RES:Rescheduling_interrupts
> >     666.50 ±  7%     +57.8%       1052 ± 13%  interrupts.CPU39.RES:Rescheduling_interrupts
> >     765.75 ± 11%     +25.2%     958.75 ± 14%  interrupts.CPU4.RES:Rescheduling_interrupts
> >       3395 ±  2%     +15.1%       3908 ± 10%  interrupts.CPU40.CAL:Function_call_interrupts
> >     770.00 ± 16%     +45.3%       1119 ± 18%  interrupts.CPU40.RES:Rescheduling_interrupts
> >     740.50 ± 26%     +61.9%       1198 ± 19%  interrupts.CPU41.RES:Rescheduling_interrupts
> >       3459 ±  2%     +12.9%       3905 ± 11%  interrupts.CPU42.CAL:Function_call_interrupts
> >       4530 ±  5%     +22.8%       5564 ±  9%  interrupts.CPU42.NMI:Non-maskable_interrupts
> >       4530 ±  5%     +22.8%       5564 ±  9%  interrupts.CPU42.PMI:Performance_monitoring_interrupts
> >       3330 ± 25%     +60.0%       5328 ± 10%  interrupts.CPU44.NMI:Non-maskable_interrupts
> >       3330 ± 25%     +60.0%       5328 ± 10%  interrupts.CPU44.PMI:Performance_monitoring_interrupts
> >     686.25 ±  9%     +48.4%       1018 ± 10%  interrupts.CPU44.RES:Rescheduling_interrupts
> >     702.00 ± 15%     +38.6%     973.25 ±  5%  interrupts.CPU45.RES:Rescheduling_interrupts
> >       4742 ±  7%     +19.3%       5657 ±  8%  interrupts.CPU46.NMI:Non-maskable_interrupts
> >       4742 ±  7%     +19.3%       5657 ±  8%  interrupts.CPU46.PMI:Performance_monitoring_interrupts
> >     732.75 ±  6%     +51.9%       1113 ±  7%  interrupts.CPU46.RES:Rescheduling_interrupts
> >     775.50 ± 17%     +41.3%       1095 ±  6%  interrupts.CPU47.RES:Rescheduling_interrupts
> >     670.75 ±  5%     +60.7%       1078 ±  6%  interrupts.CPU48.RES:Rescheduling_interrupts
> >       4870 ±  8%     +16.5%       5676 ±  7%  interrupts.CPU49.NMI:Non-maskable_interrupts
> >       4870 ±  8%     +16.5%       5676 ±  7%  interrupts.CPU49.PMI:Performance_monitoring_interrupts
> >     694.75 ± 12%     +25.8%     874.00 ± 11%  interrupts.CPU49.RES:Rescheduling_interrupts
> >     686.00 ±  9%     +52.0%       1042 ± 20%  interrupts.CPU50.RES:Rescheduling_interrupts
> >       3361           +17.2%       3938 ±  9%  interrupts.CPU51.CAL:Function_call_interrupts
> >       4707 ±  6%     +16.0%       5463 ±  8%  interrupts.CPU51.NMI:Non-maskable_interrupts
> >       4707 ±  6%     +16.0%       5463 ±  8%  interrupts.CPU51.PMI:Performance_monitoring_interrupts
> >     638.75 ± 12%     +28.6%     821.25 ± 15%  interrupts.CPU54.RES:Rescheduling_interrupts
> >     677.50 ±  8%     +51.8%       1028 ± 29%  interrupts.CPU58.RES:Rescheduling_interrupts
> >       3465 ±  2%     +12.0%       3880 ±  9%  interrupts.CPU6.CAL:Function_call_interrupts
> >     641.25 ±  2%     +26.1%     808.75 ± 10%  interrupts.CPU60.RES:Rescheduling_interrupts
> >     599.75 ±  2%     +45.6%     873.50 ±  8%  interrupts.CPU62.RES:Rescheduling_interrupts
> >     661.50 ±  9%     +52.4%       1008 ± 27%  interrupts.CPU63.RES:Rescheduling_interrupts
> >     611.00 ± 12%     +31.1%     801.00 ± 13%  interrupts.CPU69.RES:Rescheduling_interrupts
> >       3507 ±  2%     +10.8%       3888 ±  9%  interrupts.CPU7.CAL:Function_call_interrupts
> >     664.00 ±  5%     +32.3%     878.50 ± 23%  interrupts.CPU70.RES:Rescheduling_interrupts
> >       5780 ±  9%     -38.8%       3540 ± 37%  interrupts.CPU73.NMI:Non-maskable_interrupts
> >       5780 ±  9%     -38.8%       3540 ± 37%  interrupts.CPU73.PMI:Performance_monitoring_interrupts
> >       5787 ±  9%     -26.7%       4243 ± 28%  interrupts.CPU76.NMI:Non-maskable_interrupts
> >       5787 ±  9%     -26.7%       4243 ± 28%  interrupts.CPU76.PMI:Performance_monitoring_interrupts
> >     751.50 ± 15%     +88.0%       1413 ± 37%  interrupts.CPU78.RES:Rescheduling_interrupts
> >     725.50 ± 12%     +82.9%       1327 ± 36%  interrupts.CPU79.RES:Rescheduling_interrupts
> >     714.00 ± 18%     +33.2%     951.00 ± 15%  interrupts.CPU80.RES:Rescheduling_interrupts
> >     706.25 ± 19%     +55.6%       1098 ± 27%  interrupts.CPU82.RES:Rescheduling_interrupts
> >       4524 ±  6%     +19.6%       5409 ±  8%  interrupts.CPU83.NMI:Non-maskable_interrupts
> >       4524 ±  6%     +19.6%       5409 ±  8%  interrupts.CPU83.PMI:Performance_monitoring_interrupts
> >     666.75 ± 15%     +37.3%     915.50 ±  4%  interrupts.CPU83.RES:Rescheduling_interrupts
> >     782.50 ± 26%     +57.6%       1233 ± 21%  interrupts.CPU84.RES:Rescheduling_interrupts
> >     622.75 ± 12%     +77.8%       1107 ± 17%  interrupts.CPU85.RES:Rescheduling_interrupts
> >       3465 ±  3%     +13.5%       3933 ±  9%  interrupts.CPU86.CAL:Function_call_interrupts
> >     714.75 ± 14%     +47.0%       1050 ± 10%  interrupts.CPU86.RES:Rescheduling_interrupts
> >       3519 ±  2%     +11.7%       3929 ±  9%  interrupts.CPU87.CAL:Function_call_interrupts
> >     582.75 ± 10%     +54.2%     898.75 ± 11%  interrupts.CPU87.RES:Rescheduling_interrupts
> >     713.00 ± 10%     +36.6%     974.25 ± 11%  interrupts.CPU88.RES:Rescheduling_interrupts
> >     690.50 ± 13%     +53.0%       1056 ± 13%  interrupts.CPU89.RES:Rescheduling_interrupts
> >       3477           +11.0%       3860 ±  8%  interrupts.CPU9.CAL:Function_call_interrupts
> >     684.50 ± 14%     +39.7%     956.25 ± 11%  interrupts.CPU90.RES:Rescheduling_interrupts
> >       3946 ± 21%     +39.8%       5516 ± 10%  interrupts.CPU91.NMI:Non-maskable_interrupts
> >       3946 ± 21%     +39.8%       5516 ± 10%  interrupts.CPU91.PMI:Performance_monitoring_interrupts
> >     649.00 ± 13%     +54.3%       1001 ±  6%  interrupts.CPU91.RES:Rescheduling_interrupts
> >     674.25 ± 21%     +39.5%     940.25 ± 11%  interrupts.CPU92.RES:Rescheduling_interrupts
> >       3971 ± 26%     +41.2%       5606 ±  8%  interrupts.CPU94.NMI:Non-maskable_interrupts
> >       3971 ± 26%     +41.2%       5606 ±  8%  interrupts.CPU94.PMI:Performance_monitoring_interrupts
> >       4129 ± 22%     +33.2%       5499 ±  9%  interrupts.CPU95.NMI:Non-maskable_interrupts
> >       4129 ± 22%     +33.2%       5499 ±  9%  interrupts.CPU95.PMI:Performance_monitoring_interrupts
> >     685.75 ± 14%     +38.0%     946.50 ±  9%  interrupts.CPU96.RES:Rescheduling_interrupts
> >       4630 ± 11%     +18.3%       5477 ±  8%  interrupts.CPU97.NMI:Non-maskable_interrupts
> >       4630 ± 11%     +18.3%       5477 ±  8%  interrupts.CPU97.PMI:Performance_monitoring_interrupts
> >       4835 ±  9%     +16.3%       5622 ±  9%  interrupts.CPU98.NMI:Non-maskable_interrupts
> >       4835 ±  9%     +16.3%       5622 ±  9%  interrupts.CPU98.PMI:Performance_monitoring_interrupts
> >     596.25 ± 11%     +81.8%       1083 ±  9%  interrupts.CPU98.RES:Rescheduling_interrupts
> >     674.75 ± 17%     +43.7%     969.50 ±  5%  interrupts.CPU99.RES:Rescheduling_interrupts
> >      78.25 ± 13%     +21.4%      95.00 ± 10%  interrupts.IWI:IRQ_work_interrupts
> >      85705 ±  6%     +26.0%     107990 ±  6%  interrupts.RES:Rescheduling_interrupts
> >
> >
> >
> > ***************************************************************************************************
> > lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/sc_pid_max/tbox_group/testcase/testtime/ucode:
> >   scheduler/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/4194304/lkp-bdw-ep6/stress-ng/1s/0xb000038
> >
> > commit:
> >   fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> >   0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> >          %stddev     %change         %stddev
> >              \          |                \
> >     887157 ±  4%     -23.1%     682080 ±  3%  stress-ng.fault.ops
> >     887743 ±  4%     -23.1%     682337 ±  3%  stress-ng.fault.ops_per_sec
> >    9537184 ± 10%     -21.2%    7518352 ± 14%  stress-ng.hrtimers.ops_per_sec
> >     360922 ± 13%     -21.1%     284734 ±  6%  stress-ng.kill.ops
> >     361115 ± 13%     -21.1%     284810 ±  6%  stress-ng.kill.ops_per_sec
> >   23260649           -26.9%   17006477 ± 24%  stress-ng.mq.ops
> >   23255884           -26.9%   17004540 ± 24%  stress-ng.mq.ops_per_sec
> >    3291588 ±  3%     +42.5%    4690316 ±  2%  stress-ng.schedpolicy.ops
> >    3327913 ±  3%     +41.5%    4709770 ±  2%  stress-ng.schedpolicy.ops_per_sec
> >      48.14            -2.2%      47.09        stress-ng.time.elapsed_time
> >      48.14            -2.2%      47.09        stress-ng.time.elapsed_time.max
> >       5480            +3.7%       5681        stress-ng.time.percent_of_cpu_this_job_got
> >       2249            +1.3%       2278        stress-ng.time.system_time
> >     902759 ±  4%     -22.6%     698616 ±  3%  proc-vmstat.unevictable_pgs_culled
> >   98767954 ±  7%     +16.4%   1.15e+08 ±  7%  cpuidle.C1.time
> >    1181676 ± 12%     -43.2%     671022 ± 37%  cpuidle.C6.usage
> >       2.21 ±  7%      +0.4        2.62 ± 10%  turbostat.C1%
> >    1176838 ± 12%     -43.2%     668921 ± 37%  turbostat.C6
> >    3961223 ±  4%     +12.8%    4469620 ±  5%  vmstat.memory.cache
> >     439.50 ±  3%     +14.7%     504.00 ±  9%  vmstat.procs.r
> >       0.42 ±  7%     -15.6%       0.35 ± 13%  sched_debug.cfs_rq:/.nr_running.stddev
> >       0.00 ±  4%     -18.1%       0.00 ± 16%  sched_debug.cpu.next_balance.stddev
> >       0.41 ±  7%     -15.1%       0.35 ± 13%  sched_debug.cpu.nr_running.stddev
> >       9367 ±  9%     -12.8%       8166 ±  2%  softirqs.CPU1.SCHED
> >      35143 ±  6%     -12.0%      30930 ±  2%  softirqs.CPU22.TIMER
> >      31997 ±  4%      -7.5%      29595 ±  2%  softirqs.CPU27.TIMER
> >       3.64 ±173%    -100.0%       0.00        iostat.sda.await.max
> >       3.64 ±173%    -100.0%       0.00        iostat.sda.r_await.max
> >       3.90 ±173%    -100.0%       0.00        iostat.sdc.await.max
> >       3.90 ±173%    -100.0%       0.00        iostat.sdc.r_await.max
> >   12991737 ± 10%     +61.5%   20979642 ±  8%  numa-numastat.node0.local_node
> >   13073590 ± 10%     +61.1%   21059448 ±  8%  numa-numastat.node0.numa_hit
> >   20903562 ±  3%     -32.2%   14164789 ±  3%  numa-numastat.node1.local_node
> >   20993788 ±  3%     -32.1%   14245636 ±  3%  numa-numastat.node1.numa_hit
> >      90229 ±  4%     -10.4%      80843 ±  9%  numa-numastat.node1.other_node
> >      50.75 ± 90%   +1732.0%     929.75 ±147%  interrupts.CPU23.IWI:IRQ_work_interrupts
> >      40391 ± 59%     -57.0%      17359 ± 11%  interrupts.CPU24.RES:Rescheduling_interrupts
> >      65670 ± 11%     -48.7%      33716 ± 54%  interrupts.CPU42.RES:Rescheduling_interrupts
> >      42201 ± 46%     -57.1%      18121 ± 35%  interrupts.CPU49.RES:Rescheduling_interrupts
> >     293869 ± 44%    +103.5%     598082 ± 23%  interrupts.CPU52.LOC:Local_timer_interrupts
> >      17367 ±  8%    +120.5%      38299 ± 44%  interrupts.CPU55.RES:Rescheduling_interrupts
> >  1.127e+08            +3.8%   1.17e+08 ±  2%  perf-stat.i.branch-misses
> >      11.10            +1.2       12.26 ±  6%  perf-stat.i.cache-miss-rate%
> >  4.833e+10 ±  3%      +4.7%   5.06e+10        perf-stat.i.instructions
> >   15009442 ±  4%     +14.3%   17150138 ±  3%  perf-stat.i.node-load-misses
> >      47.12 ±  5%      +3.2       50.37 ±  5%  perf-stat.i.node-store-miss-rate%
> >    6016833 ±  7%     +17.0%    7036803 ±  3%  perf-stat.i.node-store-misses
> >  1.044e+10 ±  2%      +4.0%  1.086e+10        perf-stat.ps.branch-instructions
> >  1.364e+10 ±  3%      +4.0%  1.418e+10        perf-stat.ps.dTLB-loads
> >  4.804e+10 ±  2%      +4.1%  5.003e+10        perf-stat.ps.instructions
> >   14785608 ±  5%     +11.3%   16451530 ±  3%  perf-stat.ps.node-load-misses
> >    5968712 ±  7%     +13.4%    6769847 ±  3%  perf-stat.ps.node-store-misses
> >      13588 ±  4%     +29.4%      17585 ±  9%  slabinfo.Acpi-State.active_objs
> >      13588 ±  4%     +29.4%      17585 ±  9%  slabinfo.Acpi-State.num_objs
> >      20859 ±  3%      -8.6%      19060 ±  4%  slabinfo.kmalloc-192.num_objs
> >     488.00 ± 25%     +41.0%     688.00 ±  5%  slabinfo.kmalloc-rcl-128.active_objs
> >     488.00 ± 25%     +41.0%     688.00 ±  5%  slabinfo.kmalloc-rcl-128.num_objs
> >      39660 ±  3%     +11.8%      44348 ±  2%  slabinfo.radix_tree_node.active_objs
> >      44284 ±  3%     +12.3%      49720        slabinfo.radix_tree_node.num_objs
> >       5811 ± 15%     +16.1%       6746 ± 14%  slabinfo.sighand_cache.active_objs
> >     402.00 ± 15%     +17.5%     472.50 ± 14%  slabinfo.sighand_cache.active_slabs
> >       6035 ± 15%     +17.5%       7091 ± 14%  slabinfo.sighand_cache.num_objs
> >     402.00 ± 15%     +17.5%     472.50 ± 14%  slabinfo.sighand_cache.num_slabs
> >      10282 ± 10%     +12.9%      11604 ±  9%  slabinfo.signal_cache.active_objs
> >      11350 ± 10%     +12.8%      12808 ±  9%  slabinfo.signal_cache.num_objs
> >     732920 ±  9%    +162.0%    1919987 ± 11%  numa-meminfo.node0.Active
> >     732868 ±  9%    +162.0%    1919814 ± 11%  numa-meminfo.node0.Active(anon)
> >     545019 ±  6%     +61.0%     877443 ± 17%  numa-meminfo.node0.AnonHugePages
> >     695015 ± 10%     +46.8%    1020150 ± 14%  numa-meminfo.node0.AnonPages
> >     638322 ±  4%    +448.2%    3499399 ±  5%  numa-meminfo.node0.FilePages
> >      81008 ± 14%   +2443.4%    2060329 ±  3%  numa-meminfo.node0.Inactive
> >      80866 ± 14%   +2447.4%    2060022 ±  3%  numa-meminfo.node0.Inactive(anon)
> >      86504 ± 10%   +2287.3%    2065084 ±  3%  numa-meminfo.node0.Mapped
> >    2010104          +160.8%    5242366 ±  5%  numa-meminfo.node0.MemUsed
> >      16453 ± 15%    +159.2%      42640        numa-meminfo.node0.PageTables
> >     112769 ± 13%   +2521.1%    2955821 ±  7%  numa-meminfo.node0.Shmem
> >    1839527 ±  4%     -60.2%     732645 ± 23%  numa-meminfo.node1.Active
> >    1839399 ±  4%     -60.2%     732637 ± 23%  numa-meminfo.node1.Active(anon)
> >     982237 ±  7%     -45.9%     531445 ± 27%  numa-meminfo.node1.AnonHugePages
> >    1149348 ±  8%     -41.2%     676067 ± 25%  numa-meminfo.node1.AnonPages
> >    3170649 ±  4%     -77.2%     723230 ±  7%  numa-meminfo.node1.FilePages
> >    1960718 ±  4%     -91.8%     160773 ± 31%  numa-meminfo.node1.Inactive
> >    1960515 ±  4%     -91.8%     160722 ± 31%  numa-meminfo.node1.Inactive(anon)
> >     118489 ± 11%     -20.2%      94603 ±  3%  numa-meminfo.node1.KReclaimable
> >    1966065 ±  4%     -91.5%     166789 ± 29%  numa-meminfo.node1.Mapped
> >    5034310 ±  3%     -60.2%    2003121 ±  9%  numa-meminfo.node1.MemUsed
> >      42684 ± 10%     -64.2%      15283 ± 21%  numa-meminfo.node1.PageTables
> >     118489 ± 11%     -20.2%      94603 ±  3%  numa-meminfo.node1.SReclaimable
> >    2644708 ±  5%     -91.9%     214268 ± 24%  numa-meminfo.node1.Shmem
> >     147513 ± 20%    +244.2%     507737 ±  7%  numa-vmstat.node0.nr_active_anon
> >     137512 ± 21%    +105.8%     282999 ±  3%  numa-vmstat.node0.nr_anon_pages
> >     210.25 ± 33%    +124.7%     472.50 ± 11%  numa-vmstat.node0.nr_anon_transparent_hugepages
> >     158008 ±  4%    +454.7%     876519 ±  6%  numa-vmstat.node0.nr_file_pages
> >      18416 ± 27%   +2711.4%     517747 ±  3%  numa-vmstat.node0.nr_inactive_anon
> >      26255 ± 22%     +34.3%      35251 ± 10%  numa-vmstat.node0.nr_kernel_stack
> >      19893 ± 23%   +2509.5%     519129 ±  3%  numa-vmstat.node0.nr_mapped
> >       3928 ± 22%    +179.4%      10976 ±  4%  numa-vmstat.node0.nr_page_table_pages
> >      26623 ± 18%   +2681.9%     740635 ±  7%  numa-vmstat.node0.nr_shmem
> >     147520 ± 20%    +244.3%     507885 ±  7%  numa-vmstat.node0.nr_zone_active_anon
> >      18415 ± 27%   +2711.5%     517739 ±  3%  numa-vmstat.node0.nr_zone_inactive_anon
> >    6937137 ±  8%     +55.9%   10814957 ±  7%  numa-vmstat.node0.numa_hit
> >    6860210 ±  8%     +56.6%   10739902 ±  7%  numa-vmstat.node0.numa_local
> >     425559 ± 13%     -52.9%     200300 ± 17%  numa-vmstat.node1.nr_active_anon
> >     786341 ±  4%     -76.6%     183664 ±  7%  numa-vmstat.node1.nr_file_pages
> >     483646 ±  4%     -90.8%      44606 ± 29%  numa-vmstat.node1.nr_inactive_anon
> >     485120 ±  4%     -90.5%      46130 ± 27%  numa-vmstat.node1.nr_mapped
> >      10471 ±  6%     -61.3%       4048 ± 18%  numa-vmstat.node1.nr_page_table_pages
> >     654852 ±  5%     -91.4%      56439 ± 25%  numa-vmstat.node1.nr_shmem
> >      29681 ± 11%     -20.3%      23669 ±  3%  numa-vmstat.node1.nr_slab_reclaimable
> >     425556 ± 13%     -52.9%     200359 ± 17%  numa-vmstat.node1.nr_zone_active_anon
> >     483649 ±  4%     -90.8%      44600 ± 29%  numa-vmstat.node1.nr_zone_inactive_anon
> >   10527487 ą  5%     -31.3%    7233899 ą  6%  numa-vmstat.node1.numa_hit
> >   10290625 ą  5%     -31.9%    7006050 ą  7%  numa-vmstat.node1.numa_local
> >
> >
> >
> > ***************************************************************************************************
> > lkp-csl-2sp5: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
> >   interrupt/gcc-7/performance/1HDD/x86_64-fedora-25/100%/debian-x86_64-2019-11-14.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002c
> >
> > commit:
> >   fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> >   0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> >          %stddev     %change         %stddev
> >              \          |                \
> >    6684836           -33.3%    4457559 ±  4%  stress-ng.schedpolicy.ops
> >    6684766           -33.3%    4457633 ±  4%  stress-ng.schedpolicy.ops_per_sec
> >   19978129           -28.8%   14231813 ± 16%  stress-ng.time.involuntary_context_switches
> >      82.49 ±  2%      -5.2%      78.23        stress-ng.time.user_time
> >     106716 ± 29%     +40.3%     149697 ±  2%  meminfo.max_used_kB
> >       4.07 ± 22%      +1.2        5.23 ±  5%  mpstat.cpu.all.irq%
> >    2721317 ± 10%     +66.5%    4531100 ± 22%  cpuidle.POLL.time
> >      71470 ± 18%     +41.1%     100822 ± 11%  cpuidle.POLL.usage
> >     841.00 ± 41%     -50.4%     417.25 ± 17%  numa-meminfo.node0.Dirty
> >       7096 ±  7%     +25.8%       8930 ±  9%  numa-meminfo.node1.KernelStack
> >      68752 ± 90%     -45.9%      37169 ±143%  sched_debug.cfs_rq:/.runnable_weight.stddev
> >     654.93 ± 11%     +19.3%     781.09 ±  2%  sched_debug.cpu.clock_task.stddev
> >     183.06 ± 83%     -76.9%      42.20 ± 17%  iostat.sda.await.max
> >     627.47 ±102%     -96.7%      20.52 ± 38%  iostat.sda.r_await.max
> >     183.08 ± 83%     -76.9%      42.24 ± 17%  iostat.sda.w_await.max
> >     209.00 ± 41%     -50.2%     104.00 ± 17%  numa-vmstat.node0.nr_dirty
> >     209.50 ± 41%     -50.4%     104.00 ± 17%  numa-vmstat.node0.nr_zone_write_pending
> >       6792 ±  8%     +34.4%       9131 ±  7%  numa-vmstat.node1.nr_kernel_stack
> >       3.57 ±173%      +9.8       13.38 ± 25%  perf-profile.calltrace.cycles-pp.proc_reg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >       3.57 ±173%      +9.8       13.38 ± 25%  perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.vfs_read.ksys_read.do_syscall_64
> >       3.57 ±173%      +9.8       13.39 ± 25%  perf-profile.children.cycles-pp.proc_reg_read
> >       3.57 ±173%     +12.6       16.16 ± 28%  perf-profile.children.cycles-pp.seq_read
> >       7948 ± 56%     -53.1%       3730 ±  5%  softirqs.CPU25.RCU
> >       6701 ± 33%     -46.7%       3570 ±  5%  softirqs.CPU34.RCU
> >       8232 ± 89%     -60.5%       3247        softirqs.CPU50.RCU
> >     326269 ± 16%     -27.4%     236940        softirqs.RCU
> >      68066            +7.9%      73438        proc-vmstat.nr_active_anon
> >      67504            +7.8%      72783        proc-vmstat.nr_anon_pages
> >       7198 ± 19%     +34.2%       9658 ±  2%  proc-vmstat.nr_page_table_pages
> >      40664 ±  8%     +10.1%      44766        proc-vmstat.nr_slab_unreclaimable
> >      68066            +7.9%      73438        proc-vmstat.nr_zone_active_anon
> >    1980169 ±  4%      -5.3%    1875307        proc-vmstat.numa_hit
> >    1960247 ±  4%      -5.4%    1855033        proc-vmstat.numa_local
> >     956008 ± 16%     -17.8%     786247        proc-vmstat.pgfault
> >      26598 ± 76%    +301.2%     106716 ± 45%  interrupts.CPU1.RES:Rescheduling_interrupts
> >     151212 ± 39%     -67.3%      49451 ± 57%  interrupts.CPU26.RES:Rescheduling_interrupts
> >    1013586 ±  2%     -10.9%     903528 ±  7%  interrupts.CPU27.LOC:Local_timer_interrupts
> >    1000980 ±  2%     -11.4%     886740 ±  8%  interrupts.CPU31.LOC:Local_timer_interrupts
> >    1021043 ±  3%      -9.9%     919686 ±  6%  interrupts.CPU32.LOC:Local_timer_interrupts
> >     125222 ± 51%     -86.0%      17483 ±106%  interrupts.CPU33.RES:Rescheduling_interrupts
> >    1003735 ±  2%     -11.1%     891833 ±  8%  interrupts.CPU34.LOC:Local_timer_interrupts
> >    1021799 ±  2%     -13.2%     886665 ±  8%  interrupts.CPU38.LOC:Local_timer_interrupts
> >     997788 ±  2%     -13.2%     866427 ± 10%  interrupts.CPU42.LOC:Local_timer_interrupts
> >    1001618           -11.6%     885490 ±  9%  interrupts.CPU45.LOC:Local_timer_interrupts
> >      22321 ± 58%    +550.3%     145153 ± 22%  interrupts.CPU9.RES:Rescheduling_interrupts
> >       3151 ± 53%     +67.3%       5273 ±  8%  slabinfo.avc_xperms_data.active_objs
> >       3151 ± 53%     +67.3%       5273 ±  8%  slabinfo.avc_xperms_data.num_objs
> >     348.75 ± 13%     +39.8%     487.50 ±  5%  slabinfo.biovec-128.active_objs
> >     348.75 ± 13%     +39.8%     487.50 ±  5%  slabinfo.biovec-128.num_objs
> >      13422 ± 97%    +121.1%      29678 ±  2%  slabinfo.btrfs_extent_map.active_objs
> >      14638 ± 98%    +117.8%      31888 ±  2%  slabinfo.btrfs_extent_map.num_objs
> >       3835 ± 18%     +40.9%       5404 ±  7%  slabinfo.dmaengine-unmap-16.active_objs
> >       3924 ± 18%     +39.9%       5490 ±  8%  slabinfo.dmaengine-unmap-16.num_objs
> >       3482 ± 96%    +119.1%       7631 ± 10%  slabinfo.khugepaged_mm_slot.active_objs
> >       3573 ± 96%    +119.4%       7839 ± 10%  slabinfo.khugepaged_mm_slot.num_objs
> >       8629 ± 52%     -49.2%       4384        slabinfo.kmalloc-rcl-64.active_objs
> >       8629 ± 52%     -49.2%       4384        slabinfo.kmalloc-rcl-64.num_objs
> >       2309 ± 57%     +82.1%       4206 ±  5%  slabinfo.mnt_cache.active_objs
> >       2336 ± 57%     +80.8%       4224 ±  5%  slabinfo.mnt_cache.num_objs
> >       5320 ± 48%     +69.1%       8999 ± 23%  slabinfo.pool_workqueue.active_objs
> >     165.75 ± 48%     +69.4%     280.75 ± 23%  slabinfo.pool_workqueue.active_slabs
> >       5320 ± 48%     +69.2%       8999 ± 23%  slabinfo.pool_workqueue.num_objs
> >     165.75 ± 48%     +69.4%     280.75 ± 23%  slabinfo.pool_workqueue.num_slabs
> >       3306 ± 15%     +27.0%       4199 ±  3%  slabinfo.task_group.active_objs
> >       3333 ± 16%     +30.1%       4336 ±  3%  slabinfo.task_group.num_objs
> >      14.74 ±  2%      +1.8       16.53 ±  2%  perf-stat.i.cache-miss-rate%
> >   22459727 ± 20%     +46.7%   32955572 ±  4%  perf-stat.i.cache-misses
> >      33575 ± 19%     +68.8%      56658 ± 13%  perf-stat.i.cpu-migrations
> >       0.03 ± 20%      +0.0        0.05 ±  8%  perf-stat.i.dTLB-load-miss-rate%
> >    6351703 ± 33%     +47.2%    9352532 ±  9%  perf-stat.i.dTLB-load-misses
> >       0.45 ±  3%      -3.0%       0.44        perf-stat.i.ipc
> >    4711345 ± 18%     +43.9%    6780944 ±  7%  perf-stat.i.node-load-misses
> >      82.51            +4.5       86.97        perf-stat.i.node-store-miss-rate%
> >    2861142 ± 31%     +60.8%    4601146 ±  5%  perf-stat.i.node-store-misses
> >       0.92 ±  6%      -0.1        0.85 ±  2%  perf-stat.overall.branch-miss-rate%
> >       0.02 ±  3%      +0.0        0.02 ±  4%  perf-stat.overall.dTLB-store-miss-rate%
> >     715.05 ±  5%      +9.9%     785.50 ±  4%  perf-stat.overall.instructions-per-iTLB-miss
> >       0.44 ±  2%      -5.4%       0.42 ±  2%  perf-stat.overall.ipc
> >      79.67            +2.1       81.80 ±  2%  perf-stat.overall.node-store-miss-rate%
> >   22237897 ± 19%     +46.4%   32560557 ±  5%  perf-stat.ps.cache-misses
> >      32491 ± 18%     +70.5%      55390 ± 13%  perf-stat.ps.cpu-migrations
> >    6071108 ± 31%     +45.0%    8804767 ±  9%  perf-stat.ps.dTLB-load-misses
> >       1866 ± 98%     -91.9%     150.48 ±  2%  perf-stat.ps.major-faults
> >    4593546 ± 16%     +42.4%    6541402 ±  7%  perf-stat.ps.node-load-misses
> >    2757176 ± 29%     +58.4%    4368169 ±  5%  perf-stat.ps.node-store-misses
> >  1.303e+12 ±  3%      -9.8%  1.175e+12 ±  3%  perf-stat.total.instructions
> >
> >
> >
> > ***************************************************************************************************
> > lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
> >   interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/lkp-bdw-ep6/stress-ng/30s/0xb000038
> >
> > commit:
> >   fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> >   0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> >        fail:runs  %reproduction    fail:runs
> >            |             |             |
> >           1:4          -25%            :4     dmesg.WARNING:at#for_ip_interrupt_entry/0x
> >          %stddev     %change         %stddev
> >              \          |                \
> >   98245522           +42.3%  1.398e+08        stress-ng.schedpolicy.ops
> >    3274860           +42.3%    4661027        stress-ng.schedpolicy.ops_per_sec
> >  3.473e+08            -9.7%  3.137e+08        stress-ng.sigq.ops
> >   11576537            -9.7%   10454846        stress-ng.sigq.ops_per_sec
> >   38097605 ±  6%     +10.3%   42011440 ±  4%  stress-ng.sigrt.ops
> >    1269646 ±  6%     +10.3%    1400024 ±  4%  stress-ng.sigrt.ops_per_sec
> >  3.628e+08 ±  4%     -21.5%  2.848e+08 ± 10%  stress-ng.time.involuntary_context_switches
> >       7040            +2.9%       7245        stress-ng.time.percent_of_cpu_this_job_got
> >      15.09 ±  3%     -13.4%      13.07 ±  5%  iostat.cpu.idle
> >      14.82 ±  3%      -2.0       12.80 ±  5%  mpstat.cpu.all.idle%
> >  3.333e+08 ± 17%     +59.9%  5.331e+08 ± 22%  cpuidle.C1.time
> >    5985148 ± 23%    +112.5%   12719679 ± 20%  cpuidle.C1E.usage
> >      14.50 ±  3%     -12.1%      12.75 ±  6%  vmstat.cpu.id
> >    1113131 ±  2%     -10.5%     996285 ±  3%  vmstat.system.cs
> >       2269            +2.4%       2324        turbostat.Avg_MHz
> >       0.64 ± 17%      +0.4        1.02 ± 23%  turbostat.C1%
> >    5984799 ± 23%    +112.5%   12719086 ± 20%  turbostat.C1E
> >       4.17 ± 32%     -46.0%       2.25 ± 38%  turbostat.Pkg%pc2
> >     216.57            +2.1%     221.12        turbostat.PkgWatt
> >      13.33 ±  3%      +3.9%      13.84        turbostat.RAMWatt
> >      99920           +13.6%     113486 ± 15%  proc-vmstat.nr_active_anon
> >       5738            +1.2%       5806        proc-vmstat.nr_inactive_anon
> >      46788            +2.1%      47749        proc-vmstat.nr_slab_unreclaimable
> >      99920           +13.6%     113486 ± 15%  proc-vmstat.nr_zone_active_anon
> >       5738            +1.2%       5806        proc-vmstat.nr_zone_inactive_anon
> >       3150 ±  2%     +35.4%       4265 ± 33%  proc-vmstat.numa_huge_pte_updates
> >    1641223           +34.3%    2203844 ± 32%  proc-vmstat.numa_pte_updates
> >      13575 ± 18%     +62.1%      21999 ±  4%  slabinfo.ext4_extent_status.active_objs
> >      13954 ± 17%     +57.7%      21999 ±  4%  slabinfo.ext4_extent_status.num_objs
> >       2527 ±  4%      +9.8%       2774 ±  2%  slabinfo.khugepaged_mm_slot.active_objs
> >       2527 ±  4%      +9.8%       2774 ±  2%  slabinfo.khugepaged_mm_slot.num_objs
> >      57547 ±  8%     -15.3%      48743 ±  9%  slabinfo.kmalloc-rcl-64.active_objs
> >     898.75 ±  8%     -15.3%     761.00 ±  9%  slabinfo.kmalloc-rcl-64.active_slabs
> >      57547 ±  8%     -15.3%      48743 ±  9%  slabinfo.kmalloc-rcl-64.num_objs
> >     898.75 ±  8%     -15.3%     761.00 ±  9%  slabinfo.kmalloc-rcl-64.num_slabs
> >  1.014e+10            +1.7%  1.031e+10        perf-stat.i.branch-instructions
> >      13.37 ±  4%      +2.0       15.33 ±  3%  perf-stat.i.cache-miss-rate%
> >  1.965e+11            +2.6%  2.015e+11        perf-stat.i.cpu-cycles
> >   20057708 ±  4%     +13.9%   22841468 ±  4%  perf-stat.i.iTLB-loads
> >  4.973e+10            +1.4%  5.042e+10        perf-stat.i.instructions
> >       3272 ±  2%      +2.9%       3366        perf-stat.i.minor-faults
> >    4500892 ±  3%     +18.9%    5351518 ±  6%  perf-stat.i.node-store-misses
> >       3.91            +1.3%       3.96        perf-stat.overall.cpi
> >      69.62            -1.5       68.11        perf-stat.overall.iTLB-load-miss-rate%
> >  1.047e+10            +1.3%  1.061e+10        perf-stat.ps.branch-instructions
> >    1117454 ±  2%     -10.6%     999467 ±  3%  perf-stat.ps.context-switches
> >  1.986e+11            +2.4%  2.033e+11        perf-stat.ps.cpu-cycles
> >   19614413 ±  4%     +13.6%   22288555 ±  4%  perf-stat.ps.iTLB-loads
> >       3493            -1.1%       3453        perf-stat.ps.minor-faults
> >    4546636 ±  3%     +17.0%    5321658 ±  5%  perf-stat.ps.node-store-misses
> >       0.64 ±  3%      -0.2        0.44 ± 57%  perf-profile.calltrace.cycles-pp.common_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >       0.66 ±  3%      -0.1        0.58 ±  7%  perf-profile.children.cycles-pp.common_timer_get
> >       0.44 ±  4%      -0.1        0.39 ±  5%  perf-profile.children.cycles-pp.posix_ktime_get_ts
> >       0.39 ±  5%      -0.0        0.34 ±  6%  perf-profile.children.cycles-pp.ktime_get_ts64
> >       0.07 ± 17%      +0.0        0.10 ±  8%  perf-profile.children.cycles-pp.task_tick_fair
> >       0.08 ± 15%      +0.0        0.11 ±  7%  perf-profile.children.cycles-pp.scheduler_tick
> >       0.46 ±  5%      +0.1        0.54 ±  6%  perf-profile.children.cycles-pp.__might_sleep
> >       0.69 ±  8%      +0.2        0.85 ± 12%  perf-profile.children.cycles-pp.___might_sleep
> >       0.90 ±  5%      -0.2        0.73 ±  9%  perf-profile.self.cycles-pp.__might_fault
> >       0.40 ±  6%      -0.1        0.33 ±  9%  perf-profile.self.cycles-pp.do_timer_gettime
> >       0.50 ±  4%      -0.1        0.45 ±  7%  perf-profile.self.cycles-pp.put_itimerspec64
> >       0.32 ±  2%      -0.0        0.27 ±  9%  perf-profile.self.cycles-pp.update_curr_fair
> >       0.20 ±  6%      -0.0        0.18 ±  2%  perf-profile.self.cycles-pp.ktime_get_ts64
> >       0.08 ± 23%      +0.0        0.12 ±  8%  perf-profile.self.cycles-pp._raw_spin_trylock
> >       0.42 ±  5%      +0.1        0.50 ±  6%  perf-profile.self.cycles-pp.__might_sleep
> >       0.66 ±  9%      +0.2        0.82 ± 12%  perf-profile.self.cycles-pp.___might_sleep
> >      47297 ± 13%     +19.7%      56608 ±  5%  softirqs.CPU13.SCHED
> >      47070 ±  3%     +20.5%      56735 ±  7%  softirqs.CPU2.SCHED
> >      55443 ±  9%     -20.2%      44250 ±  2%  softirqs.CPU28.SCHED
> >      56633 ±  3%     -12.6%      49520 ±  7%  softirqs.CPU34.SCHED
> >      56599 ± 11%     -18.0%      46384 ±  2%  softirqs.CPU36.SCHED
> >      56909 ±  9%     -18.4%      46438 ±  6%  softirqs.CPU40.SCHED
> >      45062 ±  9%     +28.1%      57709 ±  9%  softirqs.CPU45.SCHED
> >      43959           +28.7%      56593 ±  9%  softirqs.CPU49.SCHED
> >      46235 ± 10%     +22.2%      56506 ± 11%  softirqs.CPU5.SCHED
> >      44779 ± 12%     +22.5%      54859 ± 11%  softirqs.CPU57.SCHED
> >      46739 ± 10%     +21.1%      56579 ±  8%  softirqs.CPU6.SCHED
> >      53129 ±  4%     -13.1%      46149 ±  8%  softirqs.CPU70.SCHED
> >      55822 ±  7%     -20.5%      44389 ±  8%  softirqs.CPU73.SCHED
> >      56011 ±  5%     -11.4%      49610 ±  7%  softirqs.CPU77.SCHED
> >      55263 ±  9%     -13.2%      47942 ± 12%  softirqs.CPU78.SCHED
> >      58792 ± 14%     -21.3%      46291 ±  9%  softirqs.CPU81.SCHED
> >      53341 ±  7%     -13.7%      46041 ± 10%  softirqs.CPU83.SCHED
> >      59096 ± 15%     -23.9%      44998 ±  6%  softirqs.CPU85.SCHED
> >      36647           -98.5%     543.00 ± 61%  numa-meminfo.node0.Active(file)
> >     620922 ±  4%     -10.4%     556566 ±  5%  numa-meminfo.node0.FilePages
> >      21243 ±  3%     -36.2%      13543 ± 41%  numa-meminfo.node0.Inactive
> >      20802 ±  3%     -35.3%      13455 ± 42%  numa-meminfo.node0.Inactive(anon)
> >      15374 ±  9%     -27.2%      11193 ±  8%  numa-meminfo.node0.KernelStack
> >      21573           -34.7%      14084 ± 14%  numa-meminfo.node0.Mapped
> >    1136795 ±  5%     -12.4%     995965 ±  6%  numa-meminfo.node0.MemUsed
> >      16420 ±  6%     -66.0%       5580 ± 18%  numa-meminfo.node0.PageTables
> >     108182 ±  2%     -18.5%      88150 ±  3%  numa-meminfo.node0.SUnreclaim
> >     166467 ±  2%     -15.8%     140184 ±  4%  numa-meminfo.node0.Slab
> >     181705 ± 36%     +63.8%     297623 ± 10%  numa-meminfo.node1.Active
> >     320.75 ± 27%  +11187.0%      36203        numa-meminfo.node1.Active(file)
> >       2208 ± 38%    +362.1%      10207 ± 54%  numa-meminfo.node1.Inactive
> >       2150 ± 39%    +356.0%       9804 ± 58%  numa-meminfo.node1.Inactive(anon)
> >      41819 ± 10%     +17.3%      49068 ±  6%  numa-meminfo.node1.KReclaimable
> >      11711 ±  5%     +47.2%      17238 ± 22%  numa-meminfo.node1.KernelStack
> >      10642           +68.3%      17911 ± 11%  numa-meminfo.node1.Mapped
> >     952520 ±  6%     +20.3%    1146337 ±  3%  numa-meminfo.node1.MemUsed
> >      12342 ± 15%     +92.4%      23741 ±  9%  numa-meminfo.node1.PageTables
> >      41819 ± 10%     +17.3%      49068 ±  6%  numa-meminfo.node1.SReclaimable
> >      80394 ±  3%     +27.1%     102206 ±  3%  numa-meminfo.node1.SUnreclaim
> >     122214 ±  3%     +23.8%     151275 ±  3%  numa-meminfo.node1.Slab
> >       9160           -98.5%     135.25 ± 61%  numa-vmstat.node0.nr_active_file
> >     155223 ±  4%     -10.4%     139122 ±  5%  numa-vmstat.node0.nr_file_pages
> >       5202 ±  3%     -35.4%       3362 ± 42%  numa-vmstat.node0.nr_inactive_anon
> >     109.50 ± 14%     -80.1%      21.75 ±160%  numa-vmstat.node0.nr_inactive_file
> >      14757 ±  3%     -34.4%       9676 ± 12%  numa-vmstat.node0.nr_kernel_stack
> >       5455           -34.9%       3549 ± 12%  numa-vmstat.node0.nr_mapped
> >       4069 ±  6%     -68.3%       1289 ± 24%  numa-vmstat.node0.nr_page_table_pages
> >      26943 ±  2%     -19.2%      21761 ±  3%  numa-vmstat.node0.nr_slab_unreclaimable
> >       2240 ±  6%     -97.8%      49.00 ± 69%  numa-vmstat.node0.nr_written
> >       9160           -98.5%     135.25 ± 61%  numa-vmstat.node0.nr_zone_active_file
> >       5202 ±  3%     -35.4%       3362 ± 42%  numa-vmstat.node0.nr_zone_inactive_anon
> >     109.50 ± 14%     -80.1%      21.75 ±160%  numa-vmstat.node0.nr_zone_inactive_file
> >      79.75 ± 28%  +11247.0%       9049        numa-vmstat.node1.nr_active_file
> >     542.25 ± 41%    +352.1%       2451 ± 58%  numa-vmstat.node1.nr_inactive_anon
> >      14.00 ±140%    +617.9%     100.50 ± 35%  numa-vmstat.node1.nr_inactive_file
> >      11182 ±  4%     +28.9%      14415 ±  4%  numa-vmstat.node1.nr_kernel_stack
> >       2728 ±  3%     +67.7%       4576 ±  9%  numa-vmstat.node1.nr_mapped
> >       3056 ± 15%     +88.2%       5754 ±  8%  numa-vmstat.node1.nr_page_table_pages
> >      10454 ± 10%     +17.3%      12262 ±  7%  numa-vmstat.node1.nr_slab_reclaimable
> >      20006 ±  3%     +25.0%      25016 ±  3%  numa-vmstat.node1.nr_slab_unreclaimable
> >      19.00 ± 52%  +11859.2%       2272 ±  2%  numa-vmstat.node1.nr_written
> >      79.75 ± 28%  +11247.0%       9049        numa-vmstat.node1.nr_zone_active_file
> >     542.25 ± 41%    +352.1%       2451 ± 58%  numa-vmstat.node1.nr_zone_inactive_anon
> >      14.00 ±140%    +617.9%     100.50 ± 35%  numa-vmstat.node1.nr_zone_inactive_file
> >     173580 ± 21%    +349.5%     780280 ±  7%  sched_debug.cfs_rq:/.MIN_vruntime.avg
> >    6891819 ± 37%    +109.1%   14412817 ±  9%  sched_debug.cfs_rq:/.MIN_vruntime.max
> >    1031500 ± 25%    +189.1%    2982452 ±  8%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
> >     149079           +13.6%     169354 ±  2%  sched_debug.cfs_rq:/.exec_clock.min
> >       8550 ±  3%     -59.7%       3442 ± 32%  sched_debug.cfs_rq:/.exec_clock.stddev
> >       4.95 ±  6%     -15.2%       4.20 ± 10%  sched_debug.cfs_rq:/.load_avg.min
> >     173580 ± 21%    +349.5%     780280 ±  7%  sched_debug.cfs_rq:/.max_vruntime.avg
> >    6891819 ± 37%    +109.1%   14412817 ±  9%  sched_debug.cfs_rq:/.max_vruntime.max
> >    1031500 ± 25%    +189.1%    2982452 ±  8%  sched_debug.cfs_rq:/.max_vruntime.stddev
> >   16144141           +27.9%   20645199 ±  6%  sched_debug.cfs_rq:/.min_vruntime.avg
> >   17660392           +27.7%   22546402 ±  4%  sched_debug.cfs_rq:/.min_vruntime.max
> >   13747718           +36.8%   18802595 ±  5%  sched_debug.cfs_rq:/.min_vruntime.min
> >       0.17 ± 11%     +35.0%       0.22 ± 15%  sched_debug.cfs_rq:/.nr_running.stddev
> >      10.64 ± 14%     -26.4%       7.83 ± 12%  sched_debug.cpu.clock.stddev
> >      10.64 ± 14%     -26.4%       7.83 ± 12%  sched_debug.cpu.clock_task.stddev
> >       7093 ± 42%     -65.9%       2420 ±120%  sched_debug.cpu.curr->pid.min
> >    2434979 ±  2%     -18.6%    1981697 ±  3%  sched_debug.cpu.nr_switches.avg
> >    3993189 ±  6%     -22.2%    3104832 ±  5%  sched_debug.cpu.nr_switches.max
> >    -145.03           -42.8%     -82.90        sched_debug.cpu.nr_uninterruptible.min
> >    2097122 ±  6%     +38.7%    2908923 ±  6%  sched_debug.cpu.sched_count.min
> >     809684 ± 13%     -30.5%     562929 ± 17%  sched_debug.cpu.sched_count.stddev
> >     307565 ±  4%     -15.1%     261231 ±  3%  sched_debug.cpu.ttwu_count.min
> >     207286 ±  6%     -16.4%     173387 ±  3%  sched_debug.cpu.ttwu_local.min
> >     125963 ± 23%     +53.1%     192849 ±  2%  sched_debug.cpu.ttwu_local.stddev
> >    2527246           +10.8%    2800959 ±  3%  sched_debug.cpu.yld_count.avg
> >    1294266 ±  4%     +53.7%    1989264 ±  2%  sched_debug.cpu.yld_count.min
> >     621332 ±  9%     -38.4%     382813 ± 22%  sched_debug.cpu.yld_count.stddev
> >     899.50 ± 28%     -48.2%     465.75 ± 42%  interrupts.37:IR-PCI-MSI.1572868-edge.eth0-TxRx-3
> >     372.50 ±  7%    +169.5%       1004 ± 40%  interrupts.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
> >       6201 ±  8%     +17.9%       7309 ±  3%  interrupts.CPU0.CAL:Function_call_interrupts
> >     653368 ± 47%    +159.4%    1695029 ± 17%  interrupts.CPU0.RES:Rescheduling_interrupts
> >       7104 ±  7%     +13.6%       8067        interrupts.CPU1.CAL:Function_call_interrupts
> >       2094 ± 59%     +89.1%       3962 ± 10%  interrupts.CPU10.TLB:TLB_shootdowns
> >       7309 ±  8%     +11.2%       8125        interrupts.CPU11.CAL:Function_call_interrupts
> >       2089 ± 62%     +86.2%       3890 ± 11%  interrupts.CPU13.TLB:TLB_shootdowns
> >       7068 ±  8%     +15.2%       8144 ±  2%  interrupts.CPU14.CAL:Function_call_interrupts
> >       7112 ±  7%     +13.6%       8079 ±  3%  interrupts.CPU15.CAL:Function_call_interrupts
> >       1950 ± 61%    +103.5%       3968 ± 11%  interrupts.CPU15.TLB:TLB_shootdowns
> >     899.50 ± 28%     -48.2%     465.75 ± 42%  interrupts.CPU16.37:IR-PCI-MSI.1572868-edge.eth0-TxRx-3
> >       2252 ± 47%     +62.6%       3664 ± 15%  interrupts.CPU16.TLB:TLB_shootdowns
> >       7111 ±  8%     +14.8%       8167 ±  3%  interrupts.CPU18.CAL:Function_call_interrupts
> >       1972 ± 60%     +96.3%       3872 ±  9%  interrupts.CPU18.TLB:TLB_shootdowns
> >     372.50 ±  7%    +169.5%       1004 ± 40%  interrupts.CPU19.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
> >       2942 ± 12%     -57.5%       1251 ± 22%  interrupts.CPU22.TLB:TLB_shootdowns
> >       7819           -12.2%       6861 ±  3%  interrupts.CPU23.CAL:Function_call_interrupts
> >       3327 ± 12%     -62.7%       1241 ± 29%  interrupts.CPU23.TLB:TLB_shootdowns
> >       7767 ±  3%     -14.0%       6683 ±  5%  interrupts.CPU24.CAL:Function_call_interrupts
> >       3185 ± 21%     -63.8%       1154 ± 14%  interrupts.CPU24.TLB:TLB_shootdowns
> >       7679 ±  4%     -11.3%       6812 ±  2%  interrupts.CPU25.CAL:Function_call_interrupts
> >       3004 ± 28%     -63.4%       1100 ±  7%  interrupts.CPU25.TLB:TLB_shootdowns
> >       3187 ± 17%     -61.3%       1232 ± 35%  interrupts.CPU26.TLB:TLB_shootdowns
> >       3193 ± 16%     -59.3%       1299 ± 34%  interrupts.CPU27.TLB:TLB_shootdowns
> >       3059 ± 21%     -58.0%       1285 ± 32%  interrupts.CPU28.TLB:TLB_shootdowns
> >       7798 ±  4%     -13.8%       6719 ±  7%  interrupts.CPU29.CAL:Function_call_interrupts
> >       3122 ± 20%     -62.3%       1178 ± 37%  interrupts.CPU29.TLB:TLB_shootdowns
> >       7727 ±  2%     -11.6%       6827 ±  5%  interrupts.CPU30.CAL:Function_call_interrupts
> >       3102 ± 18%     -59.4%       1259 ± 33%  interrupts.CPU30.TLB:TLB_shootdowns
> >       3269 ± 24%     -58.1%       1371 ± 48%  interrupts.CPU31.TLB:TLB_shootdowns
> >       7918 ±  3%     -14.5%       6771        interrupts.CPU32.CAL:Function_call_interrupts
> >       3324 ± 18%     -70.7%     973.50 ± 18%  interrupts.CPU32.TLB:TLB_shootdowns
> >       2817 ± 27%     -60.2%       1121 ± 26%  interrupts.CPU33.TLB:TLB_shootdowns
> >       7956 ±  3%     -11.8%       7018 ±  4%  interrupts.CPU34.CAL:Function_call_interrupts
> >       3426 ± 21%     -70.3%       1018 ± 29%  interrupts.CPU34.TLB:TLB_shootdowns
> >       3121 ± 17%     -70.3%     926.75 ± 22%  interrupts.CPU35.TLB:TLB_shootdowns
> >       7596 ±  4%     -10.6%       6793 ±  3%  interrupts.CPU36.CAL:Function_call_interrupts
> >       2900 ± 30%     -62.3%       1094 ± 34%  interrupts.CPU36.TLB:TLB_shootdowns
> >       7863           -13.1%       6833 ±  2%  interrupts.CPU37.CAL:Function_call_interrupts
> >       3259 ± 15%     -65.9%       1111 ± 20%  interrupts.CPU37.TLB:TLB_shootdowns
> >       3230 ± 26%     -64.0%       1163 ± 39%  interrupts.CPU38.TLB:TLB_shootdowns
> >       7728 ±  5%     -13.8%       6662 ±  7%  interrupts.CPU39.CAL:Function_call_interrupts
> >       2950 ± 29%     -61.6%       1133 ± 26%  interrupts.CPU39.TLB:TLB_shootdowns
> >       6864 ±  3%     +18.7%       8147        interrupts.CPU4.CAL:Function_call_interrupts
> >       1847 ± 59%    +118.7%       4039 ±  7%  interrupts.CPU4.TLB:TLB_shootdowns
> >       7951 ±  6%     -15.0%       6760 ±  2%  interrupts.CPU40.CAL:Function_call_interrupts
> >       3200 ± 30%     -72.3%     886.50 ± 39%  interrupts.CPU40.TLB:TLB_shootdowns
> >       7819 ±  6%     -11.3%       6933 ±  2%  interrupts.CPU41.CAL:Function_call_interrupts
> >       3149 ± 28%     -62.9%       1169 ± 24%  interrupts.CPU41.TLB:TLB_shootdowns
> >       7884 ±  4%     -11.0%       7019 ±  2%  interrupts.CPU42.CAL:Function_call_interrupts
> >       3248 ± 16%     -63.4%       1190 ± 23%  interrupts.CPU42.TLB:TLB_shootdowns
> >       7659 ±  5%     -12.7%       6690 ±  3%  interrupts.CPU43.CAL:Function_call_interrupts
> >     490732 ± 20%    +114.5%    1052606 ± 47%  interrupts.CPU43.RES:Rescheduling_interrupts
> >    1432688 ± 34%     -67.4%     467217 ± 43%  interrupts.CPU47.RES:Rescheduling_interrupts
> >       7122 ±  8%     +16.0%       8259 ±  3%  interrupts.CPU48.CAL:Function_call_interrupts
> >       1868 ± 65%    +118.4%       4079 ±  8%  interrupts.CPU48.TLB:TLB_shootdowns
> >       7165 ±  8%     +11.3%       7977 ±  5%  interrupts.CPU49.CAL:Function_call_interrupts
> >       1961 ± 59%     +98.4%       3891 ±  4%  interrupts.CPU49.TLB:TLB_shootdowns
> >     461807 ± 47%    +190.8%    1342990 ± 48%  interrupts.CPU5.RES:Rescheduling_interrupts
> >       7167 ±  7%     +15.4%       8273        interrupts.CPU50.CAL:Function_call_interrupts
> >       2027 ± 51%    +103.9%       4134 ±  8%  interrupts.CPU50.TLB:TLB_shootdowns
> >       7163 ±  9%     +16.3%       8328        interrupts.CPU51.CAL:Function_call_interrupts
> >     660073 ± 33%     +74.0%    1148640 ± 25%  interrupts.CPU51.RES:Rescheduling_interrupts
> >       2043 ± 64%     +95.8%       4000 ±  5%  interrupts.CPU51.TLB:TLB_shootdowns
> >       7428 ±  9%     +13.5%       8434 ±  2%  interrupts.CPU52.CAL:Function_call_interrupts
> >       2280 ± 61%     +85.8%       4236 ±  9%  interrupts.CPU52.TLB:TLB_shootdowns
> >       7144 ± 11%     +17.8%       8413        interrupts.CPU53.CAL:Function_call_interrupts
> >       1967 ± 67%    +104.7%       4026 ±  5%  interrupts.CPU53.TLB:TLB_shootdowns
> >       7264 ± 10%     +15.6%       8394 ±  4%  interrupts.CPU54.CAL:Function_call_interrupts
> >       7045 ± 11%     +18.7%       8365 ±  2%  interrupts.CPU56.CAL:Function_call_interrupts
> >       2109 ± 59%     +91.6%       4041 ± 10%  interrupts.CPU56.TLB:TLB_shootdowns
> >       7307 ±  9%     +15.3%       8428 ±  2%  interrupts.CPU57.CAL:Function_call_interrupts
> >       2078 ± 64%     +96.5%       4085 ±  6%  interrupts.CPU57.TLB:TLB_shootdowns
> >       6834 ± 12%     +19.8%       8190 ±  3%  interrupts.CPU58.CAL:Function_call_interrupts
> >     612496 ± 85%    +122.5%    1362815 ± 27%  interrupts.CPU58.RES:Rescheduling_interrupts
> >       1884 ± 69%    +112.0%       3995 ±  8%  interrupts.CPU58.TLB:TLB_shootdowns
> >       7185 ±  8%     +15.9%       8329        interrupts.CPU59.CAL:Function_call_interrupts
> >       1982 ± 58%    +101.1%       3986 ±  5%  interrupts.CPU59.TLB:TLB_shootdowns
> >       7051 ±  6%     +13.1%       7975        interrupts.CPU6.CAL:Function_call_interrupts
> >       1831 ± 49%    +102.1%       3701 ±  8%  interrupts.CPU6.TLB:TLB_shootdowns
> >       7356 ±  8%     +16.2%       8548        interrupts.CPU60.CAL:Function_call_interrupts
> >       2124 ± 57%     +92.8%       4096 ±  5%  interrupts.CPU60.TLB:TLB_shootdowns
> >       7243 ±  9%     +15.1%       8334        interrupts.CPU61.CAL:Function_call_interrupts
> >     572423 ± 71%    +110.0%    1201919 ± 40%  interrupts.CPU61.RES:Rescheduling_interrupts
> >       7295 ±  9%     +14.7%       8369        interrupts.CPU63.CAL:Function_call_interrupts
> >       2139 ± 57%     +85.7%       3971 ±  3%  interrupts.CPU63.TLB:TLB_shootdowns
> >       7964 ±  2%     -15.6%       6726 ±  5%  interrupts.CPU66.CAL:Function_call_interrupts
> >       3198 ± 21%     -65.0%       1119 ± 24%  interrupts.CPU66.TLB:TLB_shootdowns
> >       8103 ±  2%     -17.5%       6687 ±  9%  interrupts.CPU67.CAL:Function_call_interrupts
> >       3357 ± 18%     -62.9%       1244 ± 32%  interrupts.CPU67.TLB:TLB_shootdowns
> >       7772 ±  2%     -14.0%       6687 ±  8%  interrupts.CPU68.CAL:Function_call_interrupts
> >       2983 ± 17%     -59.2%       1217 ± 15%  interrupts.CPU68.TLB:TLB_shootdowns
> >       7986 ±  4%     -13.8%       6887 ±  4%  interrupts.CPU69.CAL:Function_call_interrupts
> >       3192 ± 24%     -65.0%       1117 ± 30%  interrupts.CPU69.TLB:TLB_shootdowns
> >       7070 ±  6%     +14.6%       8100 ±  2%  interrupts.CPU7.CAL:Function_call_interrupts
> >     697891 ± 32%     +54.4%    1077890 ± 18%  interrupts.CPU7.RES:Rescheduling_interrupts
> >       1998 ± 55%     +97.1%       3938 ± 10%  interrupts.CPU7.TLB:TLB_shootdowns
> >       8085           -13.4%       7002 ±  3%  interrupts.CPU70.CAL:Function_call_interrupts
> >    1064985 ± 35%     -62.5%     398986 ± 29%  interrupts.CPU70.RES:Rescheduling_interrupts
> >       3347 ± 12%     -61.7%       1280 ± 24%  interrupts.CPU70.TLB:TLB_shootdowns
> >       2916 ± 16%     -58.8%       1201 ± 39%  interrupts.CPU71.TLB:TLB_shootdowns
> >       3314 ± 19%     -61.3%       1281 ± 26%  interrupts.CPU72.TLB:TLB_shootdowns
> >       3119 ± 18%     -61.5%       1200 ± 39%  interrupts.CPU73.TLB:TLB_shootdowns
> >       7992 ±  4%     -12.6%       6984 ±  3%  interrupts.CPU74.CAL:Function_call_interrupts
> >       3187 ± 21%     -56.8%       1378 ± 40%  interrupts.CPU74.TLB:TLB_shootdowns
> >       7953 ±  4%     -12.0%       6999 ±  4%  interrupts.CPU75.CAL:Function_call_interrupts
> >       3072 ± 26%     -56.8%       1327 ± 34%  interrupts.CPU75.TLB:TLB_shootdowns
> >       8119 ±  5%     -12.4%       7109 ±  7%  interrupts.CPU76.CAL:Function_call_interrupts
> >       3418 ą 20%     -67.5%       1111 ą 31%  interrupts.CPU76.TLB:TLB_shootdowns
> >       7804 ą  5%     -11.4%       6916 ą  4%  interrupts.CPU77.CAL:Function_call_interrupts
> >       7976 ą  5%     -14.4%       6826 ą  3%  interrupts.CPU78.CAL:Function_call_interrupts
> >       3209 ą 27%     -71.8%     904.75 ą 28%  interrupts.CPU78.TLB:TLB_shootdowns
> >       8187 ą  4%     -14.6%       6991 ą  3%  interrupts.CPU79.CAL:Function_call_interrupts
> >       3458 ą 20%     -67.5%       1125 ą 36%  interrupts.CPU79.TLB:TLB_shootdowns
> >       7122 ą  7%     +14.2%       8136 ą  2%  interrupts.CPU8.CAL:Function_call_interrupts
> >       2096 ą 63%     +87.4%       3928 ą  8%  interrupts.CPU8.TLB:TLB_shootdowns
> >       8130 ą  5%     -17.2%       6728 ą  5%  interrupts.CPU81.CAL:Function_call_interrupts
> >       3253 ą 24%     -70.6%     955.00 ą 38%  interrupts.CPU81.TLB:TLB_shootdowns
> >       7940 ą  5%     -13.9%       6839 ą  5%  interrupts.CPU82.CAL:Function_call_interrupts
> >       2952 ą 26%     -66.3%     996.00 ą 51%  interrupts.CPU82.TLB:TLB_shootdowns
> >       7900 ą  6%     -13.4%       6844 ą  3%  interrupts.CPU83.CAL:Function_call_interrupts
> >       3012 ą 34%     -68.3%     956.00 ą 17%  interrupts.CPU83.TLB:TLB_shootdowns
> >       7952 ą  6%     -15.8%       6695 ą  2%  interrupts.CPU84.CAL:Function_call_interrupts
> >       3049 ą 31%     -75.5%     746.50 ą 27%  interrupts.CPU84.TLB:TLB_shootdowns
> >       8065 ą  6%     -15.7%       6798        interrupts.CPU85.CAL:Function_call_interrupts
> >       3222 ą 23%     -69.7%     976.00 ą 13%  interrupts.CPU85.TLB:TLB_shootdowns
> >       8049 ą  5%     -13.2%       6983 ą  4%  interrupts.CPU86.CAL:Function_call_interrupts
> >       3159 ą 19%     -61.9%       1202 ą 27%  interrupts.CPU86.TLB:TLB_shootdowns
> >       8154 ą  8%     -16.9%       6773 ą  3%  interrupts.CPU87.CAL:Function_call_interrupts
> >    1432962 ą 21%     -48.5%     737989 ą 30%  interrupts.CPU87.RES:Rescheduling_interrupts
> >       3186 ą 33%     -72.3%     881.75 ą 21%  interrupts.CPU87.TLB:TLB_shootdowns
> >
> >
> >
> > ***************************************************************************************************
> > lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
> >   interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/lkp-bdw-ep6/stress-ng/1s/0xb000038
> >
> > commit:
> >   fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> >   0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> >          %stddev     %change         %stddev
> >              \          |                \
> >    3345449           +35.1%    4518187 ±  5%  stress-ng.schedpolicy.ops
> >    3347036           +35.1%    4520740 ±  5%  stress-ng.schedpolicy.ops_per_sec
> >   11464910 ±  6%     -23.3%    8796455 ± 11%  stress-ng.sigq.ops
> >   11452565 ±  6%     -23.3%    8786844 ± 11%  stress-ng.sigq.ops_per_sec
> >     228736           +20.7%     276087 ± 20%  stress-ng.sleep.ops
> >     157479           +23.0%     193722 ± 21%  stress-ng.sleep.ops_per_sec
> >   14584704            -5.8%   13744640 ±  4%  stress-ng.timerfd.ops
> >   14546032            -5.7%   13718862 ±  4%  stress-ng.timerfd.ops_per_sec
> >      27.24 ±105%    +283.9%     104.58 ±109%  iostat.sdb.r_await.max
> >     122324 ± 35%     +63.9%     200505 ± 21%  meminfo.AnonHugePages
> >      47267 ± 26%    +155.2%     120638 ± 45%  numa-meminfo.node1.AnonHugePages
> >      22880 ±  6%      -9.9%      20605 ±  3%  softirqs.CPU57.TIMER
> >     636196 ± 24%     +38.5%     880847 ±  7%  cpuidle.C1.usage
> >   55936214 ± 20%     +63.9%   91684673 ± 18%  cpuidle.C1E.time
> >  1.175e+08 ± 22%    +101.8%  2.372e+08 ± 29%  cpuidle.C3.time
> >  4.242e+08 ±  6%     -39.1%  2.584e+08 ± 39%  cpuidle.C6.time
> >      59.50 ± 34%     +66.0%      98.75 ± 22%  proc-vmstat.nr_anon_transparent_hugepages
> >      25612 ± 10%     +13.8%      29146 ±  4%  proc-vmstat.nr_kernel_stack
> >    2783465 ±  9%     +14.5%    3187157 ±  9%  proc-vmstat.pgalloc_normal
> >       1743 ± 28%     +43.8%       2507 ± 23%  proc-vmstat.thp_deferred_split_page
> >       1765 ± 30%     +43.2%       2529 ± 22%  proc-vmstat.thp_fault_alloc
> >     811.00 ±  3%     -13.8%     699.00 ±  7%  slabinfo.kmem_cache_node.active_objs
> >     864.00 ±  3%     -13.0%     752.00 ±  7%  slabinfo.kmem_cache_node.num_objs
> >       8686 ±  7%     +13.6%       9869 ±  3%  slabinfo.pid.active_objs
> >       8690 ±  7%     +13.8%       9890 ±  3%  slabinfo.pid.num_objs
> >       9813 ±  6%     +15.7%      11352 ±  3%  slabinfo.task_delay_info.active_objs
> >       9813 ±  6%     +15.7%      11352 ±  3%  slabinfo.task_delay_info.num_objs
> >      79.22 ± 10%     -41.1%      46.68 ± 22%  sched_debug.cfs_rq:/.load_avg.avg
> >     242.49 ±  6%     -29.6%     170.70 ± 17%  sched_debug.cfs_rq:/.load_avg.stddev
> >      43.14 ± 29%     -67.1%      14.18 ± 66%  sched_debug.cfs_rq:/.removed.load_avg.avg
> >     201.73 ± 15%     -50.1%     100.68 ± 60%  sched_debug.cfs_rq:/.removed.load_avg.stddev
> >       1987 ± 28%     -67.3%     650.09 ± 66%  sched_debug.cfs_rq:/.removed.runnable_sum.avg
> >       9298 ± 15%     -50.3%       4616 ± 60%  sched_debug.cfs_rq:/.removed.runnable_sum.stddev
> >      18.17 ± 27%     -68.6%       5.70 ± 63%  sched_debug.cfs_rq:/.removed.util_avg.avg
> >      87.61 ± 13%     -52.6%      41.48 ± 59%  sched_debug.cfs_rq:/.removed.util_avg.stddev
> >     633327 ± 24%     +38.4%     876596 ±  7%  turbostat.C1
> >       2.75 ± 22%      +1.8        4.52 ± 17%  turbostat.C1E%
> >       5.76 ± 22%      +6.1       11.82 ± 30%  turbostat.C3%
> >      20.69 ±  5%      -8.1       12.63 ± 38%  turbostat.C6%
> >      15.62 ±  6%     +18.4%      18.50 ±  8%  turbostat.CPU%c1
> >       1.56 ± 16%    +208.5%       4.82 ± 38%  turbostat.CPU%c3
> >      12.81 ±  4%     -48.1%       6.65 ± 43%  turbostat.CPU%c6
> >       5.02 ±  8%     -34.6%       3.28 ± 14%  turbostat.Pkg%pc2
> >       0.85 ± 57%     -84.7%       0.13 ±173%  turbostat.Pkg%pc6
> >      88.25 ± 13%    +262.6%     320.00 ± 71%  interrupts.CPU10.TLB:TLB_shootdowns
> >     116.25 ± 36%    +151.6%     292.50 ± 68%  interrupts.CPU19.TLB:TLB_shootdowns
> >     109.25 ±  8%    +217.4%     346.75 ±106%  interrupts.CPU2.TLB:TLB_shootdowns
> >      15180 ±111%    +303.9%      61314 ± 32%  interrupts.CPU23.RES:Rescheduling_interrupts
> >     111.50 ± 26%    +210.3%     346.00 ± 79%  interrupts.CPU3.TLB:TLB_shootdowns
> >      86.50 ± 35%    +413.0%     443.75 ± 66%  interrupts.CPU33.TLB:TLB_shootdowns
> >     728.00 ±  8%     +29.6%     943.50 ± 16%  interrupts.CPU38.CAL:Function_call_interrupts
> >       1070 ± 72%     +84.9%       1979 ±  9%  interrupts.CPU54.76:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
> >      41429 ± 64%     -73.7%      10882 ± 73%  interrupts.CPU59.RES:Rescheduling_interrupts
> >      26330 ± 85%     -73.3%       7022 ± 86%  interrupts.CPU62.RES:Rescheduling_interrupts
> >     103.00 ± 22%    +181.3%     289.75 ± 92%  interrupts.CPU65.TLB:TLB_shootdowns
> >     100.00 ± 40%    +365.0%     465.00 ± 71%  interrupts.CPU70.TLB:TLB_shootdowns
> >     110.25 ± 18%    +308.4%     450.25 ± 71%  interrupts.CPU80.TLB:TLB_shootdowns
> >      93.50 ± 42%    +355.1%     425.50 ± 82%  interrupts.CPU84.TLB:TLB_shootdowns
> >     104.50 ± 18%    +289.7%     407.25 ± 68%  interrupts.CPU87.TLB:TLB_shootdowns
> >       1.76 ±  3%      -0.1        1.66 ±  4%  perf-stat.i.branch-miss-rate%
> >       8.08 ±  6%      +2.0       10.04        perf-stat.i.cache-miss-rate%
> >   18031213 ±  4%     +27.2%   22939937 ±  3%  perf-stat.i.cache-misses
> >  4.041e+08            -1.9%  3.965e+08        perf-stat.i.cache-references
> >      31764 ± 26%     -40.6%      18859 ± 10%  perf-stat.i.cycles-between-cache-misses
> >      66.18            -1.5       64.71        perf-stat.i.iTLB-load-miss-rate%
> >    4503482 ±  8%     +19.5%    5382698 ±  5%  perf-stat.i.node-load-misses
> >    3892859 ±  2%     +16.6%    4538750 ±  4%  perf-stat.i.node-store-misses
> >    1526815 ± 13%     +25.8%    1921178 ±  9%  perf-stat.i.node-stores
> >       4.72 ±  4%      +1.3        6.00 ±  3%  perf-stat.overall.cache-miss-rate%
> >       9120 ±  6%     -18.9%       7394 ±  2%  perf-stat.overall.cycles-between-cache-misses
> >   18237318 ±  4%     +25.4%   22866104 ±  3%  perf-stat.ps.cache-misses
> >    4392089 ±  8%     +18.1%    5189251 ±  5%  perf-stat.ps.node-load-misses
> >    1629766 ±  2%     +17.9%    1920947 ± 13%  perf-stat.ps.node-loads
> >    3694566 ±  2%     +16.1%    4288126 ±  4%  perf-stat.ps.node-store-misses
> >    1536866 ± 12%     +23.7%    1901141 ±  7%  perf-stat.ps.node-stores
> >      38.20 ± 18%     -13.2       24.96 ± 10%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
> >      38.20 ± 18%     -13.2       24.96 ± 10%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >       7.98 ± 67%      -7.2        0.73 ±173%  perf-profile.calltrace.cycles-pp.smp_call_function_single.event_function_call.perf_remove_from_context.perf_event_release_kernel.perf_release
> >       7.98 ± 67%      -7.2        0.73 ±173%  perf-profile.calltrace.cycles-pp.event_function_call.perf_remove_from_context.perf_event_release_kernel.perf_release.__fput
> >       7.98 ± 67%      -7.2        0.73 ±173%  perf-profile.calltrace.cycles-pp.perf_remove_from_context.perf_event_release_kernel.perf_release.__fput.task_work_run
> >      11.86 ± 41%      -6.8        5.07 ± 62%  perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >      11.86 ± 41%      -6.8        5.07 ± 62%  perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >      11.86 ± 41%      -6.8        5.07 ± 62%  perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >      11.86 ± 41%      -6.8        5.07 ± 62%  perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64
> >      11.86 ± 41%      -6.8        5.07 ± 62%  perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
> >      11.86 ± 41%      -6.8        5.07 ± 62%  perf-profile.calltrace.cycles-pp.task_work_run.do_exit.do_group_exit.get_signal.do_signal
> >       4.27 ± 66%      -3.5        0.73 ±173%  perf-profile.calltrace.cycles-pp.read
> >       4.05 ± 71%      -3.3        0.73 ±173%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
> >       4.05 ± 71%      -3.3        0.73 ±173%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
> >      13.30 ± 38%      -8.2        5.07 ± 62%  perf-profile.children.cycles-pp.task_work_run
> >      12.47 ± 46%      -7.4        5.07 ± 62%  perf-profile.children.cycles-pp.exit_to_usermode_loop
> >      12.47 ± 46%      -7.4        5.07 ± 62%  perf-profile.children.cycles-pp.__fput
> >       7.98 ± 67%      -7.2        0.73 ±173%  perf-profile.children.cycles-pp.perf_remove_from_context
> >      11.86 ± 41%      -6.8        5.07 ± 62%  perf-profile.children.cycles-pp.do_signal
> >      11.86 ± 41%      -6.8        5.07 ± 62%  perf-profile.children.cycles-pp.get_signal
> >       9.43 ± 21%      -4.7        4.72 ± 67%  perf-profile.children.cycles-pp.ksys_read
> >       9.43 ± 21%      -4.7        4.72 ± 67%  perf-profile.children.cycles-pp.vfs_read
> >       4.27 ± 66%      -3.5        0.73 ±173%  perf-profile.children.cycles-pp.read
> >       3.86 ±101%      -3.1        0.71 ±173%  perf-profile.children.cycles-pp._raw_spin_lock
> >       3.86 ±101%      -3.1        0.71 ±173%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
> >       3.86 ±101%      -3.1        0.71 ±173%  perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
> >
> >
> >
> > ***************************************************************************************************
> > lkp-csl-2sp5: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
> >   os/gcc-7/performance/1HDD/ext4/x86_64-rhel-7.6/100%/debian-x86_64-2019-09-23.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002b
> >
> > commit:
> >   fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> >   0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> >        fail:runs  %reproduction    fail:runs
> >            |             |             |
> >            :2           50%           1:8     dmesg.WARNING:at_ip_selinux_file_ioctl/0x
> >          %stddev     %change         %stddev
> >              \          |                \
> >     122451 ± 11%     -19.9%      98072 ± 15%  stress-ng.ioprio.ops
> >     116979 ± 11%     -20.7%      92815 ± 16%  stress-ng.ioprio.ops_per_sec
> >     274187 ± 21%     -26.7%     201013 ± 11%  stress-ng.kill.ops
> >     274219 ± 21%     -26.7%     201040 ± 11%  stress-ng.kill.ops_per_sec
> >    3973765           -10.1%    3570462 ±  5%  stress-ng.lockf.ops
> >    3972581           -10.2%    3568935 ±  5%  stress-ng.lockf.ops_per_sec
> >      10719 ±  8%     -39.9%       6442 ± 22%  stress-ng.procfs.ops
> >       9683 ±  3%     -39.3%       5878 ± 22%  stress-ng.procfs.ops_per_sec
> >    6562721           -35.1%    4260609 ±  8%  stress-ng.schedpolicy.ops
> >    6564233           -35.1%    4261479 ±  8%  stress-ng.schedpolicy.ops_per_sec
> >    1070988           +21.4%    1299977 ±  7%  stress-ng.sigrt.ops
> >    1061773           +21.2%    1286618 ±  7%  stress-ng.sigrt.ops_per_sec
> >    1155684 ±  5%     -14.8%     984531 ± 16%  stress-ng.symlink.ops
> >     991624 ±  4%     -23.8%     755147 ± 41%  stress-ng.symlink.ops_per_sec
> >       6925           -12.1%       6086 ± 27%  stress-ng.time.percent_of_cpu_this_job_got
> >      24.68            +9.3       33.96 ± 52%  mpstat.cpu.all.idle%
> >     171.00 ±  2%     -55.3%      76.50 ± 60%  numa-vmstat.node1.nr_inactive_file
> >     171.00 ±  2%     -55.3%      76.50 ± 60%  numa-vmstat.node1.nr_zone_inactive_file
> >  2.032e+11           -12.5%  1.777e+11 ± 27%  perf-stat.i.cpu-cycles
> >  2.025e+11           -12.0%  1.782e+11 ± 27%  perf-stat.ps.cpu-cycles
> >      25.00           +37.5%      34.38 ± 51%  vmstat.cpu.id
> >      68.00           -13.2%      59.00 ± 27%  vmstat.cpu.sy
> >      25.24           +37.0%      34.57 ± 51%  iostat.cpu.idle
> >      68.21           -12.7%      59.53 ± 27%  iostat.cpu.system
> >       4.31 ±100%    +200.6%      12.96 ± 63%  iostat.sda.r_await.max
> >       1014 ±  2%     -17.1%     841.00 ± 10%  meminfo.Inactive(file)
> >      30692 ± 12%     -20.9%      24280 ± 30%  meminfo.Mlocked
> >     103627 ± 27%     -32.7%      69720        meminfo.Percpu
> >     255.50 ±  2%     -18.1%     209.25 ± 10%  proc-vmstat.nr_inactive_file
> >     255.50 ±  2%     -18.1%     209.25 ± 10%  proc-vmstat.nr_zone_inactive_file
> >     185035 ± 22%     -22.2%     143917 ± 25%  proc-vmstat.pgmigrate_success
> >       2107           -12.3%       1848 ± 27%  turbostat.Avg_MHz
> >      69.00            -7.1%      64.12 ±  8%  turbostat.PkgTmp
> >      94.63            -2.2%      92.58 ±  4%  turbostat.RAMWatt
> >      96048           +26.8%     121800 ±  8%  softirqs.CPU10.NET_RX
> >      96671 ±  4%     +34.2%     129776 ±  6%  softirqs.CPU15.NET_RX
> >     171243 ±  3%     -12.9%     149135 ±  8%  softirqs.CPU25.NET_RX
> >     165317 ±  4%     -11.4%     146494 ±  9%  softirqs.CPU27.NET_RX
> >     139558           -24.5%     105430 ± 14%  softirqs.CPU58.NET_RX
> >     147836           -15.8%     124408 ±  6%  softirqs.CPU63.NET_RX
> >     129568           -13.8%     111624 ± 10%  softirqs.CPU66.NET_RX
> >       1050 ±  2%     +14.2%       1198 ±  9%  slabinfo.biovec-128.active_objs
> >       1050 ±  2%     +14.2%       1198 ±  9%  slabinfo.biovec-128.num_objs
> >      23129           +19.6%      27668 ±  6%  slabinfo.kmalloc-512.active_objs
> >     766.50           +17.4%     899.75 ±  6%  slabinfo.kmalloc-512.active_slabs
> >      24535           +17.4%      28806 ±  6%  slabinfo.kmalloc-512.num_objs
> >     766.50           +17.4%     899.75 ±  6%  slabinfo.kmalloc-512.num_slabs
> >       1039 ±  4%      -4.3%     994.12 ±  6%  slabinfo.sock_inode_cache.active_slabs
> >      40527 ±  4%      -4.3%      38785 ±  6%  slabinfo.sock_inode_cache.num_objs
> >       1039 ±  4%      -4.3%     994.12 ±  6%  slabinfo.sock_inode_cache.num_slabs
> >    1549456           -43.6%     873443 ± 24%  sched_debug.cfs_rq:/.min_vruntime.stddev
> >      73.25 ±  5%     +74.8%     128.03 ± 31%  sched_debug.cfs_rq:/.nr_spread_over.stddev
> >      18.60 ± 57%     -63.8%       6.73 ± 64%  sched_debug.cfs_rq:/.removed.load_avg.avg
> >      79.57 ± 44%     -44.1%      44.52 ± 55%  sched_debug.cfs_rq:/.removed.load_avg.stddev
> >     857.10 ± 57%     -63.8%     310.09 ± 64%  sched_debug.cfs_rq:/.removed.runnable_sum.avg
> >       3664 ± 44%     -44.1%       2049 ± 55%  sched_debug.cfs_rq:/.removed.runnable_sum.stddev
> >       4.91 ± 42%     -45.3%       2.69 ± 61%  sched_debug.cfs_rq:/.removed.util_avg.avg
> >    1549544           -43.6%     874006 ± 24%  sched_debug.cfs_rq:/.spread0.stddev
> >     786.14 ±  6%     -20.1%     628.46 ± 23%  sched_debug.cfs_rq:/.util_avg.avg
> >       1415 ±  8%     -16.7%       1178 ± 18%  sched_debug.cfs_rq:/.util_avg.max
> >     467435 ± 15%     +46.7%     685829 ± 15%  sched_debug.cpu.avg_idle.avg
> >      17972 ±  8%    +631.2%     131410 ± 34%  sched_debug.cpu.avg_idle.min
> >       7.66 ± 26%    +209.7%      23.72 ± 54%  sched_debug.cpu.clock.stddev
> >       7.66 ± 26%    +209.7%      23.72 ± 54%  sched_debug.cpu.clock_task.stddev
> >     618063 ±  5%     -17.0%     513085 ±  5%  sched_debug.cpu.max_idle_balance_cost.max
> >      12083 ± 28%     -85.4%       1768 ±231%  sched_debug.cpu.max_idle_balance_cost.stddev
> >      12857 ± 16%   +2117.7%     285128 ±106%  sched_debug.cpu.yld_count.min
> >       0.55 ±  6%      -0.2        0.37 ± 51%  perf-profile.children.cycles-pp.fpregs_assert_state_consistent
> >       0.30 ± 21%      -0.2        0.14 ±105%  perf-profile.children.cycles-pp.yield_task_fair
> >       0.32 ±  6%      -0.2        0.16 ± 86%  perf-profile.children.cycles-pp.rmap_walk_anon
> >       0.19            -0.1        0.10 ± 86%  perf-profile.children.cycles-pp.page_mapcount_is_zero
> >       0.19            -0.1        0.10 ± 86%  perf-profile.children.cycles-pp.total_mapcount
> >       0.14            -0.1        0.09 ± 29%  perf-profile.children.cycles-pp.start_kernel
> >       0.11 ±  9%      -0.0        0.07 ± 47%  perf-profile.children.cycles-pp.__switch_to
> >       0.10 ± 14%      -0.0        0.06 ± 45%  perf-profile.children.cycles-pp.switch_fpu_return
> >       0.08 ±  6%      -0.0        0.04 ± 79%  perf-profile.children.cycles-pp.__update_load_avg_se
> >       0.12 ± 13%      -0.0        0.09 ± 23%  perf-profile.children.cycles-pp.native_write_msr
> >       0.31 ±  6%      -0.2        0.15 ± 81%  perf-profile.self.cycles-pp.poll_idle
> >       0.50 ±  6%      -0.2        0.35 ± 50%  perf-profile.self.cycles-pp.fpregs_assert_state_consistent
> >       0.18 ±  2%      -0.1        0.10 ± 86%  perf-profile.self.cycles-pp.total_mapcount
> >       0.10 ± 14%      -0.0        0.06 ± 45%  perf-profile.self.cycles-pp.switch_fpu_return
> >       0.10 ± 10%      -0.0        0.06 ± 47%  perf-profile.self.cycles-pp.__switch_to
> >       0.07 ±  7%      -0.0        0.03 ±100%  perf-profile.self.cycles-pp.prep_new_page
> >       0.07 ±  7%      -0.0        0.03 ±100%  perf-profile.self.cycles-pp.llist_add_batch
> >       0.07 ± 14%      -0.0        0.04 ± 79%  perf-profile.self.cycles-pp.__update_load_avg_se
> >       0.12 ± 13%      -0.0        0.09 ± 23%  perf-profile.self.cycles-pp.native_write_msr
> >      66096 ± 99%     -99.8%     148.50 ± 92%  interrupts.109:PCI-MSI.31981642-edge.i40e-eth0-TxRx-73
> >     543.50 ± 39%     -73.3%     145.38 ± 81%  interrupts.112:PCI-MSI.31981645-edge.i40e-eth0-TxRx-76
> >     169.00 ± 28%     -55.3%      75.50 ± 83%  interrupts.116:PCI-MSI.31981649-edge.i40e-eth0-TxRx-80
> >     224.00 ± 14%     -57.6%      95.00 ± 87%  interrupts.121:PCI-MSI.31981654-edge.i40e-eth0-TxRx-85
> >     680.00 ± 28%     -80.5%     132.75 ± 82%  interrupts.43:PCI-MSI.31981576-edge.i40e-eth0-TxRx-7
> >     327.50 ± 31%     -39.0%     199.62 ± 60%  interrupts.60:PCI-MSI.31981593-edge.i40e-eth0-TxRx-24
> >     217.50 ± 19%     -51.7%     105.12 ± 79%  interrupts.63:PCI-MSI.31981596-edge.i40e-eth0-TxRx-27
> >     375.00 ± 46%     -78.5%      80.50 ± 82%  interrupts.70:PCI-MSI.31981603-edge.i40e-eth0-TxRx-34
> >     196.50 ±  3%     -51.6%      95.12 ± 74%  interrupts.72:PCI-MSI.31981605-edge.i40e-eth0-TxRx-36
> >     442.50 ± 45%     -73.1%     118.88 ± 90%  interrupts.79:PCI-MSI.31981612-edge.i40e-eth0-TxRx-43
> >     271.00 ±  8%     -53.2%     126.88 ± 75%  interrupts.82:PCI-MSI.31981615-edge.i40e-eth0-TxRx-46
> >     145448 ±  4%     -41.6%      84975 ± 42%  interrupts.CPU1.RES:Rescheduling_interrupts
> >      11773 ± 19%     -38.1%       7290 ± 52%  interrupts.CPU13.TLB:TLB_shootdowns
> >      24177 ± 15%    +356.5%     110368 ± 58%  interrupts.CPU16.RES:Rescheduling_interrupts
> >       3395 ±  3%     +78.3%       6055 ± 18%  interrupts.CPU17.NMI:Non-maskable_interrupts
> >       3395 ±  3%     +78.3%       6055 ± 18%  interrupts.CPU17.PMI:Performance_monitoring_interrupts
> >     106701 ± 41%     -55.6%      47425 ± 56%  interrupts.CPU18.RES:Rescheduling_interrupts
> >     327.50 ± 31%     -39.3%     198.88 ± 60%  interrupts.CPU24.60:PCI-MSI.31981593-edge.i40e-eth0-TxRx-24
> >     411618           +53.6%     632283 ± 77%  interrupts.CPU25.LOC:Local_timer_interrupts
> >      16189 ± 26%     -53.0%       7611 ± 66%  interrupts.CPU25.TLB:TLB_shootdowns
> >     407253           +54.4%     628596 ± 78%  interrupts.CPU26.LOC:Local_timer_interrupts
> >     216.50 ± 19%     -51.8%     104.25 ± 80%  interrupts.CPU27.63:PCI-MSI.31981596-edge.i40e-eth0-TxRx-27
> >       7180           -20.9%       5682 ± 25%  interrupts.CPU29.NMI:Non-maskable_interrupts
> >       7180           -20.9%       5682 ± 25%  interrupts.CPU29.PMI:Performance_monitoring_interrupts
> >      15186 ± 12%     -45.5%       8276 ± 49%  interrupts.CPU3.TLB:TLB_shootdowns
> >      13092 ± 19%     -29.5%       9231 ± 35%  interrupts.CPU30.TLB:TLB_shootdowns
> >      13204 ± 26%     -29.3%       9336 ± 19%  interrupts.CPU31.TLB:TLB_shootdowns
> >     374.50 ± 46%     -78.7%      79.62 ± 83%  interrupts.CPU34.70:PCI-MSI.31981603-edge.i40e-eth0-TxRx-34
> >       7188           -25.6%       5345 ± 26%  interrupts.CPU35.NMI:Non-maskable_interrupts
> >       7188           -25.6%       5345 ± 26%  interrupts.CPU35.PMI:Performance_monitoring_interrupts
> >     196.00 ±  4%     -52.0%      94.12 ± 75%  interrupts.CPU36.72:PCI-MSI.31981605-edge.i40e-eth0-TxRx-36
> >      12170 ± 20%     -34.3%       7998 ± 32%  interrupts.CPU39.TLB:TLB_shootdowns
> >     442.00 ± 45%     -73.3%     118.12 ± 91%  interrupts.CPU43.79:PCI-MSI.31981612-edge.i40e-eth0-TxRx-43
> >      12070 ± 15%     -37.2%       7581 ± 49%  interrupts.CPU43.TLB:TLB_shootdowns
> >       7177           -27.6%       5195 ± 26%  interrupts.CPU45.NMI:Non-maskable_interrupts
> >       7177           -27.6%       5195 ± 26%  interrupts.CPU45.PMI:Performance_monitoring_interrupts
> >     271.00 ±  8%     -53.4%     126.38 ± 75%  interrupts.CPU46.82:PCI-MSI.31981615-edge.i40e-eth0-TxRx-46
> >       3591           +84.0%       6607 ± 12%  interrupts.CPU46.NMI:Non-maskable_interrupts
> >       3591           +84.0%       6607 ± 12%  interrupts.CPU46.PMI:Performance_monitoring_interrupts
> >      57614 ± 30%     -34.0%      38015 ± 28%  interrupts.CPU46.RES:Rescheduling_interrupts
> >     149154 ± 41%     -47.2%      78808 ± 51%  interrupts.CPU51.RES:Rescheduling_interrupts
> >      30366 ± 28%    +279.5%     115229 ± 42%  interrupts.CPU52.RES:Rescheduling_interrupts
> >      29690          +355.5%     135237 ± 57%  interrupts.CPU54.RES:Rescheduling_interrupts
> >     213106 ±  2%     -66.9%      70545 ± 43%  interrupts.CPU59.RES:Rescheduling_interrupts
> >     225753 ±  7%     -72.9%      61212 ± 72%  interrupts.CPU60.RES:Rescheduling_interrupts
> >      12430 ± 14%     -41.5%       7276 ± 52%  interrupts.CPU61.TLB:TLB_shootdowns
> >      44552 ± 22%    +229.6%     146864 ± 36%  interrupts.CPU65.RES:Rescheduling_interrupts
> >     126088 ± 56%     -35.3%      81516 ± 73%  interrupts.CPU66.RES:Rescheduling_interrupts
> >     170880 ± 15%     -62.9%      63320 ± 52%  interrupts.CPU68.RES:Rescheduling_interrupts
> >     186033 ± 10%     -39.8%     112012 ± 41%  interrupts.CPU69.RES:Rescheduling_interrupts
> >     679.50 ± 29%     -80.5%     132.25 ± 82%  interrupts.CPU7.43:PCI-MSI.31981576-edge.i40e-eth0-TxRx-7
> >     124750 ± 18%     -39.4%      75553 ± 43%  interrupts.CPU7.RES:Rescheduling_interrupts
> >     158500 ± 47%     -52.1%      75915 ± 67%  interrupts.CPU71.RES:Rescheduling_interrupts
> >      11846 ± 11%     -32.5%       8001 ± 47%  interrupts.CPU72.TLB:TLB_shootdowns
> >      66095 ± 99%     -99.8%     147.62 ± 93%  interrupts.CPU73.109:PCI-MSI.31981642-edge.i40e-eth0-TxRx-73
> >       7221 ±  2%     -31.0%       4982 ± 35%  interrupts.CPU73.NMI:Non-maskable_interrupts
> >       7221 ±  2%     -31.0%       4982 ± 35%  interrupts.CPU73.PMI:Performance_monitoring_interrupts
> >      15304 ± 14%     -47.9%       7972 ± 31%  interrupts.CPU73.TLB:TLB_shootdowns
> >      10918 ±  3%     -31.9%       7436 ± 36%  interrupts.CPU74.TLB:TLB_shootdowns
> >     543.00 ± 39%     -73.3%     144.75 ± 81%  interrupts.CPU76.112:PCI-MSI.31981645-edge.i40e-eth0-TxRx-76
> >      12214 ± 14%     -40.9%       7220 ± 38%  interrupts.CPU79.TLB:TLB_shootdowns
> >     168.00 ± 29%     -55.7%      74.50 ± 85%  interrupts.CPU80.116:PCI-MSI.31981649-edge.i40e-eth0-TxRx-80
> >      28619 ±  3%    +158.4%      73939 ± 44%  interrupts.CPU80.RES:Rescheduling_interrupts
> >      12258           -34.3%       8056 ± 29%  interrupts.CPU80.TLB:TLB_shootdowns
> >       7214           -19.5%       5809 ± 24%  interrupts.CPU82.NMI:Non-maskable_interrupts
> >       7214           -19.5%       5809 ± 24%  interrupts.CPU82.PMI:Performance_monitoring_interrupts
> >      13522 ± 11%     -41.2%       7949 ± 29%  interrupts.CPU84.TLB:TLB_shootdowns
> >     223.50 ± 14%     -57.8%      94.25 ± 88%  interrupts.CPU85.121:PCI-MSI.31981654-edge.i40e-eth0-TxRx-85
> >      11989 ±  2%     -31.7%       8194 ± 22%  interrupts.CPU85.TLB:TLB_shootdowns
> >     121153 ± 29%     -41.4%      70964 ± 58%  interrupts.CPU86.RES:Rescheduling_interrupts
> >      11731 ±  8%     -40.7%       6957 ± 36%  interrupts.CPU86.TLB:TLB_shootdowns
> >      12192 ± 22%     -35.8%       7824 ± 43%  interrupts.CPU87.TLB:TLB_shootdowns
> >      11603 ± 19%     -31.8%       7915 ± 41%  interrupts.CPU89.TLB:TLB_shootdowns
> >      10471 ±  5%     -27.0%       7641 ± 31%  interrupts.CPU91.TLB:TLB_shootdowns
> >       7156           -20.9%       5658 ± 23%  interrupts.CPU92.NMI:Non-maskable_interrupts
> >       7156           -20.9%       5658 ± 23%  interrupts.CPU92.PMI:Performance_monitoring_interrupts
> >      99802 ± 20%     -43.6%      56270 ± 47%  interrupts.CPU92.RES:Rescheduling_interrupts
> >     109162 ± 18%     -28.7%      77839 ± 26%  interrupts.CPU93.RES:Rescheduling_interrupts
> >      15044 ± 29%     -44.4%       8359 ± 30%  interrupts.CPU93.TLB:TLB_shootdowns
> >     110749 ± 19%     -47.3%      58345 ± 48%  interrupts.CPU94.RES:Rescheduling_interrupts
> >       7245           -21.4%       5697 ± 25%  interrupts.CPU95.NMI:Non-maskable_interrupts
> >       7245           -21.4%       5697 ± 25%  interrupts.CPU95.PMI:Performance_monitoring_interrupts
> >       1969 ±  5%    +491.7%      11653 ± 81%  interrupts.IWI:IRQ_work_interrupts
> >
> >
> >
> > ***************************************************************************************************
> > lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
> > =========================================================================================
> > class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
> >   interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-09-23.cgz/lkp-bdw-ep6/stress-ng/30s/0xb000038
> >
> > commit:
> >   fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
> >   0b0695f2b3 ("sched/fair: Rework load_balance()")
> >
> > fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
> > ---------------- ---------------------------
> >          %stddev     %change         %stddev
> >              \          |                \
> >   98318389           +43.0%  1.406e+08        stress-ng.schedpolicy.ops
> >    3277346           +43.0%    4685146        stress-ng.schedpolicy.ops_per_sec
> >  3.506e+08 ±  4%     -10.3%  3.146e+08 ±  3%  stress-ng.sigq.ops
> >   11684738 ±  4%     -10.3%   10485353 ±  3%  stress-ng.sigq.ops_per_sec
> >  3.628e+08 ±  6%     -19.4%  2.925e+08 ±  6%  stress-ng.time.involuntary_context_switches
> >      29456            +2.8%      30285        stress-ng.time.system_time
> >    7636655 ±  9%     +46.6%   11197377 ± 27%  cpuidle.C1E.usage
> >    1111483 ±  3%      -9.5%    1005829        vmstat.system.cs
> >   22638222 ±  4%     +16.5%   26370816 ± 11%  meminfo.Committed_AS
> >      28908 ±  6%     +24.6%      36020 ± 16%  meminfo.KernelStack
> >    7636543 ±  9%     +46.6%   11196090 ± 27%  turbostat.C1E
> >       3.46 ± 16%     -61.2%       1.35 ±  7%  turbostat.Pkg%pc2
> >     217.54            +1.7%     221.33        turbostat.PkgWatt
> >      13.34 ±  2%      +5.8%      14.11        turbostat.RAMWatt
> >     525.50 ±  8%     -15.7%     443.00 ± 12%  slabinfo.biovec-128.active_objs
> >     525.50 ±  8%     -15.7%     443.00 ± 12%  slabinfo.biovec-128.num_objs
> >      28089 ± 12%     -33.0%      18833 ± 22%  slabinfo.pool_workqueue.active_objs
> >     877.25 ± 12%     -32.6%     591.00 ± 21%  slabinfo.pool_workqueue.active_slabs
> >      28089 ± 12%     -32.6%      18925 ± 21%  slabinfo.pool_workqueue.num_objs
> >     877.25 ± 12%     -32.6%     591.00 ± 21%  slabinfo.pool_workqueue.num_slabs
> >     846.75 ±  6%     -18.0%     694.75 ±  9%  slabinfo.skbuff_fclone_cache.active_objs
> >     846.75 ±  6%     -18.0%     694.75 ±  9%  slabinfo.skbuff_fclone_cache.num_objs
> >      63348 ±  6%     -20.7%      50261 ±  4%  softirqs.CPU14.SCHED
> >      44394 ±  4%     +21.4%      53880 ±  8%  softirqs.CPU42.SCHED
> >      52246 ±  7%     -15.1%      44352        softirqs.CPU47.SCHED
> >      58350 ±  4%     -11.0%      51914 ±  7%  softirqs.CPU6.SCHED
> >      58009 ±  7%     -23.8%      44206 ±  4%  softirqs.CPU63.SCHED
> >      49166 ±  6%     +23.4%      60683 ±  9%  softirqs.CPU68.SCHED
> >      44594 ±  7%     +14.3%      50951 ±  8%  softirqs.CPU78.SCHED
> >      46407 ±  9%     +19.6%      55515 ±  8%  softirqs.CPU84.SCHED
> >      55555 ±  8%     -15.5%      46933 ±  4%  softirqs.CPU9.SCHED
> >     198757 ± 18%     +44.1%     286316 ±  9%  numa-meminfo.node0.Active
> >     189280 ± 19%     +37.1%     259422 ±  7%  numa-meminfo.node0.Active(anon)
> >     110438 ± 33%     +68.3%     185869 ± 16%  numa-meminfo.node0.AnonHugePages
> >     143458 ± 28%     +67.7%     240547 ± 13%  numa-meminfo.node0.AnonPages
> >      12438 ± 16%     +61.9%      20134 ± 37%  numa-meminfo.node0.KernelStack
> >    1004379 ±  7%     +16.4%    1168764 ±  4%  numa-meminfo.node0.MemUsed
> >     357111 ± 24%     -41.6%     208655 ± 29%  numa-meminfo.node1.Active
> >     330094 ± 22%     -39.6%     199339 ± 32%  numa-meminfo.node1.Active(anon)
> >     265924 ± 25%     -52.2%     127138 ± 46%  numa-meminfo.node1.AnonHugePages
> >     314059 ± 22%     -49.6%     158305 ± 36%  numa-meminfo.node1.AnonPages
> >      15386 ± 16%     -25.1%      11525 ± 15%  numa-meminfo.node1.KernelStack
> >    1200805 ± 11%     -18.6%     977595 ±  7%  numa-meminfo.node1.MemUsed
> >     965.50 ± 15%     -29.3%     682.25 ± 43%  numa-meminfo.node1.Mlocked
> >      46762 ± 18%     +37.8%      64452 ±  8%  numa-vmstat.node0.nr_active_anon
> >      35393 ± 27%     +68.9%      59793 ± 12%  numa-vmstat.node0.nr_anon_pages
> >      52.75 ± 33%     +71.1%      90.25 ± 15%  numa-vmstat.node0.nr_anon_transparent_hugepages
> >      15.00 ± 96%    +598.3%     104.75 ± 15%  numa-vmstat.node0.nr_inactive_file
> >      11555 ± 22%     +68.9%      19513 ± 41%  numa-vmstat.node0.nr_kernel_stack
> >     550.25 ±162%    +207.5%       1691 ± 48%  numa-vmstat.node0.nr_written
> >      46762 ± 18%     +37.8%      64452 ±  8%  numa-vmstat.node0.nr_zone_active_anon
> >      15.00 ± 96%    +598.3%     104.75 ± 15%  numa-vmstat.node0.nr_zone_inactive_file
> >      82094 ± 22%     -39.5%      49641 ± 32%  numa-vmstat.node1.nr_active_anon
> >      78146 ± 23%     -49.5%      39455 ± 37%  numa-vmstat.node1.nr_anon_pages
> >     129.00 ± 25%     -52.3%      61.50 ± 47%  numa-vmstat.node1.nr_anon_transparent_hugepages
> >     107.75 ± 12%     -85.4%      15.75 ±103%  numa-vmstat.node1.nr_inactive_file
> >      14322 ± 11%     -21.1%      11304 ± 11%  numa-vmstat.node1.nr_kernel_stack
> >     241.00 ± 15%     -29.5%     170.00 ± 43%  numa-vmstat.node1.nr_mlock
> >      82094 ± 22%     -39.5%      49641 ± 32%  numa-vmstat.node1.nr_zone_active_anon
> >     107.75 ± 12%     -85.4%      15.75 ±103%  numa-vmstat.node1.nr_zone_inactive_file
> >       0.81 ±  5%      +0.2        0.99 ± 10%  perf-profile.calltrace.cycles-pp.task_rq_lock.task_sched_runtime.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime
> >       0.60 ± 11%      +0.2        0.83 ±  9%  perf-profile.calltrace.cycles-pp.___might_sleep.__might_fault._copy_to_user.put_itimerspec64.__x64_sys_timer_gettime
> >       1.73 ±  9%      +0.3        2.05 ±  8%  perf-profile.calltrace.cycles-pp.__might_fault._copy_to_user.put_itimerspec64.__x64_sys_timer_gettime.do_syscall_64
> >       3.92 ±  5%      +0.6        4.49 ±  7%  perf-profile.calltrace.cycles-pp.task_sched_runtime.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime
> >       4.17 ±  4%      +0.6        4.78 ±  7%  perf-profile.calltrace.cycles-pp.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64
> >       5.72 ±  3%      +0.7        6.43 ±  7%  perf-profile.calltrace.cycles-pp.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64.entry_SYSCALL_64_after_hwframe
> >       0.24 ± 54%      -0.2        0.07 ±131%  perf-profile.children.cycles-pp.ext4_inode_csum_set
> >       0.45 ±  3%      +0.1        0.56 ±  4%  perf-profile.children.cycles-pp.__might_sleep
> >       0.84 ±  5%      +0.2        1.03 ±  9%  perf-profile.children.cycles-pp.task_rq_lock
> >       0.66 ±  8%      +0.2        0.88 ±  7%  perf-profile.children.cycles-pp.___might_sleep
> >       1.83 ±  9%      +0.3        2.16 ±  8%  perf-profile.children.cycles-pp.__might_fault
> >       4.04 ±  5%      +0.6        4.62 ±  7%  perf-profile.children.cycles-pp.task_sched_runtime
> >       4.24 ±  4%      +0.6        4.87 ±  7%  perf-profile.children.cycles-pp.cpu_clock_sample
> >       5.77 ±  3%      +0.7        6.48 ±  7%  perf-profile.children.cycles-pp.posix_cpu_timer_get
> >       0.22 ± 11%      +0.1        0.28 ± 15%  perf-profile.self.cycles-pp.cpu_clock_sample
> >       0.47 ±  7%      +0.1        0.55 ±  5%  perf-profile.self.cycles-pp.update_curr
> >       0.28 ±  5%      +0.1        0.38 ± 14%  perf-profile.self.cycles-pp.task_rq_lock
> >       0.42 ±  3%      +0.1        0.53 ±  4%  perf-profile.self.cycles-pp.__might_sleep
> >       0.50 ±  5%      +0.1        0.61 ± 11%  perf-profile.self.cycles-pp.task_sched_runtime
> >       0.63 ±  9%      +0.2        0.85 ±  7%  perf-profile.self.cycles-pp.___might_sleep
> >    9180611 ±  5%     +40.1%   12859327 ± 14%  sched_debug.cfs_rq:/.MIN_vruntime.max
> >    1479571 ±  6%     +57.6%    2331469 ± 14%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
> >       7951 ±  6%     -52.5%       3773 ± 17%  sched_debug.cfs_rq:/.exec_clock.stddev
> >     321306 ± 39%     -44.2%     179273        sched_debug.cfs_rq:/.load.max
> >    9180613 ±  5%     +40.1%   12859327 ± 14%  sched_debug.cfs_rq:/.max_vruntime.max
> >    1479571 ±  6%     +57.6%    2331469 ± 14%  sched_debug.cfs_rq:/.max_vruntime.stddev
> >   16622378           +20.0%   19940069 ±  7%  sched_debug.cfs_rq:/.min_vruntime.avg
> >   18123901           +19.7%   21686545 ±  6%  sched_debug.cfs_rq:/.min_vruntime.max
> >   14338218 ±  3%     +27.4%   18267927 ±  7%  sched_debug.cfs_rq:/.min_vruntime.min
> >       0.17 ± 16%     +23.4%       0.21 ± 11%  sched_debug.cfs_rq:/.nr_running.stddev
> >     319990 ± 39%     -44.6%     177347        sched_debug.cfs_rq:/.runnable_weight.max
> >   -2067420           -33.5%   -1375445        sched_debug.cfs_rq:/.spread0.min
> >       1033 ±  8%     -13.7%     891.85 ±  3%  sched_debug.cfs_rq:/.util_est_enqueued.max
> >      93676 ± 16%     -29.0%      66471 ± 17%  sched_debug.cpu.avg_idle.min
> >      10391 ± 52%    +118.9%      22750 ± 15%  sched_debug.cpu.curr->pid.avg
> >      14393 ± 35%    +113.2%      30689 ± 17%  sched_debug.cpu.curr->pid.max
> >       3041 ± 38%    +161.8%       7963 ± 11%  sched_debug.cpu.curr->pid.stddev
> >       3.38 ±  6%     -16.3%       2.83 ±  5%  sched_debug.cpu.nr_running.max
> >    2412687 ±  4%     -16.0%    2027251 ±  3%  sched_debug.cpu.nr_switches.avg
> >    4038819 ±  3%     -20.2%    3223112 ±  5%  sched_debug.cpu.nr_switches.max
> >     834203 ± 17%     -37.8%     518798 ± 27%  sched_debug.cpu.nr_switches.stddev
> >      45.85 ± 13%     +41.2%      64.75 ± 18%  sched_debug.cpu.nr_uninterruptible.max
> >    1937209 ±  2%     +58.5%    3070891 ±  3%  sched_debug.cpu.sched_count.min
> >    1074023 ± 13%     -57.9%     451958 ± 12%  sched_debug.cpu.sched_count.stddev
> >    1283769 ±  7%     +65.1%    2118907 ±  7%  sched_debug.cpu.yld_count.min
> >     714244 ±  5%     -51.9%     343373 ± 22%  sched_debug.cpu.yld_count.stddev
> >      12.54 ±  9%     -18.8%      10.18 ± 15%  perf-stat.i.MPKI
> >  1.011e+10            +2.6%  1.038e+10        perf-stat.i.branch-instructions
> >      13.22 ±  5%      +2.5       15.75 ±  3%  perf-stat.i.cache-miss-rate%
> >   21084021 ±  6%     +33.9%   28231058 ±  6%  perf-stat.i.cache-misses
> >    1143861 ±  5%     -12.1%    1005721 ±  6%  perf-stat.i.context-switches
> >  1.984e+11            +1.8%   2.02e+11        perf-stat.i.cpu-cycles
> >  1.525e+10            +1.3%  1.544e+10        perf-stat.i.dTLB-loads
> >      65.46            -2.7       62.76 ±  3%  perf-stat.i.iTLB-load-miss-rate%
> >   20360883 ±  4%     +10.5%   22500874 ±  4%  perf-stat.i.iTLB-loads
> >  4.963e+10            +2.0%  5.062e+10        perf-stat.i.instructions
> >     181557            -2.4%     177113        perf-stat.i.msec
> >    5350122 ±  8%     +26.5%    6765332 ±  7%  perf-stat.i.node-load-misses
> >    4264320 ±  3%     +24.8%    5321600 ±  4%  perf-stat.i.node-store-misses
> >       6.12 ±  5%      +1.5        7.60 ±  2%  perf-stat.overall.cache-miss-rate%
> >       7646 ±  6%     -17.7%       6295 ±  3%  perf-stat.overall.cycles-between-cache-misses
> >      69.29            -1.1       68.22        perf-stat.overall.iTLB-load-miss-rate%
> >      61.11 ±  2%      +6.6       67.71 ±  5%  perf-stat.overall.node-load-miss-rate%
> >      74.82            +1.8       76.58        perf-stat.overall.node-store-miss-rate%
> >  1.044e+10            +1.8%  1.063e+10        perf-stat.ps.branch-instructions
> >   26325951 ±  6%     +22.9%   32366684 ±  2%  perf-stat.ps.cache-misses
> >    1115530 ±  3%      -9.5%    1009780        perf-stat.ps.context-switches
> >  1.536e+10            +1.0%  1.552e+10        perf-stat.ps.dTLB-loads
> >   44718416 ±  2%      +5.8%   47308605 ±  3%  perf-stat.ps.iTLB-load-misses
> >   19831973 ±  4%     +11.1%   22040029 ±  4%  perf-stat.ps.iTLB-loads
> >  5.064e+10            +1.4%  5.137e+10        perf-stat.ps.instructions
> >    5454694 ±  9%     +26.4%    6892365 ±  6%  perf-stat.ps.node-load-misses
> >    4263688 ±  4%     +24.9%    5325279 ±  4%  perf-stat.ps.node-store-misses
> >  3.001e+13            +1.7%  3.052e+13        perf-stat.total.instructions
> >      18550           -74.9%       4650 ±173%  interrupts.76:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
> >       7642 ±  9%     -20.4%       6086 ±  2%  interrupts.CPU0.CAL:Function_call_interrupts
> >       4376 ± 22%     -75.4%       1077 ± 41%  interrupts.CPU0.TLB:TLB_shootdowns
> >       8402 ±  5%     -19.0%       6806        interrupts.CPU1.CAL:Function_call_interrupts
> >       4559 ± 20%     -73.7%       1199 ± 15%  interrupts.CPU1.TLB:TLB_shootdowns
> >       8423 ±  4%     -20.2%       6725 ±  2%  interrupts.CPU10.CAL:Function_call_interrupts
> >       4536 ± 14%     -75.0%       1135 ± 20%  interrupts.CPU10.TLB:TLB_shootdowns
> >       8303 ±  3%     -18.2%       6795 ±  2%  interrupts.CPU11.CAL:Function_call_interrupts
> >       4404 ± 11%     -71.6%       1250 ± 35%  interrupts.CPU11.TLB:TLB_shootdowns
> >       8491 ±  6%     -21.3%       6683        interrupts.CPU12.CAL:Function_call_interrupts
> >       4723 ± 20%     -77.2%       1077 ± 17%  interrupts.CPU12.TLB:TLB_shootdowns
> >       8403 ±  5%     -20.3%       6700 ±  2%  interrupts.CPU13.CAL:Function_call_interrupts
> >       4557 ± 19%     -74.2%       1175 ± 22%  interrupts.CPU13.TLB:TLB_shootdowns
> >       8459 ±  4%     -18.6%       6884        interrupts.CPU14.CAL:Function_call_interrupts
> >       4559 ± 18%     -69.8%       1376 ± 13%  interrupts.CPU14.TLB:TLB_shootdowns
> >       8305 ±  7%     -17.7%       6833 ±  2%  interrupts.CPU15.CAL:Function_call_interrupts
> >       4261 ± 25%     -67.6%       1382 ± 24%  interrupts.CPU15.TLB:TLB_shootdowns
> >       8277 ±  5%     -19.1%       6696 ±  3%  interrupts.CPU16.CAL:Function_call_interrupts
> >       4214 ± 22%     -69.6%       1282 ±  8%  interrupts.CPU16.TLB:TLB_shootdowns
> >       8258 ±  5%     -18.9%       6694 ±  3%  interrupts.CPU17.CAL:Function_call_interrupts
> >       4461 ± 19%     -74.1%       1155 ± 21%  interrupts.CPU17.TLB:TLB_shootdowns
> >       8457 ±  6%     -20.6%       6717        interrupts.CPU18.CAL:Function_call_interrupts
> >       4889 ± 34%     +60.0%       7822        interrupts.CPU18.NMI:Non-maskable_interrupts
> >       4889 ± 34%     +60.0%       7822        interrupts.CPU18.PMI:Performance_monitoring_interrupts
> >       4731 ± 22%     -77.2%       1078 ± 10%  interrupts.CPU18.TLB:TLB_shootdowns
> >       8160 ±  5%     -18.1%       6684        interrupts.CPU19.CAL:Function_call_interrupts
> >       4311 ± 20%     -74.2%       1114 ± 13%  interrupts.CPU19.TLB:TLB_shootdowns
> >       8464 ±  2%     -18.2%       6927 ±  3%  interrupts.CPU2.CAL:Function_call_interrupts
> >       4938 ± 14%     -70.5%       1457 ± 18%  interrupts.CPU2.TLB:TLB_shootdowns
> >       8358 ±  6%     -19.7%       6715 ±  3%  interrupts.CPU20.CAL:Function_call_interrupts
> >       4567 ± 24%     -74.6%       1160 ± 35%  interrupts.CPU20.TLB:TLB_shootdowns
> >       8460 ±  4%     -22.3%       6577 ±  2%  interrupts.CPU21.CAL:Function_call_interrupts
> >       4514 ± 18%     -76.0%       1084 ± 22%  interrupts.CPU21.TLB:TLB_shootdowns
> >       6677 ±  6%     +19.6%       7988 ±  9%  interrupts.CPU22.CAL:Function_call_interrupts
> >       1288 ± 14%    +209.1%       3983 ± 35%  interrupts.CPU22.TLB:TLB_shootdowns
> >       6751 ±  2%     +24.0%       8370 ±  9%  interrupts.CPU23.CAL:Function_call_interrupts
> >       1037 ± 29%    +323.0%       4388 ± 36%  interrupts.CPU23.TLB:TLB_shootdowns
> >       6844           +20.6%       8251 ±  9%  interrupts.CPU24.CAL:Function_call_interrupts
> >       1205 ± 17%    +229.2%       3967 ± 40%  interrupts.CPU24.TLB:TLB_shootdowns
> >       6880           +21.9%       8389 ±  7%  interrupts.CPU25.CAL:Function_call_interrupts
> >       1228 ± 19%    +245.2%       4240 ± 35%  interrupts.CPU25.TLB:TLB_shootdowns
> >       6494 ±  8%     +25.1%       8123 ±  9%  interrupts.CPU26.CAL:Function_call_interrupts
> >       1141 ± 13%    +262.5%       4139 ± 32%  interrupts.CPU26.TLB:TLB_shootdowns
> >       6852           +19.2%       8166 ±  7%  interrupts.CPU27.CAL:Function_call_interrupts
> >       1298 ±  8%    +197.1%       3857 ± 31%  interrupts.CPU27.TLB:TLB_shootdowns
> >       6563 ±  6%     +25.2%       8214 ±  8%  interrupts.CPU28.CAL:Function_call_interrupts
> >       1176 ±  8%    +237.1%       3964 ± 33%  interrupts.CPU28.TLB:TLB_shootdowns
> >       6842 ±  2%     +21.4%       8308 ±  8%  interrupts.CPU29.CAL:Function_call_interrupts
> >       1271 ± 11%    +223.8%       4118 ± 33%  interrupts.CPU29.TLB:TLB_shootdowns
> >       8418 ±  3%     -21.1%       6643 ±  2%  interrupts.CPU3.CAL:Function_call_interrupts
> >       4677 ± 11%     -75.1%       1164 ± 16%  interrupts.CPU3.TLB:TLB_shootdowns
> >       6798 ±  3%     +21.8%       8284 ±  7%  interrupts.CPU30.CAL:Function_call_interrupts
> >       1219 ± 12%    +236.3%       4102 ± 30%  interrupts.CPU30.TLB:TLB_shootdowns
> >       6503 ±  4%     +25.9%       8186 ±  6%  interrupts.CPU31.CAL:Function_call_interrupts
> >       1046 ± 15%    +289.1%       4072 ± 32%  interrupts.CPU31.TLB:TLB_shootdowns
> >       6949 ±  3%     +17.2%       8141 ±  8%  interrupts.CPU32.CAL:Function_call_interrupts
> >       1241 ± 23%    +210.6%       3854 ± 34%  interrupts.CPU32.TLB:TLB_shootdowns
> >       1487 ± 26%    +161.6%       3889 ± 46%  interrupts.CPU33.TLB:TLB_shootdowns
> >       1710 ± 44%    +140.1%       4105 ± 36%  interrupts.CPU34.TLB:TLB_shootdowns
> >       6957 ±  2%     +15.2%       8012 ±  9%  interrupts.CPU35.CAL:Function_call_interrupts
> >       1165 ±  8%    +223.1%       3765 ± 38%  interrupts.CPU35.TLB:TLB_shootdowns
> >       1423 ± 24%    +173.4%       3892 ± 33%  interrupts.CPU36.TLB:TLB_shootdowns
> >       1279 ± 29%    +224.2%       4148 ± 39%  interrupts.CPU37.TLB:TLB_shootdowns
> >       1301 ± 20%    +226.1%       4244 ± 35%  interrupts.CPU38.TLB:TLB_shootdowns
> >       6906 ±  2%     +18.5%       8181 ±  8%  interrupts.CPU39.CAL:Function_call_interrupts
> >     368828 ± 20%     +96.2%     723710 ±  7%  interrupts.CPU39.RES:Rescheduling_interrupts
> >       1438 ± 12%    +174.8%       3951 ± 33%  interrupts.CPU39.TLB:TLB_shootdowns
> >       8399 ±  5%     -19.2%       6788 ±  2%  interrupts.CPU4.CAL:Function_call_interrupts
> >       4567 ± 18%     -72.7%       1245 ± 28%  interrupts.CPU4.TLB:TLB_shootdowns
> >       6895           +22.4%       8439 ±  9%  interrupts.CPU40.CAL:Function_call_interrupts
> >       1233 ± 11%    +247.1%       4280 ± 36%  interrupts.CPU40.TLB:TLB_shootdowns
> >       6819 ±  2%     +21.3%       8274 ±  9%  interrupts.CPU41.CAL:Function_call_interrupts
> >       1260 ± 14%    +207.1%       3871 ± 38%  interrupts.CPU41.TLB:TLB_shootdowns
> >       1301 ±  9%    +204.7%       3963 ± 36%  interrupts.CPU42.TLB:TLB_shootdowns
> >       6721 ±  3%     +22.3%       8221 ±  7%  interrupts.CPU43.CAL:Function_call_interrupts
> >       1237 ± 19%    +224.8%       4017 ± 35%  interrupts.CPU43.TLB:TLB_shootdowns
> >       8422 ±  8%     -22.7%       6506 ±  5%  interrupts.CPU44.CAL:Function_call_interrupts
> >   15261375 ±  7%      -7.8%   14064176        interrupts.CPU44.LOC:Local_timer_interrupts
> >       4376 ± 25%     -75.7%       1063 ± 26%  interrupts.CPU44.TLB:TLB_shootdowns
> >       8451 ±  5%     -23.7%       6448 ±  6%  interrupts.CPU45.CAL:Function_call_interrupts
> >       4351 ± 18%     -74.9%       1094 ± 12%  interrupts.CPU45.TLB:TLB_shootdowns
> >       8705 ±  6%     -21.2%       6860 ±  2%  interrupts.CPU46.CAL:Function_call_interrupts
> >       4787 ± 20%     -69.5%       1462 ± 16%  interrupts.CPU46.TLB:TLB_shootdowns
> >       8334 ±  3%     -18.9%       6763        interrupts.CPU47.CAL:Function_call_interrupts
> >       4126 ± 10%     -71.3%       1186 ± 18%  interrupts.CPU47.TLB:TLB_shootdowns
> >       8578 ±  4%     -21.7%       6713        interrupts.CPU48.CAL:Function_call_interrupts
> >       4520 ± 15%     -74.5%       1154 ± 23%  interrupts.CPU48.TLB:TLB_shootdowns
> >       8450 ±  8%     -18.8%       6863 ±  3%  interrupts.CPU49.CAL:Function_call_interrupts
> >       4494 ± 24%     -66.5%       1505 ± 22%  interrupts.CPU49.TLB:TLB_shootdowns
> >       8307 ±  4%     -18.0%       6816 ±  2%  interrupts.CPU5.CAL:Function_call_interrupts
> >       7845           -37.4%       4908 ± 34%  interrupts.CPU5.NMI:Non-maskable_interrupts
> >       7845           -37.4%       4908 ± 34%  interrupts.CPU5.PMI:Performance_monitoring_interrupts
> >       4429 ± 17%     -69.8%       1339 ± 20%  interrupts.CPU5.TLB:TLB_shootdowns
> >       8444 ±  4%     -21.7%       6613        interrupts.CPU50.CAL:Function_call_interrupts
> >       4282 ± 16%     -76.0%       1029 ± 17%  interrupts.CPU50.TLB:TLB_shootdowns
> >       8750 ±  6%     -22.2%       6803        interrupts.CPU51.CAL:Function_call_interrupts
> >       4755 ± 20%     -73.1%       1277 ± 15%  interrupts.CPU51.TLB:TLB_shootdowns
> >       8478 ±  6%     -20.2%       6766 ±  2%  interrupts.CPU52.CAL:Function_call_interrupts
> >       4337 ± 20%     -72.6%       1190 ± 22%  interrupts.CPU52.TLB:TLB_shootdowns
> >       8604 ±  7%     -21.5%       6750 ±  4%  interrupts.CPU53.CAL:Function_call_interrupts
> >       4649 ± 17%     -74.3%       1193 ± 23%  interrupts.CPU53.TLB:TLB_shootdowns
> >       8317 ±  9%     -19.4%       6706 ±  3%  interrupts.CPU54.CAL:Function_call_interrupts
> >       4372 ± 12%     -75.4%       1076 ± 29%  interrupts.CPU54.TLB:TLB_shootdowns
> >       8439 ±  3%     -18.5%       6876        interrupts.CPU55.CAL:Function_call_interrupts
> >       4415 ± 11%     -71.6%       1254 ± 17%  interrupts.CPU55.TLB:TLB_shootdowns
> >       8869 ±  6%     -22.6%       6864 ±  2%  interrupts.CPU56.CAL:Function_call_interrupts
> >     517594 ± 13%    +123.3%    1155539 ± 25%  interrupts.CPU56.RES:Rescheduling_interrupts
> >       5085 ± 22%     -74.9%       1278 ± 17%  interrupts.CPU56.TLB:TLB_shootdowns
> >       8682 ±  4%     -21.7%       6796 ±  2%  interrupts.CPU57.CAL:Function_call_interrupts
> >       4808 ± 19%     -74.1%       1243 ± 13%  interrupts.CPU57.TLB:TLB_shootdowns
> >       8626 ±  7%     -21.8%       6746 ±  2%  interrupts.CPU58.CAL:Function_call_interrupts
> >       4816 ± 20%     -79.1%       1007 ± 28%  interrupts.CPU58.TLB:TLB_shootdowns
> >       8759 ±  8%     -20.3%       6984        interrupts.CPU59.CAL:Function_call_interrupts
> >       4840 ± 22%     -70.6%       1423 ± 14%  interrupts.CPU59.TLB:TLB_shootdowns
> >       8167 ±  6%     -19.0%       6615 ±  2%  interrupts.CPU6.CAL:Function_call_interrupts
> >       4129 ± 21%     -75.4%       1017 ± 24%  interrupts.CPU6.TLB:TLB_shootdowns
> >       8910 ±  4%     -23.7%       6794 ±  3%  interrupts.CPU60.CAL:Function_call_interrupts
> >       5017 ± 12%     -77.8%       1113 ± 15%  interrupts.CPU60.TLB:TLB_shootdowns
> >       8689 ±  5%     -21.6%       6808        interrupts.CPU61.CAL:Function_call_interrupts
> >       4715 ± 20%     -77.6%       1055 ± 19%  interrupts.CPU61.TLB:TLB_shootdowns
> >       8574 ±  4%     -18.9%       6953 ±  2%  interrupts.CPU62.CAL:Function_call_interrupts
> >       4494 ± 17%     -72.3%       1244 ±  7%  interrupts.CPU62.TLB:TLB_shootdowns
> >       8865 ±  3%     -25.4%       6614 ±  7%  interrupts.CPU63.CAL:Function_call_interrupts
> >       4870 ± 12%     -76.8%       1130 ± 12%  interrupts.CPU63.TLB:TLB_shootdowns
> >       8724 ±  7%     -20.2%       6958 ±  3%  interrupts.CPU64.CAL:Function_call_interrupts
> >       4736 ± 16%     -72.6%       1295 ±  7%  interrupts.CPU64.TLB:TLB_shootdowns
> >       8717 ±  6%     -23.7%       6653 ±  4%  interrupts.CPU65.CAL:Function_call_interrupts
> >       4626 ± 19%     -76.5%       1087 ± 21%  interrupts.CPU65.TLB:TLB_shootdowns
> >       6671           +24.7%       8318 ±  9%  interrupts.CPU66.CAL:Function_call_interrupts
> >       1091 ±  8%    +249.8%       3819 ± 32%  interrupts.CPU66.TLB:TLB_shootdowns
> >       6795 ±  2%     +26.9%       8624 ±  9%  interrupts.CPU67.CAL:Function_call_interrupts
> >       1098 ± 24%    +299.5%       4388 ± 39%  interrupts.CPU67.TLB:TLB_shootdowns
> >       6704 ±  5%     +25.8%       8431 ±  8%  interrupts.CPU68.CAL:Function_call_interrupts
> >       1214 ± 15%    +236.1%       4083 ± 36%  interrupts.CPU68.TLB:TLB_shootdowns
> >       1049 ± 15%    +326.2%       4473 ± 33%  interrupts.CPU69.TLB:TLB_shootdowns
> >       8554 ±  6%     -19.6%       6874 ±  2%  interrupts.CPU7.CAL:Function_call_interrupts
> >       4753 ± 19%     -71.7%       1344 ± 16%  interrupts.CPU7.TLB:TLB_shootdowns
> >       1298 ± 13%    +227.4%       4249 ± 38%  interrupts.CPU70.TLB:TLB_shootdowns
> >       6976           +19.9%       8362 ±  7%  interrupts.CPU71.CAL:Function_call_interrupts
> >    1232748 ± 18%     -57.3%     525824 ± 33%  interrupts.CPU71.RES:Rescheduling_interrupts
> >       1253 ±  9%    +211.8%       3909 ± 31%  interrupts.CPU71.TLB:TLB_shootdowns
> >       1316 ± 22%    +188.7%       3800 ± 33%  interrupts.CPU72.TLB:TLB_shootdowns
> >       6665 ±  5%     +26.5%       8429 ±  8%  interrupts.CPU73.CAL:Function_call_interrupts
> >       1202 ± 13%    +234.1%       4017 ± 37%  interrupts.CPU73.TLB:TLB_shootdowns
> >       6639 ±  5%     +27.0%       8434 ±  8%  interrupts.CPU74.CAL:Function_call_interrupts
> >       1079 ± 16%    +269.4%       3986 ± 36%  interrupts.CPU74.TLB:TLB_shootdowns
> >       1055 ± 12%    +301.2%       4235 ± 34%  interrupts.CPU75.TLB:TLB_shootdowns
> >       7011 ±  3%     +21.6%       8522 ±  8%  interrupts.CPU76.CAL:Function_call_interrupts
> >       1223 ± 13%    +230.7%       4047 ± 35%  interrupts.CPU76.TLB:TLB_shootdowns
> >       6886 ±  7%     +25.6%       8652 ± 10%  interrupts.CPU77.CAL:Function_call_interrupts
> >       1316 ± 16%    +229.8%       4339 ± 36%  interrupts.CPU77.TLB:TLB_shootdowns
> >       7343 ±  5%     +19.1%       8743 ±  9%  interrupts.CPU78.CAL:Function_call_interrupts
> >       1699 ± 37%    +144.4%       4152 ± 31%  interrupts.CPU78.TLB:TLB_shootdowns
> >       7136 ±  4%     +21.4%       8666 ±  9%  interrupts.CPU79.CAL:Function_call_interrupts
> >       1094 ± 13%    +276.2%       4118 ± 34%  interrupts.CPU79.TLB:TLB_shootdowns
> >       8531 ±  5%     -19.5%       6869 ±  2%  interrupts.CPU8.CAL:Function_call_interrupts
> >       4764 ± 16%     -71.0%       1382 ± 14%  interrupts.CPU8.TLB:TLB_shootdowns
> >       1387 ± 29%    +181.8%       3910 ± 38%  interrupts.CPU80.TLB:TLB_shootdowns
> >       1114 ± 30%    +259.7%       4007 ± 36%  interrupts.CPU81.TLB:TLB_shootdowns
> >       7012           +23.9%       8685 ±  8%  interrupts.CPU82.CAL:Function_call_interrupts
> >       1274 ± 12%    +255.4%       4530 ± 27%  interrupts.CPU82.TLB:TLB_shootdowns
> >       6971 ±  3%     +23.8%       8628 ±  9%  interrupts.CPU83.CAL:Function_call_interrupts
> >       1156 ± 18%    +260.1%       4162 ± 34%  interrupts.CPU83.TLB:TLB_shootdowns
> >       7030 ±  4%     +21.0%       8504 ±  8%  interrupts.CPU84.CAL:Function_call_interrupts
> >       1286 ± 23%    +224.0%       4166 ± 31%  interrupts.CPU84.TLB:TLB_shootdowns
> >       7059           +22.4%       8644 ± 11%  interrupts.CPU85.CAL:Function_call_interrupts
> >       1421 ± 22%    +208.8%       4388 ± 33%  interrupts.CPU85.TLB:TLB_shootdowns
> >       7018 ±  2%     +22.8%       8615 ±  9%  interrupts.CPU86.CAL:Function_call_interrupts
> >       1258 ±  8%    +231.1%       4167 ± 34%  interrupts.CPU86.TLB:TLB_shootdowns
> >       1338 ±  3%    +217.9%       4255 ± 31%  interrupts.CPU87.TLB:TLB_shootdowns
> >       8376 ±  4%     -19.0%       6787 ±  2%  interrupts.CPU9.CAL:Function_call_interrupts
> >       4466 ± 17%     -71.2%       1286 ± 18%  interrupts.CPU9.TLB:TLB_shootdowns
> >
> >
> >
> >
> >
> > Disclaimer:
> > Results have been estimated based on internal Intel analysis and are provided
> > for informational purposes only. Any difference in system hardware or software
> > design or configuration may affect actual performance.
> >
> >
> > Thanks,
> > Oliver Sang
> >
