Date:   Fri, 8 Nov 2019 16:02:13 +0800
From:   kernel test robot <rong.a.chen@...el.com>
To:     Vincent Guittot <vincent.guittot@...aro.org>
Cc:     Ingo Molnar <mingo@...nel.org>, Ben Segall <bsegall@...gle.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Mel Gorman <mgorman@...e.de>, Mike Galbraith <efault@....de>,
        Peter Zijlstra <peterz@...radead.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        LKML <linux-kernel@...r.kernel.org>,
        Stephen Rothwell <sfr@...b.auug.org.au>, lkp@...ts.01.org
Subject: [sched/fair]  0b0695f2b3:  vm-scalability.median 3.1% improvement

Greetings,

FYI, we noticed a 3.1% improvement of vm-scalability.median due to commit:


commit: 0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912 ("sched/fair: Rework load_balance()")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master

in testcase: vm-scalability
on test machine: 104 threads Skylake with 192G memory
with the following parameters:

	runtime: 300s
	size: 8T
	test: anon-cow-seq
	cpufreq_governor: performance
	ucode: 0x2000064

test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
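
For anyone who wants to build the same kernel, a minimal sketch of checking out the reported commit in an existing linux-next working tree (the remote/fetch setup below is an assumption, not part of this report):

        # assumes an existing clone of the linux-next tree linked above, configured as 'origin'
        git fetch origin master
        git checkout 0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912   # "sched/fair: Rework load_balance()"
        # build, install and boot this kernel before running the reproduction steps below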

In addition, the commit has a significant impact on the following test:

+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 43.0% improvement        |
| test machine     | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters  | class=interrupt                                                       |
|                  | cpufreq_governor=performance                                          |
|                  | disk=1HDD                                                             |
|                  | nr_threads=100%                                                       |
|                  | testtime=30s                                                          |
|                  | ucode=0xb000038                                                       |
+------------------+-----------------------------------------------------------------------+
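
As a rough manual cross-check outside the lkp harness, the schedpolicy stressor can also be run directly with stress-ng; the flags below are only an approximation of the job above (which runs the whole "interrupt" class), not the robot's exact invocation:

        # approximate manual run: one schedpolicy worker per online CPU for 30s, with a throughput summary
        stress-ng --schedpolicy 0 --timeout 30s --metrics-brief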




Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # the job file is attached to this email
        bin/lkp run     job.yaml
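
Not part of the robot's recipe, but two quick sanity checks for the environment parameters listed above (standard sysfs/procfs paths):

        # confirm the cpufreq governor and microcode revision match the tested configuration
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # expect "performance"
        grep -m1 microcode /proc/cpuinfo                            # expect 0x2000064 on the test machine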

=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
  gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-09-23.cgz/300s/8T/lkp-skl-fpga01/anon-cow-seq/vm-scalability/0x2000064

commit: 
  fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
  0b0695f2b3 ("sched/fair: Rework load_balance()")

fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    413301            +3.1%     426103        vm-scalability.median
      0.04 ±  2%     -34.0%       0.03 ± 12%  vm-scalability.median_stddev
  43837589            +2.4%   44902458        vm-scalability.throughput
    181085           -18.7%     147221        vm-scalability.time.involuntary_context_switches
  12762365 ±  2%      +3.9%   13262025        vm-scalability.time.minor_page_faults
      7773            +2.9%       7997        vm-scalability.time.percent_of_cpu_this_job_got
     11449            +1.2%      11589        vm-scalability.time.system_time
     12024            +4.7%      12584        vm-scalability.time.user_time
    439194 ±  2%     +46.0%     641402 ±  2%  vm-scalability.time.voluntary_context_switches
 1.148e+10            +5.0%  1.206e+10        vm-scalability.workload
      0.00 ± 54%      +0.0        0.00 ± 17%  mpstat.cpu.all.iowait%
   4767597           +52.5%    7268430 ± 41%  numa-numastat.node1.local_node
   4781030           +52.3%    7280347 ± 41%  numa-numastat.node1.numa_hit
     24.75            -9.1%      22.50 ±  2%  vmstat.cpu.id
     37.50            +4.7%      39.25        vmstat.cpu.us
      6643 ±  3%     +15.1%       7647        vmstat.system.cs
  12220504           +33.4%   16298593 ±  4%  cpuidle.C1.time
    260215 ±  6%     +55.3%     404158 ±  3%  cpuidle.C1.usage
   4986034 ±  3%     +56.2%    7786811 ±  2%  cpuidle.POLL.time
    145941 ±  3%     +61.2%     235218 ±  2%  cpuidle.POLL.usage
      1990            +3.0%       2049        turbostat.Avg_MHz
    254633 ±  6%     +56.7%     398892 ±  4%  turbostat.C1
      0.04            +0.0        0.05        turbostat.C1%
    309.99            +1.5%     314.75        turbostat.RAMWatt
      1688 ± 11%     +17.4%       1983 ±  5%  slabinfo.UNIX.active_objs
      1688 ± 11%     +17.4%       1983 ±  5%  slabinfo.UNIX.num_objs
      2460 ±  3%     -15.8%       2072 ± 11%  slabinfo.dmaengine-unmap-16.active_objs
      2460 ±  3%     -15.8%       2072 ± 11%  slabinfo.dmaengine-unmap-16.num_objs
      2814 ±  9%     +14.6%       3225 ±  4%  slabinfo.sock_inode_cache.active_objs
      2814 ±  9%     +14.6%       3225 ±  4%  slabinfo.sock_inode_cache.num_objs
      0.67 ±  5%      +0.1        0.73 ±  3%  perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault
      0.68 ±  6%      +0.1        0.74 ±  2%  perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
      0.05            +0.0        0.07 ±  7%  perf-profile.children.cycles-pp.schedule
      0.06            +0.0        0.08 ±  6%  perf-profile.children.cycles-pp.__wake_up_common
      0.06 ±  7%      +0.0        0.08 ±  6%  perf-profile.children.cycles-pp.wake_up_page_bit
      0.23 ±  7%      +0.0        0.28 ±  5%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.drm_fb_helper_sys_imageblit
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.sys_imageblit
     29026 ±  3%     -26.7%      21283 ± 44%  numa-vmstat.node0.nr_inactive_anon
     30069 ±  3%     -20.5%      23905 ± 26%  numa-vmstat.node0.nr_shmem
     12120 ±  2%     -15.5%      10241 ± 12%  numa-vmstat.node0.nr_slab_reclaimable
     29026 ±  3%     -26.7%      21283 ± 44%  numa-vmstat.node0.nr_zone_inactive_anon
   4010893           +16.1%    4655889 ±  9%  numa-vmstat.node1.nr_active_anon
   3982581           +16.3%    4632344 ±  9%  numa-vmstat.node1.nr_anon_pages
      6861           +16.1%       7964 ±  8%  numa-vmstat.node1.nr_anon_transparent_hugepages
      2317 ± 42%    +336.9%      10125 ± 93%  numa-vmstat.node1.nr_inactive_anon
      6596 ±  4%     +18.2%       7799 ± 14%  numa-vmstat.node1.nr_kernel_stack
      9629 ±  8%     +66.4%      16020 ± 41%  numa-vmstat.node1.nr_shmem
      7558 ±  3%     +26.5%       9561 ± 14%  numa-vmstat.node1.nr_slab_reclaimable
   4010227           +16.1%    4655056 ±  9%  numa-vmstat.node1.nr_zone_active_anon
      2317 ± 42%    +336.9%      10125 ± 93%  numa-vmstat.node1.nr_zone_inactive_anon
   2859663 ±  2%     +46.2%    4179500 ± 36%  numa-vmstat.node1.numa_hit
   2680260 ±  2%     +49.3%    4002218 ± 37%  numa-vmstat.node1.numa_local
    116661 ±  3%     -26.3%      86010 ± 44%  numa-meminfo.node0.Inactive
    116192 ±  3%     -26.7%      85146 ± 44%  numa-meminfo.node0.Inactive(anon)
     48486 ±  2%     -15.5%      40966 ± 12%  numa-meminfo.node0.KReclaimable
     48486 ±  2%     -15.5%      40966 ± 12%  numa-meminfo.node0.SReclaimable
    120367 ±  3%     -20.5%      95642 ± 26%  numa-meminfo.node0.Shmem
  16210528           +15.2%   18673368 ±  6%  numa-meminfo.node1.Active
  16210394           +15.2%   18673287 ±  6%  numa-meminfo.node1.Active(anon)
  14170064           +15.6%   16379835 ±  7%  numa-meminfo.node1.AnonHugePages
  16113351           +15.3%   18577254 ±  7%  numa-meminfo.node1.AnonPages
     10534 ± 33%    +293.8%      41480 ± 92%  numa-meminfo.node1.Inactive
      9262 ± 42%    +338.2%      40589 ± 93%  numa-meminfo.node1.Inactive(anon)
     30235 ±  3%     +26.5%      38242 ± 14%  numa-meminfo.node1.KReclaimable
      6594 ±  4%     +18.3%       7802 ± 14%  numa-meminfo.node1.KernelStack
  17083646           +15.1%   19656922 ±  7%  numa-meminfo.node1.MemUsed
     30235 ±  3%     +26.5%      38242 ± 14%  numa-meminfo.node1.SReclaimable
     38540 ±  8%     +66.4%      64117 ± 42%  numa-meminfo.node1.Shmem
    106342           +19.8%     127451 ± 11%  numa-meminfo.node1.Slab
   9479688            +4.5%    9905902        proc-vmstat.nr_active_anon
   9434298            +4.5%    9856978        proc-vmstat.nr_anon_pages
     16194            +4.3%      16895        proc-vmstat.nr_anon_transparent_hugepages
    276.75            +3.6%     286.75        proc-vmstat.nr_dirtied
   3888633            -1.1%    3845882        proc-vmstat.nr_dirty_background_threshold
   7786774            -1.1%    7701168        proc-vmstat.nr_dirty_threshold
  39168820            -1.1%   38741444        proc-vmstat.nr_free_pages
     50391            +1.0%      50904        proc-vmstat.nr_slab_unreclaimable
    257.50            +3.6%     266.75        proc-vmstat.nr_written
   9479678            +4.5%    9905895        proc-vmstat.nr_zone_active_anon
   1501517            -5.9%    1412958        proc-vmstat.numa_hint_faults
   1075936           -13.1%     934706        proc-vmstat.numa_hint_faults_local
  17306395            +4.8%   18141722        proc-vmstat.numa_hit
   5211079            +4.2%    5427541        proc-vmstat.numa_huge_pte_updates
  17272620            +4.8%   18107691        proc-vmstat.numa_local
     33774            +0.8%      34031        proc-vmstat.numa_other
    690793 ±  3%     -13.7%     596166 ±  2%  proc-vmstat.numa_pages_migrated
 2.669e+09            +4.2%   2.78e+09        proc-vmstat.numa_pte_updates
 2.755e+09            +5.6%  2.909e+09        proc-vmstat.pgalloc_normal
  13573227 ±  2%      +3.6%   14060842        proc-vmstat.pgfault
 2.752e+09            +5.6%  2.906e+09        proc-vmstat.pgfree
 1.723e+08 ±  2%     +14.3%   1.97e+08 ±  8%  proc-vmstat.pgmigrate_fail
    690793 ±  3%     -13.7%     596166 ±  2%  proc-vmstat.pgmigrate_success
   5015265            +5.0%    5266730        proc-vmstat.thp_deferred_split_page
   5019661            +5.0%    5271482        proc-vmstat.thp_fault_alloc
     18284 ± 62%     -79.9%       3681 ±172%  sched_debug.cfs_rq:/.MIN_vruntime.avg
   1901618 ± 62%     -89.9%     192494 ±172%  sched_debug.cfs_rq:/.MIN_vruntime.max
    185571 ± 62%     -85.8%      26313 ±172%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
     15241 ±  6%     -36.6%       9655 ±  6%  sched_debug.cfs_rq:/.exec_clock.stddev
     18284 ± 62%     -79.9%       3681 ±172%  sched_debug.cfs_rq:/.max_vruntime.avg
   1901618 ± 62%     -89.9%     192494 ±172%  sched_debug.cfs_rq:/.max_vruntime.max
    185571 ± 62%     -85.8%      26313 ±172%  sched_debug.cfs_rq:/.max_vruntime.stddev
    898812 ±  7%     -31.2%     618552 ±  5%  sched_debug.cfs_rq:/.min_vruntime.stddev
     10.30 ± 12%     +34.5%      13.86 ±  6%  sched_debug.cfs_rq:/.nr_spread_over.avg
     34.75 ±  8%     +95.9%      68.08 ±  4%  sched_debug.cfs_rq:/.nr_spread_over.max
      9.12 ± 11%     +82.3%      16.62 ±  9%  sched_debug.cfs_rq:/.nr_spread_over.stddev
  -1470498           -31.9%   -1000709        sched_debug.cfs_rq:/.spread0.min
    899820 ±  7%     -31.2%     618970 ±  5%  sched_debug.cfs_rq:/.spread0.stddev
      1589 ±  9%     -19.2%       1284 ±  9%  sched_debug.cfs_rq:/.util_avg.max
      0.54 ± 39%   +7484.6%      41.08 ± 92%  sched_debug.cfs_rq:/.util_est_enqueued.min
    238.84 ±  8%     -33.2%     159.61 ± 26%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
     10787 ±  2%     +13.8%      12274        sched_debug.cpu.nr_switches.avg
     35242 ±  9%     +32.3%      46641 ± 25%  sched_debug.cpu.nr_switches.max
      9139 ±  3%     +16.4%      10636        sched_debug.cpu.sched_count.avg
     32025 ± 10%     +34.6%      43091 ± 27%  sched_debug.cpu.sched_count.max
      4016 ±  2%     +14.7%       4606 ±  5%  sched_debug.cpu.sched_count.min
      2960           +38.3%       4093        sched_debug.cpu.sched_goidle.avg
     11201 ± 24%     +75.8%      19691 ± 26%  sched_debug.cpu.sched_goidle.max
      1099 ±  6%     +56.9%       1725 ±  6%  sched_debug.cpu.sched_goidle.min
      1877 ± 10%     +32.5%       2487 ± 17%  sched_debug.cpu.sched_goidle.stddev
      4348 ±  3%     +19.3%       5188        sched_debug.cpu.ttwu_count.avg
     17832 ± 11%     +78.6%      31852 ± 29%  sched_debug.cpu.ttwu_count.max
      1699 ±  6%     +28.2%       2178 ±  7%  sched_debug.cpu.ttwu_count.min
      1357 ± 10%     -22.6%       1050 ±  4%  sched_debug.cpu.ttwu_local.avg
     11483 ±  5%     -25.0%       8614 ± 15%  sched_debug.cpu.ttwu_local.max
      1979 ± 12%     -36.8%       1251 ± 10%  sched_debug.cpu.ttwu_local.stddev
 3.941e+10            +5.0%  4.137e+10        perf-stat.i.branch-instructions
      0.02 ± 50%      -0.0        0.02 ±  5%  perf-stat.i.branch-miss-rate%
     67.94            -3.9       63.99        perf-stat.i.cache-miss-rate%
 8.329e+08            -1.9%   8.17e+08        perf-stat.i.cache-misses
 1.224e+09            +4.5%   1.28e+09        perf-stat.i.cache-references
      6650 ±  3%     +15.5%       7678        perf-stat.i.context-switches
      1.64            -1.8%       1.61        perf-stat.i.cpi
 2.037e+11            +2.8%  2.095e+11        perf-stat.i.cpu-cycles
    257.56            -4.0%     247.13        perf-stat.i.cpu-migrations
    244.94            +4.5%     255.91        perf-stat.i.cycles-between-cache-misses
   1189446 ±  2%      +3.2%    1227527        perf-stat.i.dTLB-load-misses
 2.669e+10            +4.7%  2.794e+10        perf-stat.i.dTLB-loads
      0.00 ±  7%      -0.0        0.00        perf-stat.i.dTLB-store-miss-rate%
    337782            +4.5%     353044        perf-stat.i.dTLB-store-misses
 9.096e+09            +4.7%  9.526e+09        perf-stat.i.dTLB-stores
     39.50            +2.1       41.64        perf-stat.i.iTLB-load-miss-rate%
    296305 ±  2%      +9.0%     323020        perf-stat.i.iTLB-load-misses
 1.238e+11            +4.9%  1.299e+11        perf-stat.i.instructions
    428249 ±  2%      -4.4%     409553        perf-stat.i.instructions-per-iTLB-miss
      0.61            +1.6%       0.62        perf-stat.i.ipc
     44430            +3.8%      46121        perf-stat.i.minor-faults
     54.82            +3.9       58.73        perf-stat.i.node-load-miss-rate%
  68519419 ±  4%     -11.7%   60479057 ±  6%  perf-stat.i.node-load-misses
  49879161 ±  3%     -20.7%   39554915 ±  4%  perf-stat.i.node-loads
     44428            +3.8%      46119        perf-stat.i.page-faults
      0.02            -0.0        0.01 ±  5%  perf-stat.overall.branch-miss-rate%
     68.03            -4.2       63.83        perf-stat.overall.cache-miss-rate%
      1.65            -2.0%       1.61        perf-stat.overall.cpi
    244.61            +4.8%     256.41        perf-stat.overall.cycles-between-cache-misses
     30.21            +2.2       32.38        perf-stat.overall.iTLB-load-miss-rate%
    417920 ±  2%      -3.7%     402452        perf-stat.overall.instructions-per-iTLB-miss
      0.61            +2.1%       0.62        perf-stat.overall.ipc
     57.84            +2.6       60.44        perf-stat.overall.node-load-miss-rate%
 3.925e+10            +5.1%  4.124e+10        perf-stat.ps.branch-instructions
 8.295e+08            -1.8%  8.144e+08        perf-stat.ps.cache-misses
 1.219e+09            +4.6%  1.276e+09        perf-stat.ps.cache-references
      6625 ±  3%     +15.4%       7648        perf-stat.ps.context-switches
 2.029e+11            +2.9%  2.088e+11        perf-stat.ps.cpu-cycles
    256.82            -4.2%     246.09        perf-stat.ps.cpu-migrations
   1184763 ±  2%      +3.3%    1223366        perf-stat.ps.dTLB-load-misses
 2.658e+10            +4.8%  2.786e+10        perf-stat.ps.dTLB-loads
    336658            +4.5%     351710        perf-stat.ps.dTLB-store-misses
 9.059e+09            +4.8%  9.497e+09        perf-stat.ps.dTLB-stores
    295140 ±  2%      +9.0%     321824        perf-stat.ps.iTLB-load-misses
 1.233e+11            +5.0%  1.295e+11        perf-stat.ps.instructions
     44309            +3.7%      45933        perf-stat.ps.minor-faults
  68208972 ±  4%     -11.6%   60272675 ±  6%  perf-stat.ps.node-load-misses
  49689740 ±  3%     -20.7%   39401789 ±  4%  perf-stat.ps.node-loads
     44308            +3.7%      45932        perf-stat.ps.page-faults
 3.732e+13            +5.1%  3.922e+13        perf-stat.total.instructions
     14949 ±  2%     +14.5%      17124 ± 11%  softirqs.CPU0.SCHED
      9940           +37.8%      13700 ± 24%  softirqs.CPU1.SCHED
      9370 ±  2%     +28.2%      12014 ± 16%  softirqs.CPU10.SCHED
     17637 ±  2%     -16.5%      14733 ± 16%  softirqs.CPU101.SCHED
     17846 ±  3%     -17.4%      14745 ± 16%  softirqs.CPU103.SCHED
      9552           +24.7%      11916 ± 17%  softirqs.CPU11.SCHED
      9210 ±  5%     +27.9%      11784 ± 16%  softirqs.CPU12.SCHED
      9378 ±  3%     +27.7%      11974 ± 16%  softirqs.CPU13.SCHED
      9164 ±  2%     +29.4%      11856 ± 18%  softirqs.CPU14.SCHED
      9215           +21.2%      11170 ± 19%  softirqs.CPU15.SCHED
      9118 ±  2%     +29.1%      11772 ± 16%  softirqs.CPU16.SCHED
      9413           +29.2%      12165 ± 18%  softirqs.CPU17.SCHED
      9309 ±  2%     +29.9%      12097 ± 17%  softirqs.CPU18.SCHED
      9423           +26.1%      11880 ± 15%  softirqs.CPU19.SCHED
      9010 ±  7%     +37.8%      12420 ± 18%  softirqs.CPU2.SCHED
      9382 ±  3%     +27.0%      11916 ± 15%  softirqs.CPU20.SCHED
      9102 ±  4%     +30.0%      11830 ± 16%  softirqs.CPU21.SCHED
      9543 ±  3%     +23.4%      11780 ± 18%  softirqs.CPU22.SCHED
      8998 ±  5%     +29.2%      11630 ± 18%  softirqs.CPU24.SCHED
      9254 ±  2%     +23.9%      11462 ± 19%  softirqs.CPU25.SCHED
     18450 ±  4%     -16.9%      15341 ± 16%  softirqs.CPU26.SCHED
     17551 ±  4%     -14.8%      14956 ± 13%  softirqs.CPU27.SCHED
     17575 ±  4%     -14.6%      15010 ± 14%  softirqs.CPU28.SCHED
     17515 ±  5%     -14.2%      15021 ± 13%  softirqs.CPU29.SCHED
     17715 ±  2%     -16.1%      14856 ± 13%  softirqs.CPU30.SCHED
     17754 ±  4%     -16.1%      14904 ± 13%  softirqs.CPU31.SCHED
     17675 ±  2%     -17.0%      14679 ± 21%  softirqs.CPU32.SCHED
     17625 ±  2%     -16.0%      14813 ± 13%  softirqs.CPU34.SCHED
     17619 ±  2%     -14.7%      15024 ± 14%  softirqs.CPU35.SCHED
     17887 ±  3%     -17.0%      14841 ± 14%  softirqs.CPU36.SCHED
     17658 ±  3%     -16.3%      14771 ± 12%  softirqs.CPU38.SCHED
     17501 ±  2%     -15.3%      14816 ± 14%  softirqs.CPU39.SCHED
      9360 ±  2%     +25.4%      11740 ± 14%  softirqs.CPU4.SCHED
     17699 ±  4%     -16.2%      14827 ± 14%  softirqs.CPU42.SCHED
     17580 ±  3%     -16.5%      14679 ± 15%  softirqs.CPU43.SCHED
     17658 ±  3%     -17.1%      14644 ± 14%  softirqs.CPU44.SCHED
     17452 ±  4%     -14.0%      15001 ± 15%  softirqs.CPU46.SCHED
     17599 ±  4%     -17.4%      14544 ± 14%  softirqs.CPU47.SCHED
     17792 ±  3%     -16.5%      14864 ± 14%  softirqs.CPU48.SCHED
     17333 ±  2%     -16.7%      14445 ± 14%  softirqs.CPU49.SCHED
      9483           +32.3%      12547 ± 24%  softirqs.CPU5.SCHED
     17842 ±  3%     -15.9%      14997 ± 16%  softirqs.CPU51.SCHED
      9051 ±  2%     +23.3%      11160 ± 13%  softirqs.CPU52.SCHED
      9385 ±  3%     +25.2%      11752 ± 16%  softirqs.CPU53.SCHED
      9446 ±  6%     +24.9%      11798 ± 14%  softirqs.CPU54.SCHED
     10006 ±  6%     +22.4%      12249 ± 14%  softirqs.CPU55.SCHED
      9657           +22.0%      11780 ± 16%  softirqs.CPU57.SCHED
      9399           +27.5%      11980 ± 15%  softirqs.CPU58.SCHED
      9234 ±  3%     +27.7%      11795 ± 14%  softirqs.CPU59.SCHED
      9726 ±  6%     +24.0%      12062 ± 16%  softirqs.CPU6.SCHED
      9165 ±  2%     +23.7%      11342 ± 14%  softirqs.CPU60.SCHED
      9357 ±  2%     +25.8%      11774 ± 15%  softirqs.CPU61.SCHED
      9406 ±  3%     +25.2%      11780 ± 16%  softirqs.CPU62.SCHED
      9489           +23.2%      11688 ± 15%  softirqs.CPU63.SCHED
      9399 ±  2%     +23.5%      11604 ± 16%  softirqs.CPU65.SCHED
      8950 ±  2%     +31.6%      11774 ± 16%  softirqs.CPU66.SCHED
      9260           +21.7%      11267 ± 19%  softirqs.CPU67.SCHED
      9187           +27.1%      11672 ± 17%  softirqs.CPU68.SCHED
      9443 ±  2%     +25.5%      11847 ± 17%  softirqs.CPU69.SCHED
      9144 ±  3%     +28.0%      11706 ± 16%  softirqs.CPU7.SCHED
      9276 ±  2%     +28.0%      11871 ± 17%  softirqs.CPU70.SCHED
      9494           +21.4%      11526 ± 14%  softirqs.CPU71.SCHED
      9124 ±  3%     +27.8%      11657 ± 17%  softirqs.CPU72.SCHED
      9189 ±  3%     +25.9%      11568 ± 16%  softirqs.CPU73.SCHED
      9392 ±  2%     +23.7%      11619 ± 16%  softirqs.CPU74.SCHED
     17821 ±  3%     -14.7%      15197 ± 17%  softirqs.CPU78.SCHED
     17581 ±  2%     -15.7%      14827 ± 15%  softirqs.CPU79.SCHED
      9123           +28.2%      11695 ± 15%  softirqs.CPU8.SCHED
     17524 ±  2%     -16.7%      14601 ± 14%  softirqs.CPU80.SCHED
     17644 ±  3%     -16.2%      14782 ± 14%  softirqs.CPU81.SCHED
     17705 ±  3%     -18.6%      14414 ± 22%  softirqs.CPU84.SCHED
     17679 ±  2%     -14.1%      15185 ± 11%  softirqs.CPU85.SCHED
     17434 ±  3%     -15.5%      14724 ± 14%  softirqs.CPU86.SCHED
     17409 ±  2%     -15.0%      14794 ± 13%  softirqs.CPU87.SCHED
     17470 ±  3%     -15.7%      14730 ± 13%  softirqs.CPU88.SCHED
     17748 ±  4%     -17.1%      14721 ± 12%  softirqs.CPU89.SCHED
      9323           +28.0%      11929 ± 17%  softirqs.CPU9.SCHED
     17471 ±  2%     -16.9%      14525 ± 13%  softirqs.CPU90.SCHED
     17900 ±  3%     -17.0%      14850 ± 14%  softirqs.CPU94.SCHED
     17599 ±  4%     -17.4%      14544 ± 15%  softirqs.CPU95.SCHED
     17697 ±  4%     -17.7%      14569 ± 13%  softirqs.CPU96.SCHED
     17561 ±  3%     -15.1%      14901 ± 13%  softirqs.CPU97.SCHED
     17404 ±  3%     -16.1%      14601 ± 13%  softirqs.CPU98.SCHED
     17802 ±  3%     -19.4%      14344 ± 15%  softirqs.CPU99.SCHED
      1310 ± 10%     -17.0%       1088 ±  5%  interrupts.CPU1.RES:Rescheduling_interrupts
      3427           +13.3%       3883 ±  9%  interrupts.CPU10.CAL:Function_call_interrupts
    736.50 ± 20%     +34.4%     989.75 ± 17%  interrupts.CPU100.RES:Rescheduling_interrupts
      3421 ±  3%     +14.6%       3921 ±  9%  interrupts.CPU101.CAL:Function_call_interrupts
      4873 ±  8%     +16.2%       5662 ±  7%  interrupts.CPU101.NMI:Non-maskable_interrupts
      4873 ±  8%     +16.2%       5662 ±  7%  interrupts.CPU101.PMI:Performance_monitoring_interrupts
    629.50 ± 19%     +83.2%       1153 ± 46%  interrupts.CPU101.RES:Rescheduling_interrupts
    661.75 ± 14%     +25.7%     832.00 ± 13%  interrupts.CPU102.RES:Rescheduling_interrupts
      4695 ±  5%     +15.5%       5420 ±  9%  interrupts.CPU103.NMI:Non-maskable_interrupts
      4695 ±  5%     +15.5%       5420 ±  9%  interrupts.CPU103.PMI:Performance_monitoring_interrupts
      3460           +12.1%       3877 ±  9%  interrupts.CPU11.CAL:Function_call_interrupts
    691.50 ±  7%     +41.0%     975.00 ± 32%  interrupts.CPU19.RES:Rescheduling_interrupts
      3413 ±  2%     +13.4%       3870 ± 10%  interrupts.CPU20.CAL:Function_call_interrupts
      3413 ±  2%     +13.4%       3871 ± 10%  interrupts.CPU22.CAL:Function_call_interrupts
    863.00 ± 36%     +45.3%       1254 ± 24%  interrupts.CPU23.RES:Rescheduling_interrupts
    659.75 ± 12%     +83.4%       1209 ± 20%  interrupts.CPU26.RES:Rescheduling_interrupts
    615.00 ± 10%     +87.8%       1155 ± 14%  interrupts.CPU27.RES:Rescheduling_interrupts
    663.75 ±  5%     +67.9%       1114 ±  7%  interrupts.CPU28.RES:Rescheduling_interrupts
      3421 ±  4%     +13.4%       3879 ±  9%  interrupts.CPU29.CAL:Function_call_interrupts
    805.25 ± 16%     +33.0%       1071 ± 15%  interrupts.CPU29.RES:Rescheduling_interrupts
      3482 ±  3%     +11.0%       3864 ±  8%  interrupts.CPU3.CAL:Function_call_interrupts
    819.75 ± 19%     +48.4%       1216 ± 12%  interrupts.CPU30.RES:Rescheduling_interrupts
    777.25 ±  8%     +31.6%       1023 ±  6%  interrupts.CPU31.RES:Rescheduling_interrupts
    844.50 ± 25%     +41.7%       1196 ± 20%  interrupts.CPU32.RES:Rescheduling_interrupts
    722.75 ± 14%     +94.2%       1403 ± 26%  interrupts.CPU33.RES:Rescheduling_interrupts
      3944 ± 25%     +36.8%       5394 ±  9%  interrupts.CPU34.NMI:Non-maskable_interrupts
      3944 ± 25%     +36.8%       5394 ±  9%  interrupts.CPU34.PMI:Performance_monitoring_interrupts
    781.75 ±  9%     +45.3%       1136 ± 27%  interrupts.CPU34.RES:Rescheduling_interrupts
    735.50 ±  9%     +33.3%     980.75 ±  4%  interrupts.CPU35.RES:Rescheduling_interrupts
    691.75 ± 10%     +41.6%     979.50 ± 13%  interrupts.CPU36.RES:Rescheduling_interrupts
    727.00 ± 16%     +47.7%       1074 ± 15%  interrupts.CPU37.RES:Rescheduling_interrupts
      4413 ±  7%     +24.9%       5511 ±  9%  interrupts.CPU38.NMI:Non-maskable_interrupts
      4413 ±  7%     +24.9%       5511 ±  9%  interrupts.CPU38.PMI:Performance_monitoring_interrupts
    708.75 ± 25%     +62.6%       1152 ± 22%  interrupts.CPU38.RES:Rescheduling_interrupts
    666.50 ±  7%     +57.8%       1052 ± 13%  interrupts.CPU39.RES:Rescheduling_interrupts
    765.75 ± 11%     +25.2%     958.75 ± 14%  interrupts.CPU4.RES:Rescheduling_interrupts
      3395 ±  2%     +15.1%       3908 ± 10%  interrupts.CPU40.CAL:Function_call_interrupts
    770.00 ± 16%     +45.3%       1119 ± 18%  interrupts.CPU40.RES:Rescheduling_interrupts
    740.50 ± 26%     +61.9%       1198 ± 19%  interrupts.CPU41.RES:Rescheduling_interrupts
      3459 ±  2%     +12.9%       3905 ± 11%  interrupts.CPU42.CAL:Function_call_interrupts
      4530 ±  5%     +22.8%       5564 ±  9%  interrupts.CPU42.NMI:Non-maskable_interrupts
      4530 ±  5%     +22.8%       5564 ±  9%  interrupts.CPU42.PMI:Performance_monitoring_interrupts
      3330 ± 25%     +60.0%       5328 ± 10%  interrupts.CPU44.NMI:Non-maskable_interrupts
      3330 ± 25%     +60.0%       5328 ± 10%  interrupts.CPU44.PMI:Performance_monitoring_interrupts
    686.25 ±  9%     +48.4%       1018 ± 10%  interrupts.CPU44.RES:Rescheduling_interrupts
    702.00 ± 15%     +38.6%     973.25 ±  5%  interrupts.CPU45.RES:Rescheduling_interrupts
      4742 ±  7%     +19.3%       5657 ±  8%  interrupts.CPU46.NMI:Non-maskable_interrupts
      4742 ±  7%     +19.3%       5657 ±  8%  interrupts.CPU46.PMI:Performance_monitoring_interrupts
    732.75 ±  6%     +51.9%       1113 ±  7%  interrupts.CPU46.RES:Rescheduling_interrupts
    775.50 ± 17%     +41.3%       1095 ±  6%  interrupts.CPU47.RES:Rescheduling_interrupts
    670.75 ±  5%     +60.7%       1078 ±  6%  interrupts.CPU48.RES:Rescheduling_interrupts
      4870 ±  8%     +16.5%       5676 ±  7%  interrupts.CPU49.NMI:Non-maskable_interrupts
      4870 ±  8%     +16.5%       5676 ±  7%  interrupts.CPU49.PMI:Performance_monitoring_interrupts
    694.75 ± 12%     +25.8%     874.00 ± 11%  interrupts.CPU49.RES:Rescheduling_interrupts
    686.00 ±  9%     +52.0%       1042 ± 20%  interrupts.CPU50.RES:Rescheduling_interrupts
      3361           +17.2%       3938 ±  9%  interrupts.CPU51.CAL:Function_call_interrupts
      4707 ±  6%     +16.0%       5463 ±  8%  interrupts.CPU51.NMI:Non-maskable_interrupts
      4707 ±  6%     +16.0%       5463 ±  8%  interrupts.CPU51.PMI:Performance_monitoring_interrupts
    638.75 ± 12%     +28.6%     821.25 ± 15%  interrupts.CPU54.RES:Rescheduling_interrupts
    677.50 ±  8%     +51.8%       1028 ± 29%  interrupts.CPU58.RES:Rescheduling_interrupts
      3465 ±  2%     +12.0%       3880 ±  9%  interrupts.CPU6.CAL:Function_call_interrupts
    641.25 ±  2%     +26.1%     808.75 ± 10%  interrupts.CPU60.RES:Rescheduling_interrupts
    599.75 ±  2%     +45.6%     873.50 ±  8%  interrupts.CPU62.RES:Rescheduling_interrupts
    661.50 ±  9%     +52.4%       1008 ± 27%  interrupts.CPU63.RES:Rescheduling_interrupts
    611.00 ± 12%     +31.1%     801.00 ± 13%  interrupts.CPU69.RES:Rescheduling_interrupts
      3507 ±  2%     +10.8%       3888 ±  9%  interrupts.CPU7.CAL:Function_call_interrupts
    664.00 ±  5%     +32.3%     878.50 ± 23%  interrupts.CPU70.RES:Rescheduling_interrupts
      5780 ±  9%     -38.8%       3540 ± 37%  interrupts.CPU73.NMI:Non-maskable_interrupts
      5780 ±  9%     -38.8%       3540 ± 37%  interrupts.CPU73.PMI:Performance_monitoring_interrupts
      5787 ±  9%     -26.7%       4243 ± 28%  interrupts.CPU76.NMI:Non-maskable_interrupts
      5787 ±  9%     -26.7%       4243 ± 28%  interrupts.CPU76.PMI:Performance_monitoring_interrupts
    751.50 ± 15%     +88.0%       1413 ± 37%  interrupts.CPU78.RES:Rescheduling_interrupts
    725.50 ± 12%     +82.9%       1327 ± 36%  interrupts.CPU79.RES:Rescheduling_interrupts
    714.00 ± 18%     +33.2%     951.00 ± 15%  interrupts.CPU80.RES:Rescheduling_interrupts
    706.25 ± 19%     +55.6%       1098 ± 27%  interrupts.CPU82.RES:Rescheduling_interrupts
      4524 ±  6%     +19.6%       5409 ±  8%  interrupts.CPU83.NMI:Non-maskable_interrupts
      4524 ±  6%     +19.6%       5409 ±  8%  interrupts.CPU83.PMI:Performance_monitoring_interrupts
    666.75 ± 15%     +37.3%     915.50 ±  4%  interrupts.CPU83.RES:Rescheduling_interrupts
    782.50 ± 26%     +57.6%       1233 ± 21%  interrupts.CPU84.RES:Rescheduling_interrupts
    622.75 ± 12%     +77.8%       1107 ± 17%  interrupts.CPU85.RES:Rescheduling_interrupts
      3465 ±  3%     +13.5%       3933 ±  9%  interrupts.CPU86.CAL:Function_call_interrupts
    714.75 ± 14%     +47.0%       1050 ± 10%  interrupts.CPU86.RES:Rescheduling_interrupts
      3519 ±  2%     +11.7%       3929 ±  9%  interrupts.CPU87.CAL:Function_call_interrupts
    582.75 ± 10%     +54.2%     898.75 ± 11%  interrupts.CPU87.RES:Rescheduling_interrupts
    713.00 ± 10%     +36.6%     974.25 ± 11%  interrupts.CPU88.RES:Rescheduling_interrupts
    690.50 ± 13%     +53.0%       1056 ± 13%  interrupts.CPU89.RES:Rescheduling_interrupts
      3477           +11.0%       3860 ±  8%  interrupts.CPU9.CAL:Function_call_interrupts
    684.50 ± 14%     +39.7%     956.25 ± 11%  interrupts.CPU90.RES:Rescheduling_interrupts
      3946 ± 21%     +39.8%       5516 ± 10%  interrupts.CPU91.NMI:Non-maskable_interrupts
      3946 ± 21%     +39.8%       5516 ± 10%  interrupts.CPU91.PMI:Performance_monitoring_interrupts
    649.00 ± 13%     +54.3%       1001 ±  6%  interrupts.CPU91.RES:Rescheduling_interrupts
    674.25 ± 21%     +39.5%     940.25 ± 11%  interrupts.CPU92.RES:Rescheduling_interrupts
      3971 ± 26%     +41.2%       5606 ±  8%  interrupts.CPU94.NMI:Non-maskable_interrupts
      3971 ± 26%     +41.2%       5606 ±  8%  interrupts.CPU94.PMI:Performance_monitoring_interrupts
      4129 ± 22%     +33.2%       5499 ±  9%  interrupts.CPU95.NMI:Non-maskable_interrupts
      4129 ± 22%     +33.2%       5499 ±  9%  interrupts.CPU95.PMI:Performance_monitoring_interrupts
    685.75 ± 14%     +38.0%     946.50 ±  9%  interrupts.CPU96.RES:Rescheduling_interrupts
      4630 ± 11%     +18.3%       5477 ±  8%  interrupts.CPU97.NMI:Non-maskable_interrupts
      4630 ± 11%     +18.3%       5477 ±  8%  interrupts.CPU97.PMI:Performance_monitoring_interrupts
      4835 ±  9%     +16.3%       5622 ±  9%  interrupts.CPU98.NMI:Non-maskable_interrupts
      4835 ±  9%     +16.3%       5622 ±  9%  interrupts.CPU98.PMI:Performance_monitoring_interrupts
    596.25 ± 11%     +81.8%       1083 ±  9%  interrupts.CPU98.RES:Rescheduling_interrupts
    674.75 ± 17%     +43.7%     969.50 ±  5%  interrupts.CPU99.RES:Rescheduling_interrupts
     78.25 ± 13%     +21.4%      95.00 ± 10%  interrupts.IWI:IRQ_work_interrupts
     85705 ±  6%     +26.0%     107990 ±  6%  interrupts.RES:Rescheduling_interrupts


                                                                                
                               vm-scalability.throughput                        
                                                                                
  4.55e+07 +-+--------------------------------------------------------------+   
           |             O                                                  |   
   4.5e+07 +-+   O O   O         O   O   O  O                               |   
           O O       O     O   O                O                           |   
           |   O             O     O   O                                    |   
  4.45e+07 +-+                                O                             |   
           |                                                                |   
   4.4e+07 +-+                                                           .+.|   
           |                                                         .+.+   |   
  4.35e+07 +-+       +. .+.      +. .+.                      .+.    +       |   
           |.+.+.   :  +   +.   +  +   +.+..+.+   +   +   +.+   +. +        |   
           |     +. :        +.+               + + + + + +        +         |   
   4.3e+07 +-+     +                            +   +   +                   |   
           |                                                                |   
  4.25e+07 +-+--------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                            vm-scalability.time.user_time                       
                                                                                
  13000 +-+-----------------------------------------------------------------+   
        O     O          O                                                  |   
  12800 +-+                                                                 |   
        |        O O                                                        |   
        |   O              O O O  O O O     O                               |   
  12600 +-O          O                  O O   O                             |   
        |              O                                                    |   
  12400 +-+                                                                 |   
        |                                                                   |   
  12200 +-+                                                                 |   
        |     +..                                                           |   
        |    +          .+.               +.+.+.  .+. .+.+.+.          .+.+.|   
  12000 +-+.+    +.+. .+   +.+.+..+.+.+  :      +.   +       +.+.+..  +     |   
        |            +                 + :                           +      |   
  11800 +-+-----------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                  vm-scalability.time.percent_of_cpu_this_job_got               
                                                                                
  8050 O-+-------------------O----------------------------------------------+   
       |   O O    O     O O                 O                               |   
  8000 +-O      O   O          O O O O O  O                                 |   
       |              O                       O                             |   
  7950 +-+                                                                  |   
       |                                                                    |   
  7900 +-+                                                                  |   
       |                                                                    |   
  7850 +-+                                                                  |   
       |                +                                                   |   
  7800 +-+   +..+.+     ::       +          +.            .+            +.  |   
       |.   +      :   : :   +  : +        :  +. .+. .+..+  :   .+.    +  +.|   
  7750 +-+.+       :   :  :.. + :  +.      :    +   +       : .+   +..+     |   
       |            +.+   +    +     +.+..+                  +              |   
  7700 +-+------------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                  vm-scalability.time.involuntary_context_switches              
                                                                                
  190000 +-+----------------------------------------------------------------+   
         |              +.      +                                 +.+    .+.|   
  180000 +-+           :  +.   + +                 +   +. .+    ..   + .+   |   
         |    .+.      :    +.+   +.+.+     .+.   : : :  +  :  +      +     |   
         |   +   +.  .+                +  .+   +. : : :     : +             |   
  170000 +-++      +.                   +.       +   +       +              |   
         | +                                                                |   
  160000 +-+                                                                |   
         |                                                                  |   
  150000 +-+                                                                |   
         | O                      O   O O  O O O                            |   
         |   O                  O   O                                       |   
  140000 O-+   O O O  O   O O O                                             |   
         |              O                                                   |   
  130000 +-+----------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                                vm-scalability.median                           
                                                                                
  430000 +-+----------O---------O---O---------------------------------------+   
         O     O   O    O               O  O                                |   
  425000 +-O     O        O   O                O                            |   
         |   O              O     O   O      O                              |   
         |                                                                  |   
  420000 +-+                                                                |   
         |         +                                                        |   
  415000 +-+       ::     +                                              .+ |   
         |        :  :   + :      +                                  .+.+  +|   
  410000 +-+  .+  :  : .+  :     + +   .+..+.             .+.+.    .+       |   
         |.+.+  +:    +     +. .+   +.+      +   +. .+. .+     +..+         |   
         |       +            +               + +  +   +                    |   
  405000 +-+                                   +                            |   
         |                                                                  |   
  400000 +-+----------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                                vm-scalability.workload                         
                                                                                
  1.23e+10 +-+---O----------------------------------------------------------+   
  1.22e+10 +-+                                                              |   
           O       O O O   O   O O   O                                      |   
  1.21e+10 +-O O         O   O     O   O O  O O O                           |   
   1.2e+10 +-+                                                              |   
           |                                                                |   
  1.19e+10 +-+                                                              |   
  1.18e+10 +-+                                                              |   
  1.17e+10 +-+                                                              |   
           |                                                                |   
  1.16e+10 +-+                                                              |   
  1.15e+10 +-+   +   +   +.+       +.+      +.+     +   +.+.+           +.+.|   
           |    + + + + +   +     +   +   ..   +   + + +     +         +    |   
  1.14e+10 +-+.+   +   +     +.+.+     +.+      +.+   +       +.+.+.+.+     |   
  1.13e+10 +-+--------------------------------------------------------------+   
                                                                                
                                                                                
[*] bisect-good sample
[O] bisect-bad  sample

***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
  interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-09-23.cgz/lkp-bdw-ep6/stress-ng/30s/0xb000038

commit: 
  fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
  0b0695f2b3 ("sched/fair: Rework load_balance()")

fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
  98318389           +43.0%  1.406e+08        stress-ng.schedpolicy.ops
   3277346           +43.0%    4685146        stress-ng.schedpolicy.ops_per_sec
 3.506e+08 ±  4%     -10.3%  3.146e+08 ±  3%  stress-ng.sigq.ops
  11684738 ±  4%     -10.3%   10485353 ±  3%  stress-ng.sigq.ops_per_sec
 3.628e+08 ±  6%     -19.4%  2.925e+08 ±  6%  stress-ng.time.involuntary_context_switches
     29456            +2.8%      30285        stress-ng.time.system_time
   7636655 ±  9%     +46.6%   11197377 ± 27%  cpuidle.C1E.usage
   1111483 ±  3%      -9.5%    1005829        vmstat.system.cs
  22638222 ±  4%     +16.5%   26370816 ± 11%  meminfo.Committed_AS
     28908 ±  6%     +24.6%      36020 ± 16%  meminfo.KernelStack
   7636543 ±  9%     +46.6%   11196090 ± 27%  turbostat.C1E
      3.46 ± 16%     -61.2%       1.35 ±  7%  turbostat.Pkg%pc2
    217.54            +1.7%     221.33        turbostat.PkgWatt
     13.34 ±  2%      +5.8%      14.11        turbostat.RAMWatt
    525.50 ±  8%     -15.7%     443.00 ± 12%  slabinfo.biovec-128.active_objs
    525.50 ±  8%     -15.7%     443.00 ± 12%  slabinfo.biovec-128.num_objs
     28089 ± 12%     -33.0%      18833 ± 22%  slabinfo.pool_workqueue.active_objs
    877.25 ± 12%     -32.6%     591.00 ± 21%  slabinfo.pool_workqueue.active_slabs
     28089 ± 12%     -32.6%      18925 ± 21%  slabinfo.pool_workqueue.num_objs
    877.25 ± 12%     -32.6%     591.00 ± 21%  slabinfo.pool_workqueue.num_slabs
    846.75 ±  6%     -18.0%     694.75 ±  9%  slabinfo.skbuff_fclone_cache.active_objs
    846.75 ±  6%     -18.0%     694.75 ±  9%  slabinfo.skbuff_fclone_cache.num_objs
     63348 ±  6%     -20.7%      50261 ±  4%  softirqs.CPU14.SCHED
     44394 ±  4%     +21.4%      53880 ±  8%  softirqs.CPU42.SCHED
     52246 ±  7%     -15.1%      44352        softirqs.CPU47.SCHED
     58350 ±  4%     -11.0%      51914 ±  7%  softirqs.CPU6.SCHED
     58009 ±  7%     -23.8%      44206 ±  4%  softirqs.CPU63.SCHED
     49166 ±  6%     +23.4%      60683 ±  9%  softirqs.CPU68.SCHED
     44594 ±  7%     +14.3%      50951 ±  8%  softirqs.CPU78.SCHED
     46407 ±  9%     +19.6%      55515 ±  8%  softirqs.CPU84.SCHED
     55555 ±  8%     -15.5%      46933 ±  4%  softirqs.CPU9.SCHED
    198757 ± 18%     +44.1%     286316 ±  9%  numa-meminfo.node0.Active
    189280 ± 19%     +37.1%     259422 ±  7%  numa-meminfo.node0.Active(anon)
    110438 ± 33%     +68.3%     185869 ± 16%  numa-meminfo.node0.AnonHugePages
    143458 ± 28%     +67.7%     240547 ± 13%  numa-meminfo.node0.AnonPages
     12438 ± 16%     +61.9%      20134 ± 37%  numa-meminfo.node0.KernelStack
   1004379 ±  7%     +16.4%    1168764 ±  4%  numa-meminfo.node0.MemUsed
    357111 ± 24%     -41.6%     208655 ± 29%  numa-meminfo.node1.Active
    330094 ± 22%     -39.6%     199339 ± 32%  numa-meminfo.node1.Active(anon)
    265924 ± 25%     -52.2%     127138 ± 46%  numa-meminfo.node1.AnonHugePages
    314059 ± 22%     -49.6%     158305 ± 36%  numa-meminfo.node1.AnonPages
     15386 ± 16%     -25.1%      11525 ± 15%  numa-meminfo.node1.KernelStack
   1200805 ± 11%     -18.6%     977595 ±  7%  numa-meminfo.node1.MemUsed
    965.50 ± 15%     -29.3%     682.25 ± 43%  numa-meminfo.node1.Mlocked
     46762 ± 18%     +37.8%      64452 ±  8%  numa-vmstat.node0.nr_active_anon
     35393 ± 27%     +68.9%      59793 ± 12%  numa-vmstat.node0.nr_anon_pages
     52.75 ± 33%     +71.1%      90.25 ± 15%  numa-vmstat.node0.nr_anon_transparent_hugepages
     15.00 ± 96%    +598.3%     104.75 ± 15%  numa-vmstat.node0.nr_inactive_file
     11555 ± 22%     +68.9%      19513 ± 41%  numa-vmstat.node0.nr_kernel_stack
    550.25 ±162%    +207.5%       1691 ± 48%  numa-vmstat.node0.nr_written
     46762 ± 18%     +37.8%      64452 ±  8%  numa-vmstat.node0.nr_zone_active_anon
     15.00 ± 96%    +598.3%     104.75 ± 15%  numa-vmstat.node0.nr_zone_inactive_file
     82094 ± 22%     -39.5%      49641 ± 32%  numa-vmstat.node1.nr_active_anon
     78146 ± 23%     -49.5%      39455 ± 37%  numa-vmstat.node1.nr_anon_pages
    129.00 ± 25%     -52.3%      61.50 ± 47%  numa-vmstat.node1.nr_anon_transparent_hugepages
    107.75 ± 12%     -85.4%      15.75 ±103%  numa-vmstat.node1.nr_inactive_file
     14322 ± 11%     -21.1%      11304 ± 11%  numa-vmstat.node1.nr_kernel_stack
    241.00 ± 15%     -29.5%     170.00 ± 43%  numa-vmstat.node1.nr_mlock
     82094 ± 22%     -39.5%      49641 ± 32%  numa-vmstat.node1.nr_zone_active_anon
    107.75 ± 12%     -85.4%      15.75 ±103%  numa-vmstat.node1.nr_zone_inactive_file
      0.81 ±  5%      +0.2        0.99 ± 10%  perf-profile.calltrace.cycles-pp.task_rq_lock.task_sched_runtime.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime
      0.60 ± 11%      +0.2        0.83 ±  9%  perf-profile.calltrace.cycles-pp.___might_sleep.__might_fault._copy_to_user.put_itimerspec64.__x64_sys_timer_gettime
      1.73 ±  9%      +0.3        2.05 ±  8%  perf-profile.calltrace.cycles-pp.__might_fault._copy_to_user.put_itimerspec64.__x64_sys_timer_gettime.do_syscall_64
      3.92 ±  5%      +0.6        4.49 ±  7%  perf-profile.calltrace.cycles-pp.task_sched_runtime.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime
      4.17 ±  4%      +0.6        4.78 ±  7%  perf-profile.calltrace.cycles-pp.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64
      5.72 ±  3%      +0.7        6.43 ±  7%  perf-profile.calltrace.cycles-pp.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.24 ± 54%      -0.2        0.07 ±131%  perf-profile.children.cycles-pp.ext4_inode_csum_set
      0.45 ±  3%      +0.1        0.56 ±  4%  perf-profile.children.cycles-pp.__might_sleep
      0.84 ±  5%      +0.2        1.03 ±  9%  perf-profile.children.cycles-pp.task_rq_lock
      0.66 ±  8%      +0.2        0.88 ±  7%  perf-profile.children.cycles-pp.___might_sleep
      1.83 ±  9%      +0.3        2.16 ±  8%  perf-profile.children.cycles-pp.__might_fault
      4.04 ±  5%      +0.6        4.62 ±  7%  perf-profile.children.cycles-pp.task_sched_runtime
      4.24 ±  4%      +0.6        4.87 ±  7%  perf-profile.children.cycles-pp.cpu_clock_sample
      5.77 ±  3%      +0.7        6.48 ±  7%  perf-profile.children.cycles-pp.posix_cpu_timer_get
      0.22 ± 11%      +0.1        0.28 ± 15%  perf-profile.self.cycles-pp.cpu_clock_sample
      0.47 ±  7%      +0.1        0.55 ±  5%  perf-profile.self.cycles-pp.update_curr
      0.28 ±  5%      +0.1        0.38 ± 14%  perf-profile.self.cycles-pp.task_rq_lock
      0.42 ±  3%      +0.1        0.53 ±  4%  perf-profile.self.cycles-pp.__might_sleep
      0.50 ±  5%      +0.1        0.61 ± 11%  perf-profile.self.cycles-pp.task_sched_runtime
      0.63 ±  9%      +0.2        0.85 ±  7%  perf-profile.self.cycles-pp.___might_sleep
   9180611 ±  5%     +40.1%   12859327 ± 14%  sched_debug.cfs_rq:/.MIN_vruntime.max
   1479571 ±  6%     +57.6%    2331469 ± 14%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
      7951 ±  6%     -52.5%       3773 ± 17%  sched_debug.cfs_rq:/.exec_clock.stddev
    321306 ± 39%     -44.2%     179273        sched_debug.cfs_rq:/.load.max
   9180613 ±  5%     +40.1%   12859327 ± 14%  sched_debug.cfs_rq:/.max_vruntime.max
   1479571 ±  6%     +57.6%    2331469 ± 14%  sched_debug.cfs_rq:/.max_vruntime.stddev
  16622378           +20.0%   19940069 ±  7%  sched_debug.cfs_rq:/.min_vruntime.avg
  18123901           +19.7%   21686545 ±  6%  sched_debug.cfs_rq:/.min_vruntime.max
  14338218 ±  3%     +27.4%   18267927 ±  7%  sched_debug.cfs_rq:/.min_vruntime.min
      0.17 ± 16%     +23.4%       0.21 ± 11%  sched_debug.cfs_rq:/.nr_running.stddev
    319990 ± 39%     -44.6%     177347        sched_debug.cfs_rq:/.runnable_weight.max
  -2067420           -33.5%   -1375445        sched_debug.cfs_rq:/.spread0.min
      1033 ±  8%     -13.7%     891.85 ±  3%  sched_debug.cfs_rq:/.util_est_enqueued.max
     93676 ± 16%     -29.0%      66471 ± 17%  sched_debug.cpu.avg_idle.min
     10391 ± 52%    +118.9%      22750 ± 15%  sched_debug.cpu.curr->pid.avg
     14393 ± 35%    +113.2%      30689 ± 17%  sched_debug.cpu.curr->pid.max
      3041 ± 38%    +161.8%       7963 ± 11%  sched_debug.cpu.curr->pid.stddev
      3.38 ±  6%     -16.3%       2.83 ±  5%  sched_debug.cpu.nr_running.max
   2412687 ±  4%     -16.0%    2027251 ±  3%  sched_debug.cpu.nr_switches.avg
   4038819 ±  3%     -20.2%    3223112 ±  5%  sched_debug.cpu.nr_switches.max
    834203 ± 17%     -37.8%     518798 ± 27%  sched_debug.cpu.nr_switches.stddev
     45.85 ± 13%     +41.2%      64.75 ± 18%  sched_debug.cpu.nr_uninterruptible.max
   1937209 ±  2%     +58.5%    3070891 ±  3%  sched_debug.cpu.sched_count.min
   1074023 ± 13%     -57.9%     451958 ± 12%  sched_debug.cpu.sched_count.stddev
   1283769 ±  7%     +65.1%    2118907 ±  7%  sched_debug.cpu.yld_count.min
    714244 ±  5%     -51.9%     343373 ± 22%  sched_debug.cpu.yld_count.stddev
     12.54 ±  9%     -18.8%      10.18 ± 15%  perf-stat.i.MPKI
 1.011e+10            +2.6%  1.038e+10        perf-stat.i.branch-instructions
     13.22 ±  5%      +2.5       15.75 ±  3%  perf-stat.i.cache-miss-rate%
  21084021 ±  6%     +33.9%   28231058 ±  6%  perf-stat.i.cache-misses
   1143861 ±  5%     -12.1%    1005721 ±  6%  perf-stat.i.context-switches
 1.984e+11            +1.8%   2.02e+11        perf-stat.i.cpu-cycles
 1.525e+10            +1.3%  1.544e+10        perf-stat.i.dTLB-loads
     65.46            -2.7       62.76 ±  3%  perf-stat.i.iTLB-load-miss-rate%
  20360883 ±  4%     +10.5%   22500874 ±  4%  perf-stat.i.iTLB-loads
 4.963e+10            +2.0%  5.062e+10        perf-stat.i.instructions
    181557            -2.4%     177113        perf-stat.i.msec
   5350122 ±  8%     +26.5%    6765332 ±  7%  perf-stat.i.node-load-misses
   4264320 ±  3%     +24.8%    5321600 ±  4%  perf-stat.i.node-store-misses
      6.12 ±  5%      +1.5        7.60 ±  2%  perf-stat.overall.cache-miss-rate%
      7646 ±  6%     -17.7%       6295 ±  3%  perf-stat.overall.cycles-between-cache-misses
     69.29            -1.1       68.22        perf-stat.overall.iTLB-load-miss-rate%
     61.11 ±  2%      +6.6       67.71 ±  5%  perf-stat.overall.node-load-miss-rate%
     74.82            +1.8       76.58        perf-stat.overall.node-store-miss-rate%
 1.044e+10            +1.8%  1.063e+10        perf-stat.ps.branch-instructions
  26325951 ±  6%     +22.9%   32366684 ±  2%  perf-stat.ps.cache-misses
   1115530 ±  3%      -9.5%    1009780        perf-stat.ps.context-switches
 1.536e+10            +1.0%  1.552e+10        perf-stat.ps.dTLB-loads
  44718416 ±  2%      +5.8%   47308605 ±  3%  perf-stat.ps.iTLB-load-misses
  19831973 ±  4%     +11.1%   22040029 ±  4%  perf-stat.ps.iTLB-loads
 5.064e+10            +1.4%  5.137e+10        perf-stat.ps.instructions
   5454694 ±  9%     +26.4%    6892365 ±  6%  perf-stat.ps.node-load-misses
   4263688 ±  4%     +24.9%    5325279 ±  4%  perf-stat.ps.node-store-misses
 3.001e+13            +1.7%  3.052e+13        perf-stat.total.instructions
     18550           -74.9%       4650 ±173%  interrupts.76:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
      7642 ±  9%     -20.4%       6086 ±  2%  interrupts.CPU0.CAL:Function_call_interrupts
      4376 ± 22%     -75.4%       1077 ± 41%  interrupts.CPU0.TLB:TLB_shootdowns
      8402 ±  5%     -19.0%       6806        interrupts.CPU1.CAL:Function_call_interrupts
      4559 ± 20%     -73.7%       1199 ± 15%  interrupts.CPU1.TLB:TLB_shootdowns
      8423 ±  4%     -20.2%       6725 ±  2%  interrupts.CPU10.CAL:Function_call_interrupts
      4536 ± 14%     -75.0%       1135 ± 20%  interrupts.CPU10.TLB:TLB_shootdowns
      8303 ±  3%     -18.2%       6795 ±  2%  interrupts.CPU11.CAL:Function_call_interrupts
      4404 ± 11%     -71.6%       1250 ± 35%  interrupts.CPU11.TLB:TLB_shootdowns
      8491 ±  6%     -21.3%       6683        interrupts.CPU12.CAL:Function_call_interrupts
      4723 ± 20%     -77.2%       1077 ± 17%  interrupts.CPU12.TLB:TLB_shootdowns
      8403 ±  5%     -20.3%       6700 ±  2%  interrupts.CPU13.CAL:Function_call_interrupts
      4557 ± 19%     -74.2%       1175 ± 22%  interrupts.CPU13.TLB:TLB_shootdowns
      8459 ±  4%     -18.6%       6884        interrupts.CPU14.CAL:Function_call_interrupts
      4559 ± 18%     -69.8%       1376 ± 13%  interrupts.CPU14.TLB:TLB_shootdowns
      8305 ±  7%     -17.7%       6833 ±  2%  interrupts.CPU15.CAL:Function_call_interrupts
      4261 ± 25%     -67.6%       1382 ± 24%  interrupts.CPU15.TLB:TLB_shootdowns
      8277 ±  5%     -19.1%       6696 ±  3%  interrupts.CPU16.CAL:Function_call_interrupts
      4214 ± 22%     -69.6%       1282 ±  8%  interrupts.CPU16.TLB:TLB_shootdowns
      8258 ±  5%     -18.9%       6694 ±  3%  interrupts.CPU17.CAL:Function_call_interrupts
      4461 ± 19%     -74.1%       1155 ± 21%  interrupts.CPU17.TLB:TLB_shootdowns
      8457 ±  6%     -20.6%       6717        interrupts.CPU18.CAL:Function_call_interrupts
      4889 ± 34%     +60.0%       7822        interrupts.CPU18.NMI:Non-maskable_interrupts
      4889 ± 34%     +60.0%       7822        interrupts.CPU18.PMI:Performance_monitoring_interrupts
      4731 ± 22%     -77.2%       1078 ± 10%  interrupts.CPU18.TLB:TLB_shootdowns
      8160 ±  5%     -18.1%       6684        interrupts.CPU19.CAL:Function_call_interrupts
      4311 ± 20%     -74.2%       1114 ± 13%  interrupts.CPU19.TLB:TLB_shootdowns
      8464 ±  2%     -18.2%       6927 ±  3%  interrupts.CPU2.CAL:Function_call_interrupts
      4938 ± 14%     -70.5%       1457 ± 18%  interrupts.CPU2.TLB:TLB_shootdowns
      8358 ±  6%     -19.7%       6715 ±  3%  interrupts.CPU20.CAL:Function_call_interrupts
      4567 ± 24%     -74.6%       1160 ± 35%  interrupts.CPU20.TLB:TLB_shootdowns
      8460 ±  4%     -22.3%       6577 ±  2%  interrupts.CPU21.CAL:Function_call_interrupts
      4514 ± 18%     -76.0%       1084 ± 22%  interrupts.CPU21.TLB:TLB_shootdowns
      6677 ±  6%     +19.6%       7988 ±  9%  interrupts.CPU22.CAL:Function_call_interrupts
      1288 ± 14%    +209.1%       3983 ± 35%  interrupts.CPU22.TLB:TLB_shootdowns
      6751 ±  2%     +24.0%       8370 ±  9%  interrupts.CPU23.CAL:Function_call_interrupts
      1037 ± 29%    +323.0%       4388 ± 36%  interrupts.CPU23.TLB:TLB_shootdowns
      6844           +20.6%       8251 ±  9%  interrupts.CPU24.CAL:Function_call_interrupts
      1205 ± 17%    +229.2%       3967 ± 40%  interrupts.CPU24.TLB:TLB_shootdowns
      6880           +21.9%       8389 ±  7%  interrupts.CPU25.CAL:Function_call_interrupts
      1228 ± 19%    +245.2%       4240 ± 35%  interrupts.CPU25.TLB:TLB_shootdowns
      6494 ±  8%     +25.1%       8123 ±  9%  interrupts.CPU26.CAL:Function_call_interrupts
      1141 ± 13%    +262.5%       4139 ± 32%  interrupts.CPU26.TLB:TLB_shootdowns
      6852           +19.2%       8166 ±  7%  interrupts.CPU27.CAL:Function_call_interrupts
      1298 ±  8%    +197.1%       3857 ± 31%  interrupts.CPU27.TLB:TLB_shootdowns
      6563 ±  6%     +25.2%       8214 ±  8%  interrupts.CPU28.CAL:Function_call_interrupts
      1176 ±  8%    +237.1%       3964 ± 33%  interrupts.CPU28.TLB:TLB_shootdowns
      6842 ±  2%     +21.4%       8308 ±  8%  interrupts.CPU29.CAL:Function_call_interrupts
      1271 ± 11%    +223.8%       4118 ± 33%  interrupts.CPU29.TLB:TLB_shootdowns
      8418 ±  3%     -21.1%       6643 ±  2%  interrupts.CPU3.CAL:Function_call_interrupts
      4677 ± 11%     -75.1%       1164 ± 16%  interrupts.CPU3.TLB:TLB_shootdowns
      6798 ±  3%     +21.8%       8284 ±  7%  interrupts.CPU30.CAL:Function_call_interrupts
      1219 ± 12%    +236.3%       4102 ± 30%  interrupts.CPU30.TLB:TLB_shootdowns
      6503 ±  4%     +25.9%       8186 ±  6%  interrupts.CPU31.CAL:Function_call_interrupts
      1046 ± 15%    +289.1%       4072 ± 32%  interrupts.CPU31.TLB:TLB_shootdowns
      6949 ±  3%     +17.2%       8141 ±  8%  interrupts.CPU32.CAL:Function_call_interrupts
      1241 ± 23%    +210.6%       3854 ± 34%  interrupts.CPU32.TLB:TLB_shootdowns
      1487 ± 26%    +161.6%       3889 ± 46%  interrupts.CPU33.TLB:TLB_shootdowns
      1710 ± 44%    +140.1%       4105 ± 36%  interrupts.CPU34.TLB:TLB_shootdowns
      6957 ±  2%     +15.2%       8012 ±  9%  interrupts.CPU35.CAL:Function_call_interrupts
      1165 ±  8%    +223.1%       3765 ± 38%  interrupts.CPU35.TLB:TLB_shootdowns
      1423 ± 24%    +173.4%       3892 ± 33%  interrupts.CPU36.TLB:TLB_shootdowns
      1279 ± 29%    +224.2%       4148 ± 39%  interrupts.CPU37.TLB:TLB_shootdowns
      1301 ± 20%    +226.1%       4244 ± 35%  interrupts.CPU38.TLB:TLB_shootdowns
      6906 ±  2%     +18.5%       8181 ±  8%  interrupts.CPU39.CAL:Function_call_interrupts
    368828 ± 20%     +96.2%     723710 ±  7%  interrupts.CPU39.RES:Rescheduling_interrupts
      1438 ± 12%    +174.8%       3951 ± 33%  interrupts.CPU39.TLB:TLB_shootdowns
      8399 ±  5%     -19.2%       6788 ±  2%  interrupts.CPU4.CAL:Function_call_interrupts
      4567 ± 18%     -72.7%       1245 ± 28%  interrupts.CPU4.TLB:TLB_shootdowns
      6895           +22.4%       8439 ±  9%  interrupts.CPU40.CAL:Function_call_interrupts
      1233 ± 11%    +247.1%       4280 ± 36%  interrupts.CPU40.TLB:TLB_shootdowns
      6819 ±  2%     +21.3%       8274 ±  9%  interrupts.CPU41.CAL:Function_call_interrupts
      1260 ± 14%    +207.1%       3871 ± 38%  interrupts.CPU41.TLB:TLB_shootdowns
      1301 ±  9%    +204.7%       3963 ± 36%  interrupts.CPU42.TLB:TLB_shootdowns
      6721 ±  3%     +22.3%       8221 ±  7%  interrupts.CPU43.CAL:Function_call_interrupts
      1237 ± 19%    +224.8%       4017 ± 35%  interrupts.CPU43.TLB:TLB_shootdowns
      8422 ±  8%     -22.7%       6506 ±  5%  interrupts.CPU44.CAL:Function_call_interrupts
  15261375 ±  7%      -7.8%   14064176        interrupts.CPU44.LOC:Local_timer_interrupts
      4376 ± 25%     -75.7%       1063 ± 26%  interrupts.CPU44.TLB:TLB_shootdowns
      8451 ±  5%     -23.7%       6448 ±  6%  interrupts.CPU45.CAL:Function_call_interrupts
      4351 ± 18%     -74.9%       1094 ± 12%  interrupts.CPU45.TLB:TLB_shootdowns
      8705 ±  6%     -21.2%       6860 ±  2%  interrupts.CPU46.CAL:Function_call_interrupts
      4787 ± 20%     -69.5%       1462 ± 16%  interrupts.CPU46.TLB:TLB_shootdowns
      8334 ±  3%     -18.9%       6763        interrupts.CPU47.CAL:Function_call_interrupts
      4126 ± 10%     -71.3%       1186 ± 18%  interrupts.CPU47.TLB:TLB_shootdowns
      8578 ±  4%     -21.7%       6713        interrupts.CPU48.CAL:Function_call_interrupts
      4520 ± 15%     -74.5%       1154 ± 23%  interrupts.CPU48.TLB:TLB_shootdowns
      8450 ±  8%     -18.8%       6863 ±  3%  interrupts.CPU49.CAL:Function_call_interrupts
      4494 ± 24%     -66.5%       1505 ± 22%  interrupts.CPU49.TLB:TLB_shootdowns
      8307 ±  4%     -18.0%       6816 ±  2%  interrupts.CPU5.CAL:Function_call_interrupts
      7845           -37.4%       4908 ± 34%  interrupts.CPU5.NMI:Non-maskable_interrupts
      7845           -37.4%       4908 ± 34%  interrupts.CPU5.PMI:Performance_monitoring_interrupts
      4429 ± 17%     -69.8%       1339 ± 20%  interrupts.CPU5.TLB:TLB_shootdowns
      8444 ±  4%     -21.7%       6613        interrupts.CPU50.CAL:Function_call_interrupts
      4282 ± 16%     -76.0%       1029 ± 17%  interrupts.CPU50.TLB:TLB_shootdowns
      8750 ±  6%     -22.2%       6803        interrupts.CPU51.CAL:Function_call_interrupts
      4755 ± 20%     -73.1%       1277 ± 15%  interrupts.CPU51.TLB:TLB_shootdowns
      8478 ±  6%     -20.2%       6766 ±  2%  interrupts.CPU52.CAL:Function_call_interrupts
      4337 ± 20%     -72.6%       1190 ± 22%  interrupts.CPU52.TLB:TLB_shootdowns
      8604 ±  7%     -21.5%       6750 ±  4%  interrupts.CPU53.CAL:Function_call_interrupts
      4649 ± 17%     -74.3%       1193 ± 23%  interrupts.CPU53.TLB:TLB_shootdowns
      8317 ±  9%     -19.4%       6706 ±  3%  interrupts.CPU54.CAL:Function_call_interrupts
      4372 ± 12%     -75.4%       1076 ± 29%  interrupts.CPU54.TLB:TLB_shootdowns
      8439 ±  3%     -18.5%       6876        interrupts.CPU55.CAL:Function_call_interrupts
      4415 ± 11%     -71.6%       1254 ± 17%  interrupts.CPU55.TLB:TLB_shootdowns
      8869 ±  6%     -22.6%       6864 ±  2%  interrupts.CPU56.CAL:Function_call_interrupts
    517594 ± 13%    +123.3%    1155539 ± 25%  interrupts.CPU56.RES:Rescheduling_interrupts
      5085 ± 22%     -74.9%       1278 ± 17%  interrupts.CPU56.TLB:TLB_shootdowns
      8682 ±  4%     -21.7%       6796 ±  2%  interrupts.CPU57.CAL:Function_call_interrupts
      4808 ± 19%     -74.1%       1243 ± 13%  interrupts.CPU57.TLB:TLB_shootdowns
      8626 ±  7%     -21.8%       6746 ±  2%  interrupts.CPU58.CAL:Function_call_interrupts
      4816 ± 20%     -79.1%       1007 ± 28%  interrupts.CPU58.TLB:TLB_shootdowns
      8759 ±  8%     -20.3%       6984        interrupts.CPU59.CAL:Function_call_interrupts
      4840 ± 22%     -70.6%       1423 ± 14%  interrupts.CPU59.TLB:TLB_shootdowns
      8167 ±  6%     -19.0%       6615 ±  2%  interrupts.CPU6.CAL:Function_call_interrupts
      4129 ± 21%     -75.4%       1017 ± 24%  interrupts.CPU6.TLB:TLB_shootdowns
      8910 ±  4%     -23.7%       6794 ±  3%  interrupts.CPU60.CAL:Function_call_interrupts
      5017 ± 12%     -77.8%       1113 ± 15%  interrupts.CPU60.TLB:TLB_shootdowns
      8689 ±  5%     -21.6%       6808        interrupts.CPU61.CAL:Function_call_interrupts
      4715 ± 20%     -77.6%       1055 ± 19%  interrupts.CPU61.TLB:TLB_shootdowns
      8574 ±  4%     -18.9%       6953 ±  2%  interrupts.CPU62.CAL:Function_call_interrupts
      4494 ± 17%     -72.3%       1244 ±  7%  interrupts.CPU62.TLB:TLB_shootdowns
      8865 ±  3%     -25.4%       6614 ±  7%  interrupts.CPU63.CAL:Function_call_interrupts
      4870 ± 12%     -76.8%       1130 ± 12%  interrupts.CPU63.TLB:TLB_shootdowns
      8724 ±  7%     -20.2%       6958 ±  3%  interrupts.CPU64.CAL:Function_call_interrupts
      4736 ± 16%     -72.6%       1295 ±  7%  interrupts.CPU64.TLB:TLB_shootdowns
      8717 ±  6%     -23.7%       6653 ±  4%  interrupts.CPU65.CAL:Function_call_interrupts
      4626 ± 19%     -76.5%       1087 ± 21%  interrupts.CPU65.TLB:TLB_shootdowns
      6671           +24.7%       8318 ±  9%  interrupts.CPU66.CAL:Function_call_interrupts
      1091 ±  8%    +249.8%       3819 ± 32%  interrupts.CPU66.TLB:TLB_shootdowns
      6795 ±  2%     +26.9%       8624 ±  9%  interrupts.CPU67.CAL:Function_call_interrupts
      1098 ± 24%    +299.5%       4388 ± 39%  interrupts.CPU67.TLB:TLB_shootdowns
      6704 ±  5%     +25.8%       8431 ±  8%  interrupts.CPU68.CAL:Function_call_interrupts
      1214 ± 15%    +236.1%       4083 ± 36%  interrupts.CPU68.TLB:TLB_shootdowns
      1049 ± 15%    +326.2%       4473 ± 33%  interrupts.CPU69.TLB:TLB_shootdowns
      8554 ±  6%     -19.6%       6874 ±  2%  interrupts.CPU7.CAL:Function_call_interrupts
      4753 ± 19%     -71.7%       1344 ± 16%  interrupts.CPU7.TLB:TLB_shootdowns
      1298 ± 13%    +227.4%       4249 ± 38%  interrupts.CPU70.TLB:TLB_shootdowns
      6976           +19.9%       8362 ±  7%  interrupts.CPU71.CAL:Function_call_interrupts
   1232748 ± 18%     -57.3%     525824 ± 33%  interrupts.CPU71.RES:Rescheduling_interrupts
      1253 ±  9%    +211.8%       3909 ± 31%  interrupts.CPU71.TLB:TLB_shootdowns
      1316 ± 22%    +188.7%       3800 ± 33%  interrupts.CPU72.TLB:TLB_shootdowns
      6665 ±  5%     +26.5%       8429 ±  8%  interrupts.CPU73.CAL:Function_call_interrupts
      1202 ± 13%    +234.1%       4017 ± 37%  interrupts.CPU73.TLB:TLB_shootdowns
      6639 ±  5%     +27.0%       8434 ±  8%  interrupts.CPU74.CAL:Function_call_interrupts
      1079 ± 16%    +269.4%       3986 ± 36%  interrupts.CPU74.TLB:TLB_shootdowns
      1055 ± 12%    +301.2%       4235 ± 34%  interrupts.CPU75.TLB:TLB_shootdowns
      7011 ±  3%     +21.6%       8522 ±  8%  interrupts.CPU76.CAL:Function_call_interrupts
      1223 ± 13%    +230.7%       4047 ± 35%  interrupts.CPU76.TLB:TLB_shootdowns
      6886 ±  7%     +25.6%       8652 ± 10%  interrupts.CPU77.CAL:Function_call_interrupts
      1316 ± 16%    +229.8%       4339 ± 36%  interrupts.CPU77.TLB:TLB_shootdowns
      7343 ±  5%     +19.1%       8743 ±  9%  interrupts.CPU78.CAL:Function_call_interrupts
      1699 ± 37%    +144.4%       4152 ± 31%  interrupts.CPU78.TLB:TLB_shootdowns
      7136 ±  4%     +21.4%       8666 ±  9%  interrupts.CPU79.CAL:Function_call_interrupts
      1094 ± 13%    +276.2%       4118 ± 34%  interrupts.CPU79.TLB:TLB_shootdowns
      8531 ±  5%     -19.5%       6869 ±  2%  interrupts.CPU8.CAL:Function_call_interrupts
      4764 ± 16%     -71.0%       1382 ± 14%  interrupts.CPU8.TLB:TLB_shootdowns
      1387 ± 29%    +181.8%       3910 ± 38%  interrupts.CPU80.TLB:TLB_shootdowns
      1114 ± 30%    +259.7%       4007 ± 36%  interrupts.CPU81.TLB:TLB_shootdowns
      7012           +23.9%       8685 ±  8%  interrupts.CPU82.CAL:Function_call_interrupts
      1274 ± 12%    +255.4%       4530 ± 27%  interrupts.CPU82.TLB:TLB_shootdowns
      6971 ±  3%     +23.8%       8628 ±  9%  interrupts.CPU83.CAL:Function_call_interrupts
      1156 ± 18%    +260.1%       4162 ± 34%  interrupts.CPU83.TLB:TLB_shootdowns
      7030 ±  4%     +21.0%       8504 ±  8%  interrupts.CPU84.CAL:Function_call_interrupts
      1286 ± 23%    +224.0%       4166 ± 31%  interrupts.CPU84.TLB:TLB_shootdowns
      7059           +22.4%       8644 ± 11%  interrupts.CPU85.CAL:Function_call_interrupts
      1421 ± 22%    +208.8%       4388 ± 33%  interrupts.CPU85.TLB:TLB_shootdowns
      7018 ±  2%     +22.8%       8615 ±  9%  interrupts.CPU86.CAL:Function_call_interrupts
      1258 ±  8%    +231.1%       4167 ± 34%  interrupts.CPU86.TLB:TLB_shootdowns
      1338 ±  3%    +217.9%       4255 ± 31%  interrupts.CPU87.TLB:TLB_shootdowns
      8376 ±  4%     -19.0%       6787 ±  2%  interrupts.CPU9.CAL:Function_call_interrupts
      4466 ± 17%     -71.2%       1286 ± 18%  interrupts.CPU9.TLB:TLB_shootdowns

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Rong Chen


View attachment "config-5.4.0-rc1-00010-g0b0695f2b34a4" of type "text/plain" (200620 bytes)

View attachment "job-script" of type "text/plain" (7538 bytes)

View attachment "job.yaml" of type "text/plain" (5217 bytes)

View attachment "reproduce" of type "text/plain" (9948 bytes)
