Message-ID: <20200514141526.GA30976@xsang-OptiPlex-9020>
Date: Thu, 14 May 2020 22:15:26 +0800
From: kernel test robot <oliver.sang@...el.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ingo Molnar <mingo@...nel.org>, Ben Segall <bsegall@...gle.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>, Mike Galbraith <efault@....de>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
OTC LSE PnP <otc.lse.pnp@...el.com>
Subject: [sched/fair] 0b0695f2b3:
phoronix-test-suite.compress-gzip.0.seconds 19.8% regression
Hi Vincent Guittot,
Below is the report, FYI.
Last year we actually reported an improvement for this commit, "[sched/fair] 0b0695f2b3:
vm-scalability.median 3.1% improvement", on link [1],
but now we have found a regression on pts.compress-gzip.
This seems to align with what was described in "[v4,00/10] sched/fair: rework the CFS
load balance" (link [2]), which noted that the reworked load balance could have
both positive and negative effects on different test suites.
Link [3] likewise notes that the patch set risks regressions.
We also confirmed this regression on another platform
(Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz with 8G memory);
below is the data (lower is better).
  kernel / commit                            seconds
  v5.4                                       4.1
  fcf0553db6f4c79387864f6e4ab4a891601f395e   4.01
  0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912   4.89
  v5.5                                       5.18
  v5.6                                       4.62
  v5.7-rc2                                   4.53
  v5.7-rc3                                   4.59
There seems to be some recovery on the latest kernels, but performance is not
fully back to the v5.4 level.
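For what it's worth, the percentage figures in this report follow directly from the raw timings; a minimal sketch (numbers copied from the tables in this report) reproduces the arithmetic:

```python
# Timings copied from this report (seconds; lower is better).
# Xeon E-2278G: parent commit fcf0553db6 vs 0b0695f2b3.
parent, rework = 6.01, 7.20
print(f"E-2278G regression: {(rework - parent) / parent * 100:+.1f}%")  # +19.8%

# i7-8700 confirmation run, same pair of commits.
parent, rework = 4.01, 4.89
print(f"i7-8700 regression: {(rework - parent) / parent * 100:+.1f}%")  # +21.9%
```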
We were wondering whether you could shed some light on the further work on the
load balance after patch set [2] that could cause this performance change,
and whether you plan to refine the load-balance algorithm further.
thanks
[1] https://lists.01.org/hyperkitty/list/lkp@lists.01.org/thread/SANC7QLYZKUNMM6O7UNR3OAQAKS5BESE/
[2] https://lore.kernel.org/patchwork/cover/1141687/
[3] https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.5-Scheduler
Below is the detailed regression report, FYI.
Greetings,
FYI, we noticed a 19.8% regression of phoronix-test-suite.compress-gzip.0.seconds due to commit:
commit: 0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912 ("sched/fair: Rework load_balance()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: phoronix-test-suite
on test machine: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
with following parameters:
test: compress-gzip-1.2.0
cpufreq_governor: performance
ucode: 0xca
test-description: The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available that provides an extensible framework for which new tests can be easily added.
test-url: http://www.phoronix-test-suite.com/
In addition to that, the commit also has significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | phoronix-test-suite: |
| test machine | 12 threads Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz with 8G memory |
| test parameters | cpufreq_governor=performance |
| | test=compress-gzip-1.2.0 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median 3.1% improvement |
| test machine | 104 threads Skylake with 192G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=8T |
| | test=anon-cow-seq |
| | ucode=0x2000064 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.fault.ops_per_sec -23.1% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=scheduler |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | nr_threads=100% |
| | sc_pid_max=4194304 |
| | testtime=1s |
| | ucode=0xb000038 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec -33.3% regression |
| test machine | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory |
| test parameters | class=interrupt |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | nr_threads=100% |
| | testtime=1s |
| | ucode=0x500002c |
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 42.3% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=interrupt |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | nr_threads=100% |
| | testtime=30s |
| | ucode=0xb000038 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 35.1% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=interrupt |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | nr_threads=100% |
| | testtime=1s |
| | ucode=0xb000038 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.ioprio.ops_per_sec -20.7% regression |
| test machine | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory |
| test parameters | class=os |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | fs=ext4 |
| | nr_threads=100% |
| | testtime=1s |
| | ucode=0x500002b |
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 43.0% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=interrupt |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | nr_threads=100% |
| | testtime=30s |
| | ucode=0xb000038 |
+------------------+-----------------------------------------------------------------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang@...el.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-lck-7983/clear-x86_64-phoronix-30140/lkp-cfl-e1/compress-gzip-1.2.0/phoronix-test-suite/0xca
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 4% 0:7 perf-profile.children.cycles-pp.error_entry
%stddev %change %stddev
\ | \
6.01 +19.8% 7.20 phoronix-test-suite.compress-gzip.0.seconds
147.57 ± 8% +25.1% 184.54 phoronix-test-suite.time.elapsed_time
147.57 ± 8% +25.1% 184.54 phoronix-test-suite.time.elapsed_time.max
52926 ± 8% -23.8% 40312 meminfo.max_used_kB
0.11 ± 7% -0.0 0.09 ± 3% mpstat.cpu.all.soft%
242384 -1.4% 238931 proc-vmstat.nr_inactive_anon
242384 -1.4% 238931 proc-vmstat.nr_zone_inactive_anon
1.052e+08 ± 27% +56.5% 1.647e+08 ± 10% cpuidle.C1E.time
1041078 ± 22% +54.7% 1610786 ± 7% cpuidle.C1E.usage
3.414e+08 ± 6% +57.6% 5.381e+08 ± 28% cpuidle.C6.time
817897 ± 3% +50.1% 1227607 ± 11% cpuidle.C6.usage
2884 -4.2% 2762 turbostat.Avg_MHz
1041024 ± 22% +54.7% 1610657 ± 7% turbostat.C1E
817802 ± 3% +50.1% 1227380 ± 11% turbostat.C6
66.75 -2.0% 65.42 turbostat.CorWatt
67.28 -2.0% 65.94 turbostat.PkgWatt
32.50 +6.2% 34.50 vmstat.cpu.id
62.50 -2.4% 61.00 vmstat.cpu.us
2443 ± 2% -28.9% 1738 ± 2% vmstat.io.bi
23765 ± 4% +16.5% 27685 vmstat.system.cs
37860 -7.1% 35180 ± 2% vmstat.system.in
3.474e+09 ± 3% -12.7% 3.032e+09 perf-stat.i.branch-instructions
1.344e+08 ± 2% -11.6% 1.188e+08 perf-stat.i.branch-misses
13033225 ± 4% -19.0% 10561032 perf-stat.i.cache-misses
5.105e+08 ± 3% -15.3% 4.322e+08 perf-stat.i.cache-references
24205 ± 4% +16.3% 28161 perf-stat.i.context-switches
30.25 ± 2% +39.7% 42.27 ± 2% perf-stat.i.cpi
4.63e+10 -4.7% 4.412e+10 perf-stat.i.cpu-cycles
3147 ± 4% -8.4% 2882 ± 2% perf-stat.i.cpu-migrations
16724 ± 2% +45.9% 24406 ± 5% perf-stat.i.cycles-between-cache-misses
0.18 ± 13% -0.1 0.12 ± 4% perf-stat.i.dTLB-load-miss-rate%
4.822e+09 ± 3% -11.9% 4.248e+09 perf-stat.i.dTLB-loads
0.07 ± 8% -0.0 0.05 ± 16% perf-stat.i.dTLB-store-miss-rate%
1.623e+09 ± 2% -11.5% 1.436e+09 perf-stat.i.dTLB-stores
1007120 ± 3% -8.9% 917854 ± 2% perf-stat.i.iTLB-load-misses
1.816e+10 ± 3% -12.2% 1.594e+10 perf-stat.i.instructions
2.06 ± 54% -66.0% 0.70 perf-stat.i.major-faults
29896 ± 13% -35.2% 19362 ± 8% perf-stat.i.minor-faults
0.00 ± 9% -0.0 0.00 ± 6% perf-stat.i.node-load-miss-rate%
1295134 ± 3% -14.2% 1111173 perf-stat.i.node-loads
3064949 ± 4% -18.7% 2491063 ± 2% perf-stat.i.node-stores
29898 ± 13% -35.2% 19363 ± 8% perf-stat.i.page-faults
28.10 -3.5% 27.12 perf-stat.overall.MPKI
2.55 -0.1 2.44 ± 2% perf-stat.overall.cache-miss-rate%
2.56 ± 3% +8.5% 2.77 perf-stat.overall.cpi
3567 ± 5% +17.3% 4186 perf-stat.overall.cycles-between-cache-misses
0.02 ± 3% +0.0 0.02 ± 3% perf-stat.overall.dTLB-load-miss-rate%
18031 -3.6% 17375 ± 2% perf-stat.overall.instructions-per-iTLB-miss
0.39 ± 3% -7.9% 0.36 perf-stat.overall.ipc
3.446e+09 ± 3% -12.6% 3.011e+09 perf-stat.ps.branch-instructions
1.333e+08 ± 2% -11.5% 1.18e+08 perf-stat.ps.branch-misses
12927998 ± 4% -18.8% 10491818 perf-stat.ps.cache-misses
5.064e+08 ± 3% -15.2% 4.293e+08 perf-stat.ps.cache-references
24011 ± 4% +16.5% 27973 perf-stat.ps.context-switches
4.601e+10 -4.6% 4.391e+10 perf-stat.ps.cpu-cycles
3121 ± 4% -8.3% 2863 ± 2% perf-stat.ps.cpu-migrations
4.783e+09 ± 3% -11.8% 4.219e+09 perf-stat.ps.dTLB-loads
1.61e+09 ± 2% -11.4% 1.426e+09 perf-stat.ps.dTLB-stores
999100 ± 3% -8.7% 911974 ± 2% perf-stat.ps.iTLB-load-misses
1.802e+10 ± 3% -12.1% 1.584e+10 perf-stat.ps.instructions
2.04 ± 54% -65.9% 0.70 perf-stat.ps.major-faults
29656 ± 13% -35.1% 19237 ± 8% perf-stat.ps.minor-faults
1284601 ± 3% -14.1% 1103823 perf-stat.ps.node-loads
3039931 ± 4% -18.6% 2474451 ± 2% perf-stat.ps.node-stores
29658 ± 13% -35.1% 19238 ± 8% perf-stat.ps.page-faults
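As a quick consistency check on the perf-stat numbers above, the cpi/ipc lines can be re-derived from the cycle and instruction rates; a minimal sketch using values copied from the perf-stat.ps rows:

```python
# Per-second cycle and instruction counts copied from the perf-stat.ps rows
# above (parent commit fcf0553db6 vs 0b0695f2b3).
stats = {
    "fcf0553db6": {"cycles": 4.601e10, "instructions": 1.802e10},
    "0b0695f2b3": {"cycles": 4.391e10, "instructions": 1.584e10},
}

for commit, s in stats.items():
    cpi = s["cycles"] / s["instructions"]  # cycles per instruction
    print(f"{commit}: cpi = {cpi:.2f}, ipc = {1 / cpi:.2f}")
```

These agree (to rounding) with the reported perf-stat.overall.cpi change of 2.56 -> 2.77 and ipc change of 0.39 -> 0.36, i.e. each instruction retires roughly 8% more slowly after the rework.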
50384 ± 2% +16.5% 58713 ± 4% softirqs.CPU0.RCU
33143 ± 2% +19.9% 39731 ± 2% softirqs.CPU0.SCHED
72672 +24.0% 90109 softirqs.CPU0.TIMER
22182 ± 4% +26.3% 28008 ± 4% softirqs.CPU1.SCHED
74465 ± 4% +26.3% 94027 ± 3% softirqs.CPU1.TIMER
18680 ± 7% +29.2% 24135 ± 3% softirqs.CPU10.SCHED
75941 ± 2% +21.8% 92486 ± 7% softirqs.CPU10.TIMER
48991 ± 4% +22.7% 60105 ± 5% softirqs.CPU11.RCU
18666 ± 6% +28.4% 23976 ± 4% softirqs.CPU11.SCHED
74896 ± 6% +24.4% 93173 ± 3% softirqs.CPU11.TIMER
49490 +20.5% 59659 ± 2% softirqs.CPU12.RCU
18973 ± 7% +26.0% 23909 ± 3% softirqs.CPU12.SCHED
50620 +19.9% 60677 ± 6% softirqs.CPU13.RCU
19136 ± 6% +23.2% 23577 ± 4% softirqs.CPU13.SCHED
74812 +33.3% 99756 ± 7% softirqs.CPU13.TIMER
50824 +15.9% 58881 ± 3% softirqs.CPU14.RCU
19550 ± 5% +24.1% 24270 ± 4% softirqs.CPU14.SCHED
76801 +22.8% 94309 ± 4% softirqs.CPU14.TIMER
51844 +11.5% 57795 ± 3% softirqs.CPU15.RCU
19204 ± 8% +28.4% 24662 ± 2% softirqs.CPU15.SCHED
74751 +29.9% 97127 ± 3% softirqs.CPU15.TIMER
50307 +17.4% 59062 ± 4% softirqs.CPU2.RCU
22150 +12.2% 24848 softirqs.CPU2.SCHED
79653 ± 2% +21.6% 96829 ± 10% softirqs.CPU2.TIMER
50833 +21.1% 61534 ± 4% softirqs.CPU3.RCU
18935 ± 2% +32.0% 25002 ± 3% softirqs.CPU3.SCHED
50569 +15.8% 58570 ± 4% softirqs.CPU4.RCU
20509 ± 5% +18.3% 24271 softirqs.CPU4.SCHED
80942 ± 2% +15.3% 93304 ± 3% softirqs.CPU4.TIMER
50692 +16.5% 59067 ± 4% softirqs.CPU5.RCU
20237 ± 3% +18.2% 23914 ± 3% softirqs.CPU5.SCHED
78963 +21.8% 96151 ± 2% softirqs.CPU5.TIMER
19709 ± 7% +20.1% 23663 softirqs.CPU6.SCHED
81250 +15.9% 94185 softirqs.CPU6.TIMER
51379 +15.0% 59108 softirqs.CPU7.RCU
19642 ± 5% +28.4% 25227 ± 3% softirqs.CPU7.SCHED
78299 ± 2% +30.3% 102021 ± 4% softirqs.CPU7.TIMER
49723 +19.0% 59169 ± 4% softirqs.CPU8.RCU
20138 ± 6% +21.7% 24501 ± 2% softirqs.CPU8.SCHED
75256 ± 3% +22.8% 92419 ± 2% softirqs.CPU8.TIMER
50406 ± 2% +17.4% 59178 ± 4% softirqs.CPU9.RCU
19182 ± 9% +24.2% 23831 ± 6% softirqs.CPU9.SCHED
73572 ± 5% +30.4% 95951 ± 8% softirqs.CPU9.TIMER
812363 +16.6% 946858 ± 3% softirqs.RCU
330042 ± 4% +23.5% 407533 softirqs.SCHED
1240046 +22.5% 1519539 softirqs.TIMER
251015 ± 21% -84.2% 39587 ±106% sched_debug.cfs_rq:/.MIN_vruntime.avg
537847 ± 4% -44.8% 297100 ± 66% sched_debug.cfs_rq:/.MIN_vruntime.max
257990 ± 5% -63.4% 94515 ± 82% sched_debug.cfs_rq:/.MIN_vruntime.stddev
38935 +47.9% 57601 sched_debug.cfs_rq:/.exec_clock.avg
44119 +40.6% 62013 sched_debug.cfs_rq:/.exec_clock.max
37622 +49.9% 56404 sched_debug.cfs_rq:/.exec_clock.min
47287 ± 7% -70.3% 14036 ± 88% sched_debug.cfs_rq:/.load.min
67.17 -52.9% 31.62 ± 31% sched_debug.cfs_rq:/.load_avg.min
251015 ± 21% -84.2% 39588 ±106% sched_debug.cfs_rq:/.max_vruntime.avg
537847 ± 4% -44.8% 297103 ± 66% sched_debug.cfs_rq:/.max_vruntime.max
257991 ± 5% -63.4% 94516 ± 82% sched_debug.cfs_rq:/.max_vruntime.stddev
529078 ± 3% +45.2% 768398 sched_debug.cfs_rq:/.min_vruntime.avg
547175 ± 2% +44.1% 788582 sched_debug.cfs_rq:/.min_vruntime.max
496420 +48.3% 736148 ± 2% sched_debug.cfs_rq:/.min_vruntime.min
1.33 ± 15% -44.0% 0.75 ± 32% sched_debug.cfs_rq:/.nr_running.avg
0.83 ± 20% -70.0% 0.25 ± 70% sched_debug.cfs_rq:/.nr_running.min
0.54 ± 8% -15.9% 0.45 ± 7% sched_debug.cfs_rq:/.nr_running.stddev
0.33 +62.9% 0.54 ± 8% sched_debug.cfs_rq:/.nr_spread_over.avg
1.33 +54.7% 2.06 ± 17% sched_debug.cfs_rq:/.nr_spread_over.max
0.44 ± 7% +56.4% 0.69 ± 6% sched_debug.cfs_rq:/.nr_spread_over.stddev
130.83 ± 14% -25.6% 97.37 ± 15% sched_debug.cfs_rq:/.runnable_load_avg.avg
45.33 ± 6% -79.3% 9.38 ± 70% sched_debug.cfs_rq:/.runnable_load_avg.min
47283 ± 7% -70.9% 13741 ± 89% sched_debug.cfs_rq:/.runnable_weight.min
1098 ± 8% -27.6% 795.02 ± 24% sched_debug.cfs_rq:/.util_avg.avg
757.50 ± 9% -51.3% 369.25 ± 10% sched_debug.cfs_rq:/.util_avg.min
762.39 ± 11% -44.4% 424.04 ± 46% sched_debug.cfs_rq:/.util_est_enqueued.avg
314.00 ± 18% -78.5% 67.38 ±100% sched_debug.cfs_rq:/.util_est_enqueued.min
142951 ± 5% +22.8% 175502 ± 3% sched_debug.cpu.avg_idle.avg
72112 -18.3% 58937 ± 13% sched_debug.cpu.avg_idle.stddev
127638 ± 7% +39.3% 177858 ± 5% sched_debug.cpu.clock.avg
127643 ± 7% +39.3% 177862 ± 5% sched_debug.cpu.clock.max
127633 ± 7% +39.3% 177855 ± 5% sched_debug.cpu.clock.min
126720 ± 7% +39.4% 176681 ± 5% sched_debug.cpu.clock_task.avg
127168 ± 7% +39.3% 177179 ± 5% sched_debug.cpu.clock_task.max
125240 ± 7% +39.5% 174767 ± 5% sched_debug.cpu.clock_task.min
563.60 ± 2% +25.9% 709.78 ± 9% sched_debug.cpu.clock_task.stddev
1.66 ± 18% -37.5% 1.04 ± 32% sched_debug.cpu.nr_running.avg
0.83 ± 20% -62.5% 0.31 ± 87% sched_debug.cpu.nr_running.min
127617 ± 3% +52.9% 195080 sched_debug.cpu.nr_switches.avg
149901 ± 6% +45.2% 217652 sched_debug.cpu.nr_switches.max
108182 ± 5% +61.6% 174808 sched_debug.cpu.nr_switches.min
0.20 ± 5% -62.5% 0.07 ± 67% sched_debug.cpu.nr_uninterruptible.avg
-29.33 -13.5% -25.38 sched_debug.cpu.nr_uninterruptible.min
92666 ± 8% +66.8% 154559 sched_debug.cpu.sched_count.avg
104565 ± 11% +57.2% 164374 sched_debug.cpu.sched_count.max
80272 ± 10% +77.2% 142238 sched_debug.cpu.sched_count.min
38029 ± 10% +80.4% 68608 sched_debug.cpu.sched_goidle.avg
43413 ± 11% +68.5% 73149 sched_debug.cpu.sched_goidle.max
32420 ± 11% +94.5% 63069 sched_debug.cpu.sched_goidle.min
51567 ± 8% +60.7% 82878 sched_debug.cpu.ttwu_count.avg
79015 ± 9% +45.2% 114717 ± 4% sched_debug.cpu.ttwu_count.max
42919 ± 9% +63.3% 70086 sched_debug.cpu.ttwu_count.min
127632 ± 7% +39.3% 177854 ± 5% sched_debug.cpu_clk
125087 ± 7% +40.1% 175285 ± 5% sched_debug.ktime
127882 ± 6% +39.3% 178163 ± 5% sched_debug.sched_clk
146.00 ± 13% +902.9% 1464 ±143% interrupts.133:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
3375 ± 93% -94.8% 174.75 ± 26% interrupts.134:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
297595 ± 8% +22.8% 365351 ± 2% interrupts.CPU0.LOC:Local_timer_interrupts
8402 -21.7% 6577 ± 25% interrupts.CPU0.NMI:Non-maskable_interrupts
8402 -21.7% 6577 ± 25% interrupts.CPU0.PMI:Performance_monitoring_interrupts
937.00 ± 2% +18.1% 1106 ± 11% interrupts.CPU0.RES:Rescheduling_interrupts
146.00 ± 13% +902.9% 1464 ±143% interrupts.CPU1.133:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
297695 ± 8% +22.7% 365189 ± 2% interrupts.CPU1.LOC:Local_timer_interrupts
8412 -20.9% 6655 ± 25% interrupts.CPU1.NMI:Non-maskable_interrupts
8412 -20.9% 6655 ± 25% interrupts.CPU1.PMI:Performance_monitoring_interrupts
297641 ± 8% +22.7% 365268 ± 2% interrupts.CPU10.LOC:Local_timer_interrupts
8365 -10.9% 7455 ± 3% interrupts.CPU10.NMI:Non-maskable_interrupts
8365 -10.9% 7455 ± 3% interrupts.CPU10.PMI:Performance_monitoring_interrupts
297729 ± 8% +22.7% 365238 ± 2% interrupts.CPU11.LOC:Local_timer_interrupts
8376 -21.8% 6554 ± 26% interrupts.CPU11.NMI:Non-maskable_interrupts
8376 -21.8% 6554 ± 26% interrupts.CPU11.PMI:Performance_monitoring_interrupts
297394 ± 8% +22.8% 365274 ± 2% interrupts.CPU12.LOC:Local_timer_interrupts
8393 -10.5% 7512 ± 3% interrupts.CPU12.NMI:Non-maskable_interrupts
8393 -10.5% 7512 ± 3% interrupts.CPU12.PMI:Performance_monitoring_interrupts
297744 ± 8% +22.7% 365243 ± 2% interrupts.CPU13.LOC:Local_timer_interrupts
8353 -10.6% 7469 ± 3% interrupts.CPU13.NMI:Non-maskable_interrupts
8353 -10.6% 7469 ± 3% interrupts.CPU13.PMI:Performance_monitoring_interrupts
148.50 ± 17% -24.2% 112.50 ± 8% interrupts.CPU13.TLB:TLB_shootdowns
297692 ± 8% +22.7% 365311 ± 2% interrupts.CPU14.LOC:Local_timer_interrupts
8374 -10.4% 7501 ± 4% interrupts.CPU14.NMI:Non-maskable_interrupts
8374 -10.4% 7501 ± 4% interrupts.CPU14.PMI:Performance_monitoring_interrupts
297453 ± 8% +22.8% 365311 ± 2% interrupts.CPU15.LOC:Local_timer_interrupts
8336 -22.8% 6433 ± 26% interrupts.CPU15.NMI:Non-maskable_interrupts
8336 -22.8% 6433 ± 26% interrupts.CPU15.PMI:Performance_monitoring_interrupts
699.50 ± 21% +51.3% 1058 ± 10% interrupts.CPU15.RES:Rescheduling_interrupts
3375 ± 93% -94.8% 174.75 ± 26% interrupts.CPU2.134:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
297685 ± 8% +22.7% 365273 ± 2% interrupts.CPU2.LOC:Local_timer_interrupts
8357 -21.2% 6584 ± 25% interrupts.CPU2.NMI:Non-maskable_interrupts
8357 -21.2% 6584 ± 25% interrupts.CPU2.PMI:Performance_monitoring_interrupts
164.00 ± 30% -23.0% 126.25 ± 32% interrupts.CPU2.TLB:TLB_shootdowns
297352 ± 8% +22.9% 365371 ± 2% interrupts.CPU3.LOC:Local_timer_interrupts
8383 -10.6% 7493 ± 4% interrupts.CPU3.NMI:Non-maskable_interrupts
8383 -10.6% 7493 ± 4% interrupts.CPU3.PMI:Performance_monitoring_interrupts
780.50 ± 3% +32.7% 1035 ± 6% interrupts.CPU3.RES:Rescheduling_interrupts
297595 ± 8% +22.8% 365415 ± 2% interrupts.CPU4.LOC:Local_timer_interrupts
8382 -21.4% 6584 ± 25% interrupts.CPU4.NMI:Non-maskable_interrupts
8382 -21.4% 6584 ± 25% interrupts.CPU4.PMI:Performance_monitoring_interrupts
297720 ± 8% +22.7% 365347 ± 2% interrupts.CPU5.LOC:Local_timer_interrupts
8353 -32.0% 5679 ± 34% interrupts.CPU5.NMI:Non-maskable_interrupts
8353 -32.0% 5679 ± 34% interrupts.CPU5.PMI:Performance_monitoring_interrupts
727.00 ± 16% +98.3% 1442 ± 47% interrupts.CPU5.RES:Rescheduling_interrupts
297620 ± 8% +22.8% 365343 ± 2% interrupts.CPU6.LOC:Local_timer_interrupts
8388 -10.3% 7526 ± 4% interrupts.CPU6.NMI:Non-maskable_interrupts
8388 -10.3% 7526 ± 4% interrupts.CPU6.PMI:Performance_monitoring_interrupts
156.50 ± 3% -27.6% 113.25 ± 16% interrupts.CPU6.TLB:TLB_shootdowns
297690 ± 8% +22.7% 365363 ± 2% interrupts.CPU7.LOC:Local_timer_interrupts
8390 -23.1% 6449 ± 25% interrupts.CPU7.NMI:Non-maskable_interrupts
8390 -23.1% 6449 ± 25% interrupts.CPU7.PMI:Performance_monitoring_interrupts
918.00 ± 16% +49.4% 1371 ± 7% interrupts.CPU7.RES:Rescheduling_interrupts
120.00 ± 35% +70.8% 205.00 ± 17% interrupts.CPU7.TLB:TLB_shootdowns
297731 ± 8% +22.7% 365368 ± 2% interrupts.CPU8.LOC:Local_timer_interrupts
8393 -32.5% 5668 ± 35% interrupts.CPU8.NMI:Non-maskable_interrupts
8393 -32.5% 5668 ± 35% interrupts.CPU8.PMI:Performance_monitoring_interrupts
297779 ± 8% +22.7% 365399 ± 2% interrupts.CPU9.LOC:Local_timer_interrupts
8430 -10.8% 7517 ± 2% interrupts.CPU9.NMI:Non-maskable_interrupts
8430 -10.8% 7517 ± 2% interrupts.CPU9.PMI:Performance_monitoring_interrupts
956.50 +13.5% 1085 ± 4% interrupts.CPU9.RES:Rescheduling_interrupts
4762118 ± 8% +22.7% 5845069 ± 2% interrupts.LOC:Local_timer_interrupts
134093 -18.2% 109662 ± 11% interrupts.NMI:Non-maskable_interrupts
134093 -18.2% 109662 ± 11% interrupts.PMI:Performance_monitoring_interrupts
66.97 ± 9% -29.9 37.12 ± 49% perf-profile.calltrace.cycles-pp.deflate
66.67 ± 9% -29.7 36.97 ± 50% perf-profile.calltrace.cycles-pp.deflate_medium.deflate
43.24 ± 9% -18.6 24.61 ± 49% perf-profile.calltrace.cycles-pp.longest_match.deflate_medium.deflate
2.29 ± 14% -1.2 1.05 ± 58% perf-profile.calltrace.cycles-pp.deflateSetDictionary
0.74 ± 6% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.read.__libc_start_main
0.74 ± 7% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read.__libc_start_main
0.73 ± 7% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read.__libc_start_main
0.73 ± 7% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read.__libc_start_main
0.73 ± 7% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.ksys_read.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.26 ±100% +0.6 0.88 ± 45% perf-profile.calltrace.cycles-pp.vfs_statx.__do_sys_newfstatat.__x64_sys_newfstatat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.34 ±100% +0.7 1.02 ± 42% perf-profile.calltrace.cycles-pp.do_sys_open.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.28 ±100% +0.7 0.96 ± 44% perf-profile.calltrace.cycles-pp.__do_sys_newfstatat.__x64_sys_newfstatat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.28 ±100% +0.7 0.96 ± 44% perf-profile.calltrace.cycles-pp.__x64_sys_newfstatat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.34 ±100% +0.7 1.03 ± 42% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.8 0.77 ± 35% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.56 ± 9% +0.8 1.40 ± 45% perf-profile.calltrace.cycles-pp.__schedule.schedule.futex_wait_queue_me.futex_wait.do_futex
0.58 ± 10% +0.9 1.43 ± 45% perf-profile.calltrace.cycles-pp.schedule.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
0.33 ±100% +0.9 1.21 ± 28% perf-profile.calltrace.cycles-pp.menu_select.cpuidle_select.do_idle.cpu_startup_entry.start_secondary
0.34 ± 99% +0.9 1.27 ± 30% perf-profile.calltrace.cycles-pp.cpuidle_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +1.0 0.96 ± 62% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.62 ± 9% +1.0 1.60 ± 52% perf-profile.calltrace.cycles-pp.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex.do_syscall_64
0.68 ± 10% +1.0 1.73 ± 51% perf-profile.calltrace.cycles-pp.futex_wait.do_futex.__x64_sys_futex.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.46 ±100% +1.1 1.60 ± 43% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
0.47 ±100% +1.2 1.62 ± 43% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
17.73 ± 21% +19.1 36.84 ± 25% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
17.75 ± 20% +19.9 37.63 ± 26% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
17.84 ± 20% +20.0 37.82 ± 26% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
18.83 ± 20% +21.2 40.05 ± 27% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
18.89 ± 20% +21.2 40.11 ± 27% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
18.89 ± 20% +21.2 40.12 ± 27% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
20.14 ± 20% +22.5 42.66 ± 27% perf-profile.calltrace.cycles-pp.secondary_startup_64
66.97 ± 9% -29.9 37.12 ± 49% perf-profile.children.cycles-pp.deflate
66.83 ± 9% -29.8 37.06 ± 49% perf-profile.children.cycles-pp.deflate_medium
43.58 ± 9% -18.8 24.80 ± 49% perf-profile.children.cycles-pp.longest_match
2.29 ± 14% -1.2 1.12 ± 43% perf-profile.children.cycles-pp.deflateSetDictionary
0.84 -0.3 0.58 ± 19% perf-profile.children.cycles-pp.read
0.52 ± 13% -0.2 0.27 ± 43% perf-profile.children.cycles-pp.fill_window
0.06 +0.0 0.08 ± 13% perf-profile.children.cycles-pp.update_stack_state
0.07 ± 14% +0.0 0.11 ± 24% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.03 ±100% +0.1 0.09 ± 19% perf-profile.children.cycles-pp.find_next_and_bit
0.00 +0.1 0.06 ± 15% perf-profile.children.cycles-pp.refcount_inc_not_zero_checked
0.03 ±100% +0.1 0.08 ± 33% perf-profile.children.cycles-pp.free_pcppages_bulk
0.07 ± 7% +0.1 0.12 ± 36% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.00 +0.1 0.06 ± 28% perf-profile.children.cycles-pp.rb_erase
0.03 ±100% +0.1 0.09 ± 24% perf-profile.children.cycles-pp.shmem_undo_range
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.unlinkat
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.__x64_sys_unlinkat
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.do_unlinkat
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.ovl_destroy_inode
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.shmem_evict_inode
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.shmem_truncate_range
0.05 +0.1 0.12 ± 38% perf-profile.children.cycles-pp.unmap_vmas
0.00 +0.1 0.07 ± 30% perf-profile.children.cycles-pp.timerqueue_del
0.00 +0.1 0.07 ± 26% perf-profile.children.cycles-pp.idle_cpu
0.09 ± 17% +0.1 0.15 ± 19% perf-profile.children.cycles-pp.__update_load_avg_se
0.00 +0.1 0.07 ± 33% perf-profile.children.cycles-pp.unmap_region
0.00 +0.1 0.07 ± 33% perf-profile.children.cycles-pp.__alloc_fd
0.03 ±100% +0.1 0.10 ± 31% perf-profile.children.cycles-pp.destroy_inode
0.03 ±100% +0.1 0.10 ± 30% perf-profile.children.cycles-pp.evict
0.06 ± 16% +0.1 0.13 ± 35% perf-profile.children.cycles-pp.ovl_override_creds
0.00 +0.1 0.07 ± 26% perf-profile.children.cycles-pp.kernel_text_address
0.00 +0.1 0.07 ± 41% perf-profile.children.cycles-pp.file_remove_privs
0.07 ± 23% +0.1 0.14 ± 47% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.03 ±100% +0.1 0.10 ± 24% perf-profile.children.cycles-pp.__dentry_kill
0.03 ±100% +0.1 0.10 ± 29% perf-profile.children.cycles-pp.dentry_unlink_inode
0.03 ±100% +0.1 0.10 ± 29% perf-profile.children.cycles-pp.iput
0.03 ±100% +0.1 0.10 ± 54% perf-profile.children.cycles-pp.__close_fd
0.08 ± 25% +0.1 0.15 ± 35% perf-profile.children.cycles-pp.__switch_to
0.00 +0.1 0.07 ± 29% perf-profile.children.cycles-pp.__switch_to_asm
0.00 +0.1 0.08 ± 24% perf-profile.children.cycles-pp.__kernel_text_address
0.03 ±100% +0.1 0.11 ± 51% perf-profile.children.cycles-pp.enqueue_hrtimer
0.03 ±100% +0.1 0.11 ± 33% perf-profile.children.cycles-pp.rcu_gp_kthread_wake
0.03 ±100% +0.1 0.11 ± 33% perf-profile.children.cycles-pp.swake_up_one
0.03 ±100% +0.1 0.11 ± 33% perf-profile.children.cycles-pp.swake_up_locked
0.10 ± 30% +0.1 0.18 ± 17% perf-profile.children.cycles-pp.irqtime_account_irq
0.03 ±100% +0.1 0.11 ± 38% perf-profile.children.cycles-pp.unmap_page_range
0.00 +0.1 0.09 ± 37% perf-profile.children.cycles-pp.putname
0.03 ±100% +0.1 0.11 ± 28% perf-profile.children.cycles-pp.filemap_map_pages
0.07 ± 28% +0.1 0.16 ± 35% perf-profile.children.cycles-pp.getname
0.03 ±100% +0.1 0.11 ± 40% perf-profile.children.cycles-pp.unmap_single_vma
0.08 ± 29% +0.1 0.17 ± 38% perf-profile.children.cycles-pp.queued_spin_lock_slowpath
0.03 ±100% +0.1 0.12 ± 54% perf-profile.children.cycles-pp.setlocale
0.03 ±100% +0.1 0.12 ± 60% perf-profile.children.cycles-pp.__open64_nocancel
0.00 +0.1 0.09 ± 31% perf-profile.children.cycles-pp.__hrtimer_get_next_event
0.00 +0.1 0.10 ± 28% perf-profile.children.cycles-pp.get_unused_fd_flags
0.00 +0.1 0.10 ± 65% perf-profile.children.cycles-pp.timerqueue_add
0.07 ± 28% +0.1 0.17 ± 42% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.03 ±100% +0.1 0.12 ± 51% perf-profile.children.cycles-pp.__x64_sys_close
0.00 +0.1 0.10 ± 38% perf-profile.children.cycles-pp.do_lookup_x
0.03 ±100% +0.1 0.12 ± 23% perf-profile.children.cycles-pp.kmem_cache_free
0.04 ±100% +0.1 0.14 ± 53% perf-profile.children.cycles-pp.__do_munmap
0.00 +0.1 0.10 ± 35% perf-profile.children.cycles-pp.unwind_get_return_address
0.00 +0.1 0.10 ± 49% perf-profile.children.cycles-pp.shmem_add_to_page_cache
0.07 ± 20% +0.1 0.18 ± 25% perf-profile.children.cycles-pp.find_next_bit
0.08 ± 25% +0.1 0.18 ± 34% perf-profile.children.cycles-pp.dput
0.11 ± 33% +0.1 0.21 ± 37% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.08 ± 5% +0.1 0.19 ± 27% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +0.1 0.11 ± 52% perf-profile.children.cycles-pp.rcu_idle_exit
0.03 ±100% +0.1 0.14 ± 18% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.08 +0.1 0.19 ± 43% perf-profile.children.cycles-pp.exit_mmap
0.09 ± 22% +0.1 0.20 ± 57% perf-profile.children.cycles-pp.set_next_entity
0.07 ± 7% +0.1 0.18 ± 60% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.10 ± 26% +0.1 0.21 ± 32% perf-profile.children.cycles-pp.sched_clock
0.12 ± 25% +0.1 0.23 ± 39% perf-profile.children.cycles-pp.update_cfs_group
0.07 ± 14% +0.1 0.18 ± 45% perf-profile.children.cycles-pp.lapic_next_deadline
0.08 ± 5% +0.1 0.20 ± 44% perf-profile.children.cycles-pp.mmput
0.11 ± 18% +0.1 0.23 ± 41% perf-profile.children.cycles-pp.clockevents_program_event
0.07 ± 7% +0.1 0.18 ± 40% perf-profile.children.cycles-pp.strncpy_from_user
0.00 +0.1 0.12 ± 48% perf-profile.children.cycles-pp.flush_old_exec
0.11 ± 18% +0.1 0.23 ± 37% perf-profile.children.cycles-pp.native_sched_clock
0.09 ± 17% +0.1 0.21 ± 46% perf-profile.children.cycles-pp._dl_sysdep_start
0.12 ± 19% +0.1 0.26 ± 37% perf-profile.children.cycles-pp.tick_program_event
0.09 ± 33% +0.1 0.23 ± 61% perf-profile.children.cycles-pp.mmap_region
0.14 ± 21% +0.1 0.28 ± 39% perf-profile.children.cycles-pp.sched_clock_cpu
0.11 ± 27% +0.1 0.25 ± 56% perf-profile.children.cycles-pp.do_mmap
0.11 ± 36% +0.1 0.26 ± 57% perf-profile.children.cycles-pp.ksys_mmap_pgoff
0.09 ± 17% +0.1 0.23 ± 48% perf-profile.children.cycles-pp.lookup_fast
0.04 ±100% +0.2 0.19 ± 48% perf-profile.children.cycles-pp.open_path
0.11 ± 30% +0.2 0.27 ± 58% perf-profile.children.cycles-pp.vm_mmap_pgoff
0.11 ± 27% +0.2 0.28 ± 37% perf-profile.children.cycles-pp.update_blocked_averages
0.11 +0.2 0.29 ± 38% perf-profile.children.cycles-pp.search_binary_handler
0.11 ± 4% +0.2 0.29 ± 39% perf-profile.children.cycles-pp.load_elf_binary
0.11 ± 30% +0.2 0.30 ± 50% perf-profile.children.cycles-pp.inode_permission
0.04 ±100% +0.2 0.24 ± 55% perf-profile.children.cycles-pp.__GI___open64_nocancel
0.15 ± 29% +0.2 0.35 ± 34% perf-profile.children.cycles-pp.getname_flags
0.16 ± 25% +0.2 0.38 ± 26% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.18 ± 11% +0.2 0.41 ± 39% perf-profile.children.cycles-pp.execve
0.19 ± 5% +0.2 0.42 ± 37% perf-profile.children.cycles-pp.__x64_sys_execve
0.18 ± 2% +0.2 0.42 ± 37% perf-profile.children.cycles-pp.__do_execve_file
0.32 ± 18% +0.3 0.57 ± 33% perf-profile.children.cycles-pp.__account_scheduler_latency
0.21 ± 17% +0.3 0.48 ± 47% perf-profile.children.cycles-pp.schedule_idle
0.20 ± 19% +0.3 0.49 ± 33% perf-profile.children.cycles-pp.tick_nohz_next_event
0.21 ± 26% +0.3 0.55 ± 41% perf-profile.children.cycles-pp.find_busiest_group
0.32 ± 26% +0.3 0.67 ± 52% perf-profile.children.cycles-pp.__handle_mm_fault
0.22 ± 25% +0.4 0.57 ± 49% perf-profile.children.cycles-pp.filename_lookup
0.34 ± 27% +0.4 0.72 ± 50% perf-profile.children.cycles-pp.handle_mm_fault
0.42 ± 32% +0.4 0.80 ± 43% perf-profile.children.cycles-pp.shmem_getpage_gfp
0.36 ± 23% +0.4 0.77 ± 41% perf-profile.children.cycles-pp.load_balance
0.41 ± 30% +0.4 0.82 ± 50% perf-profile.children.cycles-pp.do_page_fault
0.39 ± 30% +0.4 0.80 ± 50% perf-profile.children.cycles-pp.__do_page_fault
0.28 ± 22% +0.4 0.70 ± 37% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.43 ± 31% +0.4 0.86 ± 49% perf-profile.children.cycles-pp.page_fault
0.31 ± 25% +0.5 0.77 ± 46% perf-profile.children.cycles-pp.user_path_at_empty
0.36 ± 20% +0.5 0.84 ± 34% perf-profile.children.cycles-pp.newidle_balance
0.45 ± 21% +0.5 0.95 ± 44% perf-profile.children.cycles-pp.vfs_statx
0.46 ± 20% +0.5 0.97 ± 43% perf-profile.children.cycles-pp.__do_sys_newfstatat
0.47 ± 20% +0.5 0.98 ± 44% perf-profile.children.cycles-pp.__x64_sys_newfstatat
0.29 ± 37% +0.5 0.81 ± 32% perf-profile.children.cycles-pp.io_serial_in
0.53 ± 25% +0.5 1.06 ± 49% perf-profile.children.cycles-pp.path_openat
0.55 ± 24% +0.5 1.09 ± 50% perf-profile.children.cycles-pp.do_filp_open
0.35 ± 2% +0.5 0.90 ± 60% perf-profile.children.cycles-pp.uart_console_write
0.35 ± 4% +0.6 0.91 ± 60% perf-profile.children.cycles-pp.console_unlock
0.35 ± 4% +0.6 0.91 ± 60% perf-profile.children.cycles-pp.univ8250_console_write
0.35 ± 4% +0.6 0.91 ± 60% perf-profile.children.cycles-pp.serial8250_console_write
0.82 ± 23% +0.6 1.42 ± 31% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.irq_work_interrupt
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.irq_work_run
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.perf_duration_warn
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.printk
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.vprintk_func
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.vprintk_default
0.47 ± 28% +0.6 1.11 ± 39% perf-profile.children.cycles-pp.irq_work_run_list
0.49 ± 31% +0.6 1.13 ± 39% perf-profile.children.cycles-pp.vprintk_emit
0.54 ± 19% +0.6 1.17 ± 38% perf-profile.children.cycles-pp.pick_next_task_fair
0.32 ± 7% +0.7 1.02 ± 56% perf-profile.children.cycles-pp.poll_idle
0.60 ± 15% +0.7 1.31 ± 29% perf-profile.children.cycles-pp.menu_select
0.65 ± 27% +0.7 1.36 ± 45% perf-profile.children.cycles-pp.do_sys_open
0.62 ± 15% +0.7 1.36 ± 31% perf-profile.children.cycles-pp.cpuidle_select
0.66 ± 26% +0.7 1.39 ± 44% perf-profile.children.cycles-pp.__x64_sys_openat
1.11 ± 22% +0.9 2.03 ± 31% perf-profile.children.cycles-pp.hrtimer_interrupt
0.89 ± 24% +0.9 1.81 ± 42% perf-profile.children.cycles-pp.futex_wait_queue_me
1.16 ± 27% +1.0 2.13 ± 36% perf-profile.children.cycles-pp.schedule
0.97 ± 23% +1.0 1.97 ± 42% perf-profile.children.cycles-pp.futex_wait
1.33 ± 25% +1.2 2.55 ± 39% perf-profile.children.cycles-pp.__schedule
1.84 ± 26% +1.6 3.42 ± 36% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.76 ± 22% +1.6 3.41 ± 40% perf-profile.children.cycles-pp.do_futex
1.79 ± 22% +1.7 3.49 ± 41% perf-profile.children.cycles-pp.__x64_sys_futex
2.23 ± 20% +1.7 3.98 ± 37% perf-profile.children.cycles-pp.apic_timer_interrupt
17.73 ± 21% +19.1 36.86 ± 25% perf-profile.children.cycles-pp.intel_idle
19.00 ± 21% +21.1 40.13 ± 26% perf-profile.children.cycles-pp.cpuidle_enter_state
19.02 ± 21% +21.2 40.19 ± 26% perf-profile.children.cycles-pp.cpuidle_enter
18.89 ± 20% +21.2 40.12 ± 27% perf-profile.children.cycles-pp.start_secondary
20.14 ± 20% +22.5 42.65 ± 27% perf-profile.children.cycles-pp.cpu_startup_entry
20.08 ± 20% +22.5 42.59 ± 27% perf-profile.children.cycles-pp.do_idle
20.14 ± 20% +22.5 42.66 ± 27% perf-profile.children.cycles-pp.secondary_startup_64
43.25 ± 9% -18.6 24.63 ± 49% perf-profile.self.cycles-pp.longest_match
22.74 ± 11% -10.8 11.97 ± 50% perf-profile.self.cycles-pp.deflate_medium
2.26 ± 14% -1.2 1.11 ± 44% perf-profile.self.cycles-pp.deflateSetDictionary
0.51 ± 12% -0.3 0.24 ± 57% perf-profile.self.cycles-pp.fill_window
0.07 ± 7% +0.0 0.10 ± 24% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.07 ± 7% +0.1 0.12 ± 36% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.08 ± 12% +0.1 0.14 ± 15% perf-profile.self.cycles-pp.__update_load_avg_se
0.06 +0.1 0.13 ± 27% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.08 ± 25% +0.1 0.15 ± 37% perf-profile.self.cycles-pp.__switch_to
0.06 ± 16% +0.1 0.13 ± 29% perf-profile.self.cycles-pp.get_page_from_freelist
0.00 +0.1 0.07 ± 29% perf-profile.self.cycles-pp.__switch_to_asm
0.05 +0.1 0.13 ± 57% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.00 +0.1 0.08 ± 41% perf-profile.self.cycles-pp.interrupt_entry
0.00 +0.1 0.08 ± 61% perf-profile.self.cycles-pp.run_timer_softirq
0.07 ± 23% +0.1 0.15 ± 43% perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.03 ±100% +0.1 0.12 ± 43% perf-profile.self.cycles-pp.update_cfs_group
0.08 ± 29% +0.1 0.17 ± 38% perf-profile.self.cycles-pp.queued_spin_lock_slowpath
0.00 +0.1 0.09 ± 29% perf-profile.self.cycles-pp.strncpy_from_user
0.06 ± 16% +0.1 0.15 ± 24% perf-profile.self.cycles-pp.find_next_bit
0.00 +0.1 0.10 ± 35% perf-profile.self.cycles-pp.do_lookup_x
0.00 +0.1 0.10 ± 13% perf-profile.self.cycles-pp.kmem_cache_free
0.06 ± 16% +0.1 0.16 ± 30% perf-profile.self.cycles-pp.do_idle
0.03 ±100% +0.1 0.13 ± 18% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.03 ±100% +0.1 0.14 ± 41% perf-profile.self.cycles-pp.update_blocked_averages
0.11 ± 18% +0.1 0.22 ± 37% perf-profile.self.cycles-pp.native_sched_clock
0.07 ± 14% +0.1 0.18 ± 46% perf-profile.self.cycles-pp.lapic_next_deadline
0.00 +0.1 0.12 ± 65% perf-profile.self.cycles-pp.set_next_entity
0.12 ± 28% +0.1 0.27 ± 32% perf-profile.self.cycles-pp.cpuidle_enter_state
0.15 ± 3% +0.2 0.32 ± 39% perf-profile.self.cycles-pp.io_serial_out
0.25 ± 4% +0.2 0.48 ± 19% perf-profile.self.cycles-pp.menu_select
0.15 ± 22% +0.3 0.42 ± 46% perf-profile.self.cycles-pp.find_busiest_group
0.29 ± 37% +0.4 0.71 ± 42% perf-profile.self.cycles-pp.io_serial_in
0.32 ± 7% +0.7 1.02 ± 56% perf-profile.self.cycles-pp.poll_idle
17.73 ± 21% +19.1 36.79 ± 25% perf-profile.self.cycles-pp.intel_idle
phoronix-test-suite.compress-gzip.0.seconds

[ASCII run-time chart, y-axis 0-8 seconds: bisect-bad samples ([O]) sit
visibly above the bisect-good samples ([*]), consistent with the reported
~19.8% regression in compress-gzip run time.]

[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-cfl-d1: 12 threads Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz with 8G memory
***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-09-23.cgz/300s/8T/lkp-skl-fpga01/anon-cow-seq/vm-scalability/0x2000064
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793  0b0695f2b34a4afa3f6e9aa1ff0
----------------  ---------------------------
         %stddev      %change         %stddev
             \           |               \
413301 +3.1% 426103 vm-scalability.median
0.04 ± 2% -34.0% 0.03 ± 12% vm-scalability.median_stddev
43837589 +2.4% 44902458 vm-scalability.throughput
181085 -18.7% 147221 vm-scalability.time.involuntary_context_switches
12762365 ± 2% +3.9% 13262025 vm-scalability.time.minor_page_faults
7773 +2.9% 7997 vm-scalability.time.percent_of_cpu_this_job_got
11449 +1.2% 11589 vm-scalability.time.system_time
12024 +4.7% 12584 vm-scalability.time.user_time
439194 ± 2% +46.0% 641402 ± 2% vm-scalability.time.voluntary_context_switches
1.148e+10 +5.0% 1.206e+10 vm-scalability.workload
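The %change column above is a plain relative delta of the patched commit's value against the base commit's. A minimal sketch reproducing the headline vm-scalability.median figure (helper name hypothetical, not part of the lkp tooling):

```python
def pct_change(base, patched):
    """Relative delta, as printed in the %change column."""
    return (patched - base) / base * 100.0

# vm-scalability.median: 413301 (fcf0553db6) -> 426103 (0b0695f2b3)
print(f"{pct_change(413301, 426103):+.1f}%")  # +3.1%
```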
0.00 ± 54% +0.0 0.00 ± 17% mpstat.cpu.all.iowait%
4767597 +52.5% 7268430 ± 41% numa-numastat.node1.local_node
4781030 +52.3% 7280347 ± 41% numa-numastat.node1.numa_hit
24.75 -9.1% 22.50 ± 2% vmstat.cpu.id
37.50 +4.7% 39.25 vmstat.cpu.us
6643 ± 3% +15.1% 7647 vmstat.system.cs
12220504 +33.4% 16298593 ± 4% cpuidle.C1.time
260215 ± 6% +55.3% 404158 ± 3% cpuidle.C1.usage
4986034 ± 3% +56.2% 7786811 ± 2% cpuidle.POLL.time
145941 ± 3% +61.2% 235218 ± 2% cpuidle.POLL.usage
1990 +3.0% 2049 turbostat.Avg_MHz
254633 ± 6% +56.7% 398892 ± 4% turbostat.C1
0.04 +0.0 0.05 turbostat.C1%
309.99 +1.5% 314.75 turbostat.RAMWatt
1688 ± 11% +17.4% 1983 ± 5% slabinfo.UNIX.active_objs
1688 ± 11% +17.4% 1983 ± 5% slabinfo.UNIX.num_objs
2460 ± 3% -15.8% 2072 ± 11% slabinfo.dmaengine-unmap-16.active_objs
2460 ± 3% -15.8% 2072 ± 11% slabinfo.dmaengine-unmap-16.num_objs
2814 ± 9% +14.6% 3225 ± 4% slabinfo.sock_inode_cache.active_objs
2814 ± 9% +14.6% 3225 ± 4% slabinfo.sock_inode_cache.num_objs
0.67 ± 5% +0.1 0.73 ± 3% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault
0.68 ± 6% +0.1 0.74 ± 2% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.05 +0.0 0.07 ± 7% perf-profile.children.cycles-pp.schedule
0.06 +0.0 0.08 ± 6% perf-profile.children.cycles-pp.__wake_up_common
0.06 ± 7% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.wake_up_page_bit
0.23 ± 7% +0.0 0.28 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +0.1 0.05 perf-profile.children.cycles-pp.drm_fb_helper_sys_imageblit
0.00 +0.1 0.05 perf-profile.children.cycles-pp.sys_imageblit
29026 ± 3% -26.7% 21283 ± 44% numa-vmstat.node0.nr_inactive_anon
30069 ± 3% -20.5% 23905 ± 26% numa-vmstat.node0.nr_shmem
12120 ± 2% -15.5% 10241 ± 12% numa-vmstat.node0.nr_slab_reclaimable
29026 ± 3% -26.7% 21283 ± 44% numa-vmstat.node0.nr_zone_inactive_anon
4010893 +16.1% 4655889 ± 9% numa-vmstat.node1.nr_active_anon
3982581 +16.3% 4632344 ± 9% numa-vmstat.node1.nr_anon_pages
6861 +16.1% 7964 ± 8% numa-vmstat.node1.nr_anon_transparent_hugepages
2317 ± 42% +336.9% 10125 ± 93% numa-vmstat.node1.nr_inactive_anon
6596 ± 4% +18.2% 7799 ± 14% numa-vmstat.node1.nr_kernel_stack
9629 ± 8% +66.4% 16020 ± 41% numa-vmstat.node1.nr_shmem
7558 ± 3% +26.5% 9561 ± 14% numa-vmstat.node1.nr_slab_reclaimable
4010227 +16.1% 4655056 ± 9% numa-vmstat.node1.nr_zone_active_anon
2317 ± 42% +336.9% 10125 ± 93% numa-vmstat.node1.nr_zone_inactive_anon
2859663 ± 2% +46.2% 4179500 ± 36% numa-vmstat.node1.numa_hit
2680260 ± 2% +49.3% 4002218 ± 37% numa-vmstat.node1.numa_local
116661 ± 3% -26.3% 86010 ± 44% numa-meminfo.node0.Inactive
116192 ± 3% -26.7% 85146 ± 44% numa-meminfo.node0.Inactive(anon)
48486 ± 2% -15.5% 40966 ± 12% numa-meminfo.node0.KReclaimable
48486 ± 2% -15.5% 40966 ± 12% numa-meminfo.node0.SReclaimable
120367 ± 3% -20.5% 95642 ± 26% numa-meminfo.node0.Shmem
16210528 +15.2% 18673368 ± 6% numa-meminfo.node1.Active
16210394 +15.2% 18673287 ± 6% numa-meminfo.node1.Active(anon)
14170064 +15.6% 16379835 ± 7% numa-meminfo.node1.AnonHugePages
16113351 +15.3% 18577254 ± 7% numa-meminfo.node1.AnonPages
10534 ± 33% +293.8% 41480 ± 92% numa-meminfo.node1.Inactive
9262 ± 42% +338.2% 40589 ± 93% numa-meminfo.node1.Inactive(anon)
30235 ± 3% +26.5% 38242 ± 14% numa-meminfo.node1.KReclaimable
6594 ± 4% +18.3% 7802 ± 14% numa-meminfo.node1.KernelStack
17083646 +15.1% 19656922 ± 7% numa-meminfo.node1.MemUsed
30235 ± 3% +26.5% 38242 ± 14% numa-meminfo.node1.SReclaimable
38540 ± 8% +66.4% 64117 ± 42% numa-meminfo.node1.Shmem
106342 +19.8% 127451 ± 11% numa-meminfo.node1.Slab
9479688 +4.5% 9905902 proc-vmstat.nr_active_anon
9434298 +4.5% 9856978 proc-vmstat.nr_anon_pages
16194 +4.3% 16895 proc-vmstat.nr_anon_transparent_hugepages
276.75 +3.6% 286.75 proc-vmstat.nr_dirtied
3888633 -1.1% 3845882 proc-vmstat.nr_dirty_background_threshold
7786774 -1.1% 7701168 proc-vmstat.nr_dirty_threshold
39168820 -1.1% 38741444 proc-vmstat.nr_free_pages
50391 +1.0% 50904 proc-vmstat.nr_slab_unreclaimable
257.50 +3.6% 266.75 proc-vmstat.nr_written
9479678 +4.5% 9905895 proc-vmstat.nr_zone_active_anon
1501517 -5.9% 1412958 proc-vmstat.numa_hint_faults
1075936 -13.1% 934706 proc-vmstat.numa_hint_faults_local
17306395 +4.8% 18141722 proc-vmstat.numa_hit
5211079 +4.2% 5427541 proc-vmstat.numa_huge_pte_updates
17272620 +4.8% 18107691 proc-vmstat.numa_local
33774 +0.8% 34031 proc-vmstat.numa_other
690793 ± 3% -13.7% 596166 ± 2% proc-vmstat.numa_pages_migrated
2.669e+09 +4.2% 2.78e+09 proc-vmstat.numa_pte_updates
2.755e+09 +5.6% 2.909e+09 proc-vmstat.pgalloc_normal
13573227 ± 2% +3.6% 14060842 proc-vmstat.pgfault
2.752e+09 +5.6% 2.906e+09 proc-vmstat.pgfree
1.723e+08 ± 2% +14.3% 1.97e+08 ± 8% proc-vmstat.pgmigrate_fail
690793 ± 3% -13.7% 596166 ± 2% proc-vmstat.pgmigrate_success
5015265 +5.0% 5266730 proc-vmstat.thp_deferred_split_page
5019661 +5.0% 5271482 proc-vmstat.thp_fault_alloc
18284 ± 62% -79.9% 3681 ±172% sched_debug.cfs_rq:/.MIN_vruntime.avg
1901618 ± 62% -89.9% 192494 ±172% sched_debug.cfs_rq:/.MIN_vruntime.max
185571 ± 62% -85.8% 26313 ±172% sched_debug.cfs_rq:/.MIN_vruntime.stddev
15241 ± 6% -36.6% 9655 ± 6% sched_debug.cfs_rq:/.exec_clock.stddev
18284 ± 62% -79.9% 3681 ±172% sched_debug.cfs_rq:/.max_vruntime.avg
1901618 ± 62% -89.9% 192494 ±172% sched_debug.cfs_rq:/.max_vruntime.max
185571 ± 62% -85.8% 26313 ±172% sched_debug.cfs_rq:/.max_vruntime.stddev
898812 ± 7% -31.2% 618552 ± 5% sched_debug.cfs_rq:/.min_vruntime.stddev
10.30 ± 12% +34.5% 13.86 ± 6% sched_debug.cfs_rq:/.nr_spread_over.avg
34.75 ± 8% +95.9% 68.08 ± 4% sched_debug.cfs_rq:/.nr_spread_over.max
9.12 ± 11% +82.3% 16.62 ± 9% sched_debug.cfs_rq:/.nr_spread_over.stddev
-1470498 -31.9% -1000709 sched_debug.cfs_rq:/.spread0.min
899820 ± 7% -31.2% 618970 ± 5% sched_debug.cfs_rq:/.spread0.stddev
1589 ± 9% -19.2% 1284 ± 9% sched_debug.cfs_rq:/.util_avg.max
0.54 ± 39% +7484.6% 41.08 ± 92% sched_debug.cfs_rq:/.util_est_enqueued.min
238.84 ± 8% -33.2% 159.61 ± 26% sched_debug.cfs_rq:/.util_est_enqueued.stddev
10787 ± 2% +13.8% 12274 sched_debug.cpu.nr_switches.avg
35242 ± 9% +32.3% 46641 ± 25% sched_debug.cpu.nr_switches.max
9139 ± 3% +16.4% 10636 sched_debug.cpu.sched_count.avg
32025 ± 10% +34.6% 43091 ± 27% sched_debug.cpu.sched_count.max
4016 ± 2% +14.7% 4606 ± 5% sched_debug.cpu.sched_count.min
2960 +38.3% 4093 sched_debug.cpu.sched_goidle.avg
11201 ± 24% +75.8% 19691 ± 26% sched_debug.cpu.sched_goidle.max
1099 ± 6% +56.9% 1725 ± 6% sched_debug.cpu.sched_goidle.min
1877 ± 10% +32.5% 2487 ± 17% sched_debug.cpu.sched_goidle.stddev
4348 ± 3% +19.3% 5188 sched_debug.cpu.ttwu_count.avg
17832 ± 11% +78.6% 31852 ± 29% sched_debug.cpu.ttwu_count.max
1699 ± 6% +28.2% 2178 ± 7% sched_debug.cpu.ttwu_count.min
1357 ± 10% -22.6% 1050 ± 4% sched_debug.cpu.ttwu_local.avg
11483 ± 5% -25.0% 8614 ± 15% sched_debug.cpu.ttwu_local.max
1979 ± 12% -36.8% 1251 ± 10% sched_debug.cpu.ttwu_local.stddev
3.941e+10 +5.0% 4.137e+10 perf-stat.i.branch-instructions
0.02 ± 50% -0.0 0.02 ± 5% perf-stat.i.branch-miss-rate%
67.94 -3.9 63.99 perf-stat.i.cache-miss-rate%
8.329e+08 -1.9% 8.17e+08 perf-stat.i.cache-misses
1.224e+09 +4.5% 1.28e+09 perf-stat.i.cache-references
6650 ± 3% +15.5% 7678 perf-stat.i.context-switches
1.64 -1.8% 1.61 perf-stat.i.cpi
2.037e+11 +2.8% 2.095e+11 perf-stat.i.cpu-cycles
257.56 -4.0% 247.13 perf-stat.i.cpu-migrations
244.94 +4.5% 255.91 perf-stat.i.cycles-between-cache-misses
1189446 ± 2% +3.2% 1227527 perf-stat.i.dTLB-load-misses
2.669e+10 +4.7% 2.794e+10 perf-stat.i.dTLB-loads
0.00 ± 7% -0.0 0.00 perf-stat.i.dTLB-store-miss-rate%
337782 +4.5% 353044 perf-stat.i.dTLB-store-misses
9.096e+09 +4.7% 9.526e+09 perf-stat.i.dTLB-stores
39.50 +2.1 41.64 perf-stat.i.iTLB-load-miss-rate%
296305 ± 2% +9.0% 323020 perf-stat.i.iTLB-load-misses
1.238e+11 +4.9% 1.299e+11 perf-stat.i.instructions
428249 ± 2% -4.4% 409553 perf-stat.i.instructions-per-iTLB-miss
0.61 +1.6% 0.62 perf-stat.i.ipc
44430 +3.8% 46121 perf-stat.i.minor-faults
54.82 +3.9 58.73 perf-stat.i.node-load-miss-rate%
68519419 ± 4% -11.7% 60479057 ± 6% perf-stat.i.node-load-misses
49879161 ± 3% -20.7% 39554915 ± 4% perf-stat.i.node-loads
44428 +3.8% 46119 perf-stat.i.page-faults
0.02 -0.0 0.01 ± 5% perf-stat.overall.branch-miss-rate%
68.03 -4.2 63.83 perf-stat.overall.cache-miss-rate%
1.65 -2.0% 1.61 perf-stat.overall.cpi
244.61 +4.8% 256.41 perf-stat.overall.cycles-between-cache-misses
30.21 +2.2 32.38 perf-stat.overall.iTLB-load-miss-rate%
417920 ± 2% -3.7% 402452 perf-stat.overall.instructions-per-iTLB-miss
0.61 +2.1% 0.62 perf-stat.overall.ipc
57.84 +2.6 60.44 perf-stat.overall.node-load-miss-rate%
3.925e+10 +5.1% 4.124e+10 perf-stat.ps.branch-instructions
8.295e+08 -1.8% 8.144e+08 perf-stat.ps.cache-misses
1.219e+09 +4.6% 1.276e+09 perf-stat.ps.cache-references
6625 ± 3% +15.4% 7648 perf-stat.ps.context-switches
2.029e+11 +2.9% 2.088e+11 perf-stat.ps.cpu-cycles
256.82 -4.2% 246.09 perf-stat.ps.cpu-migrations
1184763 ± 2% +3.3% 1223366 perf-stat.ps.dTLB-load-misses
2.658e+10 +4.8% 2.786e+10 perf-stat.ps.dTLB-loads
336658 +4.5% 351710 perf-stat.ps.dTLB-store-misses
9.059e+09 +4.8% 9.497e+09 perf-stat.ps.dTLB-stores
295140 ± 2% +9.0% 321824 perf-stat.ps.iTLB-load-misses
1.233e+11 +5.0% 1.295e+11 perf-stat.ps.instructions
44309 +3.7% 45933 perf-stat.ps.minor-faults
68208972 ± 4% -11.6% 60272675 ± 6% perf-stat.ps.node-load-misses
49689740 ± 3% -20.7% 39401789 ± 4% perf-stat.ps.node-loads
44308 +3.7% 45932 perf-stat.ps.page-faults
3.732e+13 +5.1% 3.922e+13 perf-stat.total.instructions
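As a consistency check, the ipc and cpi rows in the perf-stat table above are just ratios of the instructions and cpu-cycles rows. A minimal sketch using the base-commit per-second values from this table:

```python
# perf-stat.ps values for the base commit (fcf0553db6), from the table above
instructions = 1.233e11   # perf-stat.ps.instructions
cycles = 2.029e11         # perf-stat.ps.cpu-cycles

ipc = instructions / cycles   # instructions per cycle (perf-stat.overall.ipc)
cpi = cycles / instructions   # cycles per instruction (perf-stat.overall.cpi)

print(f"ipc={ipc:.2f} cpi={cpi:.2f}")  # ipc=0.61 cpi=1.65
```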
14949 ± 2% +14.5% 17124 ± 11% softirqs.CPU0.SCHED
9940 +37.8% 13700 ± 24% softirqs.CPU1.SCHED
9370 ± 2% +28.2% 12014 ± 16% softirqs.CPU10.SCHED
17637 ± 2% -16.5% 14733 ± 16% softirqs.CPU101.SCHED
17846 ± 3% -17.4% 14745 ± 16% softirqs.CPU103.SCHED
9552 +24.7% 11916 ± 17% softirqs.CPU11.SCHED
9210 ± 5% +27.9% 11784 ± 16% softirqs.CPU12.SCHED
9378 ± 3% +27.7% 11974 ± 16% softirqs.CPU13.SCHED
9164 ± 2% +29.4% 11856 ± 18% softirqs.CPU14.SCHED
9215 +21.2% 11170 ± 19% softirqs.CPU15.SCHED
9118 ± 2% +29.1% 11772 ± 16% softirqs.CPU16.SCHED
9413 +29.2% 12165 ± 18% softirqs.CPU17.SCHED
9309 ± 2% +29.9% 12097 ± 17% softirqs.CPU18.SCHED
9423 +26.1% 11880 ± 15% softirqs.CPU19.SCHED
9010 ± 7% +37.8% 12420 ± 18% softirqs.CPU2.SCHED
9382 ± 3% +27.0% 11916 ± 15% softirqs.CPU20.SCHED
9102 ± 4% +30.0% 11830 ± 16% softirqs.CPU21.SCHED
9543 ± 3% +23.4% 11780 ± 18% softirqs.CPU22.SCHED
8998 ± 5% +29.2% 11630 ± 18% softirqs.CPU24.SCHED
9254 ± 2% +23.9% 11462 ± 19% softirqs.CPU25.SCHED
18450 ± 4% -16.9% 15341 ± 16% softirqs.CPU26.SCHED
17551 ± 4% -14.8% 14956 ± 13% softirqs.CPU27.SCHED
17575 ± 4% -14.6% 15010 ± 14% softirqs.CPU28.SCHED
17515 ± 5% -14.2% 15021 ± 13% softirqs.CPU29.SCHED
17715 ± 2% -16.1% 14856 ± 13% softirqs.CPU30.SCHED
17754 ± 4% -16.1% 14904 ± 13% softirqs.CPU31.SCHED
17675 ± 2% -17.0% 14679 ± 21% softirqs.CPU32.SCHED
17625 ± 2% -16.0% 14813 ± 13% softirqs.CPU34.SCHED
17619 ± 2% -14.7% 15024 ± 14% softirqs.CPU35.SCHED
17887 ± 3% -17.0% 14841 ± 14% softirqs.CPU36.SCHED
17658 ± 3% -16.3% 14771 ± 12% softirqs.CPU38.SCHED
17501 ± 2% -15.3% 14816 ± 14% softirqs.CPU39.SCHED
9360 ± 2% +25.4% 11740 ± 14% softirqs.CPU4.SCHED
17699 ± 4% -16.2% 14827 ± 14% softirqs.CPU42.SCHED
17580 ± 3% -16.5% 14679 ± 15% softirqs.CPU43.SCHED
17658 ± 3% -17.1% 14644 ± 14% softirqs.CPU44.SCHED
17452 ± 4% -14.0% 15001 ± 15% softirqs.CPU46.SCHED
17599 ± 4% -17.4% 14544 ± 14% softirqs.CPU47.SCHED
17792 ± 3% -16.5% 14864 ± 14% softirqs.CPU48.SCHED
17333 ± 2% -16.7% 14445 ± 14% softirqs.CPU49.SCHED
9483 +32.3% 12547 ± 24% softirqs.CPU5.SCHED
17842 ± 3% -15.9% 14997 ± 16% softirqs.CPU51.SCHED
9051 ± 2% +23.3% 11160 ± 13% softirqs.CPU52.SCHED
9385 ± 3% +25.2% 11752 ± 16% softirqs.CPU53.SCHED
9446 ± 6% +24.9% 11798 ± 14% softirqs.CPU54.SCHED
10006 ± 6% +22.4% 12249 ± 14% softirqs.CPU55.SCHED
9657 +22.0% 11780 ± 16% softirqs.CPU57.SCHED
9399 +27.5% 11980 ± 15% softirqs.CPU58.SCHED
9234 ± 3% +27.7% 11795 ± 14% softirqs.CPU59.SCHED
9726 ± 6% +24.0% 12062 ± 16% softirqs.CPU6.SCHED
9165 ± 2% +23.7% 11342 ± 14% softirqs.CPU60.SCHED
9357 ± 2% +25.8% 11774 ± 15% softirqs.CPU61.SCHED
9406 ± 3% +25.2% 11780 ± 16% softirqs.CPU62.SCHED
9489 +23.2% 11688 ± 15% softirqs.CPU63.SCHED
9399 ± 2% +23.5% 11604 ± 16% softirqs.CPU65.SCHED
8950 ± 2% +31.6% 11774 ± 16% softirqs.CPU66.SCHED
9260 +21.7% 11267 ± 19% softirqs.CPU67.SCHED
9187 +27.1% 11672 ± 17% softirqs.CPU68.SCHED
9443 ± 2% +25.5% 11847 ± 17% softirqs.CPU69.SCHED
9144 ± 3% +28.0% 11706 ± 16% softirqs.CPU7.SCHED
9276 ± 2% +28.0% 11871 ± 17% softirqs.CPU70.SCHED
9494 +21.4% 11526 ± 14% softirqs.CPU71.SCHED
9124 ± 3% +27.8% 11657 ± 17% softirqs.CPU72.SCHED
9189 ± 3% +25.9% 11568 ± 16% softirqs.CPU73.SCHED
9392 ± 2% +23.7% 11619 ± 16% softirqs.CPU74.SCHED
17821 ± 3% -14.7% 15197 ± 17% softirqs.CPU78.SCHED
17581 ± 2% -15.7% 14827 ± 15% softirqs.CPU79.SCHED
9123 +28.2% 11695 ± 15% softirqs.CPU8.SCHED
17524 ± 2% -16.7% 14601 ± 14% softirqs.CPU80.SCHED
17644 ± 3% -16.2% 14782 ± 14% softirqs.CPU81.SCHED
17705 ± 3% -18.6% 14414 ± 22% softirqs.CPU84.SCHED
17679 ± 2% -14.1% 15185 ± 11% softirqs.CPU85.SCHED
17434 ± 3% -15.5% 14724 ± 14% softirqs.CPU86.SCHED
17409 ± 2% -15.0% 14794 ± 13% softirqs.CPU87.SCHED
17470 ± 3% -15.7% 14730 ± 13% softirqs.CPU88.SCHED
17748 ± 4% -17.1% 14721 ± 12% softirqs.CPU89.SCHED
9323 +28.0% 11929 ± 17% softirqs.CPU9.SCHED
17471 ± 2% -16.9% 14525 ± 13% softirqs.CPU90.SCHED
17900 ± 3% -17.0% 14850 ± 14% softirqs.CPU94.SCHED
17599 ± 4% -17.4% 14544 ± 15% softirqs.CPU95.SCHED
17697 ± 4% -17.7% 14569 ± 13% softirqs.CPU96.SCHED
17561 ± 3% -15.1% 14901 ± 13% softirqs.CPU97.SCHED
17404 ± 3% -16.1% 14601 ± 13% softirqs.CPU98.SCHED
17802 ± 3% -19.4% 14344 ± 15% softirqs.CPU99.SCHED
1310 ± 10% -17.0% 1088 ± 5% interrupts.CPU1.RES:Rescheduling_interrupts
3427 +13.3% 3883 ± 9% interrupts.CPU10.CAL:Function_call_interrupts
736.50 ± 20% +34.4% 989.75 ± 17% interrupts.CPU100.RES:Rescheduling_interrupts
3421 ± 3% +14.6% 3921 ± 9% interrupts.CPU101.CAL:Function_call_interrupts
4873 ± 8% +16.2% 5662 ± 7% interrupts.CPU101.NMI:Non-maskable_interrupts
4873 ± 8% +16.2% 5662 ± 7% interrupts.CPU101.PMI:Performance_monitoring_interrupts
629.50 ± 19% +83.2% 1153 ± 46% interrupts.CPU101.RES:Rescheduling_interrupts
661.75 ± 14% +25.7% 832.00 ± 13% interrupts.CPU102.RES:Rescheduling_interrupts
4695 ± 5% +15.5% 5420 ± 9% interrupts.CPU103.NMI:Non-maskable_interrupts
4695 ± 5% +15.5% 5420 ± 9% interrupts.CPU103.PMI:Performance_monitoring_interrupts
3460 +12.1% 3877 ± 9% interrupts.CPU11.CAL:Function_call_interrupts
691.50 ± 7% +41.0% 975.00 ± 32% interrupts.CPU19.RES:Rescheduling_interrupts
3413 ± 2% +13.4% 3870 ± 10% interrupts.CPU20.CAL:Function_call_interrupts
3413 ± 2% +13.4% 3871 ± 10% interrupts.CPU22.CAL:Function_call_interrupts
863.00 ± 36% +45.3% 1254 ± 24% interrupts.CPU23.RES:Rescheduling_interrupts
659.75 ± 12% +83.4% 1209 ± 20% interrupts.CPU26.RES:Rescheduling_interrupts
615.00 ± 10% +87.8% 1155 ± 14% interrupts.CPU27.RES:Rescheduling_interrupts
663.75 ± 5% +67.9% 1114 ± 7% interrupts.CPU28.RES:Rescheduling_interrupts
3421 ± 4% +13.4% 3879 ± 9% interrupts.CPU29.CAL:Function_call_interrupts
805.25 ± 16% +33.0% 1071 ± 15% interrupts.CPU29.RES:Rescheduling_interrupts
3482 ± 3% +11.0% 3864 ± 8% interrupts.CPU3.CAL:Function_call_interrupts
819.75 ± 19% +48.4% 1216 ± 12% interrupts.CPU30.RES:Rescheduling_interrupts
777.25 ± 8% +31.6% 1023 ± 6% interrupts.CPU31.RES:Rescheduling_interrupts
844.50 ± 25% +41.7% 1196 ± 20% interrupts.CPU32.RES:Rescheduling_interrupts
722.75 ± 14% +94.2% 1403 ± 26% interrupts.CPU33.RES:Rescheduling_interrupts
3944 ± 25% +36.8% 5394 ± 9% interrupts.CPU34.NMI:Non-maskable_interrupts
3944 ± 25% +36.8% 5394 ± 9% interrupts.CPU34.PMI:Performance_monitoring_interrupts
781.75 ± 9% +45.3% 1136 ± 27% interrupts.CPU34.RES:Rescheduling_interrupts
735.50 ± 9% +33.3% 980.75 ± 4% interrupts.CPU35.RES:Rescheduling_interrupts
691.75 ± 10% +41.6% 979.50 ± 13% interrupts.CPU36.RES:Rescheduling_interrupts
727.00 ± 16% +47.7% 1074 ± 15% interrupts.CPU37.RES:Rescheduling_interrupts
4413 ± 7% +24.9% 5511 ± 9% interrupts.CPU38.NMI:Non-maskable_interrupts
4413 ± 7% +24.9% 5511 ± 9% interrupts.CPU38.PMI:Performance_monitoring_interrupts
708.75 ± 25% +62.6% 1152 ± 22% interrupts.CPU38.RES:Rescheduling_interrupts
666.50 ± 7% +57.8% 1052 ± 13% interrupts.CPU39.RES:Rescheduling_interrupts
765.75 ± 11% +25.2% 958.75 ± 14% interrupts.CPU4.RES:Rescheduling_interrupts
3395 ± 2% +15.1% 3908 ± 10% interrupts.CPU40.CAL:Function_call_interrupts
770.00 ± 16% +45.3% 1119 ± 18% interrupts.CPU40.RES:Rescheduling_interrupts
740.50 ± 26% +61.9% 1198 ± 19% interrupts.CPU41.RES:Rescheduling_interrupts
3459 ± 2% +12.9% 3905 ± 11% interrupts.CPU42.CAL:Function_call_interrupts
4530 ± 5% +22.8% 5564 ± 9% interrupts.CPU42.NMI:Non-maskable_interrupts
4530 ± 5% +22.8% 5564 ± 9% interrupts.CPU42.PMI:Performance_monitoring_interrupts
3330 ± 25% +60.0% 5328 ± 10% interrupts.CPU44.NMI:Non-maskable_interrupts
3330 ± 25% +60.0% 5328 ± 10% interrupts.CPU44.PMI:Performance_monitoring_interrupts
686.25 ± 9% +48.4% 1018 ± 10% interrupts.CPU44.RES:Rescheduling_interrupts
702.00 ± 15% +38.6% 973.25 ± 5% interrupts.CPU45.RES:Rescheduling_interrupts
4742 ± 7% +19.3% 5657 ± 8% interrupts.CPU46.NMI:Non-maskable_interrupts
4742 ± 7% +19.3% 5657 ± 8% interrupts.CPU46.PMI:Performance_monitoring_interrupts
732.75 ± 6% +51.9% 1113 ± 7% interrupts.CPU46.RES:Rescheduling_interrupts
775.50 ± 17% +41.3% 1095 ± 6% interrupts.CPU47.RES:Rescheduling_interrupts
670.75 ± 5% +60.7% 1078 ± 6% interrupts.CPU48.RES:Rescheduling_interrupts
4870 ± 8% +16.5% 5676 ± 7% interrupts.CPU49.NMI:Non-maskable_interrupts
4870 ± 8% +16.5% 5676 ± 7% interrupts.CPU49.PMI:Performance_monitoring_interrupts
694.75 ± 12% +25.8% 874.00 ± 11% interrupts.CPU49.RES:Rescheduling_interrupts
686.00 ± 9% +52.0% 1042 ± 20% interrupts.CPU50.RES:Rescheduling_interrupts
3361 +17.2% 3938 ± 9% interrupts.CPU51.CAL:Function_call_interrupts
4707 ± 6% +16.0% 5463 ± 8% interrupts.CPU51.NMI:Non-maskable_interrupts
4707 ± 6% +16.0% 5463 ± 8% interrupts.CPU51.PMI:Performance_monitoring_interrupts
638.75 ± 12% +28.6% 821.25 ± 15% interrupts.CPU54.RES:Rescheduling_interrupts
677.50 ± 8% +51.8% 1028 ± 29% interrupts.CPU58.RES:Rescheduling_interrupts
3465 ± 2% +12.0% 3880 ± 9% interrupts.CPU6.CAL:Function_call_interrupts
641.25 ± 2% +26.1% 808.75 ± 10% interrupts.CPU60.RES:Rescheduling_interrupts
599.75 ± 2% +45.6% 873.50 ± 8% interrupts.CPU62.RES:Rescheduling_interrupts
661.50 ± 9% +52.4% 1008 ± 27% interrupts.CPU63.RES:Rescheduling_interrupts
611.00 ± 12% +31.1% 801.00 ± 13% interrupts.CPU69.RES:Rescheduling_interrupts
3507 ± 2% +10.8% 3888 ± 9% interrupts.CPU7.CAL:Function_call_interrupts
664.00 ± 5% +32.3% 878.50 ± 23% interrupts.CPU70.RES:Rescheduling_interrupts
5780 ± 9% -38.8% 3540 ± 37% interrupts.CPU73.NMI:Non-maskable_interrupts
5780 ± 9% -38.8% 3540 ± 37% interrupts.CPU73.PMI:Performance_monitoring_interrupts
5787 ± 9% -26.7% 4243 ± 28% interrupts.CPU76.NMI:Non-maskable_interrupts
5787 ± 9% -26.7% 4243 ± 28% interrupts.CPU76.PMI:Performance_monitoring_interrupts
751.50 ± 15% +88.0% 1413 ± 37% interrupts.CPU78.RES:Rescheduling_interrupts
725.50 ± 12% +82.9% 1327 ± 36% interrupts.CPU79.RES:Rescheduling_interrupts
714.00 ± 18% +33.2% 951.00 ± 15% interrupts.CPU80.RES:Rescheduling_interrupts
706.25 ± 19% +55.6% 1098 ± 27% interrupts.CPU82.RES:Rescheduling_interrupts
4524 ± 6% +19.6% 5409 ± 8% interrupts.CPU83.NMI:Non-maskable_interrupts
4524 ± 6% +19.6% 5409 ± 8% interrupts.CPU83.PMI:Performance_monitoring_interrupts
666.75 ± 15% +37.3% 915.50 ± 4% interrupts.CPU83.RES:Rescheduling_interrupts
782.50 ± 26% +57.6% 1233 ± 21% interrupts.CPU84.RES:Rescheduling_interrupts
622.75 ± 12% +77.8% 1107 ± 17% interrupts.CPU85.RES:Rescheduling_interrupts
3465 ± 3% +13.5% 3933 ± 9% interrupts.CPU86.CAL:Function_call_interrupts
714.75 ± 14% +47.0% 1050 ± 10% interrupts.CPU86.RES:Rescheduling_interrupts
3519 ± 2% +11.7% 3929 ± 9% interrupts.CPU87.CAL:Function_call_interrupts
582.75 ± 10% +54.2% 898.75 ± 11% interrupts.CPU87.RES:Rescheduling_interrupts
713.00 ± 10% +36.6% 974.25 ± 11% interrupts.CPU88.RES:Rescheduling_interrupts
690.50 ± 13% +53.0% 1056 ± 13% interrupts.CPU89.RES:Rescheduling_interrupts
3477 +11.0% 3860 ± 8% interrupts.CPU9.CAL:Function_call_interrupts
684.50 ± 14% +39.7% 956.25 ± 11% interrupts.CPU90.RES:Rescheduling_interrupts
3946 ± 21% +39.8% 5516 ± 10% interrupts.CPU91.NMI:Non-maskable_interrupts
3946 ± 21% +39.8% 5516 ± 10% interrupts.CPU91.PMI:Performance_monitoring_interrupts
649.00 ± 13% +54.3% 1001 ± 6% interrupts.CPU91.RES:Rescheduling_interrupts
674.25 ± 21% +39.5% 940.25 ± 11% interrupts.CPU92.RES:Rescheduling_interrupts
3971 ± 26% +41.2% 5606 ± 8% interrupts.CPU94.NMI:Non-maskable_interrupts
3971 ± 26% +41.2% 5606 ± 8% interrupts.CPU94.PMI:Performance_monitoring_interrupts
4129 ± 22% +33.2% 5499 ± 9% interrupts.CPU95.NMI:Non-maskable_interrupts
4129 ± 22% +33.2% 5499 ± 9% interrupts.CPU95.PMI:Performance_monitoring_interrupts
685.75 ± 14% +38.0% 946.50 ± 9% interrupts.CPU96.RES:Rescheduling_interrupts
4630 ± 11% +18.3% 5477 ± 8% interrupts.CPU97.NMI:Non-maskable_interrupts
4630 ± 11% +18.3% 5477 ± 8% interrupts.CPU97.PMI:Performance_monitoring_interrupts
4835 ± 9% +16.3% 5622 ± 9% interrupts.CPU98.NMI:Non-maskable_interrupts
4835 ± 9% +16.3% 5622 ± 9% interrupts.CPU98.PMI:Performance_monitoring_interrupts
596.25 ± 11% +81.8% 1083 ± 9% interrupts.CPU98.RES:Rescheduling_interrupts
674.75 ± 17% +43.7% 969.50 ± 5% interrupts.CPU99.RES:Rescheduling_interrupts
78.25 ± 13% +21.4% 95.00 ± 10% interrupts.IWI:IRQ_work_interrupts
85705 ± 6% +26.0% 107990 ± 6% interrupts.RES:Rescheduling_interrupts
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/sc_pid_max/tbox_group/testcase/testtime/ucode:
scheduler/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/4194304/lkp-bdw-ep6/stress-ng/1s/0xb000038
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793  0b0695f2b34a4afa3f6e9aa1ff0
----------------  ---------------------------
         %stddev      %change         %stddev
             \           |               \
887157 ± 4% -23.1% 682080 ± 3% stress-ng.fault.ops
887743 ± 4% -23.1% 682337 ± 3% stress-ng.fault.ops_per_sec
9537184 ± 10% -21.2% 7518352 ± 14% stress-ng.hrtimers.ops_per_sec
360922 ± 13% -21.1% 284734 ± 6% stress-ng.kill.ops
361115 ± 13% -21.1% 284810 ± 6% stress-ng.kill.ops_per_sec
23260649 -26.9% 17006477 ± 24% stress-ng.mq.ops
23255884 -26.9% 17004540 ± 24% stress-ng.mq.ops_per_sec
3291588 ± 3% +42.5% 4690316 ± 2% stress-ng.schedpolicy.ops
3327913 ± 3% +41.5% 4709770 ± 2% stress-ng.schedpolicy.ops_per_sec
48.14 -2.2% 47.09 stress-ng.time.elapsed_time
48.14 -2.2% 47.09 stress-ng.time.elapsed_time.max
5480 +3.7% 5681 stress-ng.time.percent_of_cpu_this_job_got
2249 +1.3% 2278 stress-ng.time.system_time
902759 ± 4% -22.6% 698616 ± 3% proc-vmstat.unevictable_pgs_culled
98767954 ± 7% +16.4% 1.15e+08 ± 7% cpuidle.C1.time
1181676 ± 12% -43.2% 671022 ± 37% cpuidle.C6.usage
2.21 ± 7% +0.4 2.62 ± 10% turbostat.C1%
1176838 ± 12% -43.2% 668921 ± 37% turbostat.C6
3961223 ± 4% +12.8% 4469620 ± 5% vmstat.memory.cache
439.50 ± 3% +14.7% 504.00 ± 9% vmstat.procs.r
0.42 ± 7% -15.6% 0.35 ± 13% sched_debug.cfs_rq:/.nr_running.stddev
0.00 ± 4% -18.1% 0.00 ± 16% sched_debug.cpu.next_balance.stddev
0.41 ± 7% -15.1% 0.35 ± 13% sched_debug.cpu.nr_running.stddev
9367 ± 9% -12.8% 8166 ± 2% softirqs.CPU1.SCHED
35143 ± 6% -12.0% 30930 ± 2% softirqs.CPU22.TIMER
31997 ± 4% -7.5% 29595 ± 2% softirqs.CPU27.TIMER
3.64 ±173% -100.0% 0.00 iostat.sda.await.max
3.64 ±173% -100.0% 0.00 iostat.sda.r_await.max
3.90 ±173% -100.0% 0.00 iostat.sdc.await.max
3.90 ±173% -100.0% 0.00 iostat.sdc.r_await.max
12991737 ± 10% +61.5% 20979642 ± 8% numa-numastat.node0.local_node
13073590 ± 10% +61.1% 21059448 ± 8% numa-numastat.node0.numa_hit
20903562 ± 3% -32.2% 14164789 ± 3% numa-numastat.node1.local_node
20993788 ± 3% -32.1% 14245636 ± 3% numa-numastat.node1.numa_hit
90229 ± 4% -10.4% 80843 ± 9% numa-numastat.node1.other_node
50.75 ± 90% +1732.0% 929.75 ±147% interrupts.CPU23.IWI:IRQ_work_interrupts
40391 ± 59% -57.0% 17359 ± 11% interrupts.CPU24.RES:Rescheduling_interrupts
65670 ± 11% -48.7% 33716 ± 54% interrupts.CPU42.RES:Rescheduling_interrupts
42201 ± 46% -57.1% 18121 ± 35% interrupts.CPU49.RES:Rescheduling_interrupts
293869 ± 44% +103.5% 598082 ± 23% interrupts.CPU52.LOC:Local_timer_interrupts
17367 ± 8% +120.5% 38299 ± 44% interrupts.CPU55.RES:Rescheduling_interrupts
1.127e+08 +3.8% 1.17e+08 ± 2% perf-stat.i.branch-misses
11.10 +1.2 12.26 ± 6% perf-stat.i.cache-miss-rate%
4.833e+10 ± 3% +4.7% 5.06e+10 perf-stat.i.instructions
15009442 ± 4% +14.3% 17150138 ± 3% perf-stat.i.node-load-misses
47.12 ± 5% +3.2 50.37 ± 5% perf-stat.i.node-store-miss-rate%
6016833 ± 7% +17.0% 7036803 ± 3% perf-stat.i.node-store-misses
1.044e+10 ± 2% +4.0% 1.086e+10 perf-stat.ps.branch-instructions
1.364e+10 ± 3% +4.0% 1.418e+10 perf-stat.ps.dTLB-loads
4.804e+10 ± 2% +4.1% 5.003e+10 perf-stat.ps.instructions
14785608 ± 5% +11.3% 16451530 ± 3% perf-stat.ps.node-load-misses
5968712 ± 7% +13.4% 6769847 ± 3% perf-stat.ps.node-store-misses
13588 ± 4% +29.4% 17585 ± 9% slabinfo.Acpi-State.active_objs
13588 ± 4% +29.4% 17585 ± 9% slabinfo.Acpi-State.num_objs
20859 ± 3% -8.6% 19060 ± 4% slabinfo.kmalloc-192.num_objs
488.00 ± 25% +41.0% 688.00 ± 5% slabinfo.kmalloc-rcl-128.active_objs
488.00 ± 25% +41.0% 688.00 ± 5% slabinfo.kmalloc-rcl-128.num_objs
39660 ± 3% +11.8% 44348 ± 2% slabinfo.radix_tree_node.active_objs
44284 ± 3% +12.3% 49720 slabinfo.radix_tree_node.num_objs
5811 ± 15% +16.1% 6746 ± 14% slabinfo.sighand_cache.active_objs
402.00 ± 15% +17.5% 472.50 ± 14% slabinfo.sighand_cache.active_slabs
6035 ± 15% +17.5% 7091 ± 14% slabinfo.sighand_cache.num_objs
402.00 ± 15% +17.5% 472.50 ± 14% slabinfo.sighand_cache.num_slabs
10282 ± 10% +12.9% 11604 ± 9% slabinfo.signal_cache.active_objs
11350 ± 10% +12.8% 12808 ± 9% slabinfo.signal_cache.num_objs
732920 ± 9% +162.0% 1919987 ± 11% numa-meminfo.node0.Active
732868 ± 9% +162.0% 1919814 ± 11% numa-meminfo.node0.Active(anon)
545019 ± 6% +61.0% 877443 ± 17% numa-meminfo.node0.AnonHugePages
695015 ± 10% +46.8% 1020150 ± 14% numa-meminfo.node0.AnonPages
638322 ± 4% +448.2% 3499399 ± 5% numa-meminfo.node0.FilePages
81008 ± 14% +2443.4% 2060329 ± 3% numa-meminfo.node0.Inactive
80866 ± 14% +2447.4% 2060022 ± 3% numa-meminfo.node0.Inactive(anon)
86504 ± 10% +2287.3% 2065084 ± 3% numa-meminfo.node0.Mapped
2010104 +160.8% 5242366 ± 5% numa-meminfo.node0.MemUsed
16453 ± 15% +159.2% 42640 numa-meminfo.node0.PageTables
112769 ± 13% +2521.1% 2955821 ± 7% numa-meminfo.node0.Shmem
1839527 ± 4% -60.2% 732645 ± 23% numa-meminfo.node1.Active
1839399 ± 4% -60.2% 732637 ± 23% numa-meminfo.node1.Active(anon)
982237 ± 7% -45.9% 531445 ± 27% numa-meminfo.node1.AnonHugePages
1149348 ± 8% -41.2% 676067 ± 25% numa-meminfo.node1.AnonPages
3170649 ± 4% -77.2% 723230 ± 7% numa-meminfo.node1.FilePages
1960718 ± 4% -91.8% 160773 ± 31% numa-meminfo.node1.Inactive
1960515 ± 4% -91.8% 160722 ± 31% numa-meminfo.node1.Inactive(anon)
118489 ± 11% -20.2% 94603 ± 3% numa-meminfo.node1.KReclaimable
1966065 ± 4% -91.5% 166789 ± 29% numa-meminfo.node1.Mapped
5034310 ± 3% -60.2% 2003121 ± 9% numa-meminfo.node1.MemUsed
42684 ± 10% -64.2% 15283 ± 21% numa-meminfo.node1.PageTables
118489 ± 11% -20.2% 94603 ± 3% numa-meminfo.node1.SReclaimable
2644708 ± 5% -91.9% 214268 ± 24% numa-meminfo.node1.Shmem
147513 ± 20% +244.2% 507737 ± 7% numa-vmstat.node0.nr_active_anon
137512 ± 21% +105.8% 282999 ± 3% numa-vmstat.node0.nr_anon_pages
210.25 ± 33% +124.7% 472.50 ± 11% numa-vmstat.node0.nr_anon_transparent_hugepages
158008 ± 4% +454.7% 876519 ± 6% numa-vmstat.node0.nr_file_pages
18416 ± 27% +2711.4% 517747 ± 3% numa-vmstat.node0.nr_inactive_anon
26255 ± 22% +34.3% 35251 ± 10% numa-vmstat.node0.nr_kernel_stack
19893 ± 23% +2509.5% 519129 ± 3% numa-vmstat.node0.nr_mapped
3928 ± 22% +179.4% 10976 ± 4% numa-vmstat.node0.nr_page_table_pages
26623 ± 18% +2681.9% 740635 ± 7% numa-vmstat.node0.nr_shmem
147520 ± 20% +244.3% 507885 ± 7% numa-vmstat.node0.nr_zone_active_anon
18415 ± 27% +2711.5% 517739 ± 3% numa-vmstat.node0.nr_zone_inactive_anon
6937137 ± 8% +55.9% 10814957 ± 7% numa-vmstat.node0.numa_hit
6860210 ± 8% +56.6% 10739902 ± 7% numa-vmstat.node0.numa_local
425559 ± 13% -52.9% 200300 ± 17% numa-vmstat.node1.nr_active_anon
786341 ± 4% -76.6% 183664 ± 7% numa-vmstat.node1.nr_file_pages
483646 ± 4% -90.8% 44606 ± 29% numa-vmstat.node1.nr_inactive_anon
485120 ± 4% -90.5% 46130 ± 27% numa-vmstat.node1.nr_mapped
10471 ± 6% -61.3% 4048 ± 18% numa-vmstat.node1.nr_page_table_pages
654852 ± 5% -91.4% 56439 ± 25% numa-vmstat.node1.nr_shmem
29681 ± 11% -20.3% 23669 ± 3% numa-vmstat.node1.nr_slab_reclaimable
425556 ± 13% -52.9% 200359 ± 17% numa-vmstat.node1.nr_zone_active_anon
483649 ± 4% -90.8% 44600 ± 29% numa-vmstat.node1.nr_zone_inactive_anon
10527487 ± 5% -31.3% 7233899 ± 6% numa-vmstat.node1.numa_hit
10290625 ± 5% -31.9% 7006050 ± 7% numa-vmstat.node1.numa_local
***************************************************************************************************
lkp-csl-2sp5: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
interrupt/gcc-7/performance/1HDD/x86_64-fedora-25/100%/debian-x86_64-2019-11-14.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002c
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
         %stddev       %change        %stddev
             \            |               \
6684836 -33.3% 4457559 ± 4% stress-ng.schedpolicy.ops
6684766 -33.3% 4457633 ± 4% stress-ng.schedpolicy.ops_per_sec
19978129 -28.8% 14231813 ± 16% stress-ng.time.involuntary_context_switches
82.49 ± 2% -5.2% 78.23 stress-ng.time.user_time
106716 ± 29% +40.3% 149697 ± 2% meminfo.max_used_kB
4.07 ± 22% +1.2 5.23 ± 5% mpstat.cpu.all.irq%
2721317 ± 10% +66.5% 4531100 ± 22% cpuidle.POLL.time
71470 ± 18% +41.1% 100822 ± 11% cpuidle.POLL.usage
841.00 ± 41% -50.4% 417.25 ± 17% numa-meminfo.node0.Dirty
7096 ± 7% +25.8% 8930 ± 9% numa-meminfo.node1.KernelStack
68752 ± 90% -45.9% 37169 ±143% sched_debug.cfs_rq:/.runnable_weight.stddev
654.93 ± 11% +19.3% 781.09 ± 2% sched_debug.cpu.clock_task.stddev
183.06 ± 83% -76.9% 42.20 ± 17% iostat.sda.await.max
627.47 ±102% -96.7% 20.52 ± 38% iostat.sda.r_await.max
183.08 ± 83% -76.9% 42.24 ± 17% iostat.sda.w_await.max
209.00 ± 41% -50.2% 104.00 ± 17% numa-vmstat.node0.nr_dirty
209.50 ± 41% -50.4% 104.00 ± 17% numa-vmstat.node0.nr_zone_write_pending
6792 ± 8% +34.4% 9131 ± 7% numa-vmstat.node1.nr_kernel_stack
3.57 ±173% +9.8 13.38 ± 25% perf-profile.calltrace.cycles-pp.proc_reg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.57 ±173% +9.8 13.38 ± 25% perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.vfs_read.ksys_read.do_syscall_64
3.57 ±173% +9.8 13.39 ± 25% perf-profile.children.cycles-pp.proc_reg_read
3.57 ±173% +12.6 16.16 ± 28% perf-profile.children.cycles-pp.seq_read
7948 ± 56% -53.1% 3730 ± 5% softirqs.CPU25.RCU
6701 ± 33% -46.7% 3570 ± 5% softirqs.CPU34.RCU
8232 ± 89% -60.5% 3247 softirqs.CPU50.RCU
326269 ± 16% -27.4% 236940 softirqs.RCU
68066 +7.9% 73438 proc-vmstat.nr_active_anon
67504 +7.8% 72783 proc-vmstat.nr_anon_pages
7198 ± 19% +34.2% 9658 ± 2% proc-vmstat.nr_page_table_pages
40664 ± 8% +10.1% 44766 proc-vmstat.nr_slab_unreclaimable
68066 +7.9% 73438 proc-vmstat.nr_zone_active_anon
1980169 ± 4% -5.3% 1875307 proc-vmstat.numa_hit
1960247 ± 4% -5.4% 1855033 proc-vmstat.numa_local
956008 ± 16% -17.8% 786247 proc-vmstat.pgfault
26598 ± 76% +301.2% 106716 ± 45% interrupts.CPU1.RES:Rescheduling_interrupts
151212 ± 39% -67.3% 49451 ± 57% interrupts.CPU26.RES:Rescheduling_interrupts
1013586 ± 2% -10.9% 903528 ± 7% interrupts.CPU27.LOC:Local_timer_interrupts
1000980 ± 2% -11.4% 886740 ± 8% interrupts.CPU31.LOC:Local_timer_interrupts
1021043 ± 3% -9.9% 919686 ± 6% interrupts.CPU32.LOC:Local_timer_interrupts
125222 ± 51% -86.0% 17483 ±106% interrupts.CPU33.RES:Rescheduling_interrupts
1003735 ± 2% -11.1% 891833 ± 8% interrupts.CPU34.LOC:Local_timer_interrupts
1021799 ± 2% -13.2% 886665 ± 8% interrupts.CPU38.LOC:Local_timer_interrupts
997788 ± 2% -13.2% 866427 ± 10% interrupts.CPU42.LOC:Local_timer_interrupts
1001618 -11.6% 885490 ± 9% interrupts.CPU45.LOC:Local_timer_interrupts
22321 ± 58% +550.3% 145153 ± 22% interrupts.CPU9.RES:Rescheduling_interrupts
3151 ± 53% +67.3% 5273 ± 8% slabinfo.avc_xperms_data.active_objs
3151 ± 53% +67.3% 5273 ± 8% slabinfo.avc_xperms_data.num_objs
348.75 ± 13% +39.8% 487.50 ± 5% slabinfo.biovec-128.active_objs
348.75 ± 13% +39.8% 487.50 ± 5% slabinfo.biovec-128.num_objs
13422 ± 97% +121.1% 29678 ± 2% slabinfo.btrfs_extent_map.active_objs
14638 ± 98% +117.8% 31888 ± 2% slabinfo.btrfs_extent_map.num_objs
3835 ± 18% +40.9% 5404 ± 7% slabinfo.dmaengine-unmap-16.active_objs
3924 ± 18% +39.9% 5490 ± 8% slabinfo.dmaengine-unmap-16.num_objs
3482 ± 96% +119.1% 7631 ± 10% slabinfo.khugepaged_mm_slot.active_objs
3573 ± 96% +119.4% 7839 ± 10% slabinfo.khugepaged_mm_slot.num_objs
8629 ± 52% -49.2% 4384 slabinfo.kmalloc-rcl-64.active_objs
8629 ± 52% -49.2% 4384 slabinfo.kmalloc-rcl-64.num_objs
2309 ± 57% +82.1% 4206 ± 5% slabinfo.mnt_cache.active_objs
2336 ± 57% +80.8% 4224 ± 5% slabinfo.mnt_cache.num_objs
5320 ± 48% +69.1% 8999 ± 23% slabinfo.pool_workqueue.active_objs
165.75 ± 48% +69.4% 280.75 ± 23% slabinfo.pool_workqueue.active_slabs
5320 ± 48% +69.2% 8999 ± 23% slabinfo.pool_workqueue.num_objs
165.75 ± 48% +69.4% 280.75 ± 23% slabinfo.pool_workqueue.num_slabs
3306 ± 15% +27.0% 4199 ± 3% slabinfo.task_group.active_objs
3333 ± 16% +30.1% 4336 ± 3% slabinfo.task_group.num_objs
14.74 ± 2% +1.8 16.53 ± 2% perf-stat.i.cache-miss-rate%
22459727 ± 20% +46.7% 32955572 ± 4% perf-stat.i.cache-misses
33575 ± 19% +68.8% 56658 ± 13% perf-stat.i.cpu-migrations
0.03 ± 20% +0.0 0.05 ± 8% perf-stat.i.dTLB-load-miss-rate%
6351703 ± 33% +47.2% 9352532 ± 9% perf-stat.i.dTLB-load-misses
0.45 ± 3% -3.0% 0.44 perf-stat.i.ipc
4711345 ± 18% +43.9% 6780944 ± 7% perf-stat.i.node-load-misses
82.51 +4.5 86.97 perf-stat.i.node-store-miss-rate%
2861142 ± 31% +60.8% 4601146 ± 5% perf-stat.i.node-store-misses
0.92 ± 6% -0.1 0.85 ± 2% perf-stat.overall.branch-miss-rate%
0.02 ± 3% +0.0 0.02 ± 4% perf-stat.overall.dTLB-store-miss-rate%
715.05 ± 5% +9.9% 785.50 ± 4% perf-stat.overall.instructions-per-iTLB-miss
0.44 ± 2% -5.4% 0.42 ± 2% perf-stat.overall.ipc
79.67 +2.1 81.80 ± 2% perf-stat.overall.node-store-miss-rate%
22237897 ± 19% +46.4% 32560557 ± 5% perf-stat.ps.cache-misses
32491 ± 18% +70.5% 55390 ± 13% perf-stat.ps.cpu-migrations
6071108 ± 31% +45.0% 8804767 ± 9% perf-stat.ps.dTLB-load-misses
1866 ± 98% -91.9% 150.48 ± 2% perf-stat.ps.major-faults
4593546 ± 16% +42.4% 6541402 ± 7% perf-stat.ps.node-load-misses
2757176 ± 29% +58.4% 4368169 ± 5% perf-stat.ps.node-store-misses
1.303e+12 ± 3% -9.8% 1.175e+12 ± 3% perf-stat.total.instructions
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/lkp-bdw-ep6/stress-ng/30s/0xb000038
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
       fail:runs  %reproduction    fail:runs
           |            |              |
          1:4          -25%            :4     dmesg.WARNING:at#for_ip_interrupt_entry/0x
         %stddev       %change        %stddev
             \            |               \
98245522 +42.3% 1.398e+08 stress-ng.schedpolicy.ops
3274860 +42.3% 4661027 stress-ng.schedpolicy.ops_per_sec
3.473e+08 -9.7% 3.137e+08 stress-ng.sigq.ops
11576537 -9.7% 10454846 stress-ng.sigq.ops_per_sec
38097605 ± 6% +10.3% 42011440 ± 4% stress-ng.sigrt.ops
1269646 ± 6% +10.3% 1400024 ± 4% stress-ng.sigrt.ops_per_sec
3.628e+08 ± 4% -21.5% 2.848e+08 ± 10% stress-ng.time.involuntary_context_switches
7040 +2.9% 7245 stress-ng.time.percent_of_cpu_this_job_got
15.09 ± 3% -13.4% 13.07 ± 5% iostat.cpu.idle
14.82 ± 3% -2.0 12.80 ± 5% mpstat.cpu.all.idle%
3.333e+08 ± 17% +59.9% 5.331e+08 ± 22% cpuidle.C1.time
5985148 ± 23% +112.5% 12719679 ± 20% cpuidle.C1E.usage
14.50 ± 3% -12.1% 12.75 ± 6% vmstat.cpu.id
1113131 ± 2% -10.5% 996285 ± 3% vmstat.system.cs
2269 +2.4% 2324 turbostat.Avg_MHz
0.64 ± 17% +0.4 1.02 ± 23% turbostat.C1%
5984799 ± 23% +112.5% 12719086 ± 20% turbostat.C1E
4.17 ± 32% -46.0% 2.25 ± 38% turbostat.Pkg%pc2
216.57 +2.1% 221.12 turbostat.PkgWatt
13.33 ± 3% +3.9% 13.84 turbostat.RAMWatt
99920 +13.6% 113486 ± 15% proc-vmstat.nr_active_anon
5738 +1.2% 5806 proc-vmstat.nr_inactive_anon
46788 +2.1% 47749 proc-vmstat.nr_slab_unreclaimable
99920 +13.6% 113486 ± 15% proc-vmstat.nr_zone_active_anon
5738 +1.2% 5806 proc-vmstat.nr_zone_inactive_anon
3150 ± 2% +35.4% 4265 ± 33% proc-vmstat.numa_huge_pte_updates
1641223 +34.3% 2203844 ± 32% proc-vmstat.numa_pte_updates
13575 ± 18% +62.1% 21999 ± 4% slabinfo.ext4_extent_status.active_objs
13954 ± 17% +57.7% 21999 ± 4% slabinfo.ext4_extent_status.num_objs
2527 ± 4% +9.8% 2774 ± 2% slabinfo.khugepaged_mm_slot.active_objs
2527 ± 4% +9.8% 2774 ± 2% slabinfo.khugepaged_mm_slot.num_objs
57547 ± 8% -15.3% 48743 ± 9% slabinfo.kmalloc-rcl-64.active_objs
898.75 ± 8% -15.3% 761.00 ± 9% slabinfo.kmalloc-rcl-64.active_slabs
57547 ± 8% -15.3% 48743 ± 9% slabinfo.kmalloc-rcl-64.num_objs
898.75 ± 8% -15.3% 761.00 ± 9% slabinfo.kmalloc-rcl-64.num_slabs
1.014e+10 +1.7% 1.031e+10 perf-stat.i.branch-instructions
13.37 ± 4% +2.0 15.33 ± 3% perf-stat.i.cache-miss-rate%
1.965e+11 +2.6% 2.015e+11 perf-stat.i.cpu-cycles
20057708 ± 4% +13.9% 22841468 ± 4% perf-stat.i.iTLB-loads
4.973e+10 +1.4% 5.042e+10 perf-stat.i.instructions
3272 ± 2% +2.9% 3366 perf-stat.i.minor-faults
4500892 ± 3% +18.9% 5351518 ± 6% perf-stat.i.node-store-misses
3.91 +1.3% 3.96 perf-stat.overall.cpi
69.62 -1.5 68.11 perf-stat.overall.iTLB-load-miss-rate%
1.047e+10 +1.3% 1.061e+10 perf-stat.ps.branch-instructions
1117454 ± 2% -10.6% 999467 ± 3% perf-stat.ps.context-switches
1.986e+11 +2.4% 2.033e+11 perf-stat.ps.cpu-cycles
19614413 ± 4% +13.6% 22288555 ± 4% perf-stat.ps.iTLB-loads
3493 -1.1% 3453 perf-stat.ps.minor-faults
4546636 ± 3% +17.0% 5321658 ± 5% perf-stat.ps.node-store-misses
0.64 ± 3% -0.2 0.44 ± 57% perf-profile.calltrace.cycles-pp.common_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.66 ± 3% -0.1 0.58 ± 7% perf-profile.children.cycles-pp.common_timer_get
0.44 ± 4% -0.1 0.39 ± 5% perf-profile.children.cycles-pp.posix_ktime_get_ts
0.39 ± 5% -0.0 0.34 ± 6% perf-profile.children.cycles-pp.ktime_get_ts64
0.07 ± 17% +0.0 0.10 ± 8% perf-profile.children.cycles-pp.task_tick_fair
0.08 ± 15% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.scheduler_tick
0.46 ± 5% +0.1 0.54 ± 6% perf-profile.children.cycles-pp.__might_sleep
0.69 ± 8% +0.2 0.85 ± 12% perf-profile.children.cycles-pp.___might_sleep
0.90 ± 5% -0.2 0.73 ± 9% perf-profile.self.cycles-pp.__might_fault
0.40 ± 6% -0.1 0.33 ± 9% perf-profile.self.cycles-pp.do_timer_gettime
0.50 ± 4% -0.1 0.45 ± 7% perf-profile.self.cycles-pp.put_itimerspec64
0.32 ± 2% -0.0 0.27 ± 9% perf-profile.self.cycles-pp.update_curr_fair
0.20 ± 6% -0.0 0.18 ± 2% perf-profile.self.cycles-pp.ktime_get_ts64
0.08 ± 23% +0.0 0.12 ± 8% perf-profile.self.cycles-pp._raw_spin_trylock
0.42 ± 5% +0.1 0.50 ± 6% perf-profile.self.cycles-pp.__might_sleep
0.66 ± 9% +0.2 0.82 ± 12% perf-profile.self.cycles-pp.___might_sleep
47297 ± 13% +19.7% 56608 ± 5% softirqs.CPU13.SCHED
47070 ± 3% +20.5% 56735 ± 7% softirqs.CPU2.SCHED
55443 ± 9% -20.2% 44250 ± 2% softirqs.CPU28.SCHED
56633 ± 3% -12.6% 49520 ± 7% softirqs.CPU34.SCHED
56599 ± 11% -18.0% 46384 ± 2% softirqs.CPU36.SCHED
56909 ± 9% -18.4% 46438 ± 6% softirqs.CPU40.SCHED
45062 ± 9% +28.1% 57709 ± 9% softirqs.CPU45.SCHED
43959 +28.7% 56593 ± 9% softirqs.CPU49.SCHED
46235 ± 10% +22.2% 56506 ± 11% softirqs.CPU5.SCHED
44779 ± 12% +22.5% 54859 ± 11% softirqs.CPU57.SCHED
46739 ± 10% +21.1% 56579 ± 8% softirqs.CPU6.SCHED
53129 ± 4% -13.1% 46149 ± 8% softirqs.CPU70.SCHED
55822 ± 7% -20.5% 44389 ± 8% softirqs.CPU73.SCHED
56011 ± 5% -11.4% 49610 ± 7% softirqs.CPU77.SCHED
55263 ± 9% -13.2% 47942 ± 12% softirqs.CPU78.SCHED
58792 ± 14% -21.3% 46291 ± 9% softirqs.CPU81.SCHED
53341 ± 7% -13.7% 46041 ± 10% softirqs.CPU83.SCHED
59096 ± 15% -23.9% 44998 ± 6% softirqs.CPU85.SCHED
36647 -98.5% 543.00 ± 61% numa-meminfo.node0.Active(file)
620922 ± 4% -10.4% 556566 ± 5% numa-meminfo.node0.FilePages
21243 ± 3% -36.2% 13543 ± 41% numa-meminfo.node0.Inactive
20802 ± 3% -35.3% 13455 ± 42% numa-meminfo.node0.Inactive(anon)
15374 ± 9% -27.2% 11193 ± 8% numa-meminfo.node0.KernelStack
21573 -34.7% 14084 ± 14% numa-meminfo.node0.Mapped
1136795 ± 5% -12.4% 995965 ± 6% numa-meminfo.node0.MemUsed
16420 ± 6% -66.0% 5580 ± 18% numa-meminfo.node0.PageTables
108182 ± 2% -18.5% 88150 ± 3% numa-meminfo.node0.SUnreclaim
166467 ± 2% -15.8% 140184 ± 4% numa-meminfo.node0.Slab
181705 ± 36% +63.8% 297623 ± 10% numa-meminfo.node1.Active
320.75 ± 27% +11187.0% 36203 numa-meminfo.node1.Active(file)
2208 ± 38% +362.1% 10207 ± 54% numa-meminfo.node1.Inactive
2150 ± 39% +356.0% 9804 ± 58% numa-meminfo.node1.Inactive(anon)
41819 ± 10% +17.3% 49068 ± 6% numa-meminfo.node1.KReclaimable
11711 ± 5% +47.2% 17238 ± 22% numa-meminfo.node1.KernelStack
10642 +68.3% 17911 ± 11% numa-meminfo.node1.Mapped
952520 ± 6% +20.3% 1146337 ± 3% numa-meminfo.node1.MemUsed
12342 ± 15% +92.4% 23741 ± 9% numa-meminfo.node1.PageTables
41819 ± 10% +17.3% 49068 ± 6% numa-meminfo.node1.SReclaimable
80394 ± 3% +27.1% 102206 ± 3% numa-meminfo.node1.SUnreclaim
122214 ± 3% +23.8% 151275 ± 3% numa-meminfo.node1.Slab
9160 -98.5% 135.25 ± 61% numa-vmstat.node0.nr_active_file
155223 ± 4% -10.4% 139122 ± 5% numa-vmstat.node0.nr_file_pages
5202 ± 3% -35.4% 3362 ± 42% numa-vmstat.node0.nr_inactive_anon
109.50 ± 14% -80.1% 21.75 ±160% numa-vmstat.node0.nr_inactive_file
14757 ± 3% -34.4% 9676 ± 12% numa-vmstat.node0.nr_kernel_stack
5455 -34.9% 3549 ± 12% numa-vmstat.node0.nr_mapped
4069 ± 6% -68.3% 1289 ± 24% numa-vmstat.node0.nr_page_table_pages
26943 ± 2% -19.2% 21761 ± 3% numa-vmstat.node0.nr_slab_unreclaimable
2240 ± 6% -97.8% 49.00 ± 69% numa-vmstat.node0.nr_written
9160 -98.5% 135.25 ± 61% numa-vmstat.node0.nr_zone_active_file
5202 ± 3% -35.4% 3362 ± 42% numa-vmstat.node0.nr_zone_inactive_anon
109.50 ± 14% -80.1% 21.75 ±160% numa-vmstat.node0.nr_zone_inactive_file
79.75 ± 28% +11247.0% 9049 numa-vmstat.node1.nr_active_file
542.25 ± 41% +352.1% 2451 ± 58% numa-vmstat.node1.nr_inactive_anon
14.00 ±140% +617.9% 100.50 ± 35% numa-vmstat.node1.nr_inactive_file
11182 ± 4% +28.9% 14415 ± 4% numa-vmstat.node1.nr_kernel_stack
2728 ± 3% +67.7% 4576 ± 9% numa-vmstat.node1.nr_mapped
3056 ± 15% +88.2% 5754 ± 8% numa-vmstat.node1.nr_page_table_pages
10454 ± 10% +17.3% 12262 ± 7% numa-vmstat.node1.nr_slab_reclaimable
20006 ± 3% +25.0% 25016 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
19.00 ± 52% +11859.2% 2272 ± 2% numa-vmstat.node1.nr_written
79.75 ± 28% +11247.0% 9049 numa-vmstat.node1.nr_zone_active_file
542.25 ± 41% +352.1% 2451 ± 58% numa-vmstat.node1.nr_zone_inactive_anon
14.00 ±140% +617.9% 100.50 ± 35% numa-vmstat.node1.nr_zone_inactive_file
173580 ± 21% +349.5% 780280 ± 7% sched_debug.cfs_rq:/.MIN_vruntime.avg
6891819 ± 37% +109.1% 14412817 ± 9% sched_debug.cfs_rq:/.MIN_vruntime.max
1031500 ± 25% +189.1% 2982452 ± 8% sched_debug.cfs_rq:/.MIN_vruntime.stddev
149079 +13.6% 169354 ± 2% sched_debug.cfs_rq:/.exec_clock.min
8550 ± 3% -59.7% 3442 ± 32% sched_debug.cfs_rq:/.exec_clock.stddev
4.95 ± 6% -15.2% 4.20 ± 10% sched_debug.cfs_rq:/.load_avg.min
173580 ± 21% +349.5% 780280 ± 7% sched_debug.cfs_rq:/.max_vruntime.avg
6891819 ± 37% +109.1% 14412817 ± 9% sched_debug.cfs_rq:/.max_vruntime.max
1031500 ± 25% +189.1% 2982452 ± 8% sched_debug.cfs_rq:/.max_vruntime.stddev
16144141 +27.9% 20645199 ± 6% sched_debug.cfs_rq:/.min_vruntime.avg
17660392 +27.7% 22546402 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
13747718 +36.8% 18802595 ± 5% sched_debug.cfs_rq:/.min_vruntime.min
0.17 ± 11% +35.0% 0.22 ± 15% sched_debug.cfs_rq:/.nr_running.stddev
10.64 ± 14% -26.4% 7.83 ± 12% sched_debug.cpu.clock.stddev
10.64 ± 14% -26.4% 7.83 ± 12% sched_debug.cpu.clock_task.stddev
7093 ± 42% -65.9% 2420 ±120% sched_debug.cpu.curr->pid.min
2434979 ± 2% -18.6% 1981697 ± 3% sched_debug.cpu.nr_switches.avg
3993189 ± 6% -22.2% 3104832 ± 5% sched_debug.cpu.nr_switches.max
-145.03 -42.8% -82.90 sched_debug.cpu.nr_uninterruptible.min
2097122 ± 6% +38.7% 2908923 ± 6% sched_debug.cpu.sched_count.min
809684 ± 13% -30.5% 562929 ± 17% sched_debug.cpu.sched_count.stddev
307565 ± 4% -15.1% 261231 ± 3% sched_debug.cpu.ttwu_count.min
207286 ± 6% -16.4% 173387 ± 3% sched_debug.cpu.ttwu_local.min
125963 ± 23% +53.1% 192849 ± 2% sched_debug.cpu.ttwu_local.stddev
2527246 +10.8% 2800959 ± 3% sched_debug.cpu.yld_count.avg
1294266 ± 4% +53.7% 1989264 ± 2% sched_debug.cpu.yld_count.min
621332 ± 9% -38.4% 382813 ± 22% sched_debug.cpu.yld_count.stddev
899.50 ± 28% -48.2% 465.75 ± 42% interrupts.37:IR-PCI-MSI.1572868-edge.eth0-TxRx-3
372.50 ± 7% +169.5% 1004 ± 40% interrupts.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
6201 ± 8% +17.9% 7309 ± 3% interrupts.CPU0.CAL:Function_call_interrupts
653368 ± 47% +159.4% 1695029 ± 17% interrupts.CPU0.RES:Rescheduling_interrupts
7104 ± 7% +13.6% 8067 interrupts.CPU1.CAL:Function_call_interrupts
2094 ± 59% +89.1% 3962 ± 10% interrupts.CPU10.TLB:TLB_shootdowns
7309 ± 8% +11.2% 8125 interrupts.CPU11.CAL:Function_call_interrupts
2089 ± 62% +86.2% 3890 ± 11% interrupts.CPU13.TLB:TLB_shootdowns
7068 ± 8% +15.2% 8144 ± 2% interrupts.CPU14.CAL:Function_call_interrupts
7112 ± 7% +13.6% 8079 ± 3% interrupts.CPU15.CAL:Function_call_interrupts
1950 ± 61% +103.5% 3968 ± 11% interrupts.CPU15.TLB:TLB_shootdowns
899.50 ± 28% -48.2% 465.75 ± 42% interrupts.CPU16.37:IR-PCI-MSI.1572868-edge.eth0-TxRx-3
2252 ± 47% +62.6% 3664 ± 15% interrupts.CPU16.TLB:TLB_shootdowns
7111 ± 8% +14.8% 8167 ± 3% interrupts.CPU18.CAL:Function_call_interrupts
1972 ± 60% +96.3% 3872 ± 9% interrupts.CPU18.TLB:TLB_shootdowns
372.50 ± 7% +169.5% 1004 ± 40% interrupts.CPU19.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
2942 ± 12% -57.5% 1251 ± 22% interrupts.CPU22.TLB:TLB_shootdowns
7819 -12.2% 6861 ± 3% interrupts.CPU23.CAL:Function_call_interrupts
3327 ± 12% -62.7% 1241 ± 29% interrupts.CPU23.TLB:TLB_shootdowns
7767 ± 3% -14.0% 6683 ± 5% interrupts.CPU24.CAL:Function_call_interrupts
3185 ± 21% -63.8% 1154 ± 14% interrupts.CPU24.TLB:TLB_shootdowns
7679 ± 4% -11.3% 6812 ± 2% interrupts.CPU25.CAL:Function_call_interrupts
3004 ± 28% -63.4% 1100 ± 7% interrupts.CPU25.TLB:TLB_shootdowns
3187 ± 17% -61.3% 1232 ± 35% interrupts.CPU26.TLB:TLB_shootdowns
3193 ± 16% -59.3% 1299 ± 34% interrupts.CPU27.TLB:TLB_shootdowns
3059 ± 21% -58.0% 1285 ± 32% interrupts.CPU28.TLB:TLB_shootdowns
7798 ± 4% -13.8% 6719 ± 7% interrupts.CPU29.CAL:Function_call_interrupts
3122 ± 20% -62.3% 1178 ± 37% interrupts.CPU29.TLB:TLB_shootdowns
7727 ± 2% -11.6% 6827 ± 5% interrupts.CPU30.CAL:Function_call_interrupts
3102 ± 18% -59.4% 1259 ± 33% interrupts.CPU30.TLB:TLB_shootdowns
3269 ± 24% -58.1% 1371 ± 48% interrupts.CPU31.TLB:TLB_shootdowns
7918 ± 3% -14.5% 6771 interrupts.CPU32.CAL:Function_call_interrupts
3324 ± 18% -70.7% 973.50 ± 18% interrupts.CPU32.TLB:TLB_shootdowns
2817 ± 27% -60.2% 1121 ± 26% interrupts.CPU33.TLB:TLB_shootdowns
7956 ± 3% -11.8% 7018 ± 4% interrupts.CPU34.CAL:Function_call_interrupts
3426 ± 21% -70.3% 1018 ± 29% interrupts.CPU34.TLB:TLB_shootdowns
3121 ± 17% -70.3% 926.75 ± 22% interrupts.CPU35.TLB:TLB_shootdowns
7596 ± 4% -10.6% 6793 ± 3% interrupts.CPU36.CAL:Function_call_interrupts
2900 ± 30% -62.3% 1094 ± 34% interrupts.CPU36.TLB:TLB_shootdowns
7863 -13.1% 6833 ± 2% interrupts.CPU37.CAL:Function_call_interrupts
3259 ± 15% -65.9% 1111 ± 20% interrupts.CPU37.TLB:TLB_shootdowns
3230 ± 26% -64.0% 1163 ± 39% interrupts.CPU38.TLB:TLB_shootdowns
7728 ± 5% -13.8% 6662 ± 7% interrupts.CPU39.CAL:Function_call_interrupts
2950 ± 29% -61.6% 1133 ± 26% interrupts.CPU39.TLB:TLB_shootdowns
6864 ± 3% +18.7% 8147 interrupts.CPU4.CAL:Function_call_interrupts
1847 ± 59% +118.7% 4039 ± 7% interrupts.CPU4.TLB:TLB_shootdowns
7951 ± 6% -15.0% 6760 ± 2% interrupts.CPU40.CAL:Function_call_interrupts
3200 ± 30% -72.3% 886.50 ± 39% interrupts.CPU40.TLB:TLB_shootdowns
7819 ± 6% -11.3% 6933 ± 2% interrupts.CPU41.CAL:Function_call_interrupts
3149 ± 28% -62.9% 1169 ± 24% interrupts.CPU41.TLB:TLB_shootdowns
7884 ± 4% -11.0% 7019 ± 2% interrupts.CPU42.CAL:Function_call_interrupts
3248 ± 16% -63.4% 1190 ± 23% interrupts.CPU42.TLB:TLB_shootdowns
7659 ± 5% -12.7% 6690 ± 3% interrupts.CPU43.CAL:Function_call_interrupts
490732 ± 20% +114.5% 1052606 ± 47% interrupts.CPU43.RES:Rescheduling_interrupts
1432688 ± 34% -67.4% 467217 ± 43% interrupts.CPU47.RES:Rescheduling_interrupts
7122 ± 8% +16.0% 8259 ± 3% interrupts.CPU48.CAL:Function_call_interrupts
1868 ± 65% +118.4% 4079 ± 8% interrupts.CPU48.TLB:TLB_shootdowns
7165 ± 8% +11.3% 7977 ± 5% interrupts.CPU49.CAL:Function_call_interrupts
1961 ± 59% +98.4% 3891 ± 4% interrupts.CPU49.TLB:TLB_shootdowns
461807 ± 47% +190.8% 1342990 ± 48% interrupts.CPU5.RES:Rescheduling_interrupts
7167 ± 7% +15.4% 8273 interrupts.CPU50.CAL:Function_call_interrupts
2027 ± 51% +103.9% 4134 ± 8% interrupts.CPU50.TLB:TLB_shootdowns
7163 ± 9% +16.3% 8328 interrupts.CPU51.CAL:Function_call_interrupts
660073 ± 33% +74.0% 1148640 ± 25% interrupts.CPU51.RES:Rescheduling_interrupts
2043 ± 64% +95.8% 4000 ± 5% interrupts.CPU51.TLB:TLB_shootdowns
7428 ± 9% +13.5% 8434 ± 2% interrupts.CPU52.CAL:Function_call_interrupts
2280 ± 61% +85.8% 4236 ± 9% interrupts.CPU52.TLB:TLB_shootdowns
7144 ± 11% +17.8% 8413 interrupts.CPU53.CAL:Function_call_interrupts
1967 ± 67% +104.7% 4026 ± 5% interrupts.CPU53.TLB:TLB_shootdowns
7264 ± 10% +15.6% 8394 ± 4% interrupts.CPU54.CAL:Function_call_interrupts
7045 ± 11% +18.7% 8365 ± 2% interrupts.CPU56.CAL:Function_call_interrupts
2109 ± 59% +91.6% 4041 ± 10% interrupts.CPU56.TLB:TLB_shootdowns
7307 ± 9% +15.3% 8428 ± 2% interrupts.CPU57.CAL:Function_call_interrupts
2078 ± 64% +96.5% 4085 ± 6% interrupts.CPU57.TLB:TLB_shootdowns
6834 ± 12% +19.8% 8190 ± 3% interrupts.CPU58.CAL:Function_call_interrupts
612496 ± 85% +122.5% 1362815 ± 27% interrupts.CPU58.RES:Rescheduling_interrupts
1884 ± 69% +112.0% 3995 ± 8% interrupts.CPU58.TLB:TLB_shootdowns
7185 ± 8% +15.9% 8329 interrupts.CPU59.CAL:Function_call_interrupts
1982 ± 58% +101.1% 3986 ± 5% interrupts.CPU59.TLB:TLB_shootdowns
7051 ± 6% +13.1% 7975 interrupts.CPU6.CAL:Function_call_interrupts
1831 ± 49% +102.1% 3701 ± 8% interrupts.CPU6.TLB:TLB_shootdowns
7356 ± 8% +16.2% 8548 interrupts.CPU60.CAL:Function_call_interrupts
2124 ± 57% +92.8% 4096 ± 5% interrupts.CPU60.TLB:TLB_shootdowns
7243 ± 9% +15.1% 8334 interrupts.CPU61.CAL:Function_call_interrupts
572423 ± 71% +110.0% 1201919 ± 40% interrupts.CPU61.RES:Rescheduling_interrupts
7295 ± 9% +14.7% 8369 interrupts.CPU63.CAL:Function_call_interrupts
2139 ± 57% +85.7% 3971 ± 3% interrupts.CPU63.TLB:TLB_shootdowns
7964 ± 2% -15.6% 6726 ± 5% interrupts.CPU66.CAL:Function_call_interrupts
3198 ± 21% -65.0% 1119 ± 24% interrupts.CPU66.TLB:TLB_shootdowns
8103 ± 2% -17.5% 6687 ± 9% interrupts.CPU67.CAL:Function_call_interrupts
3357 ± 18% -62.9% 1244 ± 32% interrupts.CPU67.TLB:TLB_shootdowns
7772 ± 2% -14.0% 6687 ± 8% interrupts.CPU68.CAL:Function_call_interrupts
2983 ± 17% -59.2% 1217 ± 15% interrupts.CPU68.TLB:TLB_shootdowns
7986 ± 4% -13.8% 6887 ± 4% interrupts.CPU69.CAL:Function_call_interrupts
3192 ± 24% -65.0% 1117 ± 30% interrupts.CPU69.TLB:TLB_shootdowns
7070 ± 6% +14.6% 8100 ± 2% interrupts.CPU7.CAL:Function_call_interrupts
697891 ± 32% +54.4% 1077890 ± 18% interrupts.CPU7.RES:Rescheduling_interrupts
1998 ± 55% +97.1% 3938 ± 10% interrupts.CPU7.TLB:TLB_shootdowns
8085 -13.4% 7002 ± 3% interrupts.CPU70.CAL:Function_call_interrupts
1064985 ± 35% -62.5% 398986 ± 29% interrupts.CPU70.RES:Rescheduling_interrupts
3347 ± 12% -61.7% 1280 ± 24% interrupts.CPU70.TLB:TLB_shootdowns
2916 ± 16% -58.8% 1201 ± 39% interrupts.CPU71.TLB:TLB_shootdowns
3314 ± 19% -61.3% 1281 ± 26% interrupts.CPU72.TLB:TLB_shootdowns
3119 ± 18% -61.5% 1200 ± 39% interrupts.CPU73.TLB:TLB_shootdowns
7992 ± 4% -12.6% 6984 ± 3% interrupts.CPU74.CAL:Function_call_interrupts
3187 ± 21% -56.8% 1378 ± 40% interrupts.CPU74.TLB:TLB_shootdowns
7953 ± 4% -12.0% 6999 ± 4% interrupts.CPU75.CAL:Function_call_interrupts
3072 ± 26% -56.8% 1327 ± 34% interrupts.CPU75.TLB:TLB_shootdowns
8119 ± 5% -12.4% 7109 ± 7% interrupts.CPU76.CAL:Function_call_interrupts
3418 ± 20% -67.5% 1111 ± 31% interrupts.CPU76.TLB:TLB_shootdowns
7804 ± 5% -11.4% 6916 ± 4% interrupts.CPU77.CAL:Function_call_interrupts
7976 ± 5% -14.4% 6826 ± 3% interrupts.CPU78.CAL:Function_call_interrupts
3209 ± 27% -71.8% 904.75 ± 28% interrupts.CPU78.TLB:TLB_shootdowns
8187 ± 4% -14.6% 6991 ± 3% interrupts.CPU79.CAL:Function_call_interrupts
3458 ± 20% -67.5% 1125 ± 36% interrupts.CPU79.TLB:TLB_shootdowns
7122 ± 7% +14.2% 8136 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
2096 ± 63% +87.4% 3928 ± 8% interrupts.CPU8.TLB:TLB_shootdowns
8130 ± 5% -17.2% 6728 ± 5% interrupts.CPU81.CAL:Function_call_interrupts
3253 ± 24% -70.6% 955.00 ± 38% interrupts.CPU81.TLB:TLB_shootdowns
7940 ± 5% -13.9% 6839 ± 5% interrupts.CPU82.CAL:Function_call_interrupts
2952 ± 26% -66.3% 996.00 ± 51% interrupts.CPU82.TLB:TLB_shootdowns
7900 ± 6% -13.4% 6844 ± 3% interrupts.CPU83.CAL:Function_call_interrupts
3012 ± 34% -68.3% 956.00 ± 17% interrupts.CPU83.TLB:TLB_shootdowns
7952 ± 6% -15.8% 6695 ± 2% interrupts.CPU84.CAL:Function_call_interrupts
3049 ± 31% -75.5% 746.50 ± 27% interrupts.CPU84.TLB:TLB_shootdowns
8065 ± 6% -15.7% 6798 interrupts.CPU85.CAL:Function_call_interrupts
3222 ± 23% -69.7% 976.00 ± 13% interrupts.CPU85.TLB:TLB_shootdowns
8049 ± 5% -13.2% 6983 ± 4% interrupts.CPU86.CAL:Function_call_interrupts
3159 ± 19% -61.9% 1202 ± 27% interrupts.CPU86.TLB:TLB_shootdowns
8154 ± 8% -16.9% 6773 ± 3% interrupts.CPU87.CAL:Function_call_interrupts
1432962 ± 21% -48.5% 737989 ± 30% interrupts.CPU87.RES:Rescheduling_interrupts
3186 ± 33% -72.3% 881.75 ± 21% interrupts.CPU87.TLB:TLB_shootdowns
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/lkp-bdw-ep6/stress-ng/1s/0xb000038
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
%stddev %change %stddev
\ | \
3345449 +35.1% 4518187 ± 5% stress-ng.schedpolicy.ops
3347036 +35.1% 4520740 ± 5% stress-ng.schedpolicy.ops_per_sec
11464910 ± 6% -23.3% 8796455 ± 11% stress-ng.sigq.ops
11452565 ± 6% -23.3% 8786844 ± 11% stress-ng.sigq.ops_per_sec
228736 +20.7% 276087 ± 20% stress-ng.sleep.ops
157479 +23.0% 193722 ± 21% stress-ng.sleep.ops_per_sec
14584704 -5.8% 13744640 ± 4% stress-ng.timerfd.ops
14546032 -5.7% 13718862 ± 4% stress-ng.timerfd.ops_per_sec
27.24 ±105% +283.9% 104.58 ±109% iostat.sdb.r_await.max
122324 ± 35% +63.9% 200505 ± 21% meminfo.AnonHugePages
47267 ± 26% +155.2% 120638 ± 45% numa-meminfo.node1.AnonHugePages
22880 ± 6% -9.9% 20605 ± 3% softirqs.CPU57.TIMER
636196 ± 24% +38.5% 880847 ± 7% cpuidle.C1.usage
55936214 ± 20% +63.9% 91684673 ± 18% cpuidle.C1E.time
1.175e+08 ± 22% +101.8% 2.372e+08 ± 29% cpuidle.C3.time
4.242e+08 ± 6% -39.1% 2.584e+08 ± 39% cpuidle.C6.time
59.50 ± 34% +66.0% 98.75 ± 22% proc-vmstat.nr_anon_transparent_hugepages
25612 ± 10% +13.8% 29146 ± 4% proc-vmstat.nr_kernel_stack
2783465 ± 9% +14.5% 3187157 ± 9% proc-vmstat.pgalloc_normal
1743 ± 28% +43.8% 2507 ± 23% proc-vmstat.thp_deferred_split_page
1765 ± 30% +43.2% 2529 ± 22% proc-vmstat.thp_fault_alloc
811.00 ± 3% -13.8% 699.00 ± 7% slabinfo.kmem_cache_node.active_objs
864.00 ± 3% -13.0% 752.00 ± 7% slabinfo.kmem_cache_node.num_objs
8686 ± 7% +13.6% 9869 ± 3% slabinfo.pid.active_objs
8690 ± 7% +13.8% 9890 ± 3% slabinfo.pid.num_objs
9813 ± 6% +15.7% 11352 ± 3% slabinfo.task_delay_info.active_objs
9813 ± 6% +15.7% 11352 ± 3% slabinfo.task_delay_info.num_objs
79.22 ± 10% -41.1% 46.68 ± 22% sched_debug.cfs_rq:/.load_avg.avg
242.49 ± 6% -29.6% 170.70 ± 17% sched_debug.cfs_rq:/.load_avg.stddev
43.14 ± 29% -67.1% 14.18 ± 66% sched_debug.cfs_rq:/.removed.load_avg.avg
201.73 ± 15% -50.1% 100.68 ± 60% sched_debug.cfs_rq:/.removed.load_avg.stddev
1987 ± 28% -67.3% 650.09 ± 66% sched_debug.cfs_rq:/.removed.runnable_sum.avg
9298 ± 15% -50.3% 4616 ± 60% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
18.17 ± 27% -68.6% 5.70 ± 63% sched_debug.cfs_rq:/.removed.util_avg.avg
87.61 ± 13% -52.6% 41.48 ± 59% sched_debug.cfs_rq:/.removed.util_avg.stddev
633327 ± 24% +38.4% 876596 ± 7% turbostat.C1
2.75 ± 22% +1.8 4.52 ± 17% turbostat.C1E%
5.76 ± 22% +6.1 11.82 ± 30% turbostat.C3%
20.69 ± 5% -8.1 12.63 ± 38% turbostat.C6%
15.62 ± 6% +18.4% 18.50 ± 8% turbostat.CPU%c1
1.56 ± 16% +208.5% 4.82 ± 38% turbostat.CPU%c3
12.81 ± 4% -48.1% 6.65 ± 43% turbostat.CPU%c6
5.02 ± 8% -34.6% 3.28 ± 14% turbostat.Pkg%pc2
0.85 ± 57% -84.7% 0.13 ±173% turbostat.Pkg%pc6
88.25 ± 13% +262.6% 320.00 ± 71% interrupts.CPU10.TLB:TLB_shootdowns
116.25 ± 36% +151.6% 292.50 ± 68% interrupts.CPU19.TLB:TLB_shootdowns
109.25 ± 8% +217.4% 346.75 ±106% interrupts.CPU2.TLB:TLB_shootdowns
15180 ±111% +303.9% 61314 ± 32% interrupts.CPU23.RES:Rescheduling_interrupts
111.50 ± 26% +210.3% 346.00 ± 79% interrupts.CPU3.TLB:TLB_shootdowns
86.50 ± 35% +413.0% 443.75 ± 66% interrupts.CPU33.TLB:TLB_shootdowns
728.00 ± 8% +29.6% 943.50 ± 16% interrupts.CPU38.CAL:Function_call_interrupts
1070 ± 72% +84.9% 1979 ± 9% interrupts.CPU54.76:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
41429 ± 64% -73.7% 10882 ± 73% interrupts.CPU59.RES:Rescheduling_interrupts
26330 ± 85% -73.3% 7022 ± 86% interrupts.CPU62.RES:Rescheduling_interrupts
103.00 ± 22% +181.3% 289.75 ± 92% interrupts.CPU65.TLB:TLB_shootdowns
100.00 ± 40% +365.0% 465.00 ± 71% interrupts.CPU70.TLB:TLB_shootdowns
110.25 ± 18% +308.4% 450.25 ± 71% interrupts.CPU80.TLB:TLB_shootdowns
93.50 ± 42% +355.1% 425.50 ± 82% interrupts.CPU84.TLB:TLB_shootdowns
104.50 ± 18% +289.7% 407.25 ± 68% interrupts.CPU87.TLB:TLB_shootdowns
1.76 ± 3% -0.1 1.66 ± 4% perf-stat.i.branch-miss-rate%
8.08 ± 6% +2.0 10.04 perf-stat.i.cache-miss-rate%
18031213 ± 4% +27.2% 22939937 ± 3% perf-stat.i.cache-misses
4.041e+08 -1.9% 3.965e+08 perf-stat.i.cache-references
31764 ± 26% -40.6% 18859 ± 10% perf-stat.i.cycles-between-cache-misses
66.18 -1.5 64.71 perf-stat.i.iTLB-load-miss-rate%
4503482 ± 8% +19.5% 5382698 ± 5% perf-stat.i.node-load-misses
3892859 ± 2% +16.6% 4538750 ± 4% perf-stat.i.node-store-misses
1526815 ± 13% +25.8% 1921178 ± 9% perf-stat.i.node-stores
4.72 ± 4% +1.3 6.00 ± 3% perf-stat.overall.cache-miss-rate%
9120 ± 6% -18.9% 7394 ± 2% perf-stat.overall.cycles-between-cache-misses
18237318 ± 4% +25.4% 22866104 ± 3% perf-stat.ps.cache-misses
4392089 ± 8% +18.1% 5189251 ± 5% perf-stat.ps.node-load-misses
1629766 ± 2% +17.9% 1920947 ± 13% perf-stat.ps.node-loads
3694566 ± 2% +16.1% 4288126 ± 4% perf-stat.ps.node-store-misses
1536866 ± 12% +23.7% 1901141 ± 7% perf-stat.ps.node-stores
38.20 ± 18% -13.2 24.96 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
38.20 ± 18% -13.2 24.96 ± 10% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.98 ± 67% -7.2 0.73 ±173% perf-profile.calltrace.cycles-pp.smp_call_function_single.event_function_call.perf_remove_from_context.perf_event_release_kernel.perf_release
7.98 ± 67% -7.2 0.73 ±173% perf-profile.calltrace.cycles-pp.event_function_call.perf_remove_from_context.perf_event_release_kernel.perf_release.__fput
7.98 ± 67% -7.2 0.73 ±173% perf-profile.calltrace.cycles-pp.perf_remove_from_context.perf_event_release_kernel.perf_release.__fput.task_work_run
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.task_work_run.do_exit.do_group_exit.get_signal.do_signal
4.27 ± 66% -3.5 0.73 ±173% perf-profile.calltrace.cycles-pp.read
4.05 ± 71% -3.3 0.73 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
4.05 ± 71% -3.3 0.73 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
13.30 ± 38% -8.2 5.07 ± 62% perf-profile.children.cycles-pp.task_work_run
12.47 ± 46% -7.4 5.07 ± 62% perf-profile.children.cycles-pp.exit_to_usermode_loop
12.47 ± 46% -7.4 5.07 ± 62% perf-profile.children.cycles-pp.__fput
7.98 ± 67% -7.2 0.73 ±173% perf-profile.children.cycles-pp.perf_remove_from_context
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.children.cycles-pp.do_signal
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.children.cycles-pp.get_signal
9.43 ± 21% -4.7 4.72 ± 67% perf-profile.children.cycles-pp.ksys_read
9.43 ± 21% -4.7 4.72 ± 67% perf-profile.children.cycles-pp.vfs_read
4.27 ± 66% -3.5 0.73 ±173% perf-profile.children.cycles-pp.read
3.86 ±101% -3.1 0.71 ±173% perf-profile.children.cycles-pp._raw_spin_lock
3.86 ±101% -3.1 0.71 ±173% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
3.86 ±101% -3.1 0.71 ±173% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
***************************************************************************************************
lkp-csl-2sp5: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
os/gcc-7/performance/1HDD/ext4/x86_64-rhel-7.6/100%/debian-x86_64-2019-09-23.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002b
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:2 50% 1:8 dmesg.WARNING:at_ip_selinux_file_ioctl/0x
%stddev %change %stddev
\ | \
122451 ± 11% -19.9% 98072 ± 15% stress-ng.ioprio.ops
116979 ± 11% -20.7% 92815 ± 16% stress-ng.ioprio.ops_per_sec
274187 ± 21% -26.7% 201013 ± 11% stress-ng.kill.ops
274219 ± 21% -26.7% 201040 ± 11% stress-ng.kill.ops_per_sec
3973765 -10.1% 3570462 ± 5% stress-ng.lockf.ops
3972581 -10.2% 3568935 ± 5% stress-ng.lockf.ops_per_sec
10719 ± 8% -39.9% 6442 ± 22% stress-ng.procfs.ops
9683 ± 3% -39.3% 5878 ± 22% stress-ng.procfs.ops_per_sec
6562721 -35.1% 4260609 ± 8% stress-ng.schedpolicy.ops
6564233 -35.1% 4261479 ± 8% stress-ng.schedpolicy.ops_per_sec
1070988 +21.4% 1299977 ± 7% stress-ng.sigrt.ops
1061773 +21.2% 1286618 ± 7% stress-ng.sigrt.ops_per_sec
1155684 ± 5% -14.8% 984531 ± 16% stress-ng.symlink.ops
991624 ± 4% -23.8% 755147 ± 41% stress-ng.symlink.ops_per_sec
6925 -12.1% 6086 ± 27% stress-ng.time.percent_of_cpu_this_job_got
24.68 +9.3 33.96 ± 52% mpstat.cpu.all.idle%
171.00 ± 2% -55.3% 76.50 ± 60% numa-vmstat.node1.nr_inactive_file
171.00 ± 2% -55.3% 76.50 ± 60% numa-vmstat.node1.nr_zone_inactive_file
2.032e+11 -12.5% 1.777e+11 ± 27% perf-stat.i.cpu-cycles
2.025e+11 -12.0% 1.782e+11 ± 27% perf-stat.ps.cpu-cycles
25.00 +37.5% 34.38 ± 51% vmstat.cpu.id
68.00 -13.2% 59.00 ± 27% vmstat.cpu.sy
25.24 +37.0% 34.57 ± 51% iostat.cpu.idle
68.21 -12.7% 59.53 ± 27% iostat.cpu.system
4.31 ±100% +200.6% 12.96 ± 63% iostat.sda.r_await.max
1014 ± 2% -17.1% 841.00 ± 10% meminfo.Inactive(file)
30692 ± 12% -20.9% 24280 ± 30% meminfo.Mlocked
103627 ± 27% -32.7% 69720 meminfo.Percpu
255.50 ± 2% -18.1% 209.25 ± 10% proc-vmstat.nr_inactive_file
255.50 ± 2% -18.1% 209.25 ± 10% proc-vmstat.nr_zone_inactive_file
185035 ± 22% -22.2% 143917 ± 25% proc-vmstat.pgmigrate_success
2107 -12.3% 1848 ± 27% turbostat.Avg_MHz
69.00 -7.1% 64.12 ± 8% turbostat.PkgTmp
94.63 -2.2% 92.58 ± 4% turbostat.RAMWatt
96048 +26.8% 121800 ± 8% softirqs.CPU10.NET_RX
96671 ± 4% +34.2% 129776 ± 6% softirqs.CPU15.NET_RX
171243 ± 3% -12.9% 149135 ± 8% softirqs.CPU25.NET_RX
165317 ± 4% -11.4% 146494 ± 9% softirqs.CPU27.NET_RX
139558 -24.5% 105430 ± 14% softirqs.CPU58.NET_RX
147836 -15.8% 124408 ± 6% softirqs.CPU63.NET_RX
129568 -13.8% 111624 ± 10% softirqs.CPU66.NET_RX
1050 ± 2% +14.2% 1198 ± 9% slabinfo.biovec-128.active_objs
1050 ± 2% +14.2% 1198 ± 9% slabinfo.biovec-128.num_objs
23129 +19.6% 27668 ± 6% slabinfo.kmalloc-512.active_objs
766.50 +17.4% 899.75 ± 6% slabinfo.kmalloc-512.active_slabs
24535 +17.4% 28806 ± 6% slabinfo.kmalloc-512.num_objs
766.50 +17.4% 899.75 ± 6% slabinfo.kmalloc-512.num_slabs
1039 ± 4% -4.3% 994.12 ± 6% slabinfo.sock_inode_cache.active_slabs
40527 ± 4% -4.3% 38785 ± 6% slabinfo.sock_inode_cache.num_objs
1039 ± 4% -4.3% 994.12 ± 6% slabinfo.sock_inode_cache.num_slabs
1549456 -43.6% 873443 ± 24% sched_debug.cfs_rq:/.min_vruntime.stddev
73.25 ± 5% +74.8% 128.03 ± 31% sched_debug.cfs_rq:/.nr_spread_over.stddev
18.60 ± 57% -63.8% 6.73 ± 64% sched_debug.cfs_rq:/.removed.load_avg.avg
79.57 ± 44% -44.1% 44.52 ± 55% sched_debug.cfs_rq:/.removed.load_avg.stddev
857.10 ± 57% -63.8% 310.09 ± 64% sched_debug.cfs_rq:/.removed.runnable_sum.avg
3664 ± 44% -44.1% 2049 ± 55% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
4.91 ± 42% -45.3% 2.69 ± 61% sched_debug.cfs_rq:/.removed.util_avg.avg
1549544 -43.6% 874006 ± 24% sched_debug.cfs_rq:/.spread0.stddev
786.14 ± 6% -20.1% 628.46 ± 23% sched_debug.cfs_rq:/.util_avg.avg
1415 ± 8% -16.7% 1178 ± 18% sched_debug.cfs_rq:/.util_avg.max
467435 ± 15% +46.7% 685829 ± 15% sched_debug.cpu.avg_idle.avg
17972 ± 8% +631.2% 131410 ± 34% sched_debug.cpu.avg_idle.min
7.66 ± 26% +209.7% 23.72 ± 54% sched_debug.cpu.clock.stddev
7.66 ± 26% +209.7% 23.72 ± 54% sched_debug.cpu.clock_task.stddev
618063 ± 5% -17.0% 513085 ± 5% sched_debug.cpu.max_idle_balance_cost.max
12083 ± 28% -85.4% 1768 ±231% sched_debug.cpu.max_idle_balance_cost.stddev
12857 ± 16% +2117.7% 285128 ±106% sched_debug.cpu.yld_count.min
0.55 ± 6% -0.2 0.37 ± 51% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.30 ± 21% -0.2 0.14 ±105% perf-profile.children.cycles-pp.yield_task_fair
0.32 ± 6% -0.2 0.16 ± 86% perf-profile.children.cycles-pp.rmap_walk_anon
0.19 -0.1 0.10 ± 86% perf-profile.children.cycles-pp.page_mapcount_is_zero
0.19 -0.1 0.10 ± 86% perf-profile.children.cycles-pp.total_mapcount
0.14 -0.1 0.09 ± 29% perf-profile.children.cycles-pp.start_kernel
0.11 ± 9% -0.0 0.07 ± 47% perf-profile.children.cycles-pp.__switch_to
0.10 ± 14% -0.0 0.06 ± 45% perf-profile.children.cycles-pp.switch_fpu_return
0.08 ± 6% -0.0 0.04 ± 79% perf-profile.children.cycles-pp.__update_load_avg_se
0.12 ± 13% -0.0 0.09 ± 23% perf-profile.children.cycles-pp.native_write_msr
0.31 ± 6% -0.2 0.15 ± 81% perf-profile.self.cycles-pp.poll_idle
0.50 ± 6% -0.2 0.35 ± 50% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.18 ± 2% -0.1 0.10 ± 86% perf-profile.self.cycles-pp.total_mapcount
0.10 ± 14% -0.0 0.06 ± 45% perf-profile.self.cycles-pp.switch_fpu_return
0.10 ± 10% -0.0 0.06 ± 47% perf-profile.self.cycles-pp.__switch_to
0.07 ± 7% -0.0 0.03 ±100% perf-profile.self.cycles-pp.prep_new_page
0.07 ± 7% -0.0 0.03 ±100% perf-profile.self.cycles-pp.llist_add_batch
0.07 ± 14% -0.0 0.04 ± 79% perf-profile.self.cycles-pp.__update_load_avg_se
0.12 ± 13% -0.0 0.09 ± 23% perf-profile.self.cycles-pp.native_write_msr
66096 ± 99% -99.8% 148.50 ± 92% interrupts.109:PCI-MSI.31981642-edge.i40e-eth0-TxRx-73
543.50 ± 39% -73.3% 145.38 ± 81% interrupts.112:PCI-MSI.31981645-edge.i40e-eth0-TxRx-76
169.00 ± 28% -55.3% 75.50 ± 83% interrupts.116:PCI-MSI.31981649-edge.i40e-eth0-TxRx-80
224.00 ± 14% -57.6% 95.00 ± 87% interrupts.121:PCI-MSI.31981654-edge.i40e-eth0-TxRx-85
680.00 ± 28% -80.5% 132.75 ± 82% interrupts.43:PCI-MSI.31981576-edge.i40e-eth0-TxRx-7
327.50 ± 31% -39.0% 199.62 ± 60% interrupts.60:PCI-MSI.31981593-edge.i40e-eth0-TxRx-24
217.50 ± 19% -51.7% 105.12 ± 79% interrupts.63:PCI-MSI.31981596-edge.i40e-eth0-TxRx-27
375.00 ± 46% -78.5% 80.50 ± 82% interrupts.70:PCI-MSI.31981603-edge.i40e-eth0-TxRx-34
196.50 ± 3% -51.6% 95.12 ± 74% interrupts.72:PCI-MSI.31981605-edge.i40e-eth0-TxRx-36
442.50 ± 45% -73.1% 118.88 ± 90% interrupts.79:PCI-MSI.31981612-edge.i40e-eth0-TxRx-43
271.00 ± 8% -53.2% 126.88 ± 75% interrupts.82:PCI-MSI.31981615-edge.i40e-eth0-TxRx-46
145448 ± 4% -41.6% 84975 ± 42% interrupts.CPU1.RES:Rescheduling_interrupts
11773 ± 19% -38.1% 7290 ± 52% interrupts.CPU13.TLB:TLB_shootdowns
24177 ± 15% +356.5% 110368 ± 58% interrupts.CPU16.RES:Rescheduling_interrupts
3395 ± 3% +78.3% 6055 ± 18% interrupts.CPU17.NMI:Non-maskable_interrupts
3395 ± 3% +78.3% 6055 ± 18% interrupts.CPU17.PMI:Performance_monitoring_interrupts
106701 ± 41% -55.6% 47425 ± 56% interrupts.CPU18.RES:Rescheduling_interrupts
327.50 ± 31% -39.3% 198.88 ± 60% interrupts.CPU24.60:PCI-MSI.31981593-edge.i40e-eth0-TxRx-24
411618 +53.6% 632283 ± 77% interrupts.CPU25.LOC:Local_timer_interrupts
16189 ± 26% -53.0% 7611 ± 66% interrupts.CPU25.TLB:TLB_shootdowns
407253 +54.4% 628596 ± 78% interrupts.CPU26.LOC:Local_timer_interrupts
216.50 ± 19% -51.8% 104.25 ± 80% interrupts.CPU27.63:PCI-MSI.31981596-edge.i40e-eth0-TxRx-27
7180 -20.9% 5682 ± 25% interrupts.CPU29.NMI:Non-maskable_interrupts
7180 -20.9% 5682 ± 25% interrupts.CPU29.PMI:Performance_monitoring_interrupts
15186 ± 12% -45.5% 8276 ± 49% interrupts.CPU3.TLB:TLB_shootdowns
13092 ± 19% -29.5% 9231 ± 35% interrupts.CPU30.TLB:TLB_shootdowns
13204 ± 26% -29.3% 9336 ± 19% interrupts.CPU31.TLB:TLB_shootdowns
374.50 ± 46% -78.7% 79.62 ± 83% interrupts.CPU34.70:PCI-MSI.31981603-edge.i40e-eth0-TxRx-34
7188 -25.6% 5345 ± 26% interrupts.CPU35.NMI:Non-maskable_interrupts
7188 -25.6% 5345 ± 26% interrupts.CPU35.PMI:Performance_monitoring_interrupts
196.00 ± 4% -52.0% 94.12 ± 75% interrupts.CPU36.72:PCI-MSI.31981605-edge.i40e-eth0-TxRx-36
12170 ± 20% -34.3% 7998 ± 32% interrupts.CPU39.TLB:TLB_shootdowns
442.00 ± 45% -73.3% 118.12 ± 91% interrupts.CPU43.79:PCI-MSI.31981612-edge.i40e-eth0-TxRx-43
12070 ± 15% -37.2% 7581 ± 49% interrupts.CPU43.TLB:TLB_shootdowns
7177 -27.6% 5195 ± 26% interrupts.CPU45.NMI:Non-maskable_interrupts
7177 -27.6% 5195 ± 26% interrupts.CPU45.PMI:Performance_monitoring_interrupts
271.00 ± 8% -53.4% 126.38 ± 75% interrupts.CPU46.82:PCI-MSI.31981615-edge.i40e-eth0-TxRx-46
3591 +84.0% 6607 ± 12% interrupts.CPU46.NMI:Non-maskable_interrupts
3591 +84.0% 6607 ± 12% interrupts.CPU46.PMI:Performance_monitoring_interrupts
57614 ± 30% -34.0% 38015 ± 28% interrupts.CPU46.RES:Rescheduling_interrupts
149154 ± 41% -47.2% 78808 ± 51% interrupts.CPU51.RES:Rescheduling_interrupts
30366 ± 28% +279.5% 115229 ± 42% interrupts.CPU52.RES:Rescheduling_interrupts
29690 +355.5% 135237 ± 57% interrupts.CPU54.RES:Rescheduling_interrupts
213106 ± 2% -66.9% 70545 ± 43% interrupts.CPU59.RES:Rescheduling_interrupts
225753 ± 7% -72.9% 61212 ± 72% interrupts.CPU60.RES:Rescheduling_interrupts
12430 ± 14% -41.5% 7276 ± 52% interrupts.CPU61.TLB:TLB_shootdowns
44552 ± 22% +229.6% 146864 ± 36% interrupts.CPU65.RES:Rescheduling_interrupts
126088 ± 56% -35.3% 81516 ± 73% interrupts.CPU66.RES:Rescheduling_interrupts
170880 ± 15% -62.9% 63320 ± 52% interrupts.CPU68.RES:Rescheduling_interrupts
186033 ± 10% -39.8% 112012 ± 41% interrupts.CPU69.RES:Rescheduling_interrupts
679.50 ± 29% -80.5% 132.25 ± 82% interrupts.CPU7.43:PCI-MSI.31981576-edge.i40e-eth0-TxRx-7
124750 ± 18% -39.4% 75553 ± 43% interrupts.CPU7.RES:Rescheduling_interrupts
158500 ± 47% -52.1% 75915 ± 67% interrupts.CPU71.RES:Rescheduling_interrupts
11846 ± 11% -32.5% 8001 ± 47% interrupts.CPU72.TLB:TLB_shootdowns
66095 ± 99% -99.8% 147.62 ± 93% interrupts.CPU73.109:PCI-MSI.31981642-edge.i40e-eth0-TxRx-73
7221 ± 2% -31.0% 4982 ± 35% interrupts.CPU73.NMI:Non-maskable_interrupts
7221 ± 2% -31.0% 4982 ± 35% interrupts.CPU73.PMI:Performance_monitoring_interrupts
15304 ± 14% -47.9% 7972 ± 31% interrupts.CPU73.TLB:TLB_shootdowns
10918 ± 3% -31.9% 7436 ± 36% interrupts.CPU74.TLB:TLB_shootdowns
543.00 ± 39% -73.3% 144.75 ± 81% interrupts.CPU76.112:PCI-MSI.31981645-edge.i40e-eth0-TxRx-76
12214 ± 14% -40.9% 7220 ± 38% interrupts.CPU79.TLB:TLB_shootdowns
168.00 ± 29% -55.7% 74.50 ± 85% interrupts.CPU80.116:PCI-MSI.31981649-edge.i40e-eth0-TxRx-80
28619 ± 3% +158.4% 73939 ± 44% interrupts.CPU80.RES:Rescheduling_interrupts
12258 -34.3% 8056 ± 29% interrupts.CPU80.TLB:TLB_shootdowns
7214 -19.5% 5809 ± 24% interrupts.CPU82.NMI:Non-maskable_interrupts
7214 -19.5% 5809 ± 24% interrupts.CPU82.PMI:Performance_monitoring_interrupts
13522 ± 11% -41.2% 7949 ± 29% interrupts.CPU84.TLB:TLB_shootdowns
223.50 ± 14% -57.8% 94.25 ± 88% interrupts.CPU85.121:PCI-MSI.31981654-edge.i40e-eth0-TxRx-85
11989 ± 2% -31.7% 8194 ± 22% interrupts.CPU85.TLB:TLB_shootdowns
121153 ± 29% -41.4% 70964 ± 58% interrupts.CPU86.RES:Rescheduling_interrupts
11731 ± 8% -40.7% 6957 ± 36% interrupts.CPU86.TLB:TLB_shootdowns
12192 ± 22% -35.8% 7824 ± 43% interrupts.CPU87.TLB:TLB_shootdowns
11603 ± 19% -31.8% 7915 ± 41% interrupts.CPU89.TLB:TLB_shootdowns
10471 ± 5% -27.0% 7641 ± 31% interrupts.CPU91.TLB:TLB_shootdowns
7156 -20.9% 5658 ± 23% interrupts.CPU92.NMI:Non-maskable_interrupts
7156 -20.9% 5658 ± 23% interrupts.CPU92.PMI:Performance_monitoring_interrupts
99802 ± 20% -43.6% 56270 ± 47% interrupts.CPU92.RES:Rescheduling_interrupts
109162 ± 18% -28.7% 77839 ± 26% interrupts.CPU93.RES:Rescheduling_interrupts
15044 ± 29% -44.4% 8359 ± 30% interrupts.CPU93.TLB:TLB_shootdowns
110749 ± 19% -47.3% 58345 ± 48% interrupts.CPU94.RES:Rescheduling_interrupts
7245 -21.4% 5697 ± 25% interrupts.CPU95.NMI:Non-maskable_interrupts
7245 -21.4% 5697 ± 25% interrupts.CPU95.PMI:Performance_monitoring_interrupts
1969 ± 5% +491.7% 11653 ± 81% interrupts.IWI:IRQ_work_interrupts
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-09-23.cgz/lkp-bdw-ep6/stress-ng/30s/0xb000038
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
%stddev %change %stddev
\ | \
98318389 +43.0% 1.406e+08 stress-ng.schedpolicy.ops
3277346 +43.0% 4685146 stress-ng.schedpolicy.ops_per_sec
3.506e+08 ± 4% -10.3% 3.146e+08 ± 3% stress-ng.sigq.ops
11684738 ± 4% -10.3% 10485353 ± 3% stress-ng.sigq.ops_per_sec
3.628e+08 ± 6% -19.4% 2.925e+08 ± 6% stress-ng.time.involuntary_context_switches
29456 +2.8% 30285 stress-ng.time.system_time
7636655 ± 9% +46.6% 11197377 ± 27% cpuidle.C1E.usage
1111483 ± 3% -9.5% 1005829 vmstat.system.cs
22638222 ± 4% +16.5% 26370816 ± 11% meminfo.Committed_AS
28908 ± 6% +24.6% 36020 ± 16% meminfo.KernelStack
7636543 ± 9% +46.6% 11196090 ± 27% turbostat.C1E
3.46 ± 16% -61.2% 1.35 ± 7% turbostat.Pkg%pc2
217.54 +1.7% 221.33 turbostat.PkgWatt
13.34 ± 2% +5.8% 14.11 turbostat.RAMWatt
525.50 ± 8% -15.7% 443.00 ± 12% slabinfo.biovec-128.active_objs
525.50 ± 8% -15.7% 443.00 ± 12% slabinfo.biovec-128.num_objs
28089 ± 12% -33.0% 18833 ± 22% slabinfo.pool_workqueue.active_objs
877.25 ± 12% -32.6% 591.00 ± 21% slabinfo.pool_workqueue.active_slabs
28089 ± 12% -32.6% 18925 ± 21% slabinfo.pool_workqueue.num_objs
877.25 ± 12% -32.6% 591.00 ± 21% slabinfo.pool_workqueue.num_slabs
846.75 ± 6% -18.0% 694.75 ± 9% slabinfo.skbuff_fclone_cache.active_objs
846.75 ± 6% -18.0% 694.75 ± 9% slabinfo.skbuff_fclone_cache.num_objs
63348 ± 6% -20.7% 50261 ± 4% softirqs.CPU14.SCHED
44394 ± 4% +21.4% 53880 ± 8% softirqs.CPU42.SCHED
52246 ± 7% -15.1% 44352 softirqs.CPU47.SCHED
58350 ± 4% -11.0% 51914 ± 7% softirqs.CPU6.SCHED
58009 ± 7% -23.8% 44206 ± 4% softirqs.CPU63.SCHED
49166 ± 6% +23.4% 60683 ± 9% softirqs.CPU68.SCHED
44594 ± 7% +14.3% 50951 ± 8% softirqs.CPU78.SCHED
46407 ± 9% +19.6% 55515 ± 8% softirqs.CPU84.SCHED
55555 ± 8% -15.5% 46933 ± 4% softirqs.CPU9.SCHED
198757 ± 18% +44.1% 286316 ± 9% numa-meminfo.node0.Active
189280 ± 19% +37.1% 259422 ± 7% numa-meminfo.node0.Active(anon)
110438 ± 33% +68.3% 185869 ± 16% numa-meminfo.node0.AnonHugePages
143458 ± 28% +67.7% 240547 ± 13% numa-meminfo.node0.AnonPages
12438 ± 16% +61.9% 20134 ± 37% numa-meminfo.node0.KernelStack
1004379 ± 7% +16.4% 1168764 ± 4% numa-meminfo.node0.MemUsed
357111 ± 24% -41.6% 208655 ± 29% numa-meminfo.node1.Active
330094 ± 22% -39.6% 199339 ± 32% numa-meminfo.node1.Active(anon)
265924 ± 25% -52.2% 127138 ± 46% numa-meminfo.node1.AnonHugePages
314059 ± 22% -49.6% 158305 ± 36% numa-meminfo.node1.AnonPages
15386 ± 16% -25.1% 11525 ± 15% numa-meminfo.node1.KernelStack
1200805 ± 11% -18.6% 977595 ± 7% numa-meminfo.node1.MemUsed
965.50 ± 15% -29.3% 682.25 ± 43% numa-meminfo.node1.Mlocked
46762 ± 18% +37.8% 64452 ± 8% numa-vmstat.node0.nr_active_anon
35393 ± 27% +68.9% 59793 ± 12% numa-vmstat.node0.nr_anon_pages
52.75 ± 33% +71.1% 90.25 ± 15% numa-vmstat.node0.nr_anon_transparent_hugepages
15.00 ± 96% +598.3% 104.75 ± 15% numa-vmstat.node0.nr_inactive_file
11555 ± 22% +68.9% 19513 ± 41% numa-vmstat.node0.nr_kernel_stack
550.25 ±162% +207.5% 1691 ± 48% numa-vmstat.node0.nr_written
46762 ± 18% +37.8% 64452 ± 8% numa-vmstat.node0.nr_zone_active_anon
15.00 ± 96% +598.3% 104.75 ± 15% numa-vmstat.node0.nr_zone_inactive_file
82094 ± 22% -39.5% 49641 ± 32% numa-vmstat.node1.nr_active_anon
78146 ± 23% -49.5% 39455 ± 37% numa-vmstat.node1.nr_anon_pages
129.00 ± 25% -52.3% 61.50 ± 47% numa-vmstat.node1.nr_anon_transparent_hugepages
107.75 ± 12% -85.4% 15.75 ±103% numa-vmstat.node1.nr_inactive_file
14322 ± 11% -21.1% 11304 ± 11% numa-vmstat.node1.nr_kernel_stack
241.00 ± 15% -29.5% 170.00 ± 43% numa-vmstat.node1.nr_mlock
82094 ± 22% -39.5% 49641 ± 32% numa-vmstat.node1.nr_zone_active_anon
107.75 ± 12% -85.4% 15.75 ±103% numa-vmstat.node1.nr_zone_inactive_file
0.81 ± 5% +0.2 0.99 ± 10% perf-profile.calltrace.cycles-pp.task_rq_lock.task_sched_runtime.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime
0.60 ± 11% +0.2 0.83 ± 9% perf-profile.calltrace.cycles-pp.___might_sleep.__might_fault._copy_to_user.put_itimerspec64.__x64_sys_timer_gettime
1.73 ± 9% +0.3 2.05 ± 8% perf-profile.calltrace.cycles-pp.__might_fault._copy_to_user.put_itimerspec64.__x64_sys_timer_gettime.do_syscall_64
3.92 ± 5% +0.6 4.49 ± 7% perf-profile.calltrace.cycles-pp.task_sched_runtime.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime
4.17 ± 4% +0.6 4.78 ± 7% perf-profile.calltrace.cycles-pp.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64
5.72 ± 3% +0.7 6.43 ± 7% perf-profile.calltrace.cycles-pp.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.24 ± 54% -0.2 0.07 ±131% perf-profile.children.cycles-pp.ext4_inode_csum_set
0.45 ± 3% +0.1 0.56 ± 4% perf-profile.children.cycles-pp.__might_sleep
0.84 ± 5% +0.2 1.03 ± 9% perf-profile.children.cycles-pp.task_rq_lock
0.66 ± 8% +0.2 0.88 ± 7% perf-profile.children.cycles-pp.___might_sleep
1.83 ± 9% +0.3 2.16 ± 8% perf-profile.children.cycles-pp.__might_fault
4.04 ± 5% +0.6 4.62 ± 7% perf-profile.children.cycles-pp.task_sched_runtime
4.24 ± 4% +0.6 4.87 ± 7% perf-profile.children.cycles-pp.cpu_clock_sample
5.77 ± 3% +0.7 6.48 ± 7% perf-profile.children.cycles-pp.posix_cpu_timer_get
0.22 ± 11% +0.1 0.28 ± 15% perf-profile.self.cycles-pp.cpu_clock_sample
0.47 ± 7% +0.1 0.55 ± 5% perf-profile.self.cycles-pp.update_curr
0.28 ± 5% +0.1 0.38 ± 14% perf-profile.self.cycles-pp.task_rq_lock
0.42 ± 3% +0.1 0.53 ± 4% perf-profile.self.cycles-pp.__might_sleep
0.50 ± 5% +0.1 0.61 ± 11% perf-profile.self.cycles-pp.task_sched_runtime
0.63 ± 9% +0.2 0.85 ± 7% perf-profile.self.cycles-pp.___might_sleep
9180611 ± 5% +40.1% 12859327 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.max
1479571 ± 6% +57.6% 2331469 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.stddev
7951 ± 6% -52.5% 3773 ± 17% sched_debug.cfs_rq:/.exec_clock.stddev
321306 ± 39% -44.2% 179273 sched_debug.cfs_rq:/.load.max
9180613 ± 5% +40.1% 12859327 ± 14% sched_debug.cfs_rq:/.max_vruntime.max
1479571 ± 6% +57.6% 2331469 ± 14% sched_debug.cfs_rq:/.max_vruntime.stddev
16622378 +20.0% 19940069 ± 7% sched_debug.cfs_rq:/.min_vruntime.avg
18123901 +19.7% 21686545 ± 6% sched_debug.cfs_rq:/.min_vruntime.max
14338218 ± 3% +27.4% 18267927 ± 7% sched_debug.cfs_rq:/.min_vruntime.min
0.17 ± 16% +23.4% 0.21 ± 11% sched_debug.cfs_rq:/.nr_running.stddev
319990 ± 39% -44.6% 177347 sched_debug.cfs_rq:/.runnable_weight.max
-2067420 -33.5% -1375445 sched_debug.cfs_rq:/.spread0.min
1033 ± 8% -13.7% 891.85 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.max
93676 ± 16% -29.0% 66471 ± 17% sched_debug.cpu.avg_idle.min
10391 ± 52% +118.9% 22750 ± 15% sched_debug.cpu.curr->pid.avg
14393 ± 35% +113.2% 30689 ± 17% sched_debug.cpu.curr->pid.max
3041 ± 38% +161.8% 7963 ± 11% sched_debug.cpu.curr->pid.stddev
3.38 ± 6% -16.3% 2.83 ± 5% sched_debug.cpu.nr_running.max
2412687 ± 4% -16.0% 2027251 ± 3% sched_debug.cpu.nr_switches.avg
4038819 ± 3% -20.2% 3223112 ± 5% sched_debug.cpu.nr_switches.max
834203 ± 17% -37.8% 518798 ± 27% sched_debug.cpu.nr_switches.stddev
45.85 ± 13% +41.2% 64.75 ± 18% sched_debug.cpu.nr_uninterruptible.max
1937209 ± 2% +58.5% 3070891 ± 3% sched_debug.cpu.sched_count.min
1074023 ± 13% -57.9% 451958 ± 12% sched_debug.cpu.sched_count.stddev
1283769 ± 7% +65.1% 2118907 ± 7% sched_debug.cpu.yld_count.min
714244 ± 5% -51.9% 343373 ± 22% sched_debug.cpu.yld_count.stddev
12.54 ± 9% -18.8% 10.18 ± 15% perf-stat.i.MPKI
1.011e+10 +2.6% 1.038e+10 perf-stat.i.branch-instructions
13.22 ± 5% +2.5 15.75 ± 3% perf-stat.i.cache-miss-rate%
21084021 ± 6% +33.9% 28231058 ± 6% perf-stat.i.cache-misses
1143861 ± 5% -12.1% 1005721 ± 6% perf-stat.i.context-switches
1.984e+11 +1.8% 2.02e+11 perf-stat.i.cpu-cycles
1.525e+10 +1.3% 1.544e+10 perf-stat.i.dTLB-loads
65.46 -2.7 62.76 ± 3% perf-stat.i.iTLB-load-miss-rate%
20360883 ± 4% +10.5% 22500874 ± 4% perf-stat.i.iTLB-loads
4.963e+10 +2.0% 5.062e+10 perf-stat.i.instructions
181557 -2.4% 177113 perf-stat.i.msec
5350122 ± 8% +26.5% 6765332 ± 7% perf-stat.i.node-load-misses
4264320 ± 3% +24.8% 5321600 ± 4% perf-stat.i.node-store-misses
6.12 ± 5% +1.5 7.60 ± 2% perf-stat.overall.cache-miss-rate%
7646 ± 6% -17.7% 6295 ± 3% perf-stat.overall.cycles-between-cache-misses
69.29 -1.1 68.22 perf-stat.overall.iTLB-load-miss-rate%
61.11 ± 2% +6.6 67.71 ± 5% perf-stat.overall.node-load-miss-rate%
74.82 +1.8 76.58 perf-stat.overall.node-store-miss-rate%
1.044e+10 +1.8% 1.063e+10 perf-stat.ps.branch-instructions
26325951 ± 6% +22.9% 32366684 ± 2% perf-stat.ps.cache-misses
1115530 ± 3% -9.5% 1009780 perf-stat.ps.context-switches
1.536e+10 +1.0% 1.552e+10 perf-stat.ps.dTLB-loads
44718416 ± 2% +5.8% 47308605 ± 3% perf-stat.ps.iTLB-load-misses
19831973 ± 4% +11.1% 22040029 ± 4% perf-stat.ps.iTLB-loads
5.064e+10 +1.4% 5.137e+10 perf-stat.ps.instructions
5454694 ± 9% +26.4% 6892365 ± 6% perf-stat.ps.node-load-misses
4263688 ± 4% +24.9% 5325279 ± 4% perf-stat.ps.node-store-misses
3.001e+13 +1.7% 3.052e+13 perf-stat.total.instructions
18550 -74.9% 4650 ±173% interrupts.76:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
7642 ± 9% -20.4% 6086 ± 2% interrupts.CPU0.CAL:Function_call_interrupts
4376 ± 22% -75.4% 1077 ± 41% interrupts.CPU0.TLB:TLB_shootdowns
8402 ± 5% -19.0% 6806 interrupts.CPU1.CAL:Function_call_interrupts
4559 ± 20% -73.7% 1199 ± 15% interrupts.CPU1.TLB:TLB_shootdowns
8423 ± 4% -20.2% 6725 ± 2% interrupts.CPU10.CAL:Function_call_interrupts
4536 ± 14% -75.0% 1135 ± 20% interrupts.CPU10.TLB:TLB_shootdowns
8303 ± 3% -18.2% 6795 ± 2% interrupts.CPU11.CAL:Function_call_interrupts
4404 ± 11% -71.6% 1250 ± 35% interrupts.CPU11.TLB:TLB_shootdowns
8491 ± 6% -21.3% 6683 interrupts.CPU12.CAL:Function_call_interrupts
4723 ± 20% -77.2% 1077 ± 17% interrupts.CPU12.TLB:TLB_shootdowns
8403 ± 5% -20.3% 6700 ± 2% interrupts.CPU13.CAL:Function_call_interrupts
4557 ± 19% -74.2% 1175 ± 22% interrupts.CPU13.TLB:TLB_shootdowns
8459 ± 4% -18.6% 6884 interrupts.CPU14.CAL:Function_call_interrupts
4559 ± 18% -69.8% 1376 ± 13% interrupts.CPU14.TLB:TLB_shootdowns
8305 ± 7% -17.7% 6833 ± 2% interrupts.CPU15.CAL:Function_call_interrupts
4261 ± 25% -67.6% 1382 ± 24% interrupts.CPU15.TLB:TLB_shootdowns
8277 ± 5% -19.1% 6696 ± 3% interrupts.CPU16.CAL:Function_call_interrupts
4214 ± 22% -69.6% 1282 ± 8% interrupts.CPU16.TLB:TLB_shootdowns
8258 ± 5% -18.9% 6694 ± 3% interrupts.CPU17.CAL:Function_call_interrupts
4461 ± 19% -74.1% 1155 ± 21% interrupts.CPU17.TLB:TLB_shootdowns
8457 ± 6% -20.6% 6717 interrupts.CPU18.CAL:Function_call_interrupts
4889 ± 34% +60.0% 7822 interrupts.CPU18.NMI:Non-maskable_interrupts
4889 ± 34% +60.0% 7822 interrupts.CPU18.PMI:Performance_monitoring_interrupts
4731 ± 22% -77.2% 1078 ± 10% interrupts.CPU18.TLB:TLB_shootdowns
8160 ± 5% -18.1% 6684 interrupts.CPU19.CAL:Function_call_interrupts
4311 ± 20% -74.2% 1114 ± 13% interrupts.CPU19.TLB:TLB_shootdowns
8464 ± 2% -18.2% 6927 ± 3% interrupts.CPU2.CAL:Function_call_interrupts
4938 ± 14% -70.5% 1457 ± 18% interrupts.CPU2.TLB:TLB_shootdowns
8358 ± 6% -19.7% 6715 ± 3% interrupts.CPU20.CAL:Function_call_interrupts
4567 ± 24% -74.6% 1160 ± 35% interrupts.CPU20.TLB:TLB_shootdowns
8460 ± 4% -22.3% 6577 ± 2% interrupts.CPU21.CAL:Function_call_interrupts
4514 ± 18% -76.0% 1084 ± 22% interrupts.CPU21.TLB:TLB_shootdowns
6677 ± 6% +19.6% 7988 ± 9% interrupts.CPU22.CAL:Function_call_interrupts
1288 ± 14% +209.1% 3983 ± 35% interrupts.CPU22.TLB:TLB_shootdowns
6751 ± 2% +24.0% 8370 ± 9% interrupts.CPU23.CAL:Function_call_interrupts
1037 ± 29% +323.0% 4388 ± 36% interrupts.CPU23.TLB:TLB_shootdowns
6844 +20.6% 8251 ± 9% interrupts.CPU24.CAL:Function_call_interrupts
1205 ± 17% +229.2% 3967 ± 40% interrupts.CPU24.TLB:TLB_shootdowns
6880 +21.9% 8389 ± 7% interrupts.CPU25.CAL:Function_call_interrupts
1228 ± 19% +245.2% 4240 ± 35% interrupts.CPU25.TLB:TLB_shootdowns
6494 ± 8% +25.1% 8123 ± 9% interrupts.CPU26.CAL:Function_call_interrupts
1141 ± 13% +262.5% 4139 ± 32% interrupts.CPU26.TLB:TLB_shootdowns
6852 +19.2% 8166 ± 7% interrupts.CPU27.CAL:Function_call_interrupts
1298 ± 8% +197.1% 3857 ± 31% interrupts.CPU27.TLB:TLB_shootdowns
6563 ± 6% +25.2% 8214 ± 8% interrupts.CPU28.CAL:Function_call_interrupts
1176 ± 8% +237.1% 3964 ± 33% interrupts.CPU28.TLB:TLB_shootdowns
6842 ± 2% +21.4% 8308 ± 8% interrupts.CPU29.CAL:Function_call_interrupts
1271 ± 11% +223.8% 4118 ± 33% interrupts.CPU29.TLB:TLB_shootdowns
8418 ± 3% -21.1% 6643 ± 2% interrupts.CPU3.CAL:Function_call_interrupts
4677 ± 11% -75.1% 1164 ± 16% interrupts.CPU3.TLB:TLB_shootdowns
6798 ± 3% +21.8% 8284 ± 7% interrupts.CPU30.CAL:Function_call_interrupts
1219 ± 12% +236.3% 4102 ± 30% interrupts.CPU30.TLB:TLB_shootdowns
6503 ± 4% +25.9% 8186 ± 6% interrupts.CPU31.CAL:Function_call_interrupts
1046 ± 15% +289.1% 4072 ± 32% interrupts.CPU31.TLB:TLB_shootdowns
6949 ± 3% +17.2% 8141 ± 8% interrupts.CPU32.CAL:Function_call_interrupts
1241 ± 23% +210.6% 3854 ± 34% interrupts.CPU32.TLB:TLB_shootdowns
1487 ± 26% +161.6% 3889 ± 46% interrupts.CPU33.TLB:TLB_shootdowns
1710 ± 44% +140.1% 4105 ± 36% interrupts.CPU34.TLB:TLB_shootdowns
6957 ± 2% +15.2% 8012 ± 9% interrupts.CPU35.CAL:Function_call_interrupts
1165 ± 8% +223.1% 3765 ± 38% interrupts.CPU35.TLB:TLB_shootdowns
1423 ± 24% +173.4% 3892 ± 33% interrupts.CPU36.TLB:TLB_shootdowns
1279 ± 29% +224.2% 4148 ± 39% interrupts.CPU37.TLB:TLB_shootdowns
1301 ± 20% +226.1% 4244 ± 35% interrupts.CPU38.TLB:TLB_shootdowns
6906 ± 2% +18.5% 8181 ± 8% interrupts.CPU39.CAL:Function_call_interrupts
368828 ± 20% +96.2% 723710 ± 7% interrupts.CPU39.RES:Rescheduling_interrupts
1438 ± 12% +174.8% 3951 ± 33% interrupts.CPU39.TLB:TLB_shootdowns
8399 ± 5% -19.2% 6788 ± 2% interrupts.CPU4.CAL:Function_call_interrupts
4567 ± 18% -72.7% 1245 ± 28% interrupts.CPU4.TLB:TLB_shootdowns
6895 +22.4% 8439 ± 9% interrupts.CPU40.CAL:Function_call_interrupts
1233 ± 11% +247.1% 4280 ± 36% interrupts.CPU40.TLB:TLB_shootdowns
6819 ± 2% +21.3% 8274 ± 9% interrupts.CPU41.CAL:Function_call_interrupts
1260 ± 14% +207.1% 3871 ± 38% interrupts.CPU41.TLB:TLB_shootdowns
1301 ± 9% +204.7% 3963 ± 36% interrupts.CPU42.TLB:TLB_shootdowns
6721 ± 3% +22.3% 8221 ± 7% interrupts.CPU43.CAL:Function_call_interrupts
1237 ± 19% +224.8% 4017 ± 35% interrupts.CPU43.TLB:TLB_shootdowns
8422 ± 8% -22.7% 6506 ± 5% interrupts.CPU44.CAL:Function_call_interrupts
15261375 ± 7% -7.8% 14064176 interrupts.CPU44.LOC:Local_timer_interrupts
4376 ± 25% -75.7% 1063 ± 26% interrupts.CPU44.TLB:TLB_shootdowns
8451 ± 5% -23.7% 6448 ± 6% interrupts.CPU45.CAL:Function_call_interrupts
4351 ± 18% -74.9% 1094 ± 12% interrupts.CPU45.TLB:TLB_shootdowns
8705 ± 6% -21.2% 6860 ± 2% interrupts.CPU46.CAL:Function_call_interrupts
4787 ± 20% -69.5% 1462 ± 16% interrupts.CPU46.TLB:TLB_shootdowns
8334 ± 3% -18.9% 6763 interrupts.CPU47.CAL:Function_call_interrupts
4126 ± 10% -71.3% 1186 ± 18% interrupts.CPU47.TLB:TLB_shootdowns
8578 ± 4% -21.7% 6713 interrupts.CPU48.CAL:Function_call_interrupts
4520 ± 15% -74.5% 1154 ± 23% interrupts.CPU48.TLB:TLB_shootdowns
8450 ± 8% -18.8% 6863 ± 3% interrupts.CPU49.CAL:Function_call_interrupts
4494 ± 24% -66.5% 1505 ± 22% interrupts.CPU49.TLB:TLB_shootdowns
8307 ± 4% -18.0% 6816 ± 2% interrupts.CPU5.CAL:Function_call_interrupts
7845 -37.4% 4908 ± 34% interrupts.CPU5.NMI:Non-maskable_interrupts
7845 -37.4% 4908 ± 34% interrupts.CPU5.PMI:Performance_monitoring_interrupts
4429 ± 17% -69.8% 1339 ± 20% interrupts.CPU5.TLB:TLB_shootdowns
8444 ± 4% -21.7% 6613 interrupts.CPU50.CAL:Function_call_interrupts
4282 ± 16% -76.0% 1029 ± 17% interrupts.CPU50.TLB:TLB_shootdowns
8750 ± 6% -22.2% 6803 interrupts.CPU51.CAL:Function_call_interrupts
4755 ± 20% -73.1% 1277 ± 15% interrupts.CPU51.TLB:TLB_shootdowns
8478 ± 6% -20.2% 6766 ± 2% interrupts.CPU52.CAL:Function_call_interrupts
4337 ± 20% -72.6% 1190 ± 22% interrupts.CPU52.TLB:TLB_shootdowns
8604 ± 7% -21.5% 6750 ± 4% interrupts.CPU53.CAL:Function_call_interrupts
4649 ± 17% -74.3% 1193 ± 23% interrupts.CPU53.TLB:TLB_shootdowns
8317 ± 9% -19.4% 6706 ± 3% interrupts.CPU54.CAL:Function_call_interrupts
4372 ± 12% -75.4% 1076 ± 29% interrupts.CPU54.TLB:TLB_shootdowns
8439 ± 3% -18.5% 6876 interrupts.CPU55.CAL:Function_call_interrupts
4415 ± 11% -71.6% 1254 ± 17% interrupts.CPU55.TLB:TLB_shootdowns
8869 ± 6% -22.6% 6864 ± 2% interrupts.CPU56.CAL:Function_call_interrupts
517594 ± 13% +123.3% 1155539 ± 25% interrupts.CPU56.RES:Rescheduling_interrupts
5085 ± 22% -74.9% 1278 ± 17% interrupts.CPU56.TLB:TLB_shootdowns
8682 ± 4% -21.7% 6796 ± 2% interrupts.CPU57.CAL:Function_call_interrupts
4808 ± 19% -74.1% 1243 ± 13% interrupts.CPU57.TLB:TLB_shootdowns
8626 ± 7% -21.8% 6746 ± 2% interrupts.CPU58.CAL:Function_call_interrupts
4816 ± 20% -79.1% 1007 ± 28% interrupts.CPU58.TLB:TLB_shootdowns
8759 ± 8% -20.3% 6984 interrupts.CPU59.CAL:Function_call_interrupts
4840 ± 22% -70.6% 1423 ± 14% interrupts.CPU59.TLB:TLB_shootdowns
8167 ± 6% -19.0% 6615 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
4129 ± 21% -75.4% 1017 ± 24% interrupts.CPU6.TLB:TLB_shootdowns
8910 ± 4% -23.7% 6794 ± 3% interrupts.CPU60.CAL:Function_call_interrupts
5017 ± 12% -77.8% 1113 ± 15% interrupts.CPU60.TLB:TLB_shootdowns
8689 ± 5% -21.6% 6808 interrupts.CPU61.CAL:Function_call_interrupts
4715 ± 20% -77.6% 1055 ± 19% interrupts.CPU61.TLB:TLB_shootdowns
8574 ± 4% -18.9% 6953 ± 2% interrupts.CPU62.CAL:Function_call_interrupts
4494 ± 17% -72.3% 1244 ± 7% interrupts.CPU62.TLB:TLB_shootdowns
8865 ± 3% -25.4% 6614 ± 7% interrupts.CPU63.CAL:Function_call_interrupts
4870 ± 12% -76.8% 1130 ± 12% interrupts.CPU63.TLB:TLB_shootdowns
8724 ± 7% -20.2% 6958 ± 3% interrupts.CPU64.CAL:Function_call_interrupts
4736 ± 16% -72.6% 1295 ± 7% interrupts.CPU64.TLB:TLB_shootdowns
8717 ± 6% -23.7% 6653 ± 4% interrupts.CPU65.CAL:Function_call_interrupts
4626 ± 19% -76.5% 1087 ± 21% interrupts.CPU65.TLB:TLB_shootdowns
6671 +24.7% 8318 ± 9% interrupts.CPU66.CAL:Function_call_interrupts
1091 ± 8% +249.8% 3819 ± 32% interrupts.CPU66.TLB:TLB_shootdowns
6795 ± 2% +26.9% 8624 ± 9% interrupts.CPU67.CAL:Function_call_interrupts
1098 ± 24% +299.5% 4388 ± 39% interrupts.CPU67.TLB:TLB_shootdowns
6704 ± 5% +25.8% 8431 ± 8% interrupts.CPU68.CAL:Function_call_interrupts
1214 ± 15% +236.1% 4083 ± 36% interrupts.CPU68.TLB:TLB_shootdowns
1049 ± 15% +326.2% 4473 ± 33% interrupts.CPU69.TLB:TLB_shootdowns
8554 ± 6% -19.6% 6874 ± 2% interrupts.CPU7.CAL:Function_call_interrupts
4753 ± 19% -71.7% 1344 ± 16% interrupts.CPU7.TLB:TLB_shootdowns
1298 ± 13% +227.4% 4249 ± 38% interrupts.CPU70.TLB:TLB_shootdowns
6976 +19.9% 8362 ± 7% interrupts.CPU71.CAL:Function_call_interrupts
1232748 ± 18% -57.3% 525824 ± 33% interrupts.CPU71.RES:Rescheduling_interrupts
1253 ± 9% +211.8% 3909 ± 31% interrupts.CPU71.TLB:TLB_shootdowns
1316 ± 22% +188.7% 3800 ± 33% interrupts.CPU72.TLB:TLB_shootdowns
6665 ± 5% +26.5% 8429 ± 8% interrupts.CPU73.CAL:Function_call_interrupts
1202 ± 13% +234.1% 4017 ± 37% interrupts.CPU73.TLB:TLB_shootdowns
6639 ± 5% +27.0% 8434 ± 8% interrupts.CPU74.CAL:Function_call_interrupts
1079 ± 16% +269.4% 3986 ± 36% interrupts.CPU74.TLB:TLB_shootdowns
1055 ± 12% +301.2% 4235 ± 34% interrupts.CPU75.TLB:TLB_shootdowns
7011 ± 3% +21.6% 8522 ± 8% interrupts.CPU76.CAL:Function_call_interrupts
1223 ± 13% +230.7% 4047 ± 35% interrupts.CPU76.TLB:TLB_shootdowns
6886 ± 7% +25.6% 8652 ± 10% interrupts.CPU77.CAL:Function_call_interrupts
1316 ± 16% +229.8% 4339 ± 36% interrupts.CPU77.TLB:TLB_shootdowns
7343 ± 5% +19.1% 8743 ± 9% interrupts.CPU78.CAL:Function_call_interrupts
1699 ± 37% +144.4% 4152 ± 31% interrupts.CPU78.TLB:TLB_shootdowns
7136 ± 4% +21.4% 8666 ± 9% interrupts.CPU79.CAL:Function_call_interrupts
1094 ± 13% +276.2% 4118 ± 34% interrupts.CPU79.TLB:TLB_shootdowns
8531 ± 5% -19.5% 6869 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
4764 ± 16% -71.0% 1382 ± 14% interrupts.CPU8.TLB:TLB_shootdowns
1387 ± 29% +181.8% 3910 ± 38% interrupts.CPU80.TLB:TLB_shootdowns
1114 ± 30% +259.7% 4007 ± 36% interrupts.CPU81.TLB:TLB_shootdowns
7012 +23.9% 8685 ± 8% interrupts.CPU82.CAL:Function_call_interrupts
1274 ± 12% +255.4% 4530 ± 27% interrupts.CPU82.TLB:TLB_shootdowns
6971 ± 3% +23.8% 8628 ± 9% interrupts.CPU83.CAL:Function_call_interrupts
1156 ± 18% +260.1% 4162 ± 34% interrupts.CPU83.TLB:TLB_shootdowns
7030 ± 4% +21.0% 8504 ± 8% interrupts.CPU84.CAL:Function_call_interrupts
1286 ± 23% +224.0% 4166 ± 31% interrupts.CPU84.TLB:TLB_shootdowns
7059 +22.4% 8644 ± 11% interrupts.CPU85.CAL:Function_call_interrupts
1421 ± 22% +208.8% 4388 ± 33% interrupts.CPU85.TLB:TLB_shootdowns
7018 ± 2% +22.8% 8615 ± 9% interrupts.CPU86.CAL:Function_call_interrupts
1258 ± 8% +231.1% 4167 ± 34% interrupts.CPU86.TLB:TLB_shootdowns
1338 ± 3% +217.9% 4255 ± 31% interrupts.CPU87.TLB:TLB_shootdowns
8376 ± 4% -19.0% 6787 ± 2% interrupts.CPU9.CAL:Function_call_interrupts
4466 ± 17% -71.2% 1286 ± 18% interrupts.CPU9.TLB:TLB_shootdowns
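For readers post-processing reports like this one: each stat line above follows lkp's comparison format — baseline value with an optional "± N%" stddev, percent change, patched value with an optional "± M%" stddev, then the metric name. A minimal sketch (assuming exactly that line shape; not part of the lkp tooling itself) to parse such lines:

```python
import re

# Matches an lkp comparison line of the form:
#   "<base> [± N%] <+/-X.Y%> <patched> [± M%] <metric>"
# The "± N%" stddev fields are optional on either side.
LINE_RE = re.compile(
    r"^\s*([\d.]+)\s*(?:±\s*\d+%)?\s+"   # baseline value, optional stddev
    r"([+-][\d.]+)%\s+"                   # percent change
    r"([\d.]+)\s*(?:±\s*\d+%)?\s+"       # patched value, optional stddev
    r"(\S+)\s*$"                          # metric name
)

def parse_lkp_line(line):
    """Return (base, change_pct, patched, metric), or None if no match."""
    m = LINE_RE.match(line)
    if not m:
        return None
    base, change, patched, metric = m.groups()
    return float(base), float(change), float(patched), metric

# Example: a TLB-shootdown line from the table above
print(parse_lkp_line(
    "4559 ± 20% -73.7% 1199 ± 15% interrupts.CPU1.TLB:TLB_shootdowns"
))
```

This makes it easy to, say, group the TLB_shootdowns deltas by CPU and see the shift between the two CPU ranges in the table.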
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Oliver Sang
View attachment "config-5.4.0-rc1-00010-g0b0695f2b34a4" of type "text/plain" (204540 bytes)
View attachment "job-script" of type "text/plain" (7149 bytes)
View attachment "job.yaml" of type "text/plain" (4742 bytes)
View attachment "reproduce" of type "text/plain" (254 bytes)