Message-ID: <20191018082354.GA9296@shao2-debian>
Date: Fri, 18 Oct 2019 16:23:54 +0800
From: kernel test robot <rong.a.chen@...el.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Dan Williams <dan.j.williams@...el.com>,
Robert Barror <robert.barror@...el.com>,
Seema Pandit <seema.pandit@...el.com>, Jan Kara <jack@...e.cz>,
LKML <linux-kernel@...r.kernel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
lkp@...ts.01.org
Subject: [dax] 23c84eb783: fio.write_bw_MBps -61.6% regression
Greetings,
FYI, we noticed a -61.6% regression of fio.write_bw_MBps due to commit:
commit: 23c84eb7837514e16d79ed6d849b13745e0ce688 ("dax: Fix missed wakeup with PMD faults")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: fio-basic
on test machine: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory
with the following parameters:
disk: 2pmem
fs: ext4
mount_option: dax
runtime: 200s
nr_task: 50%
time_based: tb
rw: write
bs: 2M
ioengine: mmap
test_size: 200G
cpufreq_governor: performance
ucode: 0x43
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
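For reference, the parameters above correspond roughly to the stand-alone fio invocation below. This is only a sketch, not the attached job file: the target directory /fs/pmem0 and numjobs=36 (nr_task=50% of the 72 CPU threads) are assumptions.
# Sketch of the write workload as a direct fio run.
# Assumes an ext4 filesystem mounted with "-o dax" at /fs/pmem0 (hypothetical path).
fio --name=dax-write --directory=/fs/pmem0 \
    --ioengine=mmap --rw=write --bs=2M \
    --size=200G --time_based --runtime=200s \
    --numjobs=36 --group_reporting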
In addition, the commit also has a significant impact on the following tests:
+------------------+---------------------------------------------------------------------------+
| testcase: change | fio-basic: boot-time.boot -5.0% improvement |
| test machine | 96 threads Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz with 128G memory |
| test parameters | bs=128B |
| | cpufreq_governor=performance |
| | fs=ext4 |
| | iodepth=1 |
| | ioengine=libpmem |
| | mode=fsdax |
| | mount_option=dax |
| | nr_dev=2 |
| | nr_task=50% |
| | runtime=200s |
| | rw=randread |
| | test_size=200G |
| | time_based=tb |
| | ucode=0x400001c |
+------------------+---------------------------------------------------------------------------+
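The second configuration maps to a similar fio sketch; again the pmem directory and numjobs=48 (50% of 96 threads) are assumptions rather than values from the job file, and mode=fsdax refers to the pmem namespace mode, not a fio option.
# Hedged approximation of the libpmem randread job on the 96-thread machine.
fio --name=dax-randread --directory=/fs/pmem0 \
    --ioengine=libpmem --rw=randread --bs=128 \
    --iodepth=1 --size=200G --time_based --runtime=200s \
    --numjobs=48 --group_reporting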
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@...el.com>
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
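For an A/B comparison between the parent and the suspect commit with the same job file, a hedged sketch is below; you still need to build, install, and boot each kernel before invoking lkp.
git checkout 40cdc60ac16a42eb    # parent: device-dax: Add a 'resource' attribute
# build, install and boot this kernel, then:
bin/lkp run job.yaml
git checkout 23c84eb7837514e16d79ed6d849    # dax: Fix missed wakeup with PMD faults
# build, install and boot this kernel, then:
bin/lkp run job.yaml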
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/mount_option/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based/ucode:
2M/gcc-7/performance/2pmem/ext4/mmap/x86_64-rhel-7.6/dax/50%/debian-x86_64-2019-09-23.cgz/200s/write/lkp-hsw-ep4/200G/fio-basic/tb/0x43
commit:
40cdc60ac1 ("device-dax: Add a 'resource' attribute")
23c84eb783 ("dax: Fix missed wakeup with PMD faults")
40cdc60ac16a42eb 23c84eb7837514e16d79ed6d849
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
0:4 34% 1:4 perf-profile.children.cycles-pp.error_entry
0:4 27% 1:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
0.16 ± 39% -0.2 0.00 fio.latency_1000us%
0.01 -0.0 0.00 fio.latency_100ms%
2.96 ± 19% +96.7 99.67 fio.latency_10ms%
0.01 +0.3 0.31 ± 85% fio.latency_20ms%
6.56 ±119% -6.6 0.00 ±173% fio.latency_2ms%
90.07 ± 8% -90.1 0.02 ± 65% fio.latency_4ms%
0.01 ± 32% -0.0 0.00 fio.latency_500us%
0.24 ± 91% -0.2 0.00 fio.latency_750us%
9254 +8.4% 10034 ± 3% fio.time.involuntary_context_switches
27426669 ± 5% +1577.6% 4.601e+08 ± 3% fio.time.minor_page_faults
223.22 ± 3% +2777.2% 6422 fio.time.system_time
7000 -88.4% 815.37 ± 3% fio.time.user_time
22696 ± 2% +8.2% 24558 fio.time.voluntary_context_switches
2337311 ± 7% -61.6% 898367 ± 3% fio.workload
23372 ± 7% -61.6% 8983 ± 3% fio.write_bw_MBps
3129344 ± 9% +172.3% 8519680 ± 5% fio.write_clat_90%_us
3342336 ± 12% +158.8% 8650752 ± 5% fio.write_clat_95%_us
5423104 ± 5% +64.5% 8921088 ± 6% fio.write_clat_99%_us
2598402 ± 8% +204.9% 7923118 ± 3% fio.write_clat_mean_us
11686 ± 7% -61.6% 4491 ± 3% fio.write_iops
25.58 ± 4% -8.9% 23.31 ± 3% turbostat.RAMWatt
1.81 ± 17% +663.8% 13.80 ± 7% iostat.cpu.system
13.24 ± 6% -87.6% 1.64 ± 9% iostat.cpu.user
1.55 +12.3 13.84 ± 7% mpstat.cpu.all.sys%
13.28 ± 6% -11.6 1.64 ± 9% mpstat.cpu.all.usr%
12.75 ± 6% -92.2% 1.00 vmstat.cpu.us
1376693 ± 2% +30.3% 1793986 vmstat.memory.cache
20143 ± 17% -42.1% 11662 ± 19% numa-numastat.node0.other_node
819804 ± 56% +114.3% 1757036 ± 26% numa-numastat.node1.local_node
822287 ± 56% +115.1% 1768778 ± 25% numa-numastat.node1.numa_hit
2484 ±114% +372.7% 11743 ± 19% numa-numastat.node1.other_node
14211 ± 2% +7.8% 15320 ± 3% slabinfo.anon_vma.active_objs
14211 ± 2% +7.8% 15320 ± 3% slabinfo.anon_vma.num_objs
1054 ± 6% +31.2% 1383 ± 3% slabinfo.dquot.active_objs
1054 ± 6% +31.2% 1383 ± 3% slabinfo.dquot.num_objs
48335 +1300.3% 676851 slabinfo.radix_tree_node.active_objs
876.25 +1281.1% 12101 slabinfo.radix_tree_node.active_slabs
49104 +1280.2% 677724 slabinfo.radix_tree_node.num_objs
876.25 +1281.1% 12101 slabinfo.radix_tree_node.num_slabs
43205 ± 13% +416.6% 223216 ± 13% numa-meminfo.node0.KReclaimable
16761 ± 50% +1452.6% 260246 ± 12% numa-meminfo.node0.PageTables
43205 ± 13% +416.6% 223216 ± 13% numa-meminfo.node0.SReclaimable
110769 ± 10% +162.0% 290199 ± 9% numa-meminfo.node0.Slab
45449 ± 13% +395.0% 224981 ± 11% numa-meminfo.node1.KReclaimable
2890524 ± 12% +21.5% 3510888 ± 8% numa-meminfo.node1.MemUsed
15068 ± 59% +1643.0% 262640 ± 11% numa-meminfo.node1.PageTables
45449 ± 13% +395.0% 224981 ± 11% numa-meminfo.node1.SReclaimable
102666 ± 11% +179.0% 286430 ± 7% numa-meminfo.node1.Slab
4134 ± 50% +1467.1% 64786 ± 12% numa-vmstat.node0.nr_page_table_pages
10797 ± 13% +416.9% 55809 ± 13% numa-vmstat.node0.nr_slab_reclaimable
1192200 ± 25% +55.0% 1848501 ± 24% numa-vmstat.node0.numa_hit
1170786 ± 26% +56.8% 1835674 ± 25% numa-vmstat.node0.numa_local
21413 ± 17% -40.1% 12826 ± 20% numa-vmstat.node0.numa_other
3705 ± 59% +1664.2% 65376 ± 11% numa-vmstat.node1.nr_page_table_pages
11359 ± 13% +395.2% 56254 ± 11% numa-vmstat.node1.nr_slab_reclaimable
1120301 ± 26% +70.6% 1911563 ± 22% numa-vmstat.node1.numa_hit
964396 ± 31% +81.1% 1746483 ± 24% numa-vmstat.node1.numa_local
1237 ± 14% -64.3% 441.70 ± 8% sched_debug.cfs_rq:/.exec_clock.min
0.61 ± 9% -13.4% 0.53 ± 5% sched_debug.cfs_rq:/.nr_spread_over.avg
263007 ± 2% -14.6% 224575 ± 9% sched_debug.cfs_rq:/.spread0.max
129.63 ± 6% +28.1% 166.05 ± 10% sched_debug.cfs_rq:/.util_est_enqueued.avg
135782 ± 13% +53.0% 207732 ± 5% sched_debug.cpu.avg_idle.stddev
6.89 ± 28% -70.1% 2.06 ± 4% sched_debug.cpu.clock.stddev
6.89 ± 28% -70.1% 2.06 ± 4% sched_debug.cpu.clock_task.stddev
63.41 ± 5% +41.8% 89.93 ± 10% sched_debug.cpu.sched_goidle.min
5.58 -100.0% 0.00 sched_debug.rt_rq:/.rt_runtime.stddev
977053 ± 6% +10.0% 1074569 ± 7% meminfo.Active
960159 ± 6% +10.1% 1057333 ± 7% meminfo.Active(anon)
1563938 ± 5% +8.5% 1697481 ± 7% meminfo.Committed_AS
88651 +405.8% 448400 meminfo.KReclaimable
6183366 +15.5% 7141630 ± 2% meminfo.Memused
31890 ± 10% +1536.3% 521829 ± 11% meminfo.PageTables
88651 +405.8% 448400 meminfo.SReclaimable
217155 ± 14% +27.2% 276305 ± 10% meminfo.Shmem
213432 +170.3% 576831 meminfo.Slab
11428 ± 6% +54.2% 17619 ± 9% meminfo.max_used_kB
239005 ± 6% +10.5% 264053 ± 7% proc-vmstat.nr_active_anon
1070522 -2.2% 1046455 proc-vmstat.nr_dirty_background_threshold
2143663 -2.2% 2095470 proc-vmstat.nr_dirty_threshold
323153 ± 2% +4.6% 338160 ± 2% proc-vmstat.nr_file_pages
10789746 -2.2% 10548655 proc-vmstat.nr_free_pages
7928 ± 10% +1551.5% 130940 ± 11% proc-vmstat.nr_page_table_pages
54133 ± 14% +27.7% 69115 ± 10% proc-vmstat.nr_shmem
22163 +405.7% 112074 proc-vmstat.nr_slab_reclaimable
239005 ± 6% +10.5% 264053 ± 7% proc-vmstat.nr_zone_active_anon
9727 ± 20% +40.0% 13616 ± 15% proc-vmstat.numa_hint_faults
1824734 ± 4% +90.7% 3478977 ± 2% proc-vmstat.numa_hit
1802105 ± 4% +91.8% 3455569 ± 2% proc-vmstat.numa_local
16717 ± 4% +9.4% 18289 ± 3% proc-vmstat.numa_pages_migrated
21115 ± 30% +66.7% 35207 ± 14% proc-vmstat.pgactivate
2462163 ± 3% +71.2% 4214638 ± 2% proc-vmstat.pgalloc_normal
29317033 ± 4% +1475.5% 4.619e+08 ± 3% proc-vmstat.pgfault
2429361 ± 3% +71.3% 4162407 ± 2% proc-vmstat.pgfree
16717 ± 4% +9.4% 18289 ± 3% proc-vmstat.pgmigrate_success
48830 ± 5% +1739.8% 898367 ± 3% proc-vmstat.thp_fault_fallback
121.28 ± 17% -65.8% 41.51 ± 19% perf-stat.i.MPKI
2.456e+08 ± 16% +601.4% 1.723e+09 ± 28% perf-stat.i.branch-instructions
8210890 ± 26% +32.5% 10879790 ± 11% perf-stat.i.branch-misses
20.87 ± 9% -56.1% 9.15 ± 18% perf-stat.i.cpi
5.80 ± 10% +56.8% 9.10 ± 6% perf-stat.i.cpu-migrations
2176777 ± 20% +252.3% 7669422 ± 28% perf-stat.i.dTLB-load-misses
5.84e+08 ± 21% +237.0% 1.968e+09 ± 27% perf-stat.i.dTLB-loads
0.27 ± 15% +0.9 1.16 ± 8% perf-stat.i.dTLB-store-miss-rate%
2813218 ± 37% +1052.5% 32423092 ± 22% perf-stat.i.dTLB-store-misses
7.056e+08 ± 20% +59.5% 1.125e+09 ± 13% perf-stat.i.dTLB-stores
855886 ± 27% +303.5% 3453236 ± 19% perf-stat.i.iTLB-load-misses
1.174e+09 ± 16% +500.9% 7.055e+09 ± 28% perf-stat.i.instructions
1284 ± 14% +27.0% 1631 ± 9% perf-stat.i.instructions-per-iTLB-miss
0.08 ± 10% +66.7% 0.13 ± 16% perf-stat.i.ipc
39123 ± 10% +1592.5% 662155 ± 11% perf-stat.i.minor-faults
72.94 ± 2% +9.0 81.96 perf-stat.i.node-load-miss-rate%
554017 ± 24% +1240.4% 7426193 ± 21% perf-stat.i.node-load-misses
21822653 ± 34% -61.8% 8340609 ± 21% perf-stat.i.node-loads
29884117 ± 40% -62.6% 11167522 ± 21% perf-stat.i.node-stores
39123 ± 10% +1592.5% 662155 ± 11% perf-stat.i.page-faults
171.16 ± 19% -88.7% 19.36 ± 14% perf-stat.overall.MPKI
3.44 ± 32% -2.8 0.66 ± 17% perf-stat.overall.branch-miss-rate%
31.42 ± 12% -81.9% 5.68 ± 15% perf-stat.overall.cpi
575.80 ± 12% +77.2% 1020 ± 9% perf-stat.overall.cycles-between-cache-misses
0.39 ± 20% +2.4 2.77 ± 10% perf-stat.overall.dTLB-store-miss-rate%
50.44 ± 11% +24.6 75.03 ± 4% perf-stat.overall.iTLB-load-miss-rate%
1416 ± 11% +42.1% 2013 ± 10% perf-stat.overall.instructions-per-iTLB-miss
0.03 ± 11% +458.7% 0.18 ± 15% perf-stat.overall.ipc
2.56 ± 11% +44.5 47.09 ± 3% perf-stat.overall.node-load-miss-rate%
35.55 ± 21% +14.7 50.30 ± 3% perf-stat.overall.node-store-miss-rate%
373429 ± 11% +1347.9% 5406996 ± 21% perf-stat.overall.path-length
2.466e+08 ± 16% +598.1% 1.722e+09 ± 28% perf-stat.ps.branch-instructions
8180698 ± 26% +32.9% 10868841 ± 11% perf-stat.ps.branch-misses
5.86 ± 10% +55.2% 9.10 ± 6% perf-stat.ps.cpu-migrations
2171060 ± 20% +253.0% 7663130 ± 28% perf-stat.ps.dTLB-load-misses
5.88e+08 ± 21% +234.6% 1.967e+09 ± 27% perf-stat.ps.dTLB-loads
2834083 ± 37% +1043.4% 32405891 ± 22% perf-stat.ps.dTLB-store-misses
7.078e+08 ± 20% +58.8% 1.124e+09 ± 13% perf-stat.ps.dTLB-stores
856551 ± 27% +302.8% 3450575 ± 19% perf-stat.ps.iTLB-load-misses
1.179e+09 ± 16% +498.1% 7.051e+09 ± 28% perf-stat.ps.instructions
39462 ± 9% +1576.5% 661605 ± 11% perf-stat.ps.minor-faults
558934 ± 24% +1227.8% 7421551 ± 21% perf-stat.ps.node-load-misses
22060153 ± 34% -62.2% 8335422 ± 21% perf-stat.ps.node-loads
30203853 ± 40% -63.0% 11160678 ± 21% perf-stat.ps.node-stores
39462 ± 9% +1576.5% 661605 ± 11% perf-stat.ps.page-faults
8.746e+11 ± 14% +456.8% 4.87e+12 ± 22% perf-stat.total.instructions
44055 ± 4% -62.0% 16761 ± 9% softirqs.CPU0.RCU
47182 ± 11% -63.7% 17141 ± 11% softirqs.CPU1.RCU
48879 ± 13% -65.9% 16655 ± 10% softirqs.CPU10.RCU
48582 ± 15% -66.5% 16251 ± 9% softirqs.CPU11.RCU
49768 ± 15% -66.4% 16705 ± 17% softirqs.CPU12.RCU
45350 ± 20% -61.7% 17354 ± 14% softirqs.CPU13.RCU
45643 ± 19% -62.6% 17054 ± 11% softirqs.CPU14.RCU
90257 ± 8% -15.3% 76446 ± 16% softirqs.CPU14.SCHED
42541 ± 8% -66.7% 14178 ± 10% softirqs.CPU15.RCU
40700 ± 12% -65.1% 14224 ± 13% softirqs.CPU16.RCU
90664 ± 10% -21.0% 71587 ± 12% softirqs.CPU16.SCHED
40007 ± 11% -63.1% 14762 ± 9% softirqs.CPU17.RCU
90721 ± 9% -15.2% 76913 ± 16% softirqs.CPU17.SCHED
56587 ± 7% -66.3% 19092 ± 7% softirqs.CPU18.RCU
54834 ± 3% -67.6% 17786 ± 11% softirqs.CPU19.RCU
48947 ± 9% -60.7% 19251 ± 16% softirqs.CPU2.RCU
50458 ± 20% -66.9% 16690 ± 9% softirqs.CPU20.RCU
56425 ± 4% -69.0% 17508 ± 14% softirqs.CPU21.RCU
55135 ± 7% -68.7% 17264 ± 11% softirqs.CPU22.RCU
58345 ± 7% -71.2% 16820 ± 10% softirqs.CPU23.RCU
56257 ± 7% -68.6% 17651 ± 9% softirqs.CPU24.RCU
55896 ± 4% -69.0% 17319 ± 12% softirqs.CPU25.RCU
55977 ± 5% -72.8% 15247 ± 18% softirqs.CPU26.RCU
55460 ± 6% -69.7% 16812 ± 10% softirqs.CPU27.RCU
55624 ± 5% -67.8% 17933 ± 24% softirqs.CPU28.RCU
57802 ± 6% -70.1% 17267 ± 9% softirqs.CPU29.RCU
45974 ± 14% -64.0% 16528 ± 7% softirqs.CPU3.RCU
52807 -68.2% 16815 ± 5% softirqs.CPU30.RCU
52603 ± 2% -67.2% 17251 ± 3% softirqs.CPU31.RCU
49441 ± 6% -66.6% 16520 ± 7% softirqs.CPU32.RCU
51960 -68.2% 16524 ± 6% softirqs.CPU33.RCU
52135 ± 2% -68.1% 16612 ± 10% softirqs.CPU34.RCU
39829 ± 12% -57.1% 17068 ± 7% softirqs.CPU35.RCU
57831 ± 5% -64.2% 20680 ± 25% softirqs.CPU36.RCU
58164 ± 10% -68.0% 18608 ± 14% softirqs.CPU37.RCU
50364 ± 17% -67.3% 16457 ± 14% softirqs.CPU38.RCU
55798 ± 7% -70.6% 16430 ± 12% softirqs.CPU39.RCU
45531 ± 14% -61.4% 17557 ± 9% softirqs.CPU4.RCU
53271 ± 4% -68.5% 16801 ± 7% softirqs.CPU40.RCU
53074 ± 19% -68.0% 16995 ± 9% softirqs.CPU41.RCU
57020 ± 8% -72.0% 15938 ± 7% softirqs.CPU42.RCU
57573 ± 9% -70.5% 16997 ± 7% softirqs.CPU43.RCU
54134 ± 11% -70.7% 15861 ± 11% softirqs.CPU44.RCU
53061 ± 7% -70.4% 15697 ± 10% softirqs.CPU45.RCU
49303 ± 15% -65.4% 17070 softirqs.CPU46.RCU
54340 ± 10% -66.8% 18033 ± 7% softirqs.CPU47.RCU
50379 ± 12% -65.7% 17268 ± 9% softirqs.CPU48.RCU
50569 ± 12% -65.1% 17665 ± 3% softirqs.CPU49.RCU
49170 ± 17% -66.5% 16448 ± 10% softirqs.CPU5.RCU
52741 ± 9% -68.0% 16901 softirqs.CPU50.RCU
52519 ± 14% -65.6% 18047 ± 3% softirqs.CPU51.RCU
54948 ± 8% -68.8% 17123 ± 14% softirqs.CPU52.RCU
54210 ± 7% -66.5% 18143 ± 2% softirqs.CPU53.RCU
43736 ± 15% -63.0% 16193 ± 13% softirqs.CPU54.RCU
44488 ± 13% -64.0% 16026 ± 12% softirqs.CPU55.RCU
41085 ± 19% -63.2% 15101 ± 12% softirqs.CPU56.RCU
42371 ± 15% -63.0% 15663 ± 11% softirqs.CPU57.RCU
42237 ± 17% -63.2% 15548 ± 14% softirqs.CPU58.RCU
42957 ± 15% -64.2% 15366 ± 10% softirqs.CPU59.RCU
46957 ± 17% -57.6% 19921 ± 28% softirqs.CPU6.RCU
43793 ± 13% -65.2% 15228 ± 12% softirqs.CPU60.RCU
42612 ± 14% -62.8% 15838 ± 9% softirqs.CPU61.RCU
42578 ± 14% -64.5% 15103 ± 9% softirqs.CPU62.RCU
42461 ± 16% -63.9% 15309 ± 13% softirqs.CPU63.RCU
44047 ± 14% -62.7% 16420 ± 12% softirqs.CPU64.RCU
45361 ± 16% -64.2% 16260 ± 13% softirqs.CPU65.RCU
45455 ± 16% -64.4% 16160 ± 13% softirqs.CPU66.RCU
45070 ± 14% -64.0% 16226 ± 12% softirqs.CPU67.RCU
42204 ± 14% -62.4% 15848 ± 8% softirqs.CPU68.RCU
44889 ± 17% -63.9% 16226 ± 11% softirqs.CPU69.RCU
45813 ± 11% -63.3% 16826 ± 10% softirqs.CPU7.RCU
45145 ± 17% -62.3% 17027 ± 4% softirqs.CPU70.RCU
45264 ± 16% -64.5% 16089 ± 13% softirqs.CPU71.RCU
45030 ± 11% -63.2% 16558 ± 12% softirqs.CPU8.RCU
46406 ± 11% -64.7% 16389 ± 12% softirqs.CPU9.RCU
3549210 ± 3% -66.0% 1205129 ± 8% softirqs.RCU
771.25 ± 19% -55.3% 344.75 ± 8% interrupts.CPU0.RES:Rescheduling_interrupts
51244 ± 34% -80.5% 10009 ±104% interrupts.CPU0.TRM:Thermal_event_interrupts
51247 ± 34% -80.5% 10012 ±104% interrupts.CPU1.TRM:Thermal_event_interrupts
136.00 ± 47% -96.5% 4.75 ± 79% interrupts.CPU10.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU10.TRM:Thermal_event_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU11.TRM:Thermal_event_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU12.TRM:Thermal_event_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU13.TRM:Thermal_event_interrupts
51247 ± 34% -80.5% 10012 ±104% interrupts.CPU14.TRM:Thermal_event_interrupts
463.00 ±138% -94.3% 26.50 ± 71% interrupts.CPU15.RES:Rescheduling_interrupts
51246 ± 34% -80.5% 10012 ±104% interrupts.CPU15.TRM:Thermal_event_interrupts
327.00 ±123% -94.5% 18.00 ±135% interrupts.CPU16.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU16.TRM:Thermal_event_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU17.TRM:Thermal_event_interrupts
62835 ±100% -100.0% 28.75 ±122% interrupts.CPU18.RES:Rescheduling_interrupts
5511 ±130% -99.2% 45.00 ± 58% interrupts.CPU19.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU2.TRM:Thermal_event_interrupts
6641 ± 7% -13.4% 5754 ± 7% interrupts.CPU20.CAL:Function_call_interrupts
3729 ± 64% -99.2% 30.50 ±154% interrupts.CPU20.NMI:Non-maskable_interrupts
3729 ± 64% -99.2% 30.50 ±154% interrupts.CPU20.PMI:Performance_monitoring_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU3.TRM:Thermal_event_interrupts
15800 ±161% -99.8% 29.25 ± 61% interrupts.CPU32.RES:Rescheduling_interrupts
1885 ±117% +189.4% 5456 ± 45% interrupts.CPU34.NMI:Non-maskable_interrupts
1885 ±117% +189.4% 5456 ± 45% interrupts.CPU34.PMI:Performance_monitoring_interrupts
280.25 ±111% -99.4% 1.75 ± 47% interrupts.CPU36.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU36.TRM:Thermal_event_interrupts
6358 ± 5% -40.5% 3784 ± 50% interrupts.CPU37.CAL:Function_call_interrupts
289.25 ± 87% -93.4% 19.00 ± 97% interrupts.CPU37.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU37.TRM:Thermal_event_interrupts
2175 ±109% +166.6% 5799 ± 37% interrupts.CPU38.NMI:Non-maskable_interrupts
2175 ±109% +166.6% 5799 ± 37% interrupts.CPU38.PMI:Performance_monitoring_interrupts
162.00 ± 44% -61.9% 61.75 ±138% interrupts.CPU38.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10007 ±104% interrupts.CPU38.TRM:Thermal_event_interrupts
108.00 ± 44% -85.4% 15.75 ± 89% interrupts.CPU39.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU39.TRM:Thermal_event_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU4.TRM:Thermal_event_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU40.TRM:Thermal_event_interrupts
328.00 ± 88% -96.9% 10.25 ±114% interrupts.CPU41.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU41.TRM:Thermal_event_interrupts
6616 ± 7% -10.7% 5909 ± 12% interrupts.CPU42.CAL:Function_call_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU42.TRM:Thermal_event_interrupts
1385 ±125% +343.9% 6148 ± 28% interrupts.CPU43.NMI:Non-maskable_interrupts
1385 ±125% +343.9% 6148 ± 28% interrupts.CPU43.PMI:Performance_monitoring_interrupts
118.25 ± 41% -81.8% 21.50 ±138% interrupts.CPU43.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU43.TRM:Thermal_event_interrupts
115.50 ± 63% -85.1% 17.25 ± 86% interrupts.CPU44.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU44.TRM:Thermal_event_interrupts
100.25 ± 39% -74.3% 25.75 ±119% interrupts.CPU45.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU45.TRM:Thermal_event_interrupts
51233 ± 34% -80.5% 10012 ±104% interrupts.CPU46.TRM:Thermal_event_interrupts
152.75 ± 72% -94.4% 8.50 ± 90% interrupts.CPU47.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU47.TRM:Thermal_event_interrupts
139.00 ± 58% -86.5% 18.75 ± 93% interrupts.CPU48.RES:Rescheduling_interrupts
51247 ± 34% -80.5% 10012 ±104% interrupts.CPU48.TRM:Thermal_event_interrupts
145.75 ± 58% -81.8% 26.50 ± 77% interrupts.CPU49.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU49.TRM:Thermal_event_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU5.TRM:Thermal_event_interrupts
151.25 ± 73% -84.0% 24.25 ± 93% interrupts.CPU50.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU50.TRM:Thermal_event_interrupts
51248 ± 34% -80.5% 10011 ±104% interrupts.CPU51.TRM:Thermal_event_interrupts
51239 ± 34% -80.5% 10012 ±104% interrupts.CPU52.TRM:Thermal_event_interrupts
51247 ± 34% -80.5% 10012 ±104% interrupts.CPU53.TRM:Thermal_event_interrupts
2091 ±157% -99.3% 14.75 ±102% interrupts.CPU55.RES:Rescheduling_interrupts
1078 ±125% +201.3% 3249 ± 20% interrupts.CPU56.NMI:Non-maskable_interrupts
1078 ±125% +201.3% 3249 ± 20% interrupts.CPU56.PMI:Performance_monitoring_interrupts
202.75 ± 95% -78.8% 43.00 ± 88% interrupts.CPU57.RES:Rescheduling_interrupts
440.25 ±172% +473.9% 2526 ± 43% interrupts.CPU6.NMI:Non-maskable_interrupts
440.25 ±172% +473.9% 2526 ± 43% interrupts.CPU6.PMI:Performance_monitoring_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU6.TRM:Thermal_event_interrupts
131.50 ± 42% -88.0% 15.75 ± 43% interrupts.CPU61.RES:Rescheduling_interrupts
131.75 ± 72% -78.9% 27.75 ±113% interrupts.CPU62.RES:Rescheduling_interrupts
78.00 ± 44% -51.3% 38.00 ± 34% interrupts.CPU64.RES:Rescheduling_interrupts
245.50 ± 93% -93.3% 16.50 ± 96% interrupts.CPU65.RES:Rescheduling_interrupts
196.25 ±110% -83.6% 32.25 ±105% interrupts.CPU66.RES:Rescheduling_interrupts
299.25 ±126% -93.4% 19.75 ± 91% interrupts.CPU67.RES:Rescheduling_interrupts
4947 ±154% -99.3% 34.25 ±101% interrupts.CPU68.RES:Rescheduling_interrupts
219.00 ±128% -93.7% 13.75 ± 72% interrupts.CPU69.RES:Rescheduling_interrupts
2645 ± 72% -63.2% 972.50 ±173% interrupts.CPU7.NMI:Non-maskable_interrupts
2645 ± 72% -63.2% 972.50 ±173% interrupts.CPU7.PMI:Performance_monitoring_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU7.TRM:Thermal_event_interrupts
6540 ± 6% -12.5% 5722 ± 8% interrupts.CPU70.CAL:Function_call_interrupts
3259 ± 63% -73.6% 859.75 ±156% interrupts.CPU70.NMI:Non-maskable_interrupts
3259 ± 63% -73.6% 859.75 ±156% interrupts.CPU70.PMI:Performance_monitoring_interrupts
271.75 ± 90% -91.3% 23.75 ± 69% interrupts.CPU70.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU8.TRM:Thermal_event_interrupts
5988 ±171% -99.6% 26.00 ± 76% interrupts.CPU9.RES:Rescheduling_interrupts
51248 ± 34% -80.5% 10012 ±104% interrupts.CPU9.TRM:Thermal_event_interrupts
176.00 ± 38% +70.0% 299.25 ± 16% interrupts.IWI:IRQ_work_interrupts
5216285 ± 36% -68.1% 1662283 ±128% interrupts.TRM:Thermal_event_interrupts
68.83 ± 15% -35.3 33.55 ± 17% perf-profile.calltrace.cycles-pp.secondary_startup_64
67.32 ± 14% -34.3 33.02 ± 17% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
65.71 ± 15% -33.3 32.41 ± 17% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
65.71 ± 15% -33.3 32.41 ± 17% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
65.71 ± 15% -33.3 32.41 ± 17% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
65.29 ± 15% -33.0 32.29 ± 17% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
65.26 ± 15% -33.0 32.27 ± 16% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
24.22 ± 38% -23.3 0.88 ± 8% perf-profile.calltrace.cycles-pp.get_io_u
3.13 ± 30% -2.0 1.14 ±101% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
3.13 ± 30% -2.0 1.14 ±101% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_kernel.secondary_startup_64
3.13 ± 30% -2.0 1.14 ±101% perf-profile.calltrace.cycles-pp.start_kernel.secondary_startup_64
3.10 ± 30% -2.0 1.13 ±101% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_kernel
3.10 ± 30% -2.0 1.13 ±101% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
0.00 +0.6 0.60 ± 5% perf-profile.calltrace.cycles-pp.ext4_es_lookup_extent.ext4_map_blocks.ext4_iomap_begin.dax_iomap_pte_fault.ext4_dax_huge_fault
0.00 +0.6 0.60 ± 9% perf-profile.calltrace.cycles-pp._raw_read_lock.find_next_iomem_res.walk_system_ram_range.pat_pagerange_is_ram.lookup_memtype
0.00 +0.7 0.66 ± 9% perf-profile.calltrace.cycles-pp.file_update_time.ext4_dax_huge_fault.__do_fault.__handle_mm_fault.handle_mm_fault
0.00 +0.7 0.68 ± 18% perf-profile.calltrace.cycles-pp.insert_pfn.__vm_insert_mixed.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault
0.00 +0.8 0.83 ± 8% perf-profile.calltrace.cycles-pp.ext4_map_blocks.ext4_iomap_begin.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault
0.14 ±173% +1.0 1.17 ± 9% perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
0.16 ±173% +1.4 1.56 ± 9% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.00 +2.2 2.21 ± 34% perf-profile.calltrace.cycles-pp._raw_read_lock.jbd2_transaction_committed.ext4_iomap_begin.dax_iomap_pte_fault.ext4_dax_huge_fault
0.00 +2.3 2.33 ± 33% perf-profile.calltrace.cycles-pp.ext4_journal_check_start.__ext4_journal_start_sb.ext4_iomap_begin.dax_iomap_pte_fault.ext4_dax_huge_fault
0.00 +2.4 2.35 ± 30% perf-profile.calltrace.cycles-pp.ext4_journal_check_start.__ext4_journal_start_sb.ext4_dax_huge_fault.__do_fault.__handle_mm_fault
0.00 +2.4 2.44 ± 33% perf-profile.calltrace.cycles-pp.__ext4_journal_start_sb.ext4_iomap_begin.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault
0.00 +2.4 2.44 ± 30% perf-profile.calltrace.cycles-pp.__ext4_journal_start_sb.ext4_dax_huge_fault.__do_fault.__handle_mm_fault.handle_mm_fault
0.45 ±117% +2.7 3.15 ± 8% perf-profile.calltrace.cycles-pp.find_next_iomem_res.walk_system_ram_range.pat_pagerange_is_ram.lookup_memtype.track_pfn_insert
0.46 ±116% +2.8 3.25 ± 8% perf-profile.calltrace.cycles-pp.walk_system_ram_range.pat_pagerange_is_ram.lookup_memtype.track_pfn_insert.__vm_insert_mixed
0.46 ±116% +2.8 3.28 ± 8% perf-profile.calltrace.cycles-pp.pat_pagerange_is_ram.lookup_memtype.track_pfn_insert.__vm_insert_mixed.dax_iomap_pte_fault
0.00 +3.2 3.17 ± 29% perf-profile.calltrace.cycles-pp.jbd2_journal_stop.__ext4_journal_stop.ext4_iomap_begin.dax_iomap_pte_fault.ext4_dax_huge_fault
0.00 +3.3 3.27 ± 29% perf-profile.calltrace.cycles-pp.__ext4_journal_stop.ext4_iomap_begin.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault
0.00 +3.6 3.56 ± 22% perf-profile.calltrace.cycles-pp.add_transaction_credits.start_this_handle.jbd2__journal_start.ext4_dax_huge_fault.__do_fault
0.00 +3.7 3.70 ± 26% perf-profile.calltrace.cycles-pp._raw_read_lock.start_this_handle.jbd2__journal_start.ext4_dax_huge_fault.__do_fault
0.00 +5.0 4.99 ± 25% perf-profile.calltrace.cycles-pp.jbd2_journal_stop.__ext4_journal_stop.ext4_dax_huge_fault.__do_fault.__handle_mm_fault
0.00 +5.1 5.11 ± 24% perf-profile.calltrace.cycles-pp.__ext4_journal_stop.ext4_dax_huge_fault.__do_fault.__handle_mm_fault.handle_mm_fault
0.00 +5.5 5.53 ± 33% perf-profile.calltrace.cycles-pp.jbd2_transaction_committed.ext4_iomap_begin.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault
0.00 +12.7 12.72 ± 29% perf-profile.calltrace.cycles-pp.ext4_iomap_begin.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault.__handle_mm_fault
0.00 +16.4 16.42 ± 26% perf-profile.calltrace.cycles-pp.start_this_handle.jbd2__journal_start.ext4_dax_huge_fault.__do_fault.__handle_mm_fault
0.00 +16.6 16.58 ± 25% perf-profile.calltrace.cycles-pp.jbd2__journal_start.ext4_dax_huge_fault.__do_fault.__handle_mm_fault.handle_mm_fault
0.00 +17.5 17.54 ± 46% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.lookup_memtype.track_pfn_insert.__vm_insert_mixed
0.00 +18.1 18.11 ± 44% perf-profile.calltrace.cycles-pp._raw_spin_lock.lookup_memtype.track_pfn_insert.__vm_insert_mixed.dax_iomap_pte_fault
0.51 ±117% +21.2 21.68 ± 38% perf-profile.calltrace.cycles-pp.lookup_memtype.track_pfn_insert.__vm_insert_mixed.dax_iomap_pte_fault.ext4_dax_huge_fault
0.51 ±117% +21.2 21.70 ± 38% perf-profile.calltrace.cycles-pp.track_pfn_insert.__vm_insert_mixed.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault
0.53 ±117% +21.9 22.41 ± 37% perf-profile.calltrace.cycles-pp.__vm_insert_mixed.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault.__handle_mm_fault
1.46 ± 49% +34.9 36.33 ± 15% perf-profile.calltrace.cycles-pp.dax_iomap_pte_fault.ext4_dax_huge_fault.__do_fault.__handle_mm_fault.handle_mm_fault
3.24 ± 31% +58.7 61.97 ± 8% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
3.33 ± 31% +58.8 62.18 ± 8% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
3.70 ± 29% +58.9 62.65 ± 8% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
3.78 ± 29% +59.0 62.73 ± 8% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
3.79 ± 29% +59.0 62.75 ± 8% perf-profile.calltrace.cycles-pp.page_fault
1.86 ± 41% +59.7 61.59 ± 8% perf-profile.calltrace.cycles-pp.ext4_dax_huge_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
1.89 ± 41% +59.8 61.65 ± 8% perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
68.83 ± 15% -35.3 33.55 ± 17% perf-profile.children.cycles-pp.do_idle
68.83 ± 15% -35.3 33.55 ± 17% perf-profile.children.cycles-pp.secondary_startup_64
68.83 ± 15% -35.3 33.55 ± 17% perf-profile.children.cycles-pp.cpu_startup_entry
68.39 ± 14% -35.0 33.41 ± 17% perf-profile.children.cycles-pp.cpuidle_enter
68.39 ± 14% -35.0 33.41 ± 17% perf-profile.children.cycles-pp.cpuidle_enter_state
67.32 ± 14% -34.3 33.02 ± 17% perf-profile.children.cycles-pp.intel_idle
65.71 ± 15% -33.3 32.41 ± 17% perf-profile.children.cycles-pp.start_secondary
24.27 ± 38% -23.4 0.88 ± 8% perf-profile.children.cycles-pp.get_io_u
3.13 ± 30% -2.0 1.14 ±101% perf-profile.children.cycles-pp.start_kernel
1.07 ± 39% -1.0 0.09 ± 49% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
1.06 ± 39% -1.0 0.09 ± 49% perf-profile.children.cycles-pp.do_syscall_64
1.06 ± 18% -0.8 0.27 ±125% perf-profile.children.cycles-pp.thermal_interrupt
1.03 ± 19% -0.8 0.27 ±124% perf-profile.children.cycles-pp.smp_thermal_interrupt
1.02 ± 19% -0.8 0.27 ±124% perf-profile.children.cycles-pp.intel_thermal_interrupt
0.98 ± 17% -0.7 0.24 ±125% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.95 ± 19% -0.7 0.25 ±125% perf-profile.children.cycles-pp.pkg_thermal_notify
1.02 ± 30% -0.4 0.62 ± 7% perf-profile.children.cycles-pp.apic_timer_interrupt
0.93 ± 33% -0.4 0.56 ± 7% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.40 ± 44% -0.3 0.09 ± 17% perf-profile.children.cycles-pp.dax_wake_entry
0.62 ± 24% -0.2 0.43 ± 8% perf-profile.children.cycles-pp.hrtimer_interrupt
0.27 ± 35% -0.2 0.09 ± 14% perf-profile.children.cycles-pp.menu_select
0.20 ± 20% -0.2 0.03 ±102% perf-profile.children.cycles-pp.io_serial_in
0.44 ± 28% -0.2 0.28 ± 6% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.17 ± 8% -0.1 0.04 ± 57% perf-profile.children.cycles-pp.irq_work_interrupt
0.17 ± 8% -0.1 0.04 ± 57% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.17 ± 8% -0.1 0.04 ± 57% perf-profile.children.cycles-pp.irq_work_run
0.17 ± 8% -0.1 0.04 ± 57% perf-profile.children.cycles-pp.printk
0.17 ± 8% -0.1 0.04 ± 57% perf-profile.children.cycles-pp.vprintk_emit
0.17 ± 11% -0.1 0.06 ± 60% perf-profile.children.cycles-pp.irq_work_run_list
0.17 ± 9% -0.1 0.06 ± 66% perf-profile.children.cycles-pp.uart_console_write
0.17 ± 11% -0.1 0.06 ± 67% perf-profile.children.cycles-pp.serial8250_console_write
0.17 ± 10% -0.1 0.06 ± 66% perf-profile.children.cycles-pp.wait_for_xmitr
0.17 ± 10% -0.1 0.06 ± 66% perf-profile.children.cycles-pp.serial8250_console_putchar
0.17 ± 11% -0.1 0.06 ± 64% perf-profile.children.cycles-pp.console_unlock
0.06 ± 60% +0.1 0.11 ± 11% perf-profile.children.cycles-pp.kmem_cache_alloc
0.00 +0.1 0.07 ± 23% perf-profile.children.cycles-pp.current_time
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp._cond_resched
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.__might_sleep
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.down_read
0.00 +0.1 0.08 ± 14% perf-profile.children.cycles-pp.xas_load
0.03 ±100% +0.1 0.11 ± 19% perf-profile.children.cycles-pp.ext4_data_block_valid
0.00 +0.1 0.09 ± 10% perf-profile.children.cycles-pp.up_read
0.03 ±102% +0.1 0.12 ± 11% perf-profile.children.cycles-pp.__sb_start_write
0.08 ± 19% +0.1 0.17 ± 11% perf-profile.children.cycles-pp.dax_iomap_pfn
0.00 +0.1 0.10 ± 5% perf-profile.children.cycles-pp.sync_regs
0.24 ± 31% +0.1 0.34 ± 5% perf-profile.children.cycles-pp.grab_mapping_entry
0.00 +0.1 0.10 ± 14% perf-profile.children.cycles-pp.___might_sleep
0.04 ± 57% +0.1 0.17 ± 6% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.14 ± 8% perf-profile.children.cycles-pp.xas_create
0.08 ± 30% +0.1 0.21 ± 8% perf-profile.children.cycles-pp.xas_store
0.03 ±100% +0.2 0.20 ± 15% perf-profile.children.cycles-pp.__check_block_validity
0.02 ±173% +0.2 0.22 ± 8% perf-profile.children.cycles-pp.dax_unlock_entry
0.03 ±102% +0.2 0.24 ± 9% perf-profile.children.cycles-pp.dax_insert_entry
0.01 ±173% +0.2 0.25 ± 14% perf-profile.children.cycles-pp.rbt_memtype_lookup
0.00 +0.3 0.28 ± 13% perf-profile.children.cycles-pp.ext4_meta_trans_blocks
0.07 ± 39% +0.3 0.37 ± 8% perf-profile.children.cycles-pp.next_resource
0.23 ± 21% +0.3 0.57 ± 6% perf-profile.children.cycles-pp.native_irq_return_iret
0.08 ± 12% +0.5 0.60 ± 5% perf-profile.children.cycles-pp.ext4_es_lookup_extent
0.00 +0.6 0.64 ± 20% perf-profile.children.cycles-pp.__get_locked_pte
0.01 ±173% +0.7 0.68 ± 18% perf-profile.children.cycles-pp.insert_pfn
0.13 ± 11% +0.7 0.83 ± 8% perf-profile.children.cycles-pp.ext4_map_blocks
0.40 ± 25% +0.8 1.17 ± 9% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.48 ± 24% +1.1 1.56 ± 9% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.77 ± 57% +2.5 3.23 ± 8% perf-profile.children.cycles-pp.find_next_iomem_res
0.77 ± 57% +2.5 3.26 ± 8% perf-profile.children.cycles-pp.walk_system_ram_range
0.77 ± 57% +2.5 3.29 ± 8% perf-profile.children.cycles-pp.pat_pagerange_is_ram
0.00 +3.6 3.62 ± 22% perf-profile.children.cycles-pp.add_transaction_credits
0.10 ± 15% +4.6 4.73 ± 32% perf-profile.children.cycles-pp.ext4_journal_check_start
0.10 ± 17% +4.8 4.92 ± 31% perf-profile.children.cycles-pp.__ext4_journal_start_sb
0.00 +5.5 5.54 ± 33% perf-profile.children.cycles-pp.jbd2_transaction_committed
0.29 ± 54% +6.3 6.62 ± 26% perf-profile.children.cycles-pp._raw_read_lock
0.18 ± 23% +8.1 8.29 ± 26% perf-profile.children.cycles-pp.jbd2_journal_stop
0.19 ± 21% +8.3 8.50 ± 26% perf-profile.children.cycles-pp.__ext4_journal_stop
0.26 ± 13% +12.5 12.73 ± 29% perf-profile.children.cycles-pp.ext4_iomap_begin
0.18 ± 23% +16.5 16.69 ± 26% perf-profile.children.cycles-pp.start_this_handle
0.26 ± 24% +16.6 16.87 ± 25% perf-profile.children.cycles-pp.jbd2__journal_start
0.93 ± 19% +16.8 17.78 ± 44% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.15 ± 44% +18.6 18.71 ± 44% perf-profile.children.cycles-pp._raw_spin_lock
0.91 ± 51% +20.8 21.70 ± 38% perf-profile.children.cycles-pp.lookup_memtype
0.91 ± 50% +20.8 21.71 ± 38% perf-profile.children.cycles-pp.track_pfn_insert
0.73 ± 65% +21.7 22.41 ± 37% perf-profile.children.cycles-pp.__vm_insert_mixed
1.46 ± 49% +34.9 36.34 ± 15% perf-profile.children.cycles-pp.dax_iomap_pte_fault
3.02 ± 33% +58.6 61.66 ± 8% perf-profile.children.cycles-pp.ext4_dax_huge_fault
3.29 ± 30% +58.7 61.98 ± 8% perf-profile.children.cycles-pp.__handle_mm_fault
3.39 ± 30% +58.8 62.20 ± 8% perf-profile.children.cycles-pp.handle_mm_fault
3.76 ± 28% +58.9 62.65 ± 8% perf-profile.children.cycles-pp.__do_page_fault
3.84 ± 28% +58.9 62.74 ± 8% perf-profile.children.cycles-pp.do_page_fault
3.86 ± 28% +58.9 62.76 ± 8% perf-profile.children.cycles-pp.page_fault
1.89 ± 41% +59.8 61.65 ± 8% perf-profile.children.cycles-pp.__do_fault
67.30 ± 14% -34.3 33.02 ± 17% perf-profile.self.cycles-pp.intel_idle
23.95 ± 38% -23.1 0.87 ± 9% perf-profile.self.cycles-pp.get_io_u
0.39 ± 43% -0.3 0.08 ± 15% perf-profile.self.cycles-pp.dax_wake_entry
0.14 ± 10% -0.1 0.03 ±102% perf-profile.self.cycles-pp.io_serial_in
0.07 ± 26% +0.0 0.11 ± 3% perf-profile.self.cycles-pp.__do_page_fault
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.file_update_time
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.__might_sleep
0.01 ±173% +0.1 0.08 ± 21% perf-profile.self.cycles-pp.__sb_start_write
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp.__mark_inode_dirty
0.00 +0.1 0.06 ± 17% perf-profile.self.cycles-pp.xas_load
0.01 ±173% +0.1 0.08 ± 10% perf-profile.self.cycles-pp.xas_store
0.00 +0.1 0.07 ± 10% perf-profile.self.cycles-pp.jbd2__journal_start
0.00 +0.1 0.08 ± 12% perf-profile.self.cycles-pp.up_read
0.03 ±100% +0.1 0.11 ± 19% perf-profile.self.cycles-pp.ext4_data_block_valid
0.03 ±100% +0.1 0.11 ± 14% perf-profile.self.cycles-pp.handle_mm_fault
0.00 +0.1 0.09 ± 17% perf-profile.self.cycles-pp.__check_block_validity
0.00 +0.1 0.09 ± 5% perf-profile.self.cycles-pp.sync_regs
0.00 +0.1 0.10 ± 19% perf-profile.self.cycles-pp.___might_sleep
0.01 ±173% +0.1 0.12 ± 10% perf-profile.self.cycles-pp.dax_iomap_pte_fault
0.04 ± 57% +0.1 0.17 ± 6% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.13 ± 10% perf-profile.self.cycles-pp.xas_create
0.11 ± 19% +0.1 0.24 ± 2% perf-profile.self.cycles-pp.__handle_mm_fault
0.00 +0.2 0.19 ± 32% perf-profile.self.cycles-pp.__ext4_journal_start_sb
0.03 ±100% +0.2 0.22 ± 11% perf-profile.self.cycles-pp.ext4_dax_huge_fault
0.00 +0.2 0.21 ± 26% perf-profile.self.cycles-pp.__ext4_journal_stop
0.01 ±173% +0.2 0.24 ± 13% perf-profile.self.cycles-pp.rbt_memtype_lookup
0.02 ±173% +0.3 0.28 ± 7% perf-profile.self.cycles-pp.next_resource
0.00 +0.3 0.27 ± 14% perf-profile.self.cycles-pp.ext4_meta_trans_blocks
0.09 ± 25% +0.3 0.39 ± 10% perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.23 ± 21% +0.3 0.57 ± 6% perf-profile.self.cycles-pp.native_irq_return_iret
0.00 +0.4 0.37 ± 24% perf-profile.self.cycles-pp.ext4_iomap_begin
0.06 ± 14% +0.5 0.54 ± 5% perf-profile.self.cycles-pp.ext4_es_lookup_extent
0.39 ± 25% +0.8 1.16 ± 9% perf-profile.self.cycles-pp.prepare_exit_to_usermode
0.10 ± 37% +1.0 1.15 ± 17% perf-profile.self.cycles-pp._raw_spin_lock
0.51 ± 46% +1.8 2.32 ± 10% perf-profile.self.cycles-pp.find_next_iomem_res
0.00 +3.3 3.30 ± 32% perf-profile.self.cycles-pp.jbd2_transaction_committed
0.00 +3.6 3.60 ± 22% perf-profile.self.cycles-pp.add_transaction_credits
0.07 ± 17% +4.6 4.63 ± 32% perf-profile.self.cycles-pp.ext4_journal_check_start
0.29 ± 54% +6.3 6.56 ± 26% perf-profile.self.cycles-pp._raw_read_lock
0.10 ± 21% +8.1 8.20 ± 26% perf-profile.self.cycles-pp.jbd2_journal_stop
0.05 ± 64% +9.2 9.23 ± 27% perf-profile.self.cycles-pp.start_this_handle
0.93 ± 19% +16.7 17.66 ± 44% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
fio.write_bw_MBps
35000 +-+-----------------------------------------------------------------+
| |
30000 +-+ +.. |
| .. |
| + + .+ +.. .+.. |
25000 +-+ + + + +..+. + : +. +..+.+.. |
| .. + + + .. + : ..|
20000 +-++ + .+ +.+ +..+. : + |
| +.+. + |
15000 +-+ |
| |
| |
10000 O-+O O O O O O O O O O O O O O O O O O |
| |
5000 +-+-----------------------------------------------------------------+
fio.write_iops
16000 +-+-----------------------------------------------------------------+
| : + |
14000 +-+ : + |
| : + + +.. +.. |
| + + : .. : : .. .+. |
12000 +-+ + : : : +..+ : : + +. +.. +|
| + + : : : .. : : + |
10000 +-++ : : +.+ +.. : + |
| +.+..+ +.+ |
8000 +-+ |
| |
| |
6000 +-+ |
O O O O O O O O O O O O |
4000 +-+O--O----O-----------O----O--O-----------O------------------------+
fio.write_clat_mean_us
9e+06 +-+-----------------------------------------------------------------+
| O O O O |
8e+06 O-+ O O O O O O O O O O O O |
7e+06 +-+ O O |
| |
6e+06 +-+ |
| |
5e+06 +-+ |
| |
4e+06 +-+ |
3e+06 +-+ +.+..+.. +.+.. +..+.+.. |
| .+.. .. .. +..+.. .. .+.. .+..+.+..+..|
2e+06 +-+ + +..+..+ + +. +. |
| |
1e+06 +-+-----------------------------------------------------------------+
fio.write_clat_90__us
1e+07 +-+-----------------------------------------------------------------+
| O O O O |
9e+06 +-+ O O O O O O |
8e+06 O-+ O O O O O O O O |
| |
7e+06 +-+ |
| |
6e+06 +-+ |
| |
5e+06 +-+ + |
4e+06 +-+ +. + : |
| + +..+.. +.+.. +..+ : |
3e+06 +-++.. + .. +..+.. .. : .+.. .+.. .+..+..|
|. + +.. .+ + +. +. + |
2e+06 +-+-----------------------------------------------------------------+
fio.write_clat_95__us
1e+07 +-+-----------------------------------------------------------------+
| O O O O |
9e+06 +-+ O O O O O O O O O |
8e+06 O-+ O O O O O |
| |
7e+06 +-+ |
| |
6e+06 +-+ |
| + |
5e+06 +-+ + : : |
4e+06 +-+ : + +.. : : |
| +.. : +..+.. +.+.. + + : .+..|
3e+06 +-+ : .. +..+.. + : .+.. .+.. .+. |
| + +..+..+ + +. +. + |
2e+06 +-+-----------------------------------------------------------------+
fio.latency_10ms_
100 O-+O--O--O--O--O--O-O--O--O--O--O--O--O--O--O--O--O--O----------------+
90 +-+ |
| |
80 +-+ |
70 +-+ |
| |
60 +-+ |
50 +-+ |
40 +-+ |
| |
30 +-+ |
20 +-+ |
| +.. +.. |
10 +-+ .. .+.. .. .+..|
0 +-+-------------------------------------------------------------------+
fio.workload
3.5e+06 +-+---------------------------------------------------------------+
| |
3e+06 +-+ +.. |
| + |
| + + .+ +.. .+.. |
2.5e+06 +-+ + + + +..+. + : + +..+..+ |
| .. : + + + + : + ..|
2e+06 +-++ : .+ +..+ +.+.. : + |
| +..+. + |
1.5e+06 +-+ |
| |
| |
1e+06 O-+O O O O O O O O O O O O O O O O O O |
| |
500000 +-+---------------------------------------------------------------+
fio.time.user_time
8000 +-+------------------------------------------------------------------+
| |
7000 +-++..+..+..+.+..+..+..+..+..+..+..+.+..+..+..+..+..+..+..+.+..+..+..|
6000 +-+ |
| |
5000 +-+ |
| |
4000 +-+ |
| |
3000 +-+ |
2000 +-+ |
| |
1000 O-+ O O |
| O O O O O O O O O O O O O O O O |
0 +-+------------------------------------------------------------------+
fio.time.system_time
7000 +-+------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O |
6000 +-+ |
| |
5000 +-+ |
| |
4000 +-+ |
| |
3000 +-+ |
| |
2000 +-+ |
| |
1000 +-+ |
| |
0 +-+------------------------------------------------------------------+
fio.time.minor_page_faults
5e+08 +-+---------------------------------O----------O------------------+
4.5e+08 O-+ O O O O O O O O O O O O |
| O O O O |
4e+08 +-+ |
3.5e+08 +-+ |
| |
3e+08 +-+ |
2.5e+08 +-+ |
2e+08 +-+ |
| |
1.5e+08 +-+ |
1e+08 +-+ |
| |
5e+07 +-++..+.+..+..+..+.+..+..+..+.+..+..+..+.+..+..+..+.+..+..+..+.+..|
0 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-csl-2sp2: 96 threads Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz with 128G memory
=========================================================================================
bs/compiler/cpufreq_governor/fs/iodepth/ioengine/kconfig/mode/mount_option/nr_dev/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based/ucode:
128B/gcc-7/performance/ext4/1/libpmem/x86_64-rhel-7.6/fsdax/dax/2/50%/debian-x86_64-2019-05-14.cgz/200s/randread/lkp-csl-2sp2/200G/fio-basic/tb/0x400001c
commit:
40cdc60ac1 ("device-dax: Add a 'resource' attribute")
23c84eb783 ("dax: Fix missed wakeup with PMD faults")
40cdc60ac16a42eb 23c84eb7837514e16d79ed6d849
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :3 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
1:4 -25% :3 dmesg.WARNING:stack_recursion
1:4 -25% :3 kmsg.Hardware_Error]:DIMM_location:not_present.DMI_handle
1:4 -25% :3 kmsg.Hardware_Error]:Error#,type:recoverable
1:4 -25% :3 kmsg.Hardware_Error]:Hardware_error_from_APEI_Generic_Hardware_Error_Source
1:4 -25% :3 kmsg.Hardware_Error]:error_status
1:4 -25% :3 kmsg.Hardware_Error]:error_type:#,unknown
1:4 -25% :3 kmsg.Hardware_Error]:event_severity:recoverable
1:4 -25% :3 kmsg.Hardware_Error]:node:#card:#module
1:4 -25% :3 kmsg.Hardware_Error]:physical_address
1:4 -25% :3 kmsg.Hardware_Error]:physical_address_mask
1:4 -25% :3 kmsg.Hardware_Error]:section_type:memory_error
1:4 -25% :3 kmsg.Memory_failure:#:recovery_action_for_dax_page:Failed
%stddev %change %stddev
\ | \
0.01 -0.0 0.00 fio.latency_1000us%
0.01 -0.0 0.00 fio.latency_100ms%
0.23 +0.5 0.68 ± 44% fio.latency_10us%
0.12 +2.4 2.54 ± 54% fio.latency_20us%
0.17 +0.4 0.52 ± 40% fio.latency_2us%
644.00 +2359.2% 15837 ± 65% fio.read_clat_99%_us
136.98 +483.3% 798.93 ± 60% fio.read_clat_mean_us
13708 -72.9% 3720 ± 48% fio.read_clat_stddev
271811 +19245.5% 52583088 fio.time.minor_page_faults
196.58 +342.0% 868.90 ± 16% fio.time.system_time
9420 -5.8% 8877 fio.time.user_time
27149 -16.1% 22781 fio.time.voluntary_context_switches
3448 -1.8% 3385 ndctl.time.maximum_resident_set_size
162.00 -11.1% 144.00 ndctl.time.percent_of_cpu_this_job_got
5.773e+08 -28.0% 4.158e+08 ± 35% cpuidle.C6.time
2.85 -0.8 2.02 ± 36% turbostat.C6%
0.20 -0.2 0.02 ± 87% mpstat.cpu.all.soft%
1.43 +3.0 4.42 ± 15% mpstat.cpu.all.sys%
143827 +203.7% 436747 ± 23% numa-numastat.node0.local_node
174846 +161.7% 457502 ± 19% numa-numastat.node0.numa_hit
40.52 -5.0% 38.49 boot-time.boot
25.83 -1.5% 25.44 boot-time.dhcp
3439 -7.1% 3195 boot-time.idle
46.00 -6.5% 43.00 vmstat.cpu.us
1882442 +25.7% 2366566 vmstat.memory.cache
1723 -4.0% 1654 ± 2% vmstat.system.cs
192529 +2.0% 196426 vmstat.system.in
91733 +493.2% 544132 meminfo.KReclaimable
2854102 +30.6% 3726561 meminfo.Memused
9789 +3989.2% 400292 meminfo.PageTables
91733 +493.2% 544132 meminfo.SReclaimable
261954 +173.8% 717201 meminfo.Slab
15548 +24.6% 19376 meminfo.max_used_kB
12428 +1036.9% 141294 ±127% numa-meminfo.node1.Inactive
12424 +1035.7% 141097 ±127% numa-meminfo.node1.Inactive(anon)
35975 +439.8% 194177 ±110% numa-meminfo.node1.KReclaimable
1232 +11028.1% 137098 ±135% numa-meminfo.node1.PageTables
35975 +439.8% 194177 ±110% numa-meminfo.node1.SReclaimable
71264 +9.2% 77846 numa-meminfo.node1.SUnreclaim
172476 +59.0% 274160 ± 31% numa-meminfo.node1.Shmem
107240 +153.7% 272024 ± 78% numa-meminfo.node1.Slab
137.00 -67.2% 45.00 ± 85% numa-vmstat.node0.nr_inactive_file
137.00 -67.2% 45.00 ± 85% numa-vmstat.node0.nr_zone_inactive_file
1.00 +1.8e+07% 176769 ±141% numa-vmstat.node1.nr_dirtied
2973 +1082.6% 35157 ±127% numa-vmstat.node1.nr_inactive_anon
309.00 +10989.3% 34266 ±135% numa-vmstat.node1.nr_page_table_pages
43146 +58.9% 68561 ± 31% numa-vmstat.node1.nr_shmem
8997 +440.5% 48625 ±110% numa-vmstat.node1.nr_slab_reclaimable
17815 +9.2% 19459 numa-vmstat.node1.nr_slab_unreclaimable
1.00 +1.8e+07% 176763 ±141% numa-vmstat.node1.nr_written
2973 +1082.6% 35157 ±127% numa-vmstat.node1.nr_zone_inactive_anon
152151 +4.6% 159190 proc-vmstat.nr_active_anon
6294 -3.6% 6065 proc-vmstat.nr_active_file
451092 +1.7% 458890 proc-vmstat.nr_file_pages
138.00 -31.6% 94.33 ± 2% proc-vmstat.nr_inactive_file
2440 +4007.0% 100212 proc-vmstat.nr_page_table_pages
145339 +5.6% 153452 ± 2% proc-vmstat.nr_shmem
22930 +493.8% 136157 proc-vmstat.nr_slab_reclaimable
152151 +4.6% 159190 proc-vmstat.nr_zone_active_anon
6294 -3.6% 6065 proc-vmstat.nr_zone_active_file
138.00 -31.6% 94.33 ± 2% proc-vmstat.nr_zone_inactive_file
25239 -26.5% 18547 proc-vmstat.numa_hint_faults
10148 -68.5% 3191 ± 4% proc-vmstat.numa_hint_faults_local
770739 +32.1% 1018202 proc-vmstat.numa_hit
739614 +33.5% 987031 proc-vmstat.numa_local
45092 -17.1% 37382 ± 16% proc-vmstat.numa_pages_migrated
844874 +41.6% 1196713 proc-vmstat.pgalloc_normal
836660 +6253.9% 53160425 proc-vmstat.pgfault
740872 +46.6% 1085990 proc-vmstat.pgfree
45092 -17.1% 37382 ± 16% proc-vmstat.pgmigrate_success
48.00 +2.1e+05% 102432 proc-vmstat.thp_fault_fallback
0.90 +0.6 1.48 ± 21% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
0.90 +0.6 1.48 ± 21% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_kernel.secondary_startup_64
0.90 +0.6 1.48 ± 21% perf-profile.calltrace.cycles-pp.start_kernel.secondary_startup_64
0.89 +0.6 1.47 ± 22% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_kernel
0.89 +0.6 1.47 ± 21% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
1.98 -1.4 0.58 ±119% perf-profile.children.cycles-pp.axmap_next_free
0.00 +0.1 0.07 ± 25% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.11 +0.1 0.19 ± 31% perf-profile.children.cycles-pp.scheduler_tick
0.00 +0.1 0.09 ± 32% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.19 +0.1 0.30 ± 30% perf-profile.children.cycles-pp.update_process_times
0.19 +0.1 0.31 ± 30% perf-profile.children.cycles-pp.tick_sched_handle
0.05 +0.1 0.19 ± 52% perf-profile.children.cycles-pp.irq_exit
0.11 +0.1 0.25 ± 37% perf-profile.children.cycles-pp.clockevents_program_event
0.00 +0.1 0.15 ± 53% perf-profile.children.cycles-pp.__softirqentry_text_start
0.23 +0.2 0.42 ± 31% perf-profile.children.cycles-pp.tick_sched_timer
0.00 +0.2 0.19 ± 92% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.05 +0.2 0.25 ± 78% perf-profile.children.cycles-pp.menu_select
0.29 +0.2 0.50 ± 30% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.08 +0.2 0.30 ± 45% perf-profile.children.cycles-pp.ktime_get
0.46 +0.4 0.88 ± 24% perf-profile.children.cycles-pp.hrtimer_interrupt
0.90 +0.6 1.48 ± 21% perf-profile.children.cycles-pp.start_kernel
0.52 +0.6 1.13 ± 31% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.53 +0.7 1.19 ± 31% perf-profile.children.cycles-pp.apic_timer_interrupt
1.97 -1.4 0.57 ±119% perf-profile.self.cycles-pp.axmap_next_free
0.00 +0.1 0.08 ± 31% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.07 +0.2 0.27 ± 46% perf-profile.self.cycles-pp.ktime_get
1595 +25.0% 1994 ± 14% slabinfo.UNIX.active_objs
1595 +25.0% 1994 ± 14% slabinfo.UNIX.num_objs
168.00 -66.7% 56.00 slabinfo.btrfs_extent_map.active_objs
168.00 -66.7% 56.00 slabinfo.btrfs_extent_map.num_objs
216.00 -44.4% 120.00 ± 14% slabinfo.btrfs_path.active_objs
216.00 -44.4% 120.00 ± 14% slabinfo.btrfs_path.num_objs
10235 +9.9% 11250 ± 4% slabinfo.kmalloc-96.active_objs
420.00 +56.7% 658.00 ± 21% slabinfo.kmem_cache.active_objs
420.00 +56.7% 658.00 ± 21% slabinfo.kmem_cache.num_objs
725.00 +53.5% 1113 ± 21% slabinfo.kmem_cache_node.active_objs
768.00 +50.0% 1152 ± 20% slabinfo.kmem_cache_node.num_objs
588.00 +26.2% 742.00 ± 5% slabinfo.mnt_cache.active_objs
588.00 +26.2% 742.00 ± 5% slabinfo.mnt_cache.num_objs
372.00 -33.3% 248.00 ± 20% slabinfo.numa_policy.active_objs
372.00 -33.3% 248.00 ± 20% slabinfo.numa_policy.num_objs
34276 +2309.9% 826003 slabinfo.radix_tree_node.active_objs
613.00 +2306.9% 14754 slabinfo.radix_tree_node.active_slabs
34348 +2305.6% 826283 slabinfo.radix_tree_node.num_objs
613.00 +2306.9% 14754 slabinfo.radix_tree_node.num_slabs
647.00 -21.4% 508.33 slabinfo.skbuff_ext_cache.active_objs
761.00 -25.8% 565.00 ± 9% slabinfo.skbuff_ext_cache.num_objs
260.00 +47.6% 383.67 ± 13% slabinfo.skbuff_fclone_cache.active_objs
260.00 +47.6% 383.67 ± 13% slabinfo.skbuff_fclone_cache.num_objs
3103 +19.7% 3713 ± 13% slabinfo.sock_inode_cache.active_objs
3103 +19.7% 3713 ± 13% slabinfo.sock_inode_cache.num_objs
2015 +16.7% 2351 ± 10% slabinfo.task_group.active_objs
2015 +16.7% 2351 ± 10% slabinfo.task_group.num_objs
10871 +15.3% 12537 ± 7% slabinfo.vmap_area.active_objs
10897 +15.1% 12547 ± 7% slabinfo.vmap_area.num_objs
306.00 -50.0% 153.00 ± 27% slabinfo.xfrm_dst_cache.active_objs
306.00 -50.0% 153.00 ± 27% slabinfo.xfrm_dst_cache.num_objs
2.10 +30.9% 2.75 ± 2% perf-stat.i.MPKI
1667 -3.8% 1603 ± 2% perf-stat.i.context-switches
27.21 -46.7% 14.50 ± 53% perf-stat.i.cpu-migrations
0.12 +0.1 0.20 ± 2% perf-stat.i.dTLB-load-miss-rate%
0.00 +0.0 0.02 ± 17% perf-stat.i.dTLB-store-miss-rate%
54955 +834.8% 513715 ± 5% perf-stat.i.dTLB-store-misses
27.37 +4.4 31.74 ± 7% perf-stat.i.iTLB-load-miss-rate%
1702300 +57.8% 2685706 ± 8% perf-stat.i.iTLB-load-misses
2688 +8874.4% 241268 perf-stat.i.minor-faults
3.28 +24.7 27.94 ± 3% perf-stat.i.node-load-miss-rate%
187966 +4476.0% 8601404 ± 73% perf-stat.i.node-load-misses
81.25 -3.7 77.54 ± 2% perf-stat.i.node-store-miss-rate%
59369 +964.1% 631752 ± 3% perf-stat.i.node-store-misses
14987 +269.7% 55401 ± 12% perf-stat.i.node-stores
2688 +8875.0% 241285 perf-stat.i.page-faults
1.85 +45.0% 2.69 perf-stat.overall.MPKI
0.12 +0.1 0.18 ± 23% perf-stat.overall.branch-miss-rate%
0.13 +0.1 0.21 ± 3% perf-stat.overall.dTLB-load-miss-rate%
0.00 +0.0 0.02 ± 46% perf-stat.overall.dTLB-store-miss-rate%
27.25 +9.2 36.44 ± 6% perf-stat.overall.iTLB-load-miss-rate%
33626 -59.6% 13600 ± 65% perf-stat.overall.instructions-per-iTLB-miss
0.88 +26.7 27.61 ± 3% perf-stat.overall.node-load-miss-rate%
79.84 +12.0 91.87 perf-stat.overall.node-store-miss-rate%
2059 +13.8% 2343 ± 6% perf-stat.overall.path-length
1658 -4.0% 1592 ± 2% perf-stat.ps.context-switches
27.07 -46.7% 14.43 ± 53% perf-stat.ps.cpu-migrations
54587 +828.6% 506869 ± 5% perf-stat.ps.dTLB-store-misses
1693240 +57.2% 2661964 ± 8% perf-stat.ps.iTLB-load-misses
2675 +8794.5% 237936 perf-stat.ps.minor-faults
187014 +4466.7% 8540397 ± 73% perf-stat.ps.node-load-misses
59071 +954.4% 622854 ± 3% perf-stat.ps.node-store-misses
14913 +268.6% 54979 ± 12% perf-stat.ps.node-stores
2675 +8795.1% 237953 perf-stat.ps.page-faults
102.00 -100.0% 0.00 interrupts.349:PCI-MSI.288768-edge.ahci[0000:00:11.5]
165.00 +136.8% 390.67 ± 39% interrupts.70:PCI-MSI.70258694-edge.eth2-TxRx-5
104.00 +25.6% 130.67 ± 11% interrupts.71:PCI-MSI.70258695-edge.eth2-TxRx-6
2445 -54.7% 1106 ± 21% interrupts.CPU0.RES:Rescheduling_interrupts
161.00 +6865.4% 11214 ±113% interrupts.CPU1.RES:Rescheduling_interrupts
302.00 -89.0% 33.33 ± 62% interrupts.CPU11.TLB:TLB_shootdowns
102.00 -100.0% 0.00 interrupts.CPU17.349:PCI-MSI.288768-edge.ahci[0000:00:11.5]
203855 -95.0% 10261 ±136% interrupts.CPU2.RES:Rescheduling_interrupts
6063 -97.9% 127.00 ±121% interrupts.CPU3.RES:Rescheduling_interrupts
34.00 +1193.1% 439.67 ± 58% interrupts.CPU31.TLB:TLB_shootdowns
2221 +12.7% 2503 ± 7% interrupts.CPU34.CAL:Function_call_interrupts
28.00 +1463.1% 437.67 ± 58% interrupts.CPU35.TLB:TLB_shootdowns
14.00 +3040.5% 439.67 ± 60% interrupts.CPU36.TLB:TLB_shootdowns
8.00 +5387.5% 439.00 ± 58% interrupts.CPU37.TLB:TLB_shootdowns
32.00 +1314.6% 452.67 ± 57% interrupts.CPU39.TLB:TLB_shootdowns
4.00 +9491.7% 383.67 ±133% interrupts.CPU40.RES:Rescheduling_interrupts
8.00 +5425.0% 442.00 ± 55% interrupts.CPU40.TLB:TLB_shootdowns
33.00 +1187.9% 425.00 ± 61% interrupts.CPU42.TLB:TLB_shootdowns
13.00 +3294.9% 441.33 ± 61% interrupts.CPU43.TLB:TLB_shootdowns
4.00 +11241.7% 453.67 ± 55% interrupts.CPU45.TLB:TLB_shootdowns
29.00 +732.2% 241.33 ±111% interrupts.CPU47.TLB:TLB_shootdowns
3347 -99.8% 7.00 ±101% interrupts.CPU53.RES:Rescheduling_interrupts
34.00 +278.4% 128.67 ± 90% interrupts.CPU66.TLB:TLB_shootdowns
2444 +16.7% 2851 ± 10% interrupts.CPU69.CAL:Function_call_interrupts
902.00 +196.6% 2675 ± 4% interrupts.CPU73.CAL:Function_call_interrupts
165.00 +136.8% 390.67 ± 39% interrupts.CPU76.70:PCI-MSI.70258694-edge.eth2-TxRx-5
104.00 +25.6% 130.67 ± 11% interrupts.CPU77.71:PCI-MSI.70258695-edge.eth2-TxRx-6
159.00 -75.7% 38.67 ± 77% interrupts.CPU77.TLB:TLB_shootdowns
13.00 +1069.2% 152.00 ± 93% interrupts.CPU82.TLB:TLB_shootdowns
406.00 -91.6% 34.00 ±111% interrupts.CPU84.RES:Rescheduling_interrupts
8701 -27.5% 6311 ± 27% interrupts.CPU87.NMI:Non-maskable_interrupts
8701 -27.5% 6311 ± 27% interrupts.CPU87.PMI:Performance_monitoring_interrupts
35.00 +175.2% 96.33 ± 18% interrupts.CPU94.TLB:TLB_shootdowns
218391 -74.9% 54875 ± 49% interrupts.RES:Rescheduling_interrupts
4882 +131.5% 11302 ± 22% interrupts.TLB:TLB_shootdowns
66.91 -100.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.avg
6423 -100.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.max
652.16 -100.0% 0.00 ± 3% sched_debug.cfs_rq:/.MIN_vruntime.stddev
55837 +15.8% 64652 ± 2% sched_debug.cfs_rq:/.exec_clock.avg
704.76 -75.9% 169.91 ± 43% sched_debug.cfs_rq:/.exec_clock.min
508473 +16.6% 592912 ± 4% sched_debug.cfs_rq:/.load.avg
507.39 +17.0% 593.51 ± 3% sched_debug.cfs_rq:/.load_avg.avg
66.91 -100.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.avg
6423 -100.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.max
652.16 -100.0% 0.00 ± 3% sched_debug.cfs_rq:/.max_vruntime.stddev
23657 -23.5% 18098 ± 19% sched_debug.cfs_rq:/.min_vruntime.min
0.52 +16.8% 0.61 ± 3% sched_debug.cfs_rq:/.nr_running.avg
0.14 -97.2% 0.00 ±141% sched_debug.cfs_rq:/.nr_spread_over.avg
2.25 -88.9% 0.25 ±141% sched_debug.cfs_rq:/.nr_spread_over.max
0.47 -93.2% 0.03 ±141% sched_debug.cfs_rq:/.nr_spread_over.stddev
493.76 +16.6% 575.63 ± 4% sched_debug.cfs_rq:/.runnable_load_avg.avg
508431 +16.6% 592847 ± 4% sched_debug.cfs_rq:/.runnable_weight.avg
-4735 +574.4% -31934 sched_debug.cfs_rq:/.spread0.min
571.86 +16.0% 663.09 ± 2% sched_debug.cfs_rq:/.util_avg.avg
870.00 -41.2% 511.50 sched_debug.cfs_rq:/.util_est_enqueued.max
203.82 -21.5% 160.02 sched_debug.cfs_rq:/.util_est_enqueued.stddev
16015309 -94.7% 850358 sched_debug.cpu.avg_idle.avg
87047806 -98.8% 1021851 ± 3% sched_debug.cpu.avg_idle.max
76953 -45.3% 42066 ± 45% sched_debug.cpu.avg_idle.min
16507466 -98.7% 211475 ± 6% sched_debug.cpu.avg_idle.stddev
5298421 -90.6% 500000 sched_debug.cpu.max_idle_balance_cost.avg
14677282 -96.6% 500000 sched_debug.cpu.max_idle_balance_cost.max
3245193 -100.0% 0.00 sched_debug.cpu.max_idle_balance_cost.stddev
1714934 +19.6% 2050299 sched_debug.cpu.nr_load_updates.max
161991 +19.2% 193066 sched_debug.cpu.nr_load_updates.stddev
0.01 +55.6% 0.01 ± 10% sched_debug.cpu.nr_uninterruptible.avg
76.50 -10.5% 68.50 ± 4% sched_debug.cpu.sched_count.min
5702 -18.9% 4624 ± 18% sched_debug.cpu.sched_count.stddev
3.75 +120.0% 8.25 ± 7% sched_debug.cpu.sched_goidle.min
2853 -19.5% 2297 ± 18% sched_debug.cpu.sched_goidle.stddev
2047 -27.1% 1493 ± 11% sched_debug.cpu.ttwu_count.stddev
375.59 -10.0% 338.00 ± 2% sched_debug.cpu.ttwu_local.avg
1582 -22.5% 1227 ± 20% sched_debug.cpu.ttwu_local.stddev
0.06 -100.0% 0.00 sched_debug.rt_rq:/.rt_time.avg
5.48 -100.0% 0.00 sched_debug.rt_rq:/.rt_time.max
0.56 -100.0% 0.00 sched_debug.rt_rq:/.rt_time.stddev
5717 +378.1% 27333 ± 52% softirqs.CPU1.SCHED
3743 +442.4% 20300 ± 55% softirqs.CPU10.SCHED
4362 +423.1% 22815 ± 57% softirqs.CPU13.SCHED
75628 +37.5% 104002 ± 13% softirqs.CPU13.TIMER
32742 -38.4% 20170 ± 54% softirqs.CPU14.SCHED
101974 -17.7% 83966 ± 14% softirqs.CPU14.TIMER
118827 -79.7% 24125 ± 58% softirqs.CPU2.SCHED
3816 +424.2% 20004 ± 50% softirqs.CPU22.SCHED
5141 +454.7% 28516 ±106% softirqs.CPU24.SCHED
88797 +16.4% 103367 ± 10% softirqs.CPU24.TIMER
3859 +253.0% 13620 ± 78% softirqs.CPU25.SCHED
75033 +11.9% 83976 ± 2% softirqs.CPU25.TIMER
54332 -13.7% 46876 ± 8% softirqs.CPU26.RCU
75701 +11.5% 84403 softirqs.CPU26.TIMER
56074 -16.6% 46760 ± 8% softirqs.CPU28.RCU
54473 -13.3% 47249 ± 8% softirqs.CPU29.RCU
75145 +11.9% 84056 softirqs.CPU29.TIMER
396697 -78.5% 85352 ± 11% softirqs.CPU3.TIMER
54396 -13.8% 46904 ± 8% softirqs.CPU31.RCU
3981 +243.9% 13689 ± 76% softirqs.CPU31.SCHED
74166 +12.4% 83331 ± 2% softirqs.CPU31.TIMER
4085 +268.8% 15066 ± 59% softirqs.CPU32.SCHED
71541 +19.5% 85502 ± 3% softirqs.CPU32.TIMER
3704 +236.4% 12458 ± 84% softirqs.CPU36.SCHED
72133 +31.5% 94841 ± 13% softirqs.CPU36.TIMER
3864 +214.6% 12155 ± 87% softirqs.CPU38.SCHED
3681 +242.4% 12603 ± 82% softirqs.CPU40.SCHED
73540 +27.2% 93570 ± 15% softirqs.CPU40.TIMER
3850 +221.7% 12384 ± 84% softirqs.CPU41.SCHED
71418 +14.0% 81439 softirqs.CPU41.TIMER
27325 -54.9% 12322 ± 85% softirqs.CPU42.SCHED
71599 +14.1% 81690 ± 2% softirqs.CPU43.TIMER
43146 +23.0% 53051 ± 10% softirqs.CPU44.RCU
27047 -53.7% 12521 ± 80% softirqs.CPU45.SCHED
5194 +177.7% 14424 ± 69% softirqs.CPU46.SCHED
73602 +15.5% 84977 softirqs.CPU46.TIMER
30007 +18.4% 35534 ± 7% softirqs.CPU47.SCHED
92763 +26.5% 117385 ± 9% softirqs.CPU47.TIMER
95514 -13.0% 83067 softirqs.CPU49.TIMER
3598 +235.9% 12084 ± 94% softirqs.CPU50.SCHED
75948 +10.7% 84080 softirqs.CPU50.TIMER
52806 -11.7% 46652 ± 10% softirqs.CPU51.RCU
1677858 -94.8% 87918 ± 7% softirqs.CPU51.TIMER
75905 +14.2% 86687 ± 3% softirqs.CPU52.TIMER
18597 +165.5% 49378 ± 12% softirqs.CPU53.RCU
29446 -57.5% 12500 ± 94% softirqs.CPU53.SCHED
3847 +361.6% 17759 ± 56% softirqs.CPU55.SCHED
71400 +936.0% 739692 ±124% softirqs.CPU55.TIMER
4055 +209.3% 12541 ± 89% softirqs.CPU56.SCHED
74122 +13.1% 83797 ± 3% softirqs.CPU56.TIMER
3799 +239.6% 12902 ± 86% softirqs.CPU59.SCHED
73830 +17.6% 86816 ± 5% softirqs.CPU59.TIMER
3789 +591.6% 26204 ± 64% softirqs.CPU6.SCHED
74036 +18.5% 87699 ± 4% softirqs.CPU62.TIMER
3617 +232.4% 12023 ± 94% softirqs.CPU63.SCHED
73615 +11.0% 81705 softirqs.CPU63.TIMER
3638 +230.5% 12024 ± 94% softirqs.CPU64.SCHED
70890 +17.7% 83403 ± 3% softirqs.CPU64.TIMER
73302 +14.6% 83983 softirqs.CPU65.TIMER
71888 +15.5% 83048 softirqs.CPU66.TIMER
73388 +14.0% 83666 softirqs.CPU67.TIMER
72650 +16.0% 84271 softirqs.CPU68.TIMER
73528 +14.3% 84039 softirqs.CPU69.TIMER
43111 +10.7% 47716 ± 3% softirqs.CPU7.RCU
76154 +14.7% 87336 ± 5% softirqs.CPU71.TIMER
95281 -13.2% 82745 ± 12% softirqs.CPU73.TIMER
4453 +372.3% 21032 ± 49% softirqs.CPU78.SCHED
71517 +13.7% 81342 ± 3% softirqs.CPU78.TIMER
51125 -10.7% 45668 softirqs.CPU79.RCU
53450 -11.9% 47089 ± 3% softirqs.CPU83.RCU
3545 +468.8% 20164 ± 57% softirqs.CPU83.SCHED
3576 +461.9% 20092 ± 57% softirqs.CPU85.SCHED
3878 +428.1% 20478 ± 53% softirqs.CPU9.SCHED
109714 -25.7% 81558 ± 9% softirqs.CPU9.TIMER
3651 +605.4% 25755 ± 12% softirqs.CPU90.SCHED
70706 +19.3% 84361 ± 2% softirqs.CPU90.TIMER
3731 +431.3% 19821 ± 57% softirqs.CPU92.SCHED
3657 +453.0% 20224 ± 55% softirqs.CPU93.SCHED
13076 +60.5% 20987 ± 14% softirqs.NET_RX
1605010 +9.1% 1750317 softirqs.SCHED
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
View attachment "config-5.2.0-rc5-00002-g23c84eb783751" of type "text/plain" (196384 bytes)
View attachment "job-script" of type "text/plain" (8146 bytes)
View attachment "job.yaml" of type "text/plain" (5769 bytes)
View attachment "reproduce" of type "text/plain" (840 bytes)