Message-ID: <20200627082522.GJ5535@shao2-debian>
Date: Sat, 27 Jun 2020 16:25:22 +0800
From: kernel test robot <rong.a.chen@...el.com>
To: David Rientjes <rientjes@...gle.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Michal Hocko <mhocko@...e.com>, Mel Gorman <mgorman@...e.de>,
Vlastimil Babka <vbabka@...e.cz>,
Stefan Priebe - Profihost AG <s.priebe@...fihost.ag>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org
Subject: [mm, page_alloc] b39d0ee263: vm-scalability.throughput 26.4% improvement
Greetings,
FYI, we noticed a 26.4% improvement of vm-scalability.throughput due to commit:
commit: b39d0ee2632d2f4fb180e8e4eba33736283f23de ("mm, page_alloc: avoid expensive reclaim when compaction may not succeed")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: vm-scalability
on test machine: 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory
with the following parameters:
runtime: 300
thp_enabled: always
thp_defrag: always
nr_task: 8
nr_ssd: 1
test: swap-w-rand-mt
cpufreq_governor: performance
ucode: 0x2000065
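For reference, the `thp_enabled: always` / `thp_defrag: always` parameters above presumably correspond to the standard transparent-hugepage sysfs knobs documented in the kernel admin-guide; a minimal sketch (paths assumed, and actually writing them requires root):

```shell
# Sketch only: thp_enabled=always / thp_defrag=always in the job file are
# assumed to map onto these sysfs knobs; writing needs root privileges.
THP=/sys/kernel/mm/transparent_hugepage
done_count=0
for knob in enabled defrag; do
  if [ -w "$THP/$knob" ]; then
    echo always > "$THP/$knob"                      # select "always" mode
    echo "set $knob=always"
  else
    echo "skip $knob (sysfs knob not writable here)" # e.g. not root, or no THP
  fi
  done_count=$((done_count + 1))
done
```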
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
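The comparison tables that follow report, for each metric, the mean under the base and patched commits, with a ± %stddev where run-to-run variance is notable; the %change column is just the relative difference of the two means. A minimal sketch of that arithmetic, using values from the vm-scalability rows below:

```python
def pct_change(before, after):
    """Relative change of the mean, as shown in the %change column."""
    return (after - before) / before * 100.0

# vm-scalability.throughput: 103213 -> 130433, reported as +26.4%
print(f"{pct_change(103213, 130433):+.1f}%")
# vm-scalability.median: 12854 -> 16402, reported as +27.6%
print(f"{pct_change(12854, 16402):+.1f}%")
```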
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_ssd/nr_task/rootfs/runtime/tbox_group/test/testcase/thp_defrag/thp_enabled/ucode:
gcc-9/performance/x86_64-rhel-7.6/1/8/debian-x86_64-20191114.cgz/300/lkp-skl-2sp7/swap-w-rand-mt/vm-scalability/always/always/0x2000065
commit:
19deb7695e ("Revert "Revert "Revert "mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask""")
b39d0ee263 ("mm, page_alloc: avoid expensive reclaim when compaction may not succeed")
19deb7695e072dea b39d0ee2632d2f4fb180e8e4eba
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 50% 2:4 dmesg.page_allocation_failure:order:#,mode:#(GFP_KERNEL|__GFP_COMP),nodemask=(null),cpuset=/,mems_allowed=
:4 144% 5:4 perf-profile.calltrace.cycles-pp.error_entry.do_access
:4 162% 6:4 perf-profile.children.cycles-pp.error_entry
:4 80% 3:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
0.85 ± 4% +141.8% 2.06 ± 6% vm-scalability.free_time
12854 ± 4% +27.6% 16402 vm-scalability.median
103213 ± 5% +26.4% 130433 vm-scalability.throughput
390.27 ± 4% -20.1% 311.66 vm-scalability.time.elapsed_time
390.27 ± 4% -20.1% 311.66 vm-scalability.time.elapsed_time.max
1282856 ± 3% +2142.7% 28770194 ± 3% vm-scalability.time.file_system_inputs
55840 ± 24% -75.6% 13642 ± 24% vm-scalability.time.involuntary_context_switches
159970 ± 3% +2147.8% 3595842 ± 3% vm-scalability.time.major_page_faults
167684 ± 2% +9973.1% 16891007 ± 5% vm-scalability.time.minor_page_faults
27.75 ± 11% +38.7% 38.50 vm-scalability.time.percent_of_cpu_this_job_got
177871 ± 2% +1929.7% 3610223 ± 3% vm-scalability.time.voluntary_context_switches
1487 +1.4% 1507 boot-time.idle
73888352 ±139% +982.4% 7.998e+08 ± 65% cpuidle.C1.time
1138951 ±143% +795.5% 10199057 ± 45% cpuidle.C1.usage
3121797 ±146% +924.7% 31989104 ± 81% cpuidle.POLL.time
127036 ± 75% +1269.3% 1739536 ± 72% cpuidle.POLL.usage
11.90 ± 7% -2.0 9.94 mpstat.cpu.all.iowait%
0.03 ± 75% +0.1 0.16 ± 5% mpstat.cpu.all.soft%
0.45 ± 10% +0.3 0.71 ± 3% mpstat.cpu.all.sys%
0.05 ± 3% +0.0 0.10 ± 3% mpstat.cpu.all.usr%
1.227e+08 ± 4% -23.3% 94052436 ± 6% meminfo.AnonHugePages
397617 ± 4% +48.5% 590655 meminfo.PageTables
84313 ± 14% -36.4% 53604 meminfo.SwapCached
2.153e+08 +13.9% 2.452e+08 meminfo.SwapFree
59.00 ± 30% +14510.2% 8620 ± 5% meminfo.Writeback
336353 ± 4% +26.4% 425057 meminfo.max_used_kB
1771 ± 2% +2492.1% 45912 ± 3% vmstat.io.bi
399107 -25.9% 295684 vmstat.io.bo
77396038 ± 4% -39.1% 47170548 ± 2% vmstat.memory.swpd
1765 ± 2% +2499.7% 45905 ± 3% vmstat.swap.si
399102 -25.9% 295678 vmstat.swap.so
3974 ± 2% +865.7% 38376 ± 3% vmstat.system.cs
62763223 -21.0% 49591522 ± 4% numa-meminfo.node0.AnonHugePages
634545 ± 2% -8.5% 580694 numa-meminfo.node0.FilePages
1522483 ± 22% +39.0% 2115718 ± 7% numa-meminfo.node0.MemFree
201764 ± 11% +75.5% 354185 ± 25% numa-meminfo.node0.PageTables
66.75 ± 16% +5839.3% 3964 ± 9% numa-meminfo.node0.Writeback
11185 +15.8% 12956 ± 11% numa-meminfo.node1.Mapped
59245 ± 6% +14.3% 67731 numa-meminfo.node1.SUnreclaim
14.50 ± 62% +21572.4% 3142 ± 7% numa-meminfo.node1.Writeback
588490 ± 14% +1444.9% 9091320 numa-numastat.node0.local_node
486779 ± 21% +1991.9% 10183025 ± 14% numa-numastat.node0.numa_foreign
597887 ± 13% +1421.7% 9098331 numa-numastat.node0.numa_hit
4033 ±113% +28572.6% 1156365 ± 68% numa-numastat.node0.numa_miss
13436 ± 83% +8558.4% 1163385 ± 69% numa-numastat.node0.other_node
287398 ± 15% +268.4% 1058708 ± 16% numa-numastat.node1.local_node
4033 ±113% +28572.6% 1156365 ± 68% numa-numastat.node1.numa_foreign
301422 ± 17% +256.6% 1074938 ± 17% numa-numastat.node1.numa_hit
486779 ± 21% +1991.9% 10183025 ± 14% numa-numastat.node1.numa_miss
500805 ± 19% +1936.6% 10199261 ± 14% numa-numastat.node1.other_node
87.61 +1.7% 89.13 iostat.cpu.idle
11.85 ± 7% -16.5% 9.89 iostat.cpu.iowait
15.40 ±173% -100.0% 0.00 iostat.sdb.avgqu-sz.max
35.02 ±173% -100.0% 0.00 iostat.sdb.await.max
25.69 ±173% -100.0% 0.00 iostat.sdb.r_await.max
0.27 ±173% -100.0% 0.00 iostat.sdb.svctm.max
35.93 ±173% -100.0% 0.00 iostat.sdb.w_await.max
205.50 ±100% +5477.0% 11460 ± 3% iostat.sdc.r/s
866.22 ±100% +5199.5% 45905 ± 3% iostat.sdc.rkB/s
22.69 ±100% +185.9% 64.86 ± 2% iostat.sdc.util
196.91 ±100% +110.0% 413.53 ± 2% iostat.sdc.w/s
82.54 ±102% +37097.0% 30704 ± 8% iostat.sdc.wrqm/s
15.30 ±173% -100.0% 0.00 iostat.sdd.avgqu-sz.max
37.20 ±173% -100.0% 0.00 iostat.sdd.await.max
26.39 ±173% -100.0% 0.00 iostat.sdd.r_await.max
0.27 ±173% -100.0% 0.00 iostat.sdd.svctm.max
37.46 ±173% -100.0% 0.00 iostat.sdd.w_await.max
467.00 ± 15% +168.0% 1251 slabinfo.biovec-128.active_objs
467.00 ± 15% +169.3% 1257 slabinfo.biovec-128.num_objs
234.00 ± 20% +215.6% 738.50 slabinfo.biovec-max.active_objs
234.00 ± 20% +226.9% 765.00 slabinfo.biovec-max.num_objs
1979 ± 9% +18.5% 2345 ± 2% slabinfo.blkdev_ioc.active_objs
1979 ± 9% +18.5% 2345 ± 2% slabinfo.blkdev_ioc.num_objs
6539 ± 33% +248.9% 22817 slabinfo.dmaengine-unmap-16.active_objs
159.50 ± 32% +248.9% 556.50 slabinfo.dmaengine-unmap-16.active_slabs
6722 ± 32% +243.6% 23099 slabinfo.dmaengine-unmap-16.num_objs
159.50 ± 32% +248.9% 556.50 slabinfo.dmaengine-unmap-16.num_slabs
904.50 ± 6% +15.5% 1044 ± 7% slabinfo.mnt_cache.active_objs
904.50 ± 6% +15.5% 1044 ± 7% slabinfo.mnt_cache.num_objs
1462 ± 15% +36.0% 1988 slabinfo.pool_workqueue.active_objs
1462 ± 15% +36.0% 1989 slabinfo.pool_workqueue.num_objs
25317 ± 14% +29.3% 32743 slabinfo.radix_tree_node.active_objs
25564 ± 14% +29.0% 32976 slabinfo.radix_tree_node.num_objs
541.25 ± 9% +19.7% 648.00 ± 7% slabinfo.skbuff_ext_cache.active_objs
306.25 ± 10% -35.3% 198.00 ± 2% slabinfo.xfrm_state.active_objs
306.25 ± 10% -35.3% 198.00 ± 2% slabinfo.xfrm_state.num_objs
30636 -20.8% 24248 ± 4% numa-vmstat.node0.nr_anon_transparent_hugepages
158653 ± 2% -8.6% 145086 numa-vmstat.node0.nr_file_pages
383593 ± 21% +31.6% 504920 ± 7% numa-vmstat.node0.nr_free_pages
1197 ± 11% -79.1% 250.50 ± 38% numa-vmstat.node0.nr_isolated_anon
50443 ± 11% +75.9% 88719 ± 25% numa-vmstat.node0.nr_page_table_pages
67929 ± 15% +2664.4% 1877847 ± 5% numa-vmstat.node0.nr_vmscan_write
15.75 ± 14% +6417.5% 1026 ± 17% numa-vmstat.node0.nr_writeback
67991 ± 15% +2660.4% 1876858 ± 5% numa-vmstat.node0.nr_written
16.25 ± 18% +6229.2% 1028 ± 17% numa-vmstat.node0.nr_zone_write_pending
176809 ± 38% +3070.4% 5605565 ± 28% numa-vmstat.node0.numa_foreign
955047 ± 10% +405.1% 4823783 numa-vmstat.node0.numa_hit
945568 ± 11% +409.4% 4816358 numa-vmstat.node0.numa_local
3222 ±120% +34593.4% 1118081 ± 70% numa-vmstat.node0.numa_miss
12701 ± 86% +8761.2% 1125506 ± 70% numa-vmstat.node0.numa_other
2832 ± 3% +18.8% 3364 ± 10% numa-vmstat.node1.nr_mapped
14811 ± 6% +14.4% 16941 numa-vmstat.node1.nr_slab_unreclaimable
28441 ± 86% +5081.8% 1473792 ± 18% numa-vmstat.node1.nr_vmscan_write
3.00 ± 62% +32716.7% 984.50 ± 14% numa-vmstat.node1.nr_writeback
28466 ± 86% +5073.9% 1472846 ± 18% numa-vmstat.node1.nr_written
4.75 ± 40% +20205.3% 964.50 ± 14% numa-vmstat.node1.nr_zone_write_pending
3222 ±120% +34593.6% 1118089 ± 70% numa-vmstat.node1.numa_foreign
632865 ± 8% +122.0% 1405129 ± 18% numa-vmstat.node1.numa_hit
456935 ± 12% +168.6% 1227541 ± 20% numa-vmstat.node1.numa_local
176892 ± 38% +3069.0% 5605676 ± 28% numa-vmstat.node1.numa_miss
352822 ± 17% +1539.1% 5783265 ± 28% numa-vmstat.node1.numa_other
2.165e+08 ± 8% +85.0% 4.005e+08 perf-stat.i.branch-instructions
3905237 ± 12% +16.5% 4550517 perf-stat.i.cache-misses
3962 ± 2% +874.2% 38608 ± 3% perf-stat.i.context-switches
3.43 ± 20% -22.4% 2.66 perf-stat.i.cpi
3.301e+09 ± 12% +45.8% 4.813e+09 perf-stat.i.cpu-cycles
75.62 +18.6% 89.65 perf-stat.i.cpu-migrations
1066 ± 7% +15.9% 1236 perf-stat.i.cycles-between-cache-misses
2.573e+08 ± 6% +96.6% 5.06e+08 ± 2% perf-stat.i.dTLB-loads
1.371e+08 ± 5% +101.6% 2.765e+08 ± 2% perf-stat.i.dTLB-stores
153829 ± 76% +118.3% 335876 ± 4% perf-stat.i.iTLB-load-misses
834267 ± 27% +34.6% 1122801 ± 6% perf-stat.i.iTLB-loads
1.043e+09 ± 7% +87.4% 1.954e+09 perf-stat.i.instructions
0.31 ± 19% +28.3% 0.40 perf-stat.i.ipc
421.94 ± 6% +2632.8% 11530 ± 3% perf-stat.i.major-faults
0.05 ± 12% +46.8% 0.07 perf-stat.i.metric.GHz
0.73 ± 20% +110.3% 1.53 ± 2% perf-stat.i.metric.K/sec
2997 +1789.4% 56632 ± 5% perf-stat.i.minor-faults
38.29 ± 11% +12.3 50.61 ± 11% perf-stat.i.node-load-miss-rate%
10.29 ± 13% +35.1 45.35 ± 4% perf-stat.i.node-store-miss-rate%
46487 ± 11% +462.8% 261642 ± 3% perf-stat.i.node-store-misses
638374 ± 5% -7.2% 592659 perf-stat.i.node-stores
3419 +1893.5% 68163 ± 3% perf-stat.i.page-faults
3.19 ± 17% -22.7% 2.46 perf-stat.overall.cpi
852.81 ± 14% +24.0% 1057 ± 2% perf-stat.overall.cycles-between-cache-misses
0.32 ± 17% +25.4% 0.41 perf-stat.overall.ipc
6.72 ± 12% +23.8 30.53 ± 2% perf-stat.overall.node-store-miss-rate%
10361 ± 8% +47.9% 15326 perf-stat.overall.path-length
2.175e+08 ± 8% +83.8% 3.999e+08 perf-stat.ps.branch-instructions
3921204 ± 13% +15.9% 4546434 perf-stat.ps.cache-misses
3946 ± 2% +876.2% 38525 ± 3% perf-stat.ps.context-switches
3.302e+09 ± 12% +45.5% 4.806e+09 perf-stat.ps.cpu-cycles
75.32 +18.8% 89.49 perf-stat.ps.cpu-migrations
2.58e+08 ± 6% +95.8% 5.052e+08 perf-stat.ps.dTLB-loads
1.371e+08 ± 5% +101.3% 2.76e+08 ± 2% perf-stat.ps.dTLB-stores
152980 ± 76% +119.1% 335236 ± 3% perf-stat.ps.iTLB-load-misses
831526 ± 27% +34.8% 1120567 ± 5% perf-stat.ps.iTLB-loads
1.047e+09 ± 7% +86.3% 1.951e+09 perf-stat.ps.instructions
419.56 ± 6% +2642.3% 11505 ± 3% perf-stat.ps.major-faults
2994 +1787.6% 56525 ± 5% perf-stat.ps.minor-faults
46508 ± 11% +461.5% 261135 ± 3% perf-stat.ps.node-store-misses
646522 ± 5% -8.1% 593906 perf-stat.ps.node-stores
3414 +1892.7% 68031 ± 3% perf-stat.ps.page-faults
4.098e+11 ± 8% +48.9% 6.104e+11 perf-stat.total.instructions
28571 ± 16% -94.0% 1725 ± 79% proc-vmstat.allocstall_movable
29775 ± 17% +189.5% 86205 proc-vmstat.compact_fail
239845 ± 86% -91.6% 20116 ± 41% proc-vmstat.compact_migrate_scanned
30325 ± 17% +184.3% 86207 proc-vmstat.compact_stall
550.00 ± 69% -99.7% 1.50 ± 33% proc-vmstat.compact_success
48.50 ± 34% +2025.8% 1031 ± 15% proc-vmstat.kswapd_high_wmark_hit_quickly
59881 ± 4% -23.3% 45914 ± 6% proc-vmstat.nr_anon_transparent_hugepages
306441 -2.5% 298732 proc-vmstat.nr_file_pages
1416 ± 14% -56.7% 613.00 ± 53% proc-vmstat.nr_isolated_anon
13021 +2.0% 13286 proc-vmstat.nr_kernel_stack
99385 ± 4% +48.6% 147700 proc-vmstat.nr_page_table_pages
16444 +5.0% 17266 proc-vmstat.nr_slab_reclaimable
33077 +6.4% 35194 proc-vmstat.nr_slab_unreclaimable
96333 ± 16% +3377.2% 3349689 ± 11% proc-vmstat.nr_vmscan_write
21.25 ± 22% +10325.9% 2215 ± 5% proc-vmstat.nr_writeback
146006 ± 11% +6584.0% 9759132 ± 8% proc-vmstat.nr_written
22.50 ± 17% +9651.1% 2194 ± 5% proc-vmstat.nr_zone_write_pending
490812 ± 21% +2210.3% 11339390 ± 6% proc-vmstat.numa_foreign
12546 ± 27% +143.1% 30504 ± 21% proc-vmstat.numa_hint_faults
5408 ± 69% +175.4% 14895 ± 9% proc-vmstat.numa_hint_faults_local
924107 ± 13% +1003.6% 10198682 proc-vmstat.numa_hit
900674 ± 13% +1029.8% 10175421 proc-vmstat.numa_local
490812 ± 21% +2210.3% 11339390 ± 6% proc-vmstat.numa_miss
514246 ± 20% +2109.6% 11362651 ± 6% proc-vmstat.numa_other
114.75 ± 42% +1264.7% 1566 proc-vmstat.pageoutrun
2024276 ± 13% +2098.0% 44494248 ± 17% proc-vmstat.pgactivate
1114631 ± 4% -60.9% 435525 ± 7% proc-vmstat.pgalloc_dma32
77470327 -26.6% 56867854 proc-vmstat.pgalloc_normal
42466243 ± 3% +54.1% 65450796 ± 11% proc-vmstat.pgdeactivate
1375277 ± 3% +1449.6% 21311829 ± 3% proc-vmstat.pgfault
77941011 -27.1% 56789463 proc-vmstat.pgfree
164760 ± 2% +2084.4% 3598976 ± 3% proc-vmstat.pgmajfault
691015 ± 2% +1984.9% 14406909 ± 3% proc-vmstat.pgpgin
1.569e+08 ± 3% -40.7% 92993810 proc-vmstat.pgpgout
42466243 ± 3% +54.1% 65450796 ± 11% proc-vmstat.pgrefill
145620 ± 11% +6601.3% 9758473 ± 8% proc-vmstat.pgrotated
61188098 ± 6% -88.6% 6974537 ± 89% proc-vmstat.pgscan_direct
19257098 ± 35% +335.0% 83761134 ± 16% proc-vmstat.pgscan_kswapd
30237264 ± 6% -89.6% 3143011 ± 89% proc-vmstat.pgsteal_direct
8970033 ± 36% +124.0% 20096453 ± 13% proc-vmstat.pgsteal_kswapd
172208 ± 2% +1991.2% 3601181 ± 3% proc-vmstat.pswpin
39224926 ± 3% -40.7% 23247940 proc-vmstat.pswpout
149697 -54.1% 68756 proc-vmstat.thp_deferred_split_page
148067 -53.8% 68408 proc-vmstat.thp_fault_alloc
1200 ± 28% +7069.7% 86072 proc-vmstat.thp_fault_fallback
76475 ± 3% -65.5% 26397 ± 7% proc-vmstat.thp_split_pmd
76475 ± 3% -65.5% 26397 ± 7% proc-vmstat.thp_swpout
1397 ± 11% +33.6% 1866 ± 2% sched_debug.cfs_rq:/.exec_clock.avg
1843 ± 19% +26.5% 2331 ± 18% sched_debug.cfs_rq:/.exec_clock.stddev
35761 ± 11% +1065.8% 416897 ± 5% sched_debug.cfs_rq:/.load.avg
642939 ± 10% +2330.3% 15625051 sched_debug.cfs_rq:/.load.max
130599 ± 7% +1793.1% 2472308 ± 3% sched_debug.cfs_rq:/.load.stddev
47.82 ± 9% +321.8% 201.69 ± 34% sched_debug.cfs_rq:/.load_avg.avg
725.18 ± 8% +63.3% 1184 ± 17% sched_debug.cfs_rq:/.load_avg.max
148.72 ± 8% +151.4% 373.90 ± 22% sched_debug.cfs_rq:/.load_avg.stddev
8846 ± 12% +26.7% 11208 ± 9% sched_debug.cfs_rq:/.min_vruntime.stddev
0.24 ± 25% -33.4% 0.16 ± 3% sched_debug.cfs_rq:/.nr_spread_over.avg
11.10 ± 26% +46.9% 16.32 ± 14% sched_debug.cfs_rq:/.removed.load_avg.avg
146.29 +16.7% 170.67 sched_debug.cfs_rq:/.removed.load_avg.max
38.24 ± 11% +29.6% 49.57 ± 6% sched_debug.cfs_rq:/.removed.load_avg.stddev
511.35 ± 26% +47.2% 752.48 ± 14% sched_debug.cfs_rq:/.removed.runnable_sum.avg
6799 +15.7% 7866 sched_debug.cfs_rq:/.removed.runnable_sum.max
1761 ± 11% +29.8% 2285 ± 6% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
4.48 ± 33% +52.5% 6.83 ± 11% sched_debug.cfs_rq:/.removed.util_avg.avg
74.36 +17.5% 87.33 sched_debug.cfs_rq:/.removed.util_avg.max
15.80 ± 17% +35.5% 21.41 ± 3% sched_debug.cfs_rq:/.removed.util_avg.stddev
8847 ± 12% +26.7% 11209 ± 9% sched_debug.cfs_rq:/.spread0.stddev
81.12 ± 4% +30.4% 105.82 ± 12% sched_debug.cfs_rq:/.util_avg.avg
120956 ± 8% +81.3% 219238 ± 8% sched_debug.cpu.avg_idle.stddev
211729 -14.4% 181159 sched_debug.cpu.clock.avg
211819 -14.4% 181220 sched_debug.cpu.clock.max
211444 -14.4% 180985 sched_debug.cpu.clock.min
211729 -14.4% 181159 sched_debug.cpu.clock_task.avg
211819 -14.4% 181220 sched_debug.cpu.clock_task.max
211444 -14.4% 180985 sched_debug.cpu.clock_task.min
6474 -12.4% 5672 sched_debug.cpu.curr->pid.max
9875 ± 3% +490.8% 58346 ± 3% sched_debug.cpu.nr_switches.avg
139078 ± 7% +226.5% 454101 ± 4% sched_debug.cpu.nr_switches.max
17645 ± 7% +597.5% 123074 ± 6% sched_debug.cpu.nr_switches.stddev
8498 ± 3% +570.4% 56972 ± 3% sched_debug.cpu.sched_count.avg
137026 ± 7% +230.1% 452307 ± 4% sched_debug.cpu.sched_count.max
17489 ± 7% +603.6% 123061 ± 6% sched_debug.cpu.sched_count.stddev
3369 +498.1% 20150 ± 3% sched_debug.cpu.sched_goidle.avg
61893 ± 8% +153.4% 156853 ± 4% sched_debug.cpu.sched_goidle.max
7830 ± 6% +446.0% 42754 ± 6% sched_debug.cpu.sched_goidle.stddev
4252 ± 4% +749.8% 36135 ± 4% sched_debug.cpu.ttwu_count.avg
103569 ± 10% +1200.0% 1346433 ± 4% sched_debug.cpu.ttwu_count.max
12410 ± 10% +1189.6% 160043 ± 4% sched_debug.cpu.ttwu_count.stddev
2757 ± 4% +586.9% 18940 ± 4% sched_debug.cpu.ttwu_local.avg
70698 ± 7% +145.2% 173350 sched_debug.cpu.ttwu_local.max
8437 ± 8% +399.3% 42134 ± 5% sched_debug.cpu.ttwu_local.stddev
211432 -14.4% 180953 sched_debug.cpu_clk
208718 -14.6% 178237 sched_debug.ktime
211799 -14.4% 181322 sched_debug.sched_clk
156536 ± 2% +1083.0% 1851838 ± 3% softirqs.BLOCK
102222 ± 5% -14.6% 87282 softirqs.CPU0.RCU
55553 ± 4% -17.5% 45844 ± 2% softirqs.CPU0.SCHED
141539 ± 8% -23.0% 109045 softirqs.CPU0.TIMER
53374 ± 6% -20.9% 42228 softirqs.CPU1.SCHED
139918 ± 10% -20.9% 110719 ± 2% softirqs.CPU1.TIMER
103294 ± 4% -20.9% 81695 ± 14% softirqs.CPU10.RCU
51338 ± 5% -19.9% 41102 softirqs.CPU10.SCHED
140253 ± 9% -26.6% 102901 ± 2% softirqs.CPU10.TIMER
102504 ± 4% -12.9% 89311 softirqs.CPU11.RCU
51376 ± 5% -20.0% 41118 softirqs.CPU11.SCHED
137637 ± 5% -19.9% 110297 ± 3% softirqs.CPU11.TIMER
51108 ± 5% -20.5% 40635 softirqs.CPU12.SCHED
139873 ± 8% -22.3% 108727 softirqs.CPU12.TIMER
51393 ± 5% -19.8% 41208 softirqs.CPU13.SCHED
142556 ± 11% -22.0% 111168 softirqs.CPU13.TIMER
100079 ± 4% -12.7% 87403 ± 2% softirqs.CPU14.RCU
51099 ± 5% -20.7% 40529 softirqs.CPU14.SCHED
138927 ± 10% -23.2% 106759 softirqs.CPU14.TIMER
82560 ± 13% -17.1% 68451 ± 3% softirqs.CPU15.RCU
51881 ± 5% -20.1% 41433 softirqs.CPU15.SCHED
143407 ± 13% -24.2% 108766 softirqs.CPU15.TIMER
86319 ± 11% -33.1% 57723 ± 5% softirqs.CPU16.RCU
52245 ± 5% -20.5% 41518 softirqs.CPU16.SCHED
139770 ± 8% -21.7% 109469 softirqs.CPU16.TIMER
51865 ± 5% -19.0% 42013 softirqs.CPU17.SCHED
143175 ± 9% -20.4% 114025 softirqs.CPU17.TIMER
52987 ± 3% -18.2% 43330 softirqs.CPU18.SCHED
51954 ± 4% -17.9% 42654 softirqs.CPU19.SCHED
51747 ± 4% -20.3% 41226 softirqs.CPU2.SCHED
140583 ± 6% -18.1% 115164 softirqs.CPU2.TIMER
51928 ± 4% -18.2% 42474 softirqs.CPU20.SCHED
51952 ± 4% -18.1% 42558 softirqs.CPU21.SCHED
52112 ± 4% -16.4% 43552 ± 3% softirqs.CPU22.SCHED
51993 ± 3% -18.4% 42448 softirqs.CPU23.SCHED
52401 ± 3% -18.5% 42729 softirqs.CPU24.SCHED
52279 ± 4% -18.5% 42583 softirqs.CPU25.SCHED
51980 ± 4% -18.2% 42509 softirqs.CPU26.SCHED
52662 ± 2% -19.3% 42472 softirqs.CPU27.SCHED
51995 ± 4% -17.9% 42675 softirqs.CPU28.SCHED
51705 ± 4% -17.4% 42732 softirqs.CPU29.SCHED
51470 ± 5% -20.2% 41058 softirqs.CPU3.SCHED
138346 ± 6% -17.6% 114050 softirqs.CPU3.TIMER
51955 ± 4% -18.3% 42429 softirqs.CPU30.SCHED
52295 ± 4% -18.3% 42750 softirqs.CPU31.SCHED
51864 ± 3% -18.4% 42321 softirqs.CPU32.SCHED
51934 ± 3% -18.0% 42560 softirqs.CPU33.SCHED
52111 ± 4% -18.0% 42722 softirqs.CPU34.SCHED
52007 ± 4% -18.6% 42355 softirqs.CPU35.SCHED
99121 ± 9% -22.4% 76875 ± 7% softirqs.CPU36.RCU
51253 ± 5% -18.9% 41587 softirqs.CPU36.SCHED
138261 ± 8% -20.9% 109345 ± 2% softirqs.CPU36.TIMER
99050 ± 8% -17.8% 81385 ± 2% softirqs.CPU37.RCU
51526 ± 4% -20.4% 41018 softirqs.CPU37.SCHED
136647 ± 11% -21.5% 107205 softirqs.CPU37.TIMER
100099 ± 11% -18.2% 81883 ± 4% softirqs.CPU38.RCU
52577 ± 6% -22.6% 40688 softirqs.CPU38.SCHED
159405 ± 21% -31.6% 109016 softirqs.CPU38.TIMER
100876 ± 5% -17.8% 82904 ± 4% softirqs.CPU39.RCU
51601 ± 5% -21.6% 40444 softirqs.CPU39.SCHED
133824 ± 7% -18.5% 109026 ± 3% softirqs.CPU39.TIMER
104778 ± 2% -15.7% 88356 ± 4% softirqs.CPU4.RCU
52528 ± 4% -21.2% 41394 softirqs.CPU4.SCHED
137599 ± 6% -17.2% 113874 softirqs.CPU4.TIMER
101451 ± 4% -19.3% 81896 ± 6% softirqs.CPU40.RCU
51366 ± 5% -19.8% 41181 softirqs.CPU40.SCHED
133313 ± 7% -16.0% 111972 softirqs.CPU40.TIMER
0.00 +8.8e+107% 881255 ± 99% softirqs.CPU41.BLOCK
102088 ± 7% -20.9% 80735 ± 6% softirqs.CPU41.RCU
51312 ± 5% -19.9% 41080 softirqs.CPU41.SCHED
134722 ± 7% -21.1% 106350 ± 2% softirqs.CPU41.TIMER
51478 ± 5% -19.9% 41216 softirqs.CPU42.SCHED
138281 ± 9% -28.0% 99510 ± 4% softirqs.CPU42.TIMER
51696 ± 4% -20.6% 41061 softirqs.CPU43.SCHED
139944 ± 10% -24.8% 105188 softirqs.CPU43.TIMER
51827 ± 4% -20.7% 41091 softirqs.CPU44.SCHED
138316 ± 10% -25.0% 103736 softirqs.CPU44.TIMER
51317 ± 4% -19.3% 41436 softirqs.CPU45.SCHED
142804 ± 16% -28.4% 102247 softirqs.CPU45.TIMER
51655 ± 4% -17.7% 42518 ± 3% softirqs.CPU46.SCHED
51218 ± 4% -17.7% 42129 softirqs.CPU47.SCHED
134090 ± 5% -19.2% 108284 ± 4% softirqs.CPU47.TIMER
51373 ± 4% -19.7% 41277 softirqs.CPU48.SCHED
135851 ± 9% -22.7% 104947 softirqs.CPU48.TIMER
51371 ± 5% -19.4% 41406 softirqs.CPU49.SCHED
140209 ± 11% -23.3% 107509 softirqs.CPU49.TIMER
104700 ± 3% -17.6% 86271 ± 3% softirqs.CPU5.RCU
52052 ± 6% -18.4% 42478 softirqs.CPU5.SCHED
138397 ± 6% -19.6% 111330 softirqs.CPU5.TIMER
50639 ± 4% -20.2% 40398 softirqs.CPU50.SCHED
136318 ± 10% -23.7% 104045 ± 2% softirqs.CPU50.TIMER
51040 ± 5% -19.0% 41319 softirqs.CPU51.SCHED
137581 ± 10% -22.6% 106506 softirqs.CPU51.TIMER
98661 ± 8% -17.1% 81797 softirqs.CPU52.RCU
51505 ± 5% -20.6% 40900 softirqs.CPU52.SCHED
135498 ± 7% -20.7% 107393 softirqs.CPU52.TIMER
51498 ± 5% -19.0% 41734 softirqs.CPU53.SCHED
141340 ± 9% -21.4% 111145 softirqs.CPU53.TIMER
52978 ± 4% -19.7% 42558 softirqs.CPU54.SCHED
51962 ± 4% -18.2% 42507 softirqs.CPU55.SCHED
51861 ± 4% -17.4% 42854 softirqs.CPU56.SCHED
52026 ± 4% -18.1% 42607 softirqs.CPU57.SCHED
52154 ± 4% -18.3% 42606 softirqs.CPU58.SCHED
51825 ± 4% -17.6% 42686 softirqs.CPU59.SCHED
100094 ± 7% -13.7% 86334 softirqs.CPU6.RCU
51390 ± 5% -19.0% 41641 softirqs.CPU6.SCHED
52000 ± 4% -18.4% 42438 softirqs.CPU60.SCHED
51774 ± 4% -17.4% 42763 ± 2% softirqs.CPU61.SCHED
51526 ± 4% -17.6% 42474 softirqs.CPU62.SCHED
51551 ± 3% -18.0% 42252 softirqs.CPU63.SCHED
51502 ± 4% -17.7% 42411 softirqs.CPU64.SCHED
51836 ± 4% -17.8% 42591 softirqs.CPU65.SCHED
51240 ± 4% -16.8% 42616 softirqs.CPU66.SCHED
51759 ± 4% -17.5% 42686 softirqs.CPU67.SCHED
51062 ± 4% -16.7% 42535 softirqs.CPU68.SCHED
51486 ± 4% -17.0% 42726 softirqs.CPU69.SCHED
102056 ± 5% -11.4% 90459 softirqs.CPU7.RCU
51293 ± 5% -20.1% 40970 softirqs.CPU7.SCHED
141933 ± 9% -22.9% 109469 ± 2% softirqs.CPU7.TIMER
51697 ± 4% -17.5% 42646 softirqs.CPU70.SCHED
52802 ± 4% -19.2% 42681 softirqs.CPU71.SCHED
103043 ± 4% -12.4% 90276 softirqs.CPU8.RCU
51558 ± 5% -20.5% 40984 softirqs.CPU8.SCHED
141932 ± 9% -23.0% 109242 ± 2% softirqs.CPU8.TIMER
50960 ± 4% -20.3% 40601 softirqs.CPU9.SCHED
135507 ± 5% -22.2% 105468 softirqs.CPU9.TIMER
3731680 ± 4% -18.9% 3024994 softirqs.SCHED
319585 ± 2% +1066.4% 3727738 ± 3% interrupts.272:PCI-MSI.376832-edge.ahci[0000:00:17.0]
356.00 ± 12% -28.4% 255.00 interrupts.35:PCI-MSI.31981568-edge.i40e-0000:3d:00.0:misc
39.25 ±167% +452.9% 217.00 ± 98% interrupts.40:PCI-MSI.31981573-edge.i40e-eth0-TxRx-4
1.75 ±116% +4357.1% 78.00 ± 93% interrupts.51:PCI-MSI.31981584-edge.i40e-eth0-TxRx-15
0.75 ±173% +26300.0% 198.00 ± 96% interrupts.52:PCI-MSI.31981585-edge.i40e-eth0-TxRx-16
784.00 ± 4% -20.0% 627.00 interrupts.9:IO-APIC.9-fasteoi.acpi
414069 ± 5% -24.5% 312769 ± 7% interrupts.CAL:Function_call_interrupts
753730 ± 7% -19.9% 603714 ± 3% interrupts.CPU0.LOC:Local_timer_interrupts
159.50 ± 30% +39.8% 223.00 ± 8% interrupts.CPU0.NMI:Non-maskable_interrupts
159.50 ± 30% +39.8% 223.00 ± 8% interrupts.CPU0.PMI:Performance_monitoring_interrupts
784.00 ± 4% -20.0% 627.00 interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
768703 ± 5% -22.2% 598135 ± 4% interrupts.CPU1.LOC:Local_timer_interrupts
1845 ±137% +760.7% 15883 ± 97% interrupts.CPU10.CAL:Function_call_interrupts
768528 ± 5% -20.9% 607622 ± 2% interrupts.CPU10.LOC:Local_timer_interrupts
172.25 ± 19% +161.5% 450.50 ± 49% interrupts.CPU10.NMI:Non-maskable_interrupts
172.25 ± 19% +161.5% 450.50 ± 49% interrupts.CPU10.PMI:Performance_monitoring_interrupts
2133 ±172% +640.2% 15788 ± 99% interrupts.CPU10.TLB:TLB_shootdowns
767666 ± 5% -19.7% 616680 interrupts.CPU11.LOC:Local_timer_interrupts
156.00 ± 35% +136.9% 369.50 ± 6% interrupts.CPU11.NMI:Non-maskable_interrupts
156.00 ± 35% +136.9% 369.50 ± 6% interrupts.CPU11.PMI:Performance_monitoring_interrupts
755252 ± 7% -18.0% 619541 interrupts.CPU12.LOC:Local_timer_interrupts
137.50 ± 29% +109.1% 287.50 ± 26% interrupts.CPU12.NMI:Non-maskable_interrupts
137.50 ± 29% +109.1% 287.50 ± 26% interrupts.CPU12.PMI:Performance_monitoring_interrupts
758516 ± 6% -20.4% 603684 ± 3% interrupts.CPU13.LOC:Local_timer_interrupts
134.50 ± 36% +46.8% 197.50 ± 3% interrupts.CPU13.NMI:Non-maskable_interrupts
134.50 ± 36% +46.8% 197.50 ± 3% interrupts.CPU13.PMI:Performance_monitoring_interrupts
544.00 ± 64% -98.5% 8.00 ± 12% interrupts.CPU13.RES:Rescheduling_interrupts
752379 ± 7% -18.2% 615329 interrupts.CPU14.LOC:Local_timer_interrupts
138.00 ± 22% +110.9% 291.00 ± 42% interrupts.CPU14.NMI:Non-maskable_interrupts
138.00 ± 22% +110.9% 291.00 ± 42% interrupts.CPU14.PMI:Performance_monitoring_interrupts
1.50 ±110% +5100.0% 78.00 ± 93% interrupts.CPU15.51:PCI-MSI.31981584-edge.i40e-eth0-TxRx-15
752754 ± 7% -20.5% 598798 ± 4% interrupts.CPU15.LOC:Local_timer_interrupts
0.75 ±173% +26166.7% 197.00 ± 96% interrupts.CPU16.52:PCI-MSI.31981585-edge.i40e-eth0-TxRx-16
780971 ± 4% -23.1% 600487 ± 4% interrupts.CPU16.LOC:Local_timer_interrupts
762160 ± 5% -21.9% 595108 ± 5% interrupts.CPU17.LOC:Local_timer_interrupts
162.00 ± 20% +69.4% 274.50 ± 10% interrupts.CPU18.NMI:Non-maskable_interrupts
162.00 ± 20% +69.4% 274.50 ± 10% interrupts.CPU18.PMI:Performance_monitoring_interrupts
37.25 ± 55% +302.7% 150.00 ± 68% interrupts.CPU18.RES:Rescheduling_interrupts
145.50 ± 34% +75.9% 256.00 ± 12% interrupts.CPU19.NMI:Non-maskable_interrupts
145.50 ± 34% +75.9% 256.00 ± 12% interrupts.CPU19.PMI:Performance_monitoring_interrupts
780589 ± 4% -21.6% 612163 ± 2% interrupts.CPU2.LOC:Local_timer_interrupts
143.50 ± 41% +58.2% 227.00 interrupts.CPU20.NMI:Non-maskable_interrupts
143.50 ± 41% +58.2% 227.00 interrupts.CPU20.PMI:Performance_monitoring_interrupts
2421 ± 34% -78.2% 528.00 ± 75% interrupts.CPU21.TLB:TLB_shootdowns
154.25 ± 47% +68.9% 260.50 ± 4% interrupts.CPU22.NMI:Non-maskable_interrupts
154.25 ± 47% +68.9% 260.50 ± 4% interrupts.CPU22.PMI:Performance_monitoring_interrupts
2288 ± 45% -78.7% 487.50 interrupts.CPU22.TLB:TLB_shootdowns
2110 ± 48% +99.5% 4211 ± 15% interrupts.CPU23.CAL:Function_call_interrupts
139.00 ± 43% +79.1% 249.00 ± 9% interrupts.CPU23.NMI:Non-maskable_interrupts
139.00 ± 43% +79.1% 249.00 ± 9% interrupts.CPU23.PMI:Performance_monitoring_interrupts
1431 ± 75% +113.1% 3049 ± 20% interrupts.CPU24.CAL:Function_call_interrupts
142.00 ± 42% +57.7% 224.00 ± 8% interrupts.CPU24.NMI:Non-maskable_interrupts
142.00 ± 42% +57.7% 224.00 ± 8% interrupts.CPU24.PMI:Performance_monitoring_interrupts
147.75 ± 36% +89.5% 280.00 ± 16% interrupts.CPU25.NMI:Non-maskable_interrupts
147.75 ± 36% +89.5% 280.00 ± 16% interrupts.CPU25.PMI:Performance_monitoring_interrupts
1653 ± 53% -95.1% 81.00 ± 93% interrupts.CPU25.TLB:TLB_shootdowns
158.25 ± 30% +51.3% 239.50 ± 3% interrupts.CPU26.NMI:Non-maskable_interrupts
158.25 ± 30% +51.3% 239.50 ± 3% interrupts.CPU26.PMI:Performance_monitoring_interrupts
150.75 ± 34% +75.1% 264.00 ± 7% interrupts.CPU27.NMI:Non-maskable_interrupts
150.75 ± 34% +75.1% 264.00 ± 7% interrupts.CPU27.PMI:Performance_monitoring_interrupts
143.75 ± 42% +66.3% 239.00 ± 10% interrupts.CPU28.NMI:Non-maskable_interrupts
143.75 ± 42% +66.3% 239.00 ± 10% interrupts.CPU28.PMI:Performance_monitoring_interrupts
1095 ± 76% -81.8% 199.00 ± 38% interrupts.CPU28.TLB:TLB_shootdowns
780942 ± 4% -21.9% 609615 ± 2% interrupts.CPU3.LOC:Local_timer_interrupts
144.75 ± 37% +72.4% 249.50 ± 15% interrupts.CPU30.NMI:Non-maskable_interrupts
144.75 ± 37% +72.4% 249.50 ± 15% interrupts.CPU30.PMI:Performance_monitoring_interrupts
1940 ± 90% -100.0% 0.50 ±100% interrupts.CPU30.TLB:TLB_shootdowns
146.75 ± 45% +71.0% 251.00 ± 5% interrupts.CPU31.NMI:Non-maskable_interrupts
146.75 ± 45% +71.0% 251.00 ± 5% interrupts.CPU31.PMI:Performance_monitoring_interrupts
140.00 ± 43% +73.6% 243.00 ± 6% interrupts.CPU33.NMI:Non-maskable_interrupts
140.00 ± 43% +73.6% 243.00 ± 6% interrupts.CPU33.PMI:Performance_monitoring_interrupts
124.00 ± 50% +103.6% 252.50 ± 9% interrupts.CPU34.NMI:Non-maskable_interrupts
124.00 ± 50% +103.6% 252.50 ± 9% interrupts.CPU34.PMI:Performance_monitoring_interrupts
121.75 ± 50% +146.0% 299.50 interrupts.CPU35.NMI:Non-maskable_interrupts
121.75 ± 50% +146.0% 299.50 interrupts.CPU35.PMI:Performance_monitoring_interrupts
753066 ± 7% -19.9% 603509 ± 3% interrupts.CPU36.LOC:Local_timer_interrupts
770217 ± 4% -22.3% 598102 ± 4% interrupts.CPU37.LOC:Local_timer_interrupts
154.25 ± 22% +35.5% 209.00 ± 11% interrupts.CPU37.NMI:Non-maskable_interrupts
154.25 ± 22% +35.5% 209.00 ± 11% interrupts.CPU37.PMI:Performance_monitoring_interrupts
25858 ± 19% -77.0% 5934 ± 93% interrupts.CPU38.CAL:Function_call_interrupts
780482 ± 4% -21.6% 612137 ± 2% interrupts.CPU38.LOC:Local_timer_interrupts
40918 ± 20% -83.1% 6897 ± 99% interrupts.CPU38.TLB:TLB_shootdowns
16002 ± 73% -91.3% 1393 ± 71% interrupts.CPU39.CAL:Function_call_interrupts
780803 ± 4% -21.9% 610067 ± 2% interrupts.CPU39.LOC:Local_timer_interrupts
26028 ± 80% -95.1% 1266 ± 99% interrupts.CPU39.TLB:TLB_shootdowns
38.75 ±170% +457.4% 216.00 ± 99% interrupts.CPU4.40:PCI-MSI.31981573-edge.i40e-eth0-TxRx-4
775413 ± 4% -23.2% 595390 ± 5% interrupts.CPU4.LOC:Local_timer_interrupts
775403 ± 4% -23.2% 595381 ± 5% interrupts.CPU40.LOC:Local_timer_interrupts
224.50 ± 66% -99.3% 1.50 ±100% interrupts.CPU40.RES:Rescheduling_interrupts
767824 ± 5% -20.4% 611248 ± 2% interrupts.CPU41.LOC:Local_timer_interrupts
761919 ± 5% -17.9% 625594 interrupts.CPU42.LOC:Local_timer_interrupts
148.75 ± 39% +218.0% 473.00 ± 20% interrupts.CPU42.NMI:Non-maskable_interrupts
148.75 ± 39% +218.0% 473.00 ± 20% interrupts.CPU42.PMI:Performance_monitoring_interrupts
91.25 ± 76% +1514.2% 1473 ± 92% interrupts.CPU42.RES:Rescheduling_interrupts
769048 ± 5% -18.9% 623787 interrupts.CPU43.LOC:Local_timer_interrupts
178.75 ± 62% +160.4% 465.50 ± 36% interrupts.CPU43.NMI:Non-maskable_interrupts
178.75 ± 62% +160.4% 465.50 ± 36% interrupts.CPU43.PMI:Performance_monitoring_interrupts
334.25 ± 71% +772.0% 2914 ± 6% interrupts.CPU43.RES:Rescheduling_interrupts
762037 ± 5% -18.2% 623633 interrupts.CPU44.LOC:Local_timer_interrupts
145.00 ± 30% +193.4% 425.50 ± 45% interrupts.CPU44.NMI:Non-maskable_interrupts
145.00 ± 30% +193.4% 425.50 ± 45% interrupts.CPU44.PMI:Performance_monitoring_interrupts
768845 ± 5% -19.1% 621650 interrupts.CPU45.LOC:Local_timer_interrupts
160.50 ± 39% +164.5% 424.50 ± 40% interrupts.CPU45.NMI:Non-maskable_interrupts
160.50 ± 39% +164.5% 424.50 ± 40% interrupts.CPU45.PMI:Performance_monitoring_interrupts
258.00 ± 93% +1022.9% 2897 ± 10% interrupts.CPU45.RES:Rescheduling_interrupts
768253 ± 5% -20.9% 607606 ± 2% interrupts.CPU46.LOC:Local_timer_interrupts
484.75 ± 46% +446.0% 2646 ± 10% interrupts.CPU46.RES:Rescheduling_interrupts
767745 ± 5% -19.7% 616701 interrupts.CPU47.LOC:Local_timer_interrupts
151.25 ± 25% +175.7% 417.00 ± 10% interrupts.CPU47.NMI:Non-maskable_interrupts
151.25 ± 25% +175.7% 417.00 ± 10% interrupts.CPU47.PMI:Performance_monitoring_interrupts
217.25 ± 82% +1163.3% 2744 ± 36% interrupts.CPU47.RES:Rescheduling_interrupts
755309 ± 7% -18.0% 619516 interrupts.CPU48.LOC:Local_timer_interrupts
141.50 ± 29% +224.7% 459.50 interrupts.CPU48.NMI:Non-maskable_interrupts
141.50 ± 29% +224.7% 459.50 interrupts.CPU48.PMI:Performance_monitoring_interrupts
381.75 ±129% +563.4% 2532 ± 8% interrupts.CPU48.RES:Rescheduling_interrupts
758326 ± 6% -20.4% 603715 ± 3% interrupts.CPU49.LOC:Local_timer_interrupts
216.75 ±118% +754.2% 1851 ± 32% interrupts.CPU49.RES:Rescheduling_interrupts
768460 ± 5% -20.5% 611077 ± 2% interrupts.CPU5.LOC:Local_timer_interrupts
752177 ± 7% -18.2% 615565 interrupts.CPU50.LOC:Local_timer_interrupts
138.25 ±104% +830.9% 1287 ± 3% interrupts.CPU50.RES:Rescheduling_interrupts
651.00 ± 40% +334.1% 2826 ± 61% interrupts.CPU51.CAL:Function_call_interrupts
753755 ± 7% -20.5% 598892 ± 4% interrupts.CPU51.LOC:Local_timer_interrupts
200.50 ±163% +244.4% 690.50 ± 77% interrupts.CPU51.RES:Rescheduling_interrupts
452.00 ±127% +545.9% 2919 ± 75% interrupts.CPU51.TLB:TLB_shootdowns
781089 ± 4% -23.1% 600485 ± 4% interrupts.CPU52.LOC:Local_timer_interrupts
762214 ± 5% -21.9% 595108 ± 5% interrupts.CPU53.LOC:Local_timer_interrupts
137.50 ±131% +425.1% 722.00 ± 75% interrupts.CPU53.RES:Rescheduling_interrupts
1444 ± 57% +62.3% 2344 ± 25% interrupts.CPU55.CAL:Function_call_interrupts
148.75 ± 40% +56.6% 233.00 ± 2% interrupts.CPU55.NMI:Non-maskable_interrupts
148.75 ± 40% +56.6% 233.00 ± 2% interrupts.CPU55.PMI:Performance_monitoring_interrupts
152.50 ± 36% +41.3% 215.50 ± 8% interrupts.CPU56.NMI:Non-maskable_interrupts
152.50 ± 36% +41.3% 215.50 ± 8% interrupts.CPU56.PMI:Performance_monitoring_interrupts
1412 ± 72% +137.7% 3356 ± 25% interrupts.CPU57.CAL:Function_call_interrupts
119.00 ± 47% +95.4% 232.50 ± 4% interrupts.CPU57.NMI:Non-maskable_interrupts
119.00 ± 47% +95.4% 232.50 ± 4% interrupts.CPU57.PMI:Performance_monitoring_interrupts
974.75 ± 49% +190.8% 2834 ± 26% interrupts.CPU58.CAL:Function_call_interrupts
149.00 ± 42% +60.1% 238.50 ± 2% interrupts.CPU58.NMI:Non-maskable_interrupts
149.00 ± 42% +60.1% 238.50 ± 2% interrupts.CPU58.PMI:Performance_monitoring_interrupts
424.75 ± 21% +243.5% 1459 ± 65% interrupts.CPU6.CAL:Function_call_interrupts
761736 ± 6% -17.9% 625621 interrupts.CPU6.LOC:Local_timer_interrupts
131.50 ± 63% +282.9% 503.50 ± 18% interrupts.CPU6.NMI:Non-maskable_interrupts
131.50 ± 63% +282.9% 503.50 ± 18% interrupts.CPU6.PMI:Performance_monitoring_interrupts
57.50 ±165% +1915.7% 1159 ± 89% interrupts.CPU6.TLB:TLB_shootdowns
934.00 ± 65% +173.0% 2549 ± 22% interrupts.CPU61.CAL:Function_call_interrupts
151.50 ± 43% +54.1% 233.50 ± 10% interrupts.CPU61.NMI:Non-maskable_interrupts
151.50 ± 43% +54.1% 233.50 ± 10% interrupts.CPU61.PMI:Performance_monitoring_interrupts
957.25 ± 87% +205.5% 2924 ± 38% interrupts.CPU62.CAL:Function_call_interrupts
160.25 ± 33% +46.0% 234.00 interrupts.CPU62.NMI:Non-maskable_interrupts
160.25 ± 33% +46.0% 234.00 interrupts.CPU62.PMI:Performance_monitoring_interrupts
157.75 ± 32% +36.6% 215.50 ± 5% interrupts.CPU63.NMI:Non-maskable_interrupts
157.75 ± 32% +36.6% 215.50 ± 5% interrupts.CPU63.PMI:Performance_monitoring_interrupts
142.75 ± 45% +60.8% 229.50 interrupts.CPU64.NMI:Non-maskable_interrupts
142.75 ± 45% +60.8% 229.50 interrupts.CPU64.PMI:Performance_monitoring_interrupts
168.50 ± 37% +42.4% 240.00 ± 11% interrupts.CPU66.NMI:Non-maskable_interrupts
168.50 ± 37% +42.4% 240.00 ± 11% interrupts.CPU66.PMI:Performance_monitoring_interrupts
19.00 ±112% +389.5% 93.00 ± 86% interrupts.CPU66.RES:Rescheduling_interrupts
1019 ± 74% +207.4% 3133 ± 58% interrupts.CPU67.CAL:Function_call_interrupts
21.25 ±105% +229.4% 70.00 ± 78% interrupts.CPU67.RES:Rescheduling_interrupts
1670 ±100% +158.0% 4308 ± 45% interrupts.CPU68.CAL:Function_call_interrupts
139.00 ± 42% +70.1% 236.50 ± 7% interrupts.CPU68.NMI:Non-maskable_interrupts
139.00 ± 42% +70.1% 236.50 ± 7% interrupts.CPU68.PMI:Performance_monitoring_interrupts
151.50 ± 42% +54.1% 233.50 ± 3% interrupts.CPU69.NMI:Non-maskable_interrupts
151.50 ± 42% +54.1% 233.50 ± 3% interrupts.CPU69.PMI:Performance_monitoring_interrupts
356.00 ± 12% -28.4% 255.00 interrupts.CPU7.35:PCI-MSI.31981568-edge.i40e-0000:3d:00.0:misc
471.00 ± 35% +1325.9% 6716 ± 91% interrupts.CPU7.CAL:Function_call_interrupts
768670 ± 5% -18.9% 623623 interrupts.CPU7.LOC:Local_timer_interrupts
124.75 ± 42% +189.0% 360.50 ± 28% interrupts.CPU7.NMI:Non-maskable_interrupts
124.75 ± 42% +189.0% 360.50 ± 28% interrupts.CPU7.PMI:Performance_monitoring_interrupts
240.00 ±171% +2702.3% 6725 ± 95% interrupts.CPU7.TLB:TLB_shootdowns
143.25 ± 43% +88.8% 270.50 ± 8% interrupts.CPU71.NMI:Non-maskable_interrupts
143.25 ± 43% +88.8% 270.50 ± 8% interrupts.CPU71.PMI:Performance_monitoring_interrupts
403.00 ± 13% +43.7% 579.00 ± 31% interrupts.CPU8.CAL:Function_call_interrupts
762534 ± 5% -18.2% 623692 interrupts.CPU8.LOC:Local_timer_interrupts
116.75 ± 36% +210.5% 362.50 ± 20% interrupts.CPU8.NMI:Non-maskable_interrupts
116.75 ± 36% +210.5% 362.50 ± 20% interrupts.CPU8.PMI:Performance_monitoring_interrupts
58.25 ±165% +381.5% 280.50 ± 97% interrupts.CPU8.TLB:TLB_shootdowns
517.25 ± 47% +164.3% 1367 ± 27% interrupts.CPU9.CAL:Function_call_interrupts
769145 ± 4% -19.2% 621669 interrupts.CPU9.LOC:Local_timer_interrupts
133.25 ± 29% +163.4% 351.00 ± 30% interrupts.CPU9.NMI:Non-maskable_interrupts
133.25 ± 29% +163.4% 351.00 ± 30% interrupts.CPU9.PMI:Performance_monitoring_interrupts
213.75 ±170% +385.8% 1038 ± 34% interrupts.CPU9.TLB:TLB_shootdowns
11245 ± 24% +71.2% 19256 ± 2% interrupts.NMI:Non-maskable_interrupts
11245 ± 24% +71.2% 19256 ± 2% interrupts.PMI:Performance_monitoring_interrupts
552885 ± 8% -52.3% 263920 ± 4% interrupts.TLB:TLB_shootdowns
9.93 ± 52% -8.6 1.34 ± 41% perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.do_page_fault
28.49 ± 13% -7.5 21.01 ± 4% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
24.12 ± 14% -6.2 17.91 ± 6% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
15.20 ± 17% -4.4 10.78 ± 3% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
5.40 ± 21% -4.1 1.30 ± 40% perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
5.25 ± 21% -4.0 1.27 ± 40% perf-profile.calltrace.cycles-pp.clear_subpage.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
4.47 ± 21% -3.4 1.03 ± 40% perf-profile.calltrace.cycles-pp.clear_page_erms.clear_subpage.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault
8.39 ± 8% -2.2 6.23 perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
5.29 ± 15% -1.8 3.50 ± 3% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
4.25 ± 15% -1.5 2.74 ± 4% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
4.48 ± 12% -1.5 2.98 ± 2% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
8.97 ± 2% -1.3 7.64 ± 3% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
4.86 ± 5% -0.7 4.12 perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
3.81 ± 5% -0.6 3.18 ± 2% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
1.38 ± 8% -0.5 0.84 ± 10% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.64 ± 17% -0.5 1.17 ± 8% perf-profile.calltrace.cycles-pp.lapic_next_deadline.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
2.84 ± 2% -0.5 2.39 perf-profile.calltrace.cycles-pp.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
1.01 ± 39% -0.4 0.59 ± 3% perf-profile.calltrace.cycles-pp.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.27 ± 19% -0.4 0.91 ± 6% perf-profile.calltrace.cycles-pp.native_write_msr.lapic_next_deadline.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt
1.09 ± 11% -0.3 0.82 ± 10% perf-profile.calltrace.cycles-pp.rcu_sched_clock_irq.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.82 ± 5% -0.3 1.56 ± 2% perf-profile.calltrace.cycles-pp.__next_timer_interrupt.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select
1.38 ± 5% -0.2 1.21 perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.99 ± 3% -0.1 0.87 ± 5% perf-profile.calltrace.cycles-pp.find_next_bit.__next_timer_interrupt.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length
0.43 ± 59% +0.4 0.78 ± 19% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
0.00 +0.6 0.59 ± 12% perf-profile.calltrace.cycles-pp.clear_page_erms.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
0.00 +0.8 0.77 ± 10% perf-profile.calltrace.cycles-pp.mem_cgroup_try_charge.mem_cgroup_try_charge_delay.do_anonymous_page.__handle_mm_fault.handle_mm_fault
0.00 +0.9 0.88 ± 13% perf-profile.calltrace.cycles-pp.mem_cgroup_try_charge_delay.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +0.9 0.90 ± 15% perf-profile.calltrace.cycles-pp.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page
0.00 +0.9 0.94 ± 34% perf-profile.calltrace.cycles-pp.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.do_page_fault
0.00 +1.2 1.16 ± 37% perf-profile.calltrace.cycles-pp.do_IRQ.ret_from_intr.cpuidle_enter_state.cpuidle_enter.do_idle
0.00 +1.2 1.22 ± 34% perf-profile.calltrace.cycles-pp.ret_from_intr.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.00 +1.3 1.25 ± 13% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault
0.00 +1.3 1.29 ± 13% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault
0.00 +1.3 1.33 ± 12% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +1.4 1.36 ± 9% perf-profile.calltrace.cycles-pp._raw_spin_lock.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.47 ± 62% +1.5 2.02 ± 18% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.balance_pgdat
0.48 ± 62% +1.6 2.10 ± 20% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.balance_pgdat.kswapd
0.52 ± 61% +2.2 2.69 ± 23% perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.balance_pgdat.kswapd.kthread
0.52 ± 61% +2.2 2.69 ± 23% perf-profile.calltrace.cycles-pp.shrink_node.balance_pgdat.kswapd.kthread.ret_from_fork
0.52 ± 61% +2.2 2.75 ± 21% perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
0.52 ± 61% +2.2 2.75 ± 21% perf-profile.calltrace.cycles-pp.balance_pgdat.kswapd.kthread.ret_from_fork
1.25 ± 21% +2.4 3.64 ± 20% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.25 ± 21% +2.4 3.64 ± 20% perf-profile.calltrace.cycles-pp.ret_from_fork
0.00 +4.2 4.15 ± 3% perf-profile.calltrace.cycles-pp.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.do_page_fault
9.93 ± 52% -8.6 1.34 ± 41% perf-profile.children.cycles-pp.do_huge_pmd_anonymous_page
26.97 ± 13% -6.5 20.47 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
5.43 ± 21% -4.1 1.31 ± 41% perf-profile.children.cycles-pp.clear_huge_page
15.57 ± 16% -4.1 11.46 ± 4% perf-profile.children.cycles-pp.hrtimer_interrupt
5.27 ± 21% -4.0 1.27 ± 40% perf-profile.children.cycles-pp.clear_subpage
4.54 ± 21% -2.8 1.69 ± 19% perf-profile.children.cycles-pp.clear_page_erms
8.62 ± 7% -1.9 6.71 perf-profile.children.cycles-pp.__hrtimer_run_queues
5.43 ± 13% -1.6 3.86 ± 2% perf-profile.children.cycles-pp.tick_sched_timer
9.15 ± 3% -1.3 7.80 ± 3% perf-profile.children.cycles-pp.menu_select
4.60 ± 10% -1.3 3.33 perf-profile.children.cycles-pp.tick_sched_handle
4.39 ± 12% -1.3 3.13 ± 3% perf-profile.children.cycles-pp.update_process_times
4.93 ± 5% -0.7 4.19 perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
3.89 ± 5% -0.6 3.25 ± 2% perf-profile.children.cycles-pp.tick_nohz_next_event
1.17 ± 15% -0.6 0.60 ± 9% perf-profile.children.cycles-pp.try_to_unmap_one
1.18 ± 15% -0.5 0.64 ± 5% perf-profile.children.cycles-pp.try_to_unmap
1.42 ± 9% -0.5 0.90 ± 9% perf-profile.children.cycles-pp.scheduler_tick
2.88 ± 2% -0.4 2.44 perf-profile.children.cycles-pp.get_next_timer_interrupt
1.75 ± 14% -0.4 1.30 ± 5% perf-profile.children.cycles-pp.native_write_msr
1.25 ± 20% -0.4 0.81 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.67 ± 15% -0.4 1.23 ± 4% perf-profile.children.cycles-pp.lapic_next_deadline
1.03 ± 38% -0.4 0.60 perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.55 ± 42% -0.3 0.24 ± 16% perf-profile.children.cycles-pp.calc_global_load_tick
2.01 ± 4% -0.3 1.71 ± 3% perf-profile.children.cycles-pp.__next_timer_interrupt
1.13 ± 11% -0.3 0.86 ± 11% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.89 ± 7% -0.2 0.65 perf-profile.children.cycles-pp.sched_clock_cpu
0.78 ± 8% -0.2 0.56 perf-profile.children.cycles-pp.sched_clock
0.75 ± 9% -0.2 0.54 perf-profile.children.cycles-pp.native_sched_clock
1.43 ± 5% -0.2 1.27 perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.14 ± 38% -0.1 0.04 ±100% perf-profile.children.cycles-pp.do_fault
0.40 ± 3% -0.1 0.30 ± 9% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.12 ± 44% -0.1 0.03 ±100% perf-profile.children.cycles-pp.filemap_map_pages
0.13 ± 23% -0.1 0.07 ± 7% perf-profile.children.cycles-pp.__x64_sys_execve
0.12 ± 19% -0.1 0.06 perf-profile.children.cycles-pp.execve
0.12 ± 26% -0.1 0.07 ± 7% perf-profile.children.cycles-pp.__do_execve_file
0.18 ± 9% -0.0 0.14 ± 11% perf-profile.children.cycles-pp.call_cpuidle
0.11 ± 15% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.page_remove_rmap
0.00 +0.1 0.05 perf-profile.children.cycles-pp.vmacache_find
0.00 +0.1 0.05 perf-profile.children.cycles-pp.ata_scsi_qc_complete
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__mod_node_page_state
0.00 +0.1 0.05 perf-profile.children.cycles-pp.xas_create_range
0.00 +0.1 0.06 perf-profile.children.cycles-pp.find_vma
0.00 +0.1 0.06 ± 16% perf-profile.children.cycles-pp.xas_create
0.01 ±173% +0.1 0.08 ± 33% perf-profile.children.cycles-pp.ata_scsi_translate
0.17 ± 28% +0.1 0.23 ± 2% perf-profile.children.cycles-pp.account_process_tick
0.00 +0.1 0.07 ± 28% perf-profile.children.cycles-pp.xas_load
0.18 ± 27% +0.1 0.25 ± 8% perf-profile.children.cycles-pp.__delete_from_swap_cache
0.00 +0.1 0.08 ± 6% perf-profile.children.cycles-pp.ata_qc_complete_multiple
0.00 +0.1 0.08 ± 33% perf-profile.children.cycles-pp.__mod_lruvec_state
0.00 +0.1 0.08 ± 12% perf-profile.children.cycles-pp.down_read_trylock
0.00 +0.1 0.08 ± 12% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.00 +0.1 0.08 ± 5% perf-profile.children.cycles-pp.delete_from_swap_cache
0.01 ±173% +0.1 0.10 ± 30% perf-profile.children.cycles-pp.io_schedule
0.02 ±173% +0.1 0.11 ± 33% perf-profile.children.cycles-pp.unwind_next_frame
0.00 +0.1 0.09 ± 11% perf-profile.children.cycles-pp.reuse_swap_page
0.01 ±173% +0.1 0.11 ± 52% perf-profile.children.cycles-pp.ata_scsi_queuecmd
0.10 ± 24% +0.1 0.19 ± 5% perf-profile.children.cycles-pp.add_to_swap
0.09 ± 20% +0.1 0.18 ± 13% perf-profile.children.cycles-pp.add_to_swap_cache
0.00 +0.1 0.10 ± 19% perf-profile.children.cycles-pp.mem_cgroup_throttle_swaprate
0.00 +0.1 0.11 ± 52% perf-profile.children.cycles-pp.get_swap_bio
0.05 ± 61% +0.1 0.16 ± 25% perf-profile.children.cycles-pp.schedule_idle
0.00 +0.1 0.11 ± 27% perf-profile.children.cycles-pp.___perf_sw_event
0.06 ± 59% +0.1 0.17 ± 23% perf-profile.children.cycles-pp.__queue_work
0.00 +0.1 0.12 ± 21% perf-profile.children.cycles-pp.__list_del_entry_valid
0.10 ± 81% +0.1 0.21 ± 4% perf-profile.children.cycles-pp.on_each_cpu_cond_mask
0.05 ± 62% +0.1 0.17 ± 33% perf-profile.children.cycles-pp.arch_stack_walk
0.00 +0.1 0.12 ± 50% perf-profile.children.cycles-pp.__slab_free
0.09 ± 42% +0.1 0.22 ± 30% perf-profile.children.cycles-pp.inactive_list_is_low
0.00 +0.1 0.12 ± 28% perf-profile.children.cycles-pp.__perf_sw_event
0.00 +0.1 0.13 ± 15% perf-profile.children.cycles-pp.lru_cache_add_active_or_unevictable
0.00 +0.1 0.13 ± 15% perf-profile.children.cycles-pp.ahci_scr_read
0.00 +0.1 0.13 ± 15% perf-profile.children.cycles-pp.sata_async_notification
0.03 ±100% +0.1 0.16 ± 50% perf-profile.children.cycles-pp.kmem_cache_free
0.00 +0.1 0.14 ± 28% perf-profile.children.cycles-pp.arch_tlbbatch_flush
0.00 +0.1 0.14 ± 28% perf-profile.children.cycles-pp.try_to_unmap_flush_dirty
0.05 ± 67% +0.1 0.20 ± 35% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.00 +0.2 0.15 ± 53% perf-profile.children.cycles-pp.end_page_writeback
0.00 +0.2 0.15 ± 54% perf-profile.children.cycles-pp.end_swap_bio_write
0.00 +0.2 0.16 ± 37% perf-profile.children.cycles-pp.__read_swap_cache_async
0.04 ±113% +0.2 0.20 ± 46% perf-profile.children.cycles-pp.scsi_queue_rq
0.08 ± 61% +0.2 0.25 ± 19% perf-profile.children.cycles-pp.blk_mq_run_hw_queue
0.05 ± 67% +0.2 0.23 ± 20% perf-profile.children.cycles-pp.kblockd_mod_delayed_work_on
0.05 ± 67% +0.2 0.23 ± 20% perf-profile.children.cycles-pp.mod_delayed_work_on
0.30 ± 19% +0.2 0.47 ± 19% perf-profile.children.cycles-pp.__sched_text_start
0.07 ± 29% +0.2 0.26 ± 34% perf-profile.children.cycles-pp.__account_scheduler_latency
0.05 ±106% +0.2 0.23 ± 53% perf-profile.children.cycles-pp.blk_mq_dispatch_rq_list
0.09 ± 20% +0.2 0.29 ± 57% perf-profile.children.cycles-pp.__swap_writepage
0.00 +0.2 0.20 ± 10% perf-profile.children.cycles-pp.rmqueue_bulk
0.00 +0.2 0.20 ± 40% perf-profile.children.cycles-pp.blk_mq_sched_insert_request
0.08 ± 38% +0.2 0.28 ± 43% perf-profile.children.cycles-pp.blk_mq_do_dispatch_sched
0.09 ± 39% +0.2 0.29 ± 42% perf-profile.children.cycles-pp.__blk_mq_run_hw_queue
0.08 ± 38% +0.2 0.29 ± 44% perf-profile.children.cycles-pp.blk_mq_sched_dispatch_requests
0.00 +0.2 0.21 ± 9% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.38 ± 12% +0.2 0.61 ± 19% perf-profile.children.cycles-pp.process_one_work
0.04 ± 57% +0.2 0.28 ± 14% perf-profile.children.cycles-pp.ahci_handle_port_interrupt
0.10 ± 18% +0.2 0.34 ± 55% perf-profile.children.cycles-pp.pageout
0.54 ± 17% +0.2 0.78 ± 19% perf-profile.children.cycles-pp.worker_thread
0.00 +0.3 0.26 ± 23% perf-profile.children.cycles-pp.sync_regs
0.12 ± 18% +0.3 0.39 ± 35% perf-profile.children.cycles-pp.ttwu_do_activate
0.12 ± 16% +0.3 0.39 ± 35% perf-profile.children.cycles-pp.activate_task
0.04 ±102% +0.3 0.31 ± 11% perf-profile.children.cycles-pp.rmqueue
0.12 ± 17% +0.3 0.39 ± 33% perf-profile.children.cycles-pp.enqueue_task_fair
0.00 +0.3 0.28 ± 14% perf-profile.children.cycles-pp.__lru_cache_add
0.10 ± 30% +0.3 0.39 ± 35% perf-profile.children.cycles-pp.enqueue_entity
0.06 ± 65% +0.3 0.34 ± 47% perf-profile.children.cycles-pp.page_vma_mapped_walk
0.11 ± 28% +0.3 0.40 ± 39% perf-profile.children.cycles-pp.blk_mq_make_request
0.00 +0.3 0.30 ± 18% perf-profile.children.cycles-pp.try_charge
0.00 +0.3 0.31 ± 16% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.07 ± 35% +0.3 0.41 ± 21% perf-profile.children.cycles-pp.shrink_active_list
0.11 ± 32% +0.3 0.46 ± 40% perf-profile.children.cycles-pp.generic_make_request
0.11 ± 32% +0.3 0.46 ± 41% perf-profile.children.cycles-pp.submit_bio
0.00 +0.4 0.35 ± 43% perf-profile.children.cycles-pp.end_swap_bio_read
0.06 ± 13% +0.4 0.43 ± 19% perf-profile.children.cycles-pp.ahci_handle_port_intr
0.16 ± 5% +0.4 0.53 ± 35% perf-profile.children.cycles-pp.try_to_wake_up
0.01 ±173% +0.4 0.42 ± 30% perf-profile.children.cycles-pp.page_referenced_one
0.02 ±173% +0.5 0.48 ± 31% perf-profile.children.cycles-pp.swap_readpage
0.09 ± 14% +0.5 0.56 ± 25% perf-profile.children.cycles-pp.handle_irq_event_percpu
0.08 ± 19% +0.5 0.55 ± 25% perf-profile.children.cycles-pp.ahci_single_level_irq_intr
0.08 ± 17% +0.5 0.55 ± 26% perf-profile.children.cycles-pp.__handle_irq_event_percpu
0.00 +0.5 0.48 ± 33% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.14 ± 27% +0.5 0.62 ± 28% perf-profile.children.cycles-pp.page_referenced
0.09 ± 8% +0.5 0.59 ± 25% perf-profile.children.cycles-pp.handle_irq_event
0.09 ± 11% +0.5 0.61 ± 24% perf-profile.children.cycles-pp.handle_edge_irq
0.10 ± 11% +0.5 0.61 ± 25% perf-profile.children.cycles-pp.handle_irq
0.04 ± 58% +0.5 0.57 ± 47% perf-profile.children.cycles-pp.blk_update_request
0.13 ± 20% +0.6 0.70 ± 38% perf-profile.children.cycles-pp.scsi_end_request
0.13 ± 20% +0.6 0.70 ± 39% perf-profile.children.cycles-pp.scsi_io_completion
0.15 ± 21% +0.6 0.73 ± 39% perf-profile.children.cycles-pp.blk_done_softirq
0.02 ±173% +0.6 0.64 ± 32% perf-profile.children.cycles-pp.swapin_readahead
0.02 ±173% +0.6 0.64 ± 32% perf-profile.children.cycles-pp.read_swap_cache_async
0.03 ±102% +0.6 0.67 ± 27% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.00 +0.8 0.85 ± 12% perf-profile.children.cycles-pp.mem_cgroup_try_charge
0.04 ±110% +0.9 0.94 ± 34% perf-profile.children.cycles-pp.do_swap_page
0.02 ±173% +0.9 0.94 ± 14% perf-profile.children.cycles-pp.prep_new_page
0.00 +1.0 0.98 ± 15% perf-profile.children.cycles-pp.mem_cgroup_try_charge_delay
0.22 ± 19% +1.0 1.25 ± 36% perf-profile.children.cycles-pp.do_IRQ
0.28 ± 22% +1.0 1.31 ± 32% perf-profile.children.cycles-pp.ret_from_intr
0.06 ±103% +1.3 1.32 ± 11% perf-profile.children.cycles-pp.get_page_from_freelist
0.00 +1.4 1.38 ± 12% perf-profile.children.cycles-pp.alloc_pages_vma
0.72 ± 8% +1.4 2.16 ± 5% perf-profile.children.cycles-pp._raw_spin_lock
0.59 ± 32% +2.2 2.75 ± 21% perf-profile.children.cycles-pp.kswapd
0.59 ± 32% +2.2 2.75 ± 21% perf-profile.children.cycles-pp.balance_pgdat
1.25 ± 21% +2.4 3.64 ± 20% perf-profile.children.cycles-pp.kthread
1.25 ± 21% +2.4 3.65 ± 20% perf-profile.children.cycles-pp.ret_from_fork
0.17 ±156% +2.6 2.75 ± 92% perf-profile.children.cycles-pp.poll_idle
0.00 +4.2 4.18 ± 3% perf-profile.children.cycles-pp.do_anonymous_page
4.34 ± 21% -2.7 1.67 ± 19% perf-profile.self.cycles-pp.clear_page_erms
2.99 ± 5% -0.8 2.23 ± 7% perf-profile.self.cycles-pp.cpuidle_enter_state
0.73 ± 20% -0.5 0.21 ± 52% perf-profile.self.cycles-pp.clear_subpage
1.74 ± 14% -0.4 1.30 ± 5% perf-profile.self.cycles-pp.native_write_msr
1.48 ± 20% -0.4 1.11 perf-profile.self.cycles-pp.read_tsc
0.54 ± 42% -0.3 0.23 ± 20% perf-profile.self.cycles-pp.calc_global_load_tick
0.96 ± 14% -0.3 0.66 ± 9% perf-profile.self.cycles-pp.rcu_sched_clock_irq
0.90 ± 14% -0.3 0.63 ± 4% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.72 ± 10% -0.2 0.49 perf-profile.self.cycles-pp.native_sched_clock
0.93 ± 3% -0.2 0.71 ± 7% perf-profile.self.cycles-pp.__next_timer_interrupt
0.41 ± 17% -0.2 0.23 ± 13% perf-profile.self.cycles-pp.try_to_unmap_one
0.30 ± 19% -0.1 0.16 ± 6% perf-profile.self.cycles-pp.menu_reflect
0.43 ± 12% -0.1 0.31 ± 3% perf-profile.self.cycles-pp.hrtimer_interrupt
0.68 ± 8% -0.1 0.56 ± 2% perf-profile.self.cycles-pp.do_idle
0.18 ± 16% -0.1 0.11 perf-profile.self.cycles-pp.clockevents_program_event
0.12 ± 24% -0.1 0.06 perf-profile.self.cycles-pp.task_tick_idle
0.04 ± 60% +0.0 0.08 ± 29% perf-profile.self.cycles-pp.retint_kernel
0.00 +0.1 0.05 perf-profile.self.cycles-pp.vmacache_find
0.00 +0.1 0.05 perf-profile.self.cycles-pp.__mod_node_page_state
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.get_page_from_freelist
0.00 +0.1 0.07 ± 7% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.16 ± 32% +0.1 0.23 ± 4% perf-profile.self.cycles-pp.account_process_tick
0.00 +0.1 0.07 ± 28% perf-profile.self.cycles-pp.mem_cgroup_try_charge_delay
0.00 +0.1 0.07 perf-profile.self.cycles-pp.handle_mm_fault
0.00 +0.1 0.07 ± 20% perf-profile.self.cycles-pp.ahci_handle_port_interrupt
0.00 +0.1 0.08 ± 6% perf-profile.self.cycles-pp.do_anonymous_page
0.00 +0.1 0.08 ± 12% perf-profile.self.cycles-pp.down_read_trylock
0.00 +0.1 0.08 ± 12% perf-profile.self.cycles-pp.mem_cgroup_throttle_swaprate
0.00 +0.1 0.09 ± 22% perf-profile.self.cycles-pp.___perf_sw_event
0.00 +0.1 0.11 ± 18% perf-profile.self.cycles-pp.__list_del_entry_valid
0.00 +0.1 0.12 ± 39% perf-profile.self.cycles-pp.ahci_single_level_irq_intr
0.00 +0.1 0.12 ± 50% perf-profile.self.cycles-pp.__slab_free
0.00 +0.1 0.12 ± 12% perf-profile.self.cycles-pp.lru_cache_add_active_or_unevictable
0.00 +0.1 0.12 ± 19% perf-profile.self.cycles-pp.ahci_scr_read
0.00 +0.1 0.13 ± 23% perf-profile.self.cycles-pp.rmqueue_bulk
0.00 +0.1 0.15 ± 10% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.00 +0.1 0.15 ± 31% perf-profile.self.cycles-pp.ahci_handle_port_intr
0.01 ±173% +0.2 0.18 ± 13% perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.00 +0.2 0.21 ± 4% perf-profile.self.cycles-pp.__handle_mm_fault
0.00 +0.3 0.26 ± 25% perf-profile.self.cycles-pp.sync_regs
0.01 ±173% +0.3 0.28 ± 49% perf-profile.self.cycles-pp.page_vma_mapped_walk
0.00 +0.3 0.29 ± 17% perf-profile.self.cycles-pp.prep_new_page
0.00 +0.3 0.30 ± 15% perf-profile.self.cycles-pp.try_charge
0.00 +0.4 0.39 ± 6% perf-profile.self.cycles-pp.mem_cgroup_try_charge
0.00 +0.5 0.46 ± 33% perf-profile.self.cycles-pp.prepare_exit_to_usermode
0.00 +0.9 0.89 ± 5% perf-profile.self.cycles-pp.do_access
0.70 ± 8% +1.4 2.06 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
0.16 ±155% +2.4 2.56 ± 93% perf-profile.self.cycles-pp.poll_idle
vm-scalability.throughput
140000 +------------------------------------------------------------------+
| O O O O O O O O O O O O O O O |
120000 |-+ |
| .+. |
100000 |-+ .+.+..+..+. .+. .+. +..|
|..+.+..+.+. +. +..+..+.+..+.+..+.+..+..+.+..+ |
80000 |-+ |
| |
60000 |-+ |
| |
40000 |-+ |
| |
20000 |-+ |
| |
0 +------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
View attachment "config-5.3.0-00003-gb39d0ee2632d2" of type "text/plain" (196328 bytes)
View attachment "job-script" of type "text/plain" (8069 bytes)
View attachment "job.yaml" of type "text/plain" (5762 bytes)
View attachment "reproduce" of type "text/plain" (945 bytes)