Message-ID: <20210414140805.GB28254@xsang-OptiPlex-9020>
Date: Wed, 14 Apr 2021 22:08:05 +0800
From: kernel test robot <oliver.sang@...el.com>
To: Christoph Hellwig <hch@....de>
Cc: Jens Axboe <axboe@...nel.dk>, LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
lkp@...ts.01.org, lkp@...el.com, ying.huang@...el.com,
feng.tang@...el.com, zhengjun.xing@...el.com
Subject: [block] c76f48eb5c: stress-ng.loop.ops_per_sec -99.9% regression
Greetings,
FYI, we noticed a -99.9% regression of stress-ng.loop.ops_per_sec due to commit:
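The headline figure follows directly from the two per-kernel means reported in the stress-ng table below; a quick arithmetic check (values copied from this report, nothing added):

```python
# Means from the stress-ng section of this report:
# d3c4a43d92 (parent) vs. c76f48eb5c (patched).
parent_ops_per_sec = 24.35
patched_ops_per_sec = 0.03

pct = (patched_ops_per_sec - parent_ops_per_sec) / parent_ops_per_sec * 100
print(f"{pct:.1f}%")  # -99.9%, matching the subject line

# The elapsed-time blowup reported further down checks out the same way.
parent_elapsed = 62.28
patched_elapsed = 553.94
print(f"{(patched_elapsed - parent_elapsed) / parent_elapsed * 100:+.1f}%")  # +789.4%
```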
commit: c76f48eb5c084b1e15c931ae8cc1826cd771d70d ("block: take bd_mutex around delete_partitions in del_gendisk")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: stress-ng
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
with the following parameters:
nr_threads: 10%
disk: 1HDD
testtime: 60s
fs: ext4
class: os
test: loop
cpufreq_governor: performance
ucode: 0x5003006
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@...el.com>
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp split-job --compatible job.yaml
bin/lkp run compatible-job.yaml
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
os/gcc-9/performance/1HDD/ext4/x86_64-rhel-8.3/10%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp5/loop/stress-ng/60s/0x5003006
commit:
d3c4a43d92 ("block: refactor blk_drop_partitions")
c76f48eb5c ("block: take bd_mutex around delete_partitions in del_gendisk")
d3c4a43d9291279c c76f48eb5c084b1e15c931ae8cc
---------------- ---------------------------
d3c4a43d92 mean ±%stddev      %change      c76f48eb5c mean ±%stddev
1464 -99.2% 12.00 ± 33% stress-ng.loop.ops
24.35 -99.9% 0.03 ± 41% stress-ng.loop.ops_per_sec
62.28 +789.4% 553.94 ± 13% stress-ng.time.elapsed_time
62.28 +789.4% 553.94 ± 13% stress-ng.time.elapsed_time.max
3671 -99.1% 32.20 ± 27% stress-ng.time.involuntary_context_switches
9792 -1.6% 9636 stress-ng.time.minor_page_faults
36.17 -100.0% 0.00 stress-ng.time.percent_of_cpu_this_job_got
22.65 ± 2% -98.9% 0.26 ± 35% stress-ng.time.system_time
59875 ± 14% -98.6% 851.60 ±110% stress-ng.time.voluntary_context_switches
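The stress-ng numbers above are internally consistent with the configured 60 s testtime on the parent kernel, while the patched run overshoots its time budget by roughly 9x (~554 s elapsed against 60 s), which suggests the stressor is blocking rather than merely running slower. A minimal consistency check, using only means from the table above:

```python
# Parent-kernel (d3c4a43d92) means from the stress-ng table; the job
# parameters earlier in this report set testtime to 60s.
ops, ops_per_sec, testtime = 1464, 24.35, 60

expected_ops = ops_per_sec * testtime
print(expected_ops)  # ~1461, within ~0.3% of the reported ops count
```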
2189 ± 2% -2.9% 2125 boot-time.idle
102.01 +482.0% 593.67 ± 12% uptime.boot
9211 +501.5% 55405 ± 11% uptime.idle
0.60 ± 3% -0.6 0.02 ± 12% mpstat.cpu.all.iowait%
1.27 +0.6 1.86 ± 27% mpstat.cpu.all.irq%
0.59 -0.6 0.03 ± 2% mpstat.cpu.all.sys%
0.07 ± 2% -0.1 0.01 ± 2% mpstat.cpu.all.usr%
179096 ± 16% +272.4% 666980 ± 17% numa-numastat.node0.local_node
224078 ± 5% +220.9% 718975 ± 19% numa-numastat.node0.numa_hit
155998 ± 18% +281.6% 595329 ± 33% numa-numastat.node1.local_node
199503 ± 4% +215.8% 630069 ± 31% numa-numastat.node1.numa_hit
1598 ± 3% -99.9% 1.20 ±122% vmstat.io.bi
5044 -66.3% 1700 ± 2% vmstat.memory.buff
1.00 -100.0% 0.00 vmstat.procs.r
8439 ± 4% -83.8% 1367 ± 6% vmstat.system.cs
5636242 -49.3% 2860359 ± 6% cpuidle.C1.time
60648 ± 2% -25.4% 45246 ± 7% cpuidle.C1.usage
5.204e+09 ± 22% +607.9% 3.684e+10 ± 41% cpuidle.C1E.time
11235694 ± 14% +684.5% 88145411 ± 16% cpuidle.C1E.usage
41714 ± 6% +162.2% 109376 ± 11% cpuidle.POLL.usage
9288 +58.8% 14748 ± 4% meminfo.Active
3197 +296.2% 12670 ± 5% meminfo.Active(anon)
6089 -65.9% 2076 ± 5% meminfo.Active(file)
86058 ± 5% +107.8% 178815 meminfo.AnonHugePages
4981 -65.9% 1698 ± 2% meminfo.Buffers
825613 -12.4% 723628 ± 3% meminfo.Committed_AS
16237013 ± 3% -11.0% 14454104 ± 8% meminfo.DirectMap2M
18587 -12.6% 16252 meminfo.KernelStack
2850592 -17.0% 2366313 meminfo.Memused
7.00 +26105.7% 1834 ± 24% meminfo.Mlocked
5522 -14.7% 4711 meminfo.PageTables
23075 +42.7% 32929 ± 2% meminfo.Shmem
915.33 ± 33% -73.1% 245.80 ± 23% numa-vmstat.node0.nr_active_file
40463 ± 35% -39.9% 24330 ± 69% numa-vmstat.node0.nr_anon_pages
42808 ± 31% -36.5% 27171 ± 57% numa-vmstat.node0.nr_inactive_anon
0.00 +2.5e+104% 247.20 ± 26% numa-vmstat.node0.nr_mlock
797.33 ± 28% -31.1% 549.60 ± 24% numa-vmstat.node0.nr_page_table_pages
12722 ± 3% -13.8% 10971 ± 9% numa-vmstat.node0.nr_slab_reclaimable
27196 ± 8% -13.5% 23535 ± 8% numa-vmstat.node0.nr_slab_unreclaimable
915.33 ± 33% -73.1% 245.80 ± 23% numa-vmstat.node0.nr_zone_active_file
42808 ± 31% -36.5% 27171 ± 57% numa-vmstat.node0.nr_zone_inactive_anon
849837 ± 15% +26.7% 1076564 ± 9% numa-vmstat.node0.numa_local
457.67 ± 20% +399.8% 2287 ± 17% numa-vmstat.node1.nr_active_anon
8770 ± 8% -15.2% 7437 ± 3% numa-vmstat.node1.nr_kernel_stack
0.00 +2.1e+104% 210.40 ± 29% numa-vmstat.node1.nr_mlock
457.67 ± 20% +399.8% 2287 ± 17% numa-vmstat.node1.nr_zone_active_anon
3665 ± 33% -73.1% 985.00 ± 23% numa-meminfo.node0.Active(file)
161847 ± 35% -39.9% 97309 ± 69% numa-meminfo.node0.AnonPages
171672 ± 31% -36.4% 109247 ± 57% numa-meminfo.node0.Inactive
171227 ± 31% -36.5% 108673 ± 57% numa-meminfo.node0.Inactive(anon)
50893 ± 3% -13.8% 43888 ± 9% numa-meminfo.node0.KReclaimable
1526012 ± 4% -26.1% 1127392 ± 12% numa-meminfo.node0.MemUsed
3.00 +32893.3% 989.80 ± 26% numa-meminfo.node0.Mlocked
3195 ± 28% -31.4% 2193 ± 24% numa-meminfo.node0.PageTables
50893 ± 3% -13.8% 43888 ± 9% numa-meminfo.node0.SReclaimable
108773 ± 8% -13.5% 94141 ± 8% numa-meminfo.node0.SUnreclaim
159668 ± 6% -13.6% 138030 ± 6% numa-meminfo.node0.Slab
4255 ± 29% +140.8% 10246 ± 14% numa-meminfo.node1.Active
1829 ± 20% +400.4% 9154 ± 17% numa-meminfo.node1.Active(anon)
8775 ± 8% -15.2% 7439 ± 3% numa-meminfo.node1.KernelStack
3.00 +28053.3% 844.60 ± 29% numa-meminfo.node1.Mlocked
799.67 +296.1% 3167 ± 5% proc-vmstat.nr_active_anon
1522 -65.9% 518.80 ± 5% proc-vmstat.nr_active_file
63795 -8.3% 58521 proc-vmstat.nr_anon_pages
87.00 ± 28% +78.9% 155.60 ± 28% proc-vmstat.nr_dirtied
68723 -7.6% 63478 proc-vmstat.nr_inactive_anon
18586 -12.6% 16247 proc-vmstat.nr_kernel_stack
10029 -4.4% 9587 proc-vmstat.nr_mapped
1.00 +45720.0% 458.20 ± 24% proc-vmstat.nr_mlock
1380 -14.9% 1174 proc-vmstat.nr_page_table_pages
5765 +42.8% 8231 ± 2% proc-vmstat.nr_shmem
24003 -5.8% 22600 proc-vmstat.nr_slab_reclaimable
49519 -7.8% 45650 proc-vmstat.nr_slab_unreclaimable
77.50 ± 31% +95.1% 151.20 ± 29% proc-vmstat.nr_written
799.67 +296.1% 3167 ± 5% proc-vmstat.nr_zone_active_anon
1522 -65.9% 518.80 ± 5% proc-vmstat.nr_zone_active_file
68723 -7.6% 63478 proc-vmstat.nr_zone_inactive_anon
971.50 ± 30% -58.3% 405.00 ± 51% proc-vmstat.numa_hint_faults
457086 +202.7% 1383638 ± 10% proc-vmstat.numa_hit
368542 +251.9% 1296853 ± 11% proc-vmstat.numa_local
88543 -2.0% 86784 proc-vmstat.numa_other
9561 ± 2% -60.7% 3757 ± 5% proc-vmstat.pgactivate
553335 +163.0% 1455536 ± 10% proc-vmstat.pgalloc_normal
220882 +623.5% 1598121 ± 12% proc-vmstat.pgfault
415665 +247.1% 1442871 ± 10% proc-vmstat.pgfree
104075 ± 3% -99.1% 916.80 ± 69% proc-vmstat.pgpgin
451476 ± 7% +734.6% 3767973 ± 15% proc-vmstat.pgpgout
14210 +649.7% 106533 ± 12% proc-vmstat.pgreuse
8304 ± 82% +301.2% 33317 ± 9% sched_debug.cfs_rq:/.load.avg
46081 ±116% +188.8% 133105 ± 7% sched_debug.cfs_rq:/.load.stddev
178.19 ± 31% -73.7% 46.80 ± 10% sched_debug.cfs_rq:/.load_avg.avg
6121 ± 14% -76.4% 1442 ± 11% sched_debug.cfs_rq:/.load_avg.max
751.44 ± 17% -72.0% 210.71 ± 9% sched_debug.cfs_rq:/.load_avg.stddev
0.17 ± 17% -58.6% 0.07 ± 8% sched_debug.cfs_rq:/.nr_running.avg
0.37 ± 7% -36.2% 0.24 ± 3% sched_debug.cfs_rq:/.nr_running.stddev
393.19 ± 8% -83.9% 63.44 ± 6% sched_debug.cfs_rq:/.runnable_avg.avg
1333 ± 17% -51.9% 641.22 ± 3% sched_debug.cfs_rq:/.runnable_avg.max
317.55 ± 10% -61.0% 123.72 ± 4% sched_debug.cfs_rq:/.runnable_avg.stddev
392.44 ± 8% -83.9% 63.32 ± 6% sched_debug.cfs_rq:/.util_avg.avg
1330 ± 17% -51.9% 640.31 ± 3% sched_debug.cfs_rq:/.util_avg.max
316.96 ± 10% -61.0% 123.61 ± 4% sched_debug.cfs_rq:/.util_avg.stddev
37.95 ± 33% -86.6% 5.08 ± 34% sched_debug.cfs_rq:/.util_est_enqueued.avg
518.00 ± 5% -79.3% 107.10 ± 25% sched_debug.cfs_rq:/.util_est_enqueued.max
121.83 ± 18% -83.3% 20.33 ± 23% sched_debug.cfs_rq:/.util_est_enqueued.stddev
740981 ± 2% +26.7% 938799 sched_debug.cpu.avg_idle.avg
1996 ± 23% +6329.5% 128386 ± 56% sched_debug.cpu.avg_idle.min
306945 ± 5% -53.4% 143028 ± 13% sched_debug.cpu.avg_idle.stddev
38618 +637.1% 284649 ± 15% sched_debug.cpu.clock.avg
38623 +637.0% 284653 ± 15% sched_debug.cpu.clock.max
38613 +637.2% 284645 ± 15% sched_debug.cpu.clock.min
38490 +626.0% 279434 ± 14% sched_debug.cpu.clock_task.avg
38609 +624.6% 279743 ± 14% sched_debug.cpu.clock_task.max
33619 +716.4% 274485 ± 15% sched_debug.cpu.clock_task.min
286.92 ± 19% -51.0% 140.52 ± 7% sched_debug.cpu.curr->pid.avg
2432 +254.8% 8630 ± 12% sched_debug.cpu.curr->pid.max
755.23 ± 8% +28.1% 967.57 ± 10% sched_debug.cpu.curr->pid.stddev
897805 ± 31% -40.4% 535075 sched_debug.cpu.max_idle_balance_cost.max
55398 ± 60% -89.2% 5981 ± 39% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 13% -48.9% 0.00 ± 8% sched_debug.cpu.next_balance.stddev
0.13 ± 18% -73.1% 0.03 ± 11% sched_debug.cpu.nr_running.avg
0.33 ± 7% -49.8% 0.17 ± 3% sched_debug.cpu.nr_running.stddev
1577 +225.4% 5133 ± 10% sched_debug.cpu.nr_switches.avg
10235 ± 13% +526.1% 64083 ± 22% sched_debug.cpu.nr_switches.max
471.33 ± 17% +200.5% 1416 ± 16% sched_debug.cpu.nr_switches.min
1438 ± 6% +476.3% 8287 ± 14% sched_debug.cpu.nr_switches.stddev
0.01 ± 31% +854.0% 0.12 ± 4% sched_debug.cpu.nr_uninterruptible.avg
38615 +637.1% 284646 ± 15% sched_debug.cpu_clk
38121 +645.4% 284151 ± 15% sched_debug.ktime
38979 +631.2% 285008 ± 15% sched_debug.sched_clk
11.12 ± 2% +175.9% 30.68 ±104% perf-stat.i.MPKI
3.418e+08 -49.6% 1.723e+08 ± 7% perf-stat.i.branch-instructions
1.34 ± 4% +2.2 3.49 ±107% perf-stat.i.branch-miss-rate%
8485 ± 4% -84.2% 1344 ± 6% perf-stat.i.context-switches
3.44 ± 3% +52.6% 5.25 ± 28% perf-stat.i.cpi
160.47 -39.2% 97.62 perf-stat.i.cpu-migrations
4.201e+08 -46.4% 2.254e+08 ± 3% perf-stat.i.dTLB-loads
2.097e+08 -41.9% 1.217e+08 ± 2% perf-stat.i.dTLB-stores
2039329 -16.7% 1698838 ± 21% perf-stat.i.iTLB-loads
1.625e+09 -49.1% 8.277e+08 ± 7% perf-stat.i.instructions
1029 ± 4% -41.2% 605.47 ± 22% perf-stat.i.instructions-per-iTLB-miss
0.30 ± 3% -31.8% 0.21 ± 22% perf-stat.i.ipc
11.18 -85.5% 1.62 ± 20% perf-stat.i.major-faults
1.64 ± 9% -40.8% 0.97 ± 17% perf-stat.i.metric.K/sec
10.33 -45.0% 5.69 ± 5% perf-stat.i.metric.M/sec
3162 -11.6% 2796 perf-stat.i.minor-faults
500094 ± 3% -59.2% 203915 ± 34% perf-stat.i.node-load-misses
86458 ± 5% -65.8% 29541 ± 55% perf-stat.i.node-loads
273470 -86.8% 36116 ± 14% perf-stat.i.node-store-misses
60952 ± 9% -86.2% 8425 ±101% perf-stat.i.node-stores
3205 -12.7% 2797 perf-stat.i.page-faults
10.58 ± 2% +182.6% 29.91 ±104% perf-stat.overall.MPKI
1.37 ± 3% +2.1 3.48 ±104% perf-stat.overall.branch-miss-rate%
3.29 ± 3% +55.9% 5.13 ± 28% perf-stat.overall.cpi
1034 ± 4% -41.2% 608.25 ± 22% perf-stat.overall.instructions-per-iTLB-miss
0.30 ± 3% -31.6% 0.21 ± 22% perf-stat.overall.ipc
3.363e+08 -48.9% 1.72e+08 ± 7% perf-stat.ps.branch-instructions
8351 ± 4% -83.9% 1342 ± 6% perf-stat.ps.context-switches
94471 +1.4% 95822 perf-stat.ps.cpu-clock
157.89 -38.3% 97.44 perf-stat.ps.cpu-migrations
4.134e+08 -45.6% 2.25e+08 ± 3% perf-stat.ps.dTLB-loads
2.063e+08 -41.1% 1.215e+08 ± 2% perf-stat.ps.dTLB-stores
2006862 -15.5% 1695650 ± 21% perf-stat.ps.iTLB-loads
1.6e+09 -48.3% 8.263e+08 ± 7% perf-stat.ps.instructions
11.01 -85.2% 1.63 ± 20% perf-stat.ps.major-faults
3112 -10.3% 2791 perf-stat.ps.minor-faults
492165 ± 3% -58.6% 203545 ± 33% perf-stat.ps.node-load-misses
85077 ± 5% -65.3% 29492 ± 55% perf-stat.ps.node-loads
269158 -86.6% 36060 ± 14% perf-stat.ps.node-store-misses
59987 ± 9% -86.0% 8420 ±101% perf-stat.ps.node-stores
3154 -11.5% 2793 perf-stat.ps.page-faults
94471 +1.4% 95822 perf-stat.ps.task-clock
1.016e+11 +350.5% 4.579e+11 ± 13% perf-stat.total.instructions
3935 ± 4% -57.2% 1684 ± 6% slabinfo.Acpi-Parse.active_objs
3935 ± 4% -57.2% 1684 ± 6% slabinfo.Acpi-Parse.num_objs
1575 -84.5% 244.20 ± 6% slabinfo.bdev_cache.active_objs
1575 -84.5% 244.20 ± 6% slabinfo.bdev_cache.num_objs
323.00 ± 11% -33.7% 214.00 ± 22% slabinfo.biovec-128.num_objs
2514 ± 4% -42.9% 1435 ± 4% slabinfo.buffer_head.active_objs
2535 ± 4% -43.4% 1435 ± 4% slabinfo.buffer_head.num_objs
4061 ± 5% -20.4% 3232 ± 7% slabinfo.dmaengine-unmap-16.active_objs
4072 ± 5% -20.6% 3233 ± 7% slabinfo.dmaengine-unmap-16.num_objs
2479 -90.3% 240.20 ± 28% slabinfo.dquot.active_objs
2479 -90.3% 240.20 ± 28% slabinfo.dquot.num_objs
5213 ± 13% -68.9% 1620 ± 23% slabinfo.ext4_extent_status.active_objs
5213 ± 13% -68.9% 1620 ± 23% slabinfo.ext4_extent_status.num_objs
4.67 ±127% +4400.0% 210.00 ± 36% slabinfo.ext4_pending_reservation.active_objs
4.67 ±127% +4400.0% 210.00 ± 36% slabinfo.ext4_pending_reservation.num_objs
2360 ± 2% -49.0% 1204 ± 5% slabinfo.file_lock_cache.active_objs
2360 ± 2% -49.0% 1204 ± 5% slabinfo.file_lock_cache.num_objs
7031 ± 5% -47.0% 3729 ± 14% slabinfo.fsnotify_mark_connector.active_objs
7031 ± 5% -47.0% 3729 ± 14% slabinfo.fsnotify_mark_connector.num_objs
1417 -9.7% 1280 slabinfo.inode_cache.active_slabs
76560 -9.7% 69145 slabinfo.inode_cache.num_objs
1417 -9.7% 1280 slabinfo.inode_cache.num_slabs
99699 -14.6% 85176 slabinfo.kernfs_node_cache.active_objs
3119 -14.7% 2661 slabinfo.kernfs_node_cache.active_slabs
99815 -14.7% 85176 slabinfo.kernfs_node_cache.num_objs
3119 -14.7% 2661 slabinfo.kernfs_node_cache.num_slabs
44972 -18.9% 36462 slabinfo.kmalloc-16.num_objs
7059 ± 2% -18.4% 5763 ± 2% slabinfo.kmalloc-192.num_objs
6610 ± 2% -12.7% 5771 ± 2% slabinfo.kmalloc-1k.active_objs
7524 ± 2% -21.1% 5937 slabinfo.kmalloc-1k.num_objs
9426 ± 2% -8.0% 8676 ± 3% slabinfo.kmalloc-256.active_objs
9582 ± 2% -8.8% 8737 ± 3% slabinfo.kmalloc-256.num_objs
7902 ± 2% -9.3% 7170 ± 3% slabinfo.kmalloc-2k.active_objs
8225 ± 2% -11.7% 7265 ± 3% slabinfo.kmalloc-2k.num_objs
2221 -12.4% 1945 slabinfo.kmalloc-4k.num_objs
6092 +29.2% 7873 ± 3% slabinfo.kmalloc-96.active_objs
7021 ± 2% +13.9% 7994 ± 3% slabinfo.kmalloc-96.num_objs
3282 -84.4% 513.20 ± 6% slabinfo.kmalloc-rcl-192.active_objs
3282 -84.4% 513.20 ± 6% slabinfo.kmalloc-rcl-192.num_objs
5514 ± 2% -11.6% 4875 slabinfo.kmalloc-rcl-512.active_objs
5518 ± 2% -11.6% 4876 slabinfo.kmalloc-rcl-512.num_objs
6565 ± 3% -21.8% 5131 ± 8% slabinfo.kmalloc-rcl-64.active_objs
6565 ± 3% -21.8% 5131 ± 8% slabinfo.kmalloc-rcl-64.num_objs
2880 ± 6% -19.0% 2332 ± 11% slabinfo.kmalloc-rcl-96.active_objs
2880 ± 6% -19.0% 2332 ± 11% slabinfo.kmalloc-rcl-96.num_objs
6217 -77.1% 1424 ± 21% slabinfo.mbcache.active_objs
6217 -77.1% 1424 ± 21% slabinfo.mbcache.num_objs
13404 +13.2% 15170 slabinfo.proc_inode_cache.active_objs
13413 +13.3% 15197 slabinfo.proc_inode_cache.num_objs
1246 -88.5% 142.80 ± 13% slabinfo.request_queue.active_objs
1246 -88.5% 142.80 ± 13% slabinfo.request_queue.num_objs
528.17 ± 7% -33.5% 351.00 ± 13% slabinfo.skbuff_fclone_cache.active_objs
528.17 ± 7% -33.5% 351.00 ± 13% slabinfo.skbuff_fclone_cache.num_objs
1399 -9.4% 1268 slabinfo.task_struct.active_objs
1403 -9.2% 1274 slabinfo.task_struct.active_slabs
1403 -9.2% 1274 slabinfo.task_struct.num_objs
1403 -9.2% 1274 slabinfo.task_struct.num_slabs
5015 ± 3% -23.2% 3852 ± 3% slabinfo.trace_event_file.active_objs
5015 ± 3% -23.2% 3852 ± 3% slabinfo.trace_event_file.num_objs
11008 +54.0% 16950 ± 8% slabinfo.vmap_area.active_objs
11022 +55.3% 17115 ± 8% slabinfo.vmap_area.num_objs
0.00 ±223% +1820.0% 0.01 ± 34% perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.00 ± 56% +232.3% 0.01 ± 26% perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.00 ±152% +428.0% 0.01 ± 30% perf-sched.sch_delay.avg.ms.schedule_timeout.ext4_lazyinit_thread.part.0.kthread
0.00 ±141% +644.0% 0.01 ± 31% perf-sched.sch_delay.avg.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait
0.00 ±223% +753.3% 0.01 ± 32% perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
0.00 ± 22% +268.3% 0.02 ± 98% perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.00 ±223% +2930.0% 0.02 ± 28% perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.01 ± 63% +550.0% 0.04 ± 61% perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.00 ±152% +756.0% 0.02 ± 35% perf-sched.sch_delay.max.ms.schedule_timeout.ext4_lazyinit_thread.part.0.kthread
0.00 ±141% +920.0% 0.02 ± 22% perf-sched.sch_delay.max.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait
0.00 ±223% +1126.7% 0.02 ± 25% perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
0.01 ± 30% +25140.0% 2.02 ±197% perf-sched.sch_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.18 ± 25% +1.2e+05% 219.77 ± 10% perf-sched.total_wait_and_delay.average.ms
332.83 ± 18% +1933.6% 6768 ± 10% perf-sched.total_wait_and_delay.count.ms
14.17 ± 33% +70409.9% 9992 perf-sched.total_wait_and_delay.max.ms
0.16 ± 31% +1.3e+05% 219.43 ± 10% perf-sched.total_wait_time.average.ms
13.18 ± 43% +75740.1% 9992 perf-sched.total_wait_time.max.ms
0.00 ±223% +1.2e+08% 818.24 perf-sched.wait_and_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.25 ±123% +1.1e+05% 272.84 ± 5% perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.02 ±146% +3853.7% 0.75 ± 2% perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.04 ± 43% +2.2e+05% 76.70 ± 5% perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.00 ±141% +1.7e+07% 289.31 ± 12% perf-sched.wait_and_delay.avg.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait
0.00 ±223% +3.2e+07% 478.42 perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
2.71 ± 35% +583.0% 18.48 ± 61% perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.06 ±132% +1.3e+06% 723.00 ± 3% perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
1.48 ± 92% +30586.5% 453.90 ± 7% perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
0.50 ±100% +2100.0% 11.00 perf-sched.wait_and_delay.count.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
5.67 ± 13% +4223.5% 245.00 perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.83 ±107% +29108.0% 243.40 perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
104.83 ± 58% +1907.8% 2104 ± 5% perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read
96.33 +956.3% 1017 ± 4% perf-sched.wait_and_delay.count.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
0.33 ±141% +9920.0% 33.40 ± 14% perf-sched.wait_and_delay.count.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait
0.17 ±223% +23540.0% 39.40 ± 2% perf-sched.wait_and_delay.count.schedule_timeout.kcompactd.kthread.ret_from_fork
4.33 ± 31% +20036.9% 872.60 ± 79% perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
97.00 +1149.9% 1212 ± 3% perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork
5.50 ± 20% +10583.6% 587.60 ± 2% perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork
0.00 ±223% +1.5e+08% 1000 perf-sched.wait_and_delay.max.ms.do_nanosleep.hrtimer_nanosleep.__x64_sys_nanosleep.do_syscall_64
0.97 ±137% +3.9e+05% 3771 ± 65% perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.03 ±148% +40301.4% 13.40 perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.44 ±101% +2.3e+05% 1006 perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.00 ±223% +2.1e+08% 9992 perf-sched.wait_and_delay.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
0.00 ±141% +1.8e+07% 301.75 ± 12% perf-sched.wait_and_delay.max.ms.schedule_timeout.io_schedule_timeout.wait_for_completion_io.submit_bio_wait
0.00 ±223% +3.4e+07% 505.03 perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
4.46 ± 25% +6607.2% 299.40 ± 48% perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
4.80 ±147% +31635.7% 1522 ± 67% perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
7.15 ±103% +1.3e+05% 8972 perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork
0.25 ±124% +1.1e+05% 272.84 ± 5% perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.02 ±149% +4656.6% 0.75 ± 2% perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.03 ± 51% +2.6e+05% 76.70 ± 5% perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
2.08 ± 21% +2272.6% 49.41 ±143% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
2.70 ± 35% +583.6% 18.46 ± 61% perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
0.03 ±222% +2.4e+06% 722.99 ± 3% perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork
0.93 ±150% +48396.0% 450.37 ± 6% perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork
0.97 ±137% +3.9e+05% 3771 ± 65% perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.03 ±149% +42879.3% 13.40 perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.43 ±106% +2.3e+05% 1006 perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
0.00 ±223% +2.1e+08% 9992 perf-sched.wait_time.max.ms.preempt_schedule_common.__cond_resched.wait_for_completion.affine_move_task.__set_cpus_allowed_ptr
9.79 ± 29% +4253.2% 426.40 ± 19% perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
2.42 ±141% +281.7% 9.22 ± 2% perf-sched.wait_time.max.ms.schedule_timeout.ext4_lazyinit_thread.part.0.kthread
4.46 ± 25% +6613.9% 299.37 ± 48% perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
2.96 ±222% +51425.8% 1522 ± 67% perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork
5.46 ±154% +1.6e+05% 8972 perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork
128.00 +768.1% 1111 ± 12% interrupts.9:IO-APIC.9-fasteoi.acpi
64630 +21.1% 78254 ± 3% interrupts.CAL:Function_call_interrupts
127590 +770.8% 1111103 ± 13% interrupts.CPU0.LOC:Local_timer_interrupts
128.00 +768.1% 1111 ± 12% interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
127666 +770.3% 1111040 ± 13% interrupts.CPU1.LOC:Local_timer_interrupts
127702 +770.1% 1111139 ± 13% interrupts.CPU10.LOC:Local_timer_interrupts
127701 +770.1% 1111121 ± 13% interrupts.CPU11.LOC:Local_timer_interrupts
127505 +771.5% 1111167 ± 13% interrupts.CPU12.LOC:Local_timer_interrupts
127697 +770.1% 1111136 ± 13% interrupts.CPU13.LOC:Local_timer_interrupts
127617 +770.7% 1111119 ± 13% interrupts.CPU14.LOC:Local_timer_interrupts
127415 +772.2% 1111307 ± 13% interrupts.CPU15.LOC:Local_timer_interrupts
127685 +770.3% 1111238 ± 13% interrupts.CPU16.LOC:Local_timer_interrupts
127875 +768.9% 1111078 ± 13% interrupts.CPU17.LOC:Local_timer_interrupts
127440 +771.9% 1111101 ± 13% interrupts.CPU18.LOC:Local_timer_interrupts
127667 +770.4% 1111256 ± 13% interrupts.CPU19.LOC:Local_timer_interrupts
837.00 ± 18% -26.5% 615.20 ± 2% interrupts.CPU2.CAL:Function_call_interrupts
127702 +770.1% 1111115 ± 13% interrupts.CPU2.LOC:Local_timer_interrupts
263.00 ± 43% -53.1% 123.40 ± 40% interrupts.CPU2.NMI:Non-maskable_interrupts
263.00 ± 43% -53.1% 123.40 ± 40% interrupts.CPU2.PMI:Performance_monitoring_interrupts
127644 +770.4% 1111076 ± 13% interrupts.CPU20.LOC:Local_timer_interrupts
127463 +771.7% 1111091 ± 13% interrupts.CPU21.LOC:Local_timer_interrupts
291.67 ± 15% -60.8% 114.20 ± 16% interrupts.CPU21.NMI:Non-maskable_interrupts
291.67 ± 15% -60.8% 114.20 ± 16% interrupts.CPU21.PMI:Performance_monitoring_interrupts
127839 +769.1% 1111087 ± 13% interrupts.CPU22.LOC:Local_timer_interrupts
127883 +769.1% 1111402 ± 13% interrupts.CPU23.LOC:Local_timer_interrupts
665.50 ± 7% -9.8% 600.40 interrupts.CPU24.CAL:Function_call_interrupts
127511 +771.4% 1111102 ± 13% interrupts.CPU24.LOC:Local_timer_interrupts
127638 +770.5% 1111094 ± 13% interrupts.CPU25.LOC:Local_timer_interrupts
127979 +768.3% 1111283 ± 13% interrupts.CPU26.LOC:Local_timer_interrupts
127715 +770.1% 1111187 ± 13% interrupts.CPU27.LOC:Local_timer_interrupts
242.00 ± 37% -44.0% 135.60 ± 27% interrupts.CPU27.NMI:Non-maskable_interrupts
242.00 ± 37% -44.0% 135.60 ± 27% interrupts.CPU27.PMI:Performance_monitoring_interrupts
127644 +770.5% 1111182 ± 13% interrupts.CPU28.LOC:Local_timer_interrupts
127860 +769.0% 1111086 ± 13% interrupts.CPU29.LOC:Local_timer_interrupts
127959 +768.4% 1111218 ± 13% interrupts.CPU3.LOC:Local_timer_interrupts
232.50 ± 19% -46.3% 124.80 ± 38% interrupts.CPU3.NMI:Non-maskable_interrupts
232.50 ± 19% -46.3% 124.80 ± 38% interrupts.CPU3.PMI:Performance_monitoring_interrupts
127875 +768.8% 1111027 ± 13% interrupts.CPU30.LOC:Local_timer_interrupts
248.83 ± 33% -44.7% 137.60 ± 29% interrupts.CPU30.NMI:Non-maskable_interrupts
248.83 ± 33% -44.7% 137.60 ± 29% interrupts.CPU30.PMI:Performance_monitoring_interrupts
128053 +767.6% 1111029 ± 13% interrupts.CPU31.LOC:Local_timer_interrupts
206.50 ± 21% -44.3% 115.00 ± 16% interrupts.CPU31.NMI:Non-maskable_interrupts
206.50 ± 21% -44.3% 115.00 ± 16% interrupts.CPU31.PMI:Performance_monitoring_interrupts
127400 +772.1% 1111026 ± 13% interrupts.CPU32.LOC:Local_timer_interrupts
127540 +771.2% 1111145 ± 13% interrupts.CPU33.LOC:Local_timer_interrupts
127782 +769.6% 1111188 ± 13% interrupts.CPU34.LOC:Local_timer_interrupts
127628 +770.5% 1111058 ± 13% interrupts.CPU35.LOC:Local_timer_interrupts
187.83 ± 30% -37.3% 117.80 ± 14% interrupts.CPU35.NMI:Non-maskable_interrupts
187.83 ± 30% -37.3% 117.80 ± 14% interrupts.CPU35.PMI:Performance_monitoring_interrupts
127515 +771.4% 1111140 ± 13% interrupts.CPU36.LOC:Local_timer_interrupts
127613 +770.7% 1111074 ± 13% interrupts.CPU37.LOC:Local_timer_interrupts
127622 +770.6% 1111045 ± 13% interrupts.CPU38.LOC:Local_timer_interrupts
127671 +770.2% 1111029 ± 13% interrupts.CPU39.LOC:Local_timer_interrupts
127590 +770.8% 1111091 ± 13% interrupts.CPU4.LOC:Local_timer_interrupts
280.67 ± 37% -50.0% 140.40 ± 24% interrupts.CPU4.NMI:Non-maskable_interrupts
280.67 ± 37% -50.0% 140.40 ± 24% interrupts.CPU4.PMI:Performance_monitoring_interrupts
127627 +770.5% 1111031 ± 13% interrupts.CPU40.LOC:Local_timer_interrupts
127492 +771.5% 1111110 ± 13% interrupts.CPU41.LOC:Local_timer_interrupts
127525 +771.2% 1110964 ± 13% interrupts.CPU42.LOC:Local_timer_interrupts
127450 +771.7% 1111040 ± 13% interrupts.CPU43.LOC:Local_timer_interrupts
127883 +768.8% 1111113 ± 13% interrupts.CPU44.LOC:Local_timer_interrupts
127463 +771.6% 1111027 ± 13% interrupts.CPU45.LOC:Local_timer_interrupts
127432 +771.9% 1111029 ± 13% interrupts.CPU46.LOC:Local_timer_interrupts
127545 +771.1% 1111016 ± 13% interrupts.CPU47.LOC:Local_timer_interrupts
128036 +767.8% 1111089 ± 13% interrupts.CPU48.LOC:Local_timer_interrupts
203.33 ± 15% -47.4% 107.00 ± 31% interrupts.CPU48.NMI:Non-maskable_interrupts
203.33 ± 15% -47.4% 107.00 ± 31% interrupts.CPU48.PMI:Performance_monitoring_interrupts
127859 +769.0% 1111058 ± 13% interrupts.CPU49.LOC:Local_timer_interrupts
246.33 ± 18% -59.0% 101.00 ± 31% interrupts.CPU49.NMI:Non-maskable_interrupts
246.33 ± 18% -59.0% 101.00 ± 31% interrupts.CPU49.PMI:Performance_monitoring_interrupts
127782 +769.6% 1111137 ± 13% interrupts.CPU5.LOC:Local_timer_interrupts
216.00 ± 24% -34.6% 141.20 ± 29% interrupts.CPU5.NMI:Non-maskable_interrupts
216.00 ± 24% -34.6% 141.20 ± 29% interrupts.CPU5.PMI:Performance_monitoring_interrupts
687.17 ± 7% -12.0% 604.60 interrupts.CPU50.CAL:Function_call_interrupts
127612 +770.7% 1111107 ± 13% interrupts.CPU50.LOC:Local_timer_interrupts
260.00 ± 46% -59.4% 105.60 ± 44% interrupts.CPU50.NMI:Non-maskable_interrupts
260.00 ± 46% -59.4% 105.60 ± 44% interrupts.CPU50.PMI:Performance_monitoring_interrupts
127753 +769.7% 1111107 ± 13% interrupts.CPU51.LOC:Local_timer_interrupts
127741 +769.8% 1111151 ± 13% interrupts.CPU52.LOC:Local_timer_interrupts
220.50 ± 25% -42.9% 125.80 ± 37% interrupts.CPU52.NMI:Non-maskable_interrupts
220.50 ± 25% -42.9% 125.80 ± 37% interrupts.CPU52.PMI:Performance_monitoring_interrupts
127472 +771.8% 1111284 ± 13% interrupts.CPU53.LOC:Local_timer_interrupts
127618 +770.6% 1111088 ± 13% interrupts.CPU54.LOC:Local_timer_interrupts
127806 +769.5% 1111287 ± 13% interrupts.CPU55.LOC:Local_timer_interrupts
127518 +771.4% 1111184 ± 13% interrupts.CPU56.LOC:Local_timer_interrupts
127480 +771.8% 1111368 ± 13% interrupts.CPU57.LOC:Local_timer_interrupts
127654 +770.4% 1111081 ± 13% interrupts.CPU58.LOC:Local_timer_interrupts
756.00 ± 18% -22.7% 584.40 ± 5% interrupts.CPU59.CAL:Function_call_interrupts
127593 +770.9% 1111192 ± 13% interrupts.CPU59.LOC:Local_timer_interrupts
127625 +770.6% 1111090 ± 13% interrupts.CPU6.LOC:Local_timer_interrupts
127575 +770.9% 1111088 ± 13% interrupts.CPU60.LOC:Local_timer_interrupts
127992 +768.1% 1111073 ± 13% interrupts.CPU61.LOC:Local_timer_interrupts
127785 +769.5% 1111137 ± 13% interrupts.CPU62.LOC:Local_timer_interrupts
188.33 ± 27% -53.2% 88.20 ± 25% interrupts.CPU62.NMI:Non-maskable_interrupts
188.33 ± 27% -53.2% 88.20 ± 25% interrupts.CPU62.PMI:Performance_monitoring_interrupts
127774 +769.6% 1111109 ± 13% interrupts.CPU63.LOC:Local_timer_interrupts
127792 +769.5% 1111184 ± 13% interrupts.CPU64.LOC:Local_timer_interrupts
217.67 ± 42% -52.0% 104.40 ± 31% interrupts.CPU64.NMI:Non-maskable_interrupts
217.67 ± 42% -52.0% 104.40 ± 31% interrupts.CPU64.PMI:Performance_monitoring_interrupts
127921 +768.6% 1111130 ± 13% interrupts.CPU65.LOC:Local_timer_interrupts
211.50 ± 32% -44.8% 116.80 ± 38% interrupts.CPU65.NMI:Non-maskable_interrupts
211.50 ± 32% -44.8% 116.80 ± 38% interrupts.CPU65.PMI:Performance_monitoring_interrupts
127828 +769.3% 1111230 ± 13% interrupts.CPU66.LOC:Local_timer_interrupts
201.33 ± 22% -46.1% 108.60 ± 30% interrupts.CPU66.NMI:Non-maskable_interrupts
201.33 ± 22% -46.1% 108.60 ± 30% interrupts.CPU66.PMI:Performance_monitoring_interrupts
127662 +770.4% 1111223 ± 13% interrupts.CPU67.LOC:Local_timer_interrupts
280.00 ± 29% -62.6% 104.60 ± 26% interrupts.CPU67.NMI:Non-maskable_interrupts
280.00 ± 29% -62.6% 104.60 ± 26% interrupts.CPU67.PMI:Performance_monitoring_interrupts
127887 +769.1% 1111510 ± 13% interrupts.CPU68.LOC:Local_timer_interrupts
231.17 ± 24% -52.2% 110.60 ± 33% interrupts.CPU68.NMI:Non-maskable_interrupts
231.17 ± 24% -52.2% 110.60 ± 33% interrupts.CPU68.PMI:Performance_monitoring_interrupts
127704 +770.1% 1111152 ± 13% interrupts.CPU69.LOC:Local_timer_interrupts
241.83 ± 29% -49.6% 121.80 ± 39% interrupts.CPU69.NMI:Non-maskable_interrupts
241.83 ± 29% -49.6% 121.80 ± 39% interrupts.CPU69.PMI:Performance_monitoring_interrupts
127479 +771.6% 1111091 ± 13% interrupts.CPU7.LOC:Local_timer_interrupts
127637 +770.5% 1111111 ± 13% interrupts.CPU70.LOC:Local_timer_interrupts
127568 +771.0% 1111167 ± 13% interrupts.CPU71.LOC:Local_timer_interrupts
232.00 ± 24% -49.2% 117.80 ± 43% interrupts.CPU71.NMI:Non-maskable_interrupts
232.00 ± 24% -49.2% 117.80 ± 43% interrupts.CPU71.PMI:Performance_monitoring_interrupts
127584 +770.8% 1111048 ± 13% interrupts.CPU72.LOC:Local_timer_interrupts
127752 +769.9% 1111272 ± 13% interrupts.CPU73.LOC:Local_timer_interrupts
127705 +770.1% 1111229 ± 13% interrupts.CPU74.LOC:Local_timer_interrupts
187.17 ± 30% -34.1% 123.40 ± 43% interrupts.CPU74.NMI:Non-maskable_interrupts
187.17 ± 30% -34.1% 123.40 ± 43% interrupts.CPU74.PMI:Performance_monitoring_interrupts
127802 +769.4% 1111144 ± 13% interrupts.CPU75.LOC:Local_timer_interrupts
253.83 ± 24% -54.3% 116.00 ± 22% interrupts.CPU75.NMI:Non-maskable_interrupts
253.83 ± 24% -54.3% 116.00 ± 22% interrupts.CPU75.PMI:Performance_monitoring_interrupts
127785 +769.6% 1111238 ± 13% interrupts.CPU76.LOC:Local_timer_interrupts
202.00 ± 37% -43.3% 114.60 ± 17% interrupts.CPU76.NMI:Non-maskable_interrupts
202.00 ± 37% -43.3% 114.60 ± 17% interrupts.CPU76.PMI:Performance_monitoring_interrupts
127789 +769.6% 1111269 ± 13% interrupts.CPU77.LOC:Local_timer_interrupts
203.50 ± 35% -42.4% 117.20 ± 17% interrupts.CPU77.NMI:Non-maskable_interrupts
203.50 ± 35% -42.4% 117.20 ± 17% interrupts.CPU77.PMI:Performance_monitoring_interrupts
127542 +771.2% 1111187 ± 13% interrupts.CPU78.LOC:Local_timer_interrupts
127678 +770.2% 1111113 ± 13% interrupts.CPU79.LOC:Local_timer_interrupts
200.17 ± 19% -48.1% 103.80 ± 31% interrupts.CPU79.NMI:Non-maskable_interrupts
200.17 ± 19% -48.1% 103.80 ± 31% interrupts.CPU79.PMI:Performance_monitoring_interrupts
127650 +770.6% 1111283 ± 13% interrupts.CPU8.LOC:Local_timer_interrupts
127841 +769.2% 1111160 ± 13% interrupts.CPU80.LOC:Local_timer_interrupts
127489 +771.6% 1111176 ± 13% interrupts.CPU81.LOC:Local_timer_interrupts
127668 +770.5% 1111340 ± 13% interrupts.CPU82.LOC:Local_timer_interrupts
127892 +768.8% 1111175 ± 13% interrupts.CPU83.LOC:Local_timer_interrupts
127640 +770.5% 1111123 ± 13% interrupts.CPU84.LOC:Local_timer_interrupts
127693 +770.2% 1111192 ± 13% interrupts.CPU85.LOC:Local_timer_interrupts
127458 +771.7% 1111105 ± 13% interrupts.CPU86.LOC:Local_timer_interrupts
127492 +771.5% 1111079 ± 13% interrupts.CPU87.LOC:Local_timer_interrupts
127699 +770.1% 1111103 ± 13% interrupts.CPU88.LOC:Local_timer_interrupts
127963 +768.3% 1111112 ± 13% interrupts.CPU89.LOC:Local_timer_interrupts
165.17 ± 22% -33.6% 109.60 ± 27% interrupts.CPU89.NMI:Non-maskable_interrupts
165.17 ± 22% -33.6% 109.60 ± 27% interrupts.CPU89.PMI:Performance_monitoring_interrupts
127532 +771.2% 1111094 ± 13% interrupts.CPU9.LOC:Local_timer_interrupts
127736 +769.9% 1111139 ± 13% interrupts.CPU90.LOC:Local_timer_interrupts
127585 +770.9% 1111141 ± 13% interrupts.CPU91.LOC:Local_timer_interrupts
127793 +769.6% 1111251 ± 13% interrupts.CPU92.LOC:Local_timer_interrupts
127737 +770.0% 1111275 ± 13% interrupts.CPU93.LOC:Local_timer_interrupts
207.67 ± 41% -38.3% 128.20 ± 43% interrupts.CPU93.NMI:Non-maskable_interrupts
207.67 ± 41% -38.3% 128.20 ± 43% interrupts.CPU93.PMI:Performance_monitoring_interrupts
127478 +771.7% 1111196 ± 13% interrupts.CPU94.LOC:Local_timer_interrupts
127809 +769.5% 1111269 ± 13% interrupts.CPU95.LOC:Local_timer_interrupts
12257088 +770.3% 1.067e+08 ± 13% interrupts.LOC:Local_timer_interrupts
0.00 +1.2e+104% 115.20 ± 33% interrupts.MCP:Machine_check_polls
18319 ± 2% -36.9% 11562 ± 20% interrupts.NMI:Non-maskable_interrupts
18319 ± 2% -36.9% 11562 ± 20% interrupts.PMI:Performance_monitoring_interrupts
19041 ± 2% -66.9% 6302 ± 8% softirqs.BLOCK
12376 ± 16% +188.6% 35722 ± 27% softirqs.CPU0.RCU
11554 ± 8% +416.4% 59667 ± 32% softirqs.CPU0.SCHED
9451 ± 4% +250.7% 33142 ± 37% softirqs.CPU1.RCU
9894 ± 15% +601.5% 69406 ± 14% softirqs.CPU1.SCHED
8818 ± 3% +274.3% 33010 ± 38% softirqs.CPU10.RCU
7845 ± 17% +767.1% 68029 ± 16% softirqs.CPU10.SCHED
8758 ± 2% +269.8% 32390 ± 35% softirqs.CPU11.RCU
8268 ± 12% +564.8% 54965 ± 19% softirqs.CPU11.SCHED
8777 ± 3% +283.0% 33620 ± 37% softirqs.CPU12.RCU
8562 ± 6% +635.3% 62958 ± 27% softirqs.CPU12.SCHED
8618 ± 2% +291.8% 33766 ± 39% softirqs.CPU13.RCU
8896 ± 5% +671.5% 68637 ± 15% softirqs.CPU13.SCHED
8822 ± 3% +274.8% 33071 ± 39% softirqs.CPU14.RCU
8765 ± 7% +617.6% 62905 ± 27% softirqs.CPU14.SCHED
8892 ± 5% +266.9% 32623 ± 38% softirqs.CPU15.RCU
8770 ± 13% +690.8% 69353 ± 15% softirqs.CPU15.SCHED
8633 ± 5% +261.4% 31202 ± 35% softirqs.CPU16.RCU
8236 ± 12% +728.8% 68262 ± 15% softirqs.CPU16.SCHED
8631 ± 3% +286.5% 33363 ± 35% softirqs.CPU17.RCU
8590 ± 7% +702.7% 68958 ± 15% softirqs.CPU17.SCHED
8570 ± 4% +292.6% 33648 ± 38% softirqs.CPU18.RCU
9000 ± 3% +665.7% 68922 ± 16% softirqs.CPU18.SCHED
8654 ± 4% +276.0% 32543 ± 35% softirqs.CPU19.RCU
8259 ± 7% +663.8% 63080 ± 27% softirqs.CPU19.SCHED
9003 ± 3% +278.0% 34036 ± 38% softirqs.CPU2.RCU
8969 ± 9% +664.7% 68583 ± 15% softirqs.CPU2.SCHED
8553 ± 2% +285.0% 32935 ± 37% softirqs.CPU20.RCU
8481 ± 14% +707.0% 68448 ± 16% softirqs.CPU20.SCHED
8347 ± 2% +294.6% 32937 ± 36% softirqs.CPU21.RCU
8053 ± 16% +685.5% 63253 ± 27% softirqs.CPU21.SCHED
8486 ± 2% +292.4% 33301 ± 36% softirqs.CPU22.RCU
8747 ± 7% +624.9% 63411 ± 27% softirqs.CPU22.SCHED
8466 ± 3% +288.6% 32904 ± 36% softirqs.CPU23.RCU
8409 ± 12% +715.9% 68618 ± 15% softirqs.CPU23.SCHED
8609 ± 2% +273.7% 32175 ± 35% softirqs.CPU24.RCU
7874 ± 10% +644.0% 58587 ± 36% softirqs.CPU24.SCHED
8457 ± 3% +251.9% 29760 ± 35% softirqs.CPU25.RCU
7892 ± 11% +770.0% 68665 ± 17% softirqs.CPU25.SCHED
8221 ± 3% +270.5% 30462 ± 36% softirqs.CPU26.RCU
8702 ± 5% +688.0% 68574 ± 17% softirqs.CPU26.SCHED
8603 ± 6% +269.9% 31822 ± 37% softirqs.CPU27.RCU
8671 ± 6% +692.2% 68696 ± 17% softirqs.CPU27.SCHED
8324 ± 3% +278.8% 31527 ± 35% softirqs.CPU28.RCU
8693 ± 9% +688.6% 68556 ± 17% softirqs.CPU28.SCHED
8367 ± 2% +267.4% 30742 ± 38% softirqs.CPU29.RCU
8451 ± 5% +711.8% 68609 ± 17% softirqs.CPU29.SCHED
9104 ± 7% +268.9% 33581 ± 38% softirqs.CPU3.RCU
8759 ± 4% +683.0% 68583 ± 16% softirqs.CPU3.SCHED
8364 ± 2% +276.4% 31485 ± 38% softirqs.CPU30.RCU
8018 ± 11% +646.0% 59812 ± 9% softirqs.CPU30.SCHED
8229 ± 3% +278.4% 31143 ± 35% softirqs.CPU31.RCU
8340 ± 12% +627.0% 60629 ± 30% softirqs.CPU31.SCHED
8608 +287.8% 33384 ± 36% softirqs.CPU32.RCU
8322 ± 8% +720.5% 68286 ± 17% softirqs.CPU32.SCHED
9482 ± 17% +244.0% 32616 ± 35% softirqs.CPU33.RCU
8092 ± 10% +744.1% 68312 ± 16% softirqs.CPU33.SCHED
8629 ± 3% +286.6% 33363 ± 35% softirqs.CPU34.RCU
8400 ± 11% +715.7% 68515 ± 17% softirqs.CPU34.SCHED
8834 ± 2% +279.4% 33515 ± 35% softirqs.CPU35.RCU
7691 ± 18% +683.2% 60233 ± 7% softirqs.CPU35.SCHED
9121 ± 13% +254.2% 32306 ± 35% softirqs.CPU36.RCU
8076 ± 17% +747.2% 68419 ± 17% softirqs.CPU36.SCHED
8649 +278.8% 32764 ± 35% softirqs.CPU37.RCU
8334 ± 9% +719.0% 68259 ± 17% softirqs.CPU37.SCHED
9588 ± 17% +242.1% 32807 ± 35% softirqs.CPU38.RCU
7864 ± 19% +780.2% 69217 ± 16% softirqs.CPU38.SCHED
8701 ± 8% +279.0% 32980 ± 35% softirqs.CPU39.RCU
8315 ± 15% +720.1% 68199 ± 17% softirqs.CPU39.SCHED
9298 ± 3% +260.1% 33479 ± 38% softirqs.CPU4.RCU
7896 ± 16% +768.0% 68540 ± 16% softirqs.CPU4.SCHED
8717 ± 2% +284.6% 33526 ± 36% softirqs.CPU40.RCU
8084 ± 13% +747.4% 68504 ± 16% softirqs.CPU40.SCHED
9221 ± 11% +265.7% 33718 ± 35% softirqs.CPU41.RCU
8288 ± 17% +663.6% 63284 ± 29% softirqs.CPU41.SCHED
8549 ± 5% +264.7% 31178 ± 36% softirqs.CPU42.RCU
8585 ± 15% +693.0% 68084 ± 17% softirqs.CPU42.SCHED
8447 ± 5% +289.6% 32916 ± 35% softirqs.CPU43.RCU
8799 ± 4% +674.1% 68118 ± 17% softirqs.CPU43.SCHED
9261 ± 18% +259.0% 33248 ± 36% softirqs.CPU44.RCU
8813 ± 5% +674.3% 68242 ± 17% softirqs.CPU44.SCHED
8566 ± 2% +284.7% 32959 ± 36% softirqs.CPU45.RCU
8649 ± 5% +688.2% 68177 ± 17% softirqs.CPU45.SCHED
8503 ± 2% +284.8% 32717 ± 35% softirqs.CPU46.RCU
8827 ± 4% +677.6% 68649 ± 16% softirqs.CPU46.SCHED
8764 ± 2% +288.3% 34030 ± 35% softirqs.CPU47.RCU
7976 ± 11% +465.8% 45132 ± 32% softirqs.CPU47.SCHED
8597 ± 2% +254.6% 30487 ± 34% softirqs.CPU48.RCU
6897 ± 28% +705.5% 55560 ± 42% softirqs.CPU48.SCHED
8707 ± 2% +262.2% 31540 ± 36% softirqs.CPU49.RCU
7951 ± 15% +771.6% 69308 ± 14% softirqs.CPU49.SCHED
8812 ± 4% +257.0% 31464 ± 48% softirqs.CPU5.RCU
8683 ± 7% +693.4% 68897 ± 15% softirqs.CPU5.SCHED
8667 ± 3% +284.5% 33326 ± 38% softirqs.CPU50.RCU
8629 ± 5% +639.1% 63781 ± 26% softirqs.CPU50.SCHED
8612 ± 2% +285.5% 33205 ± 39% softirqs.CPU51.RCU
8038 ± 14% +769.7% 69915 ± 15% softirqs.CPU51.SCHED
9342 ± 18% +258.4% 33480 ± 37% softirqs.CPU52.RCU
8941 ± 8% +686.8% 70350 ± 14% softirqs.CPU52.SCHED
8657 ± 4% +268.4% 31894 ± 36% softirqs.CPU53.RCU
9130 ± 5% +670.6% 70360 ± 14% softirqs.CPU53.SCHED
8567 ± 3% +294.4% 33793 ± 40% softirqs.CPU54.RCU
8566 ± 6% +720.0% 70248 ± 15% softirqs.CPU54.SCHED
8452 ± 3% +240.4% 28775 ± 35% softirqs.CPU55.RCU
8596 ± 8% +703.8% 69098 ± 15% softirqs.CPU55.SCHED
8433 ± 5% +295.7% 33366 ± 38% softirqs.CPU56.RCU
8981 ± 10% +673.9% 69506 ± 15% softirqs.CPU56.SCHED
8646 ± 4% +287.2% 33478 ± 38% softirqs.CPU57.RCU
7896 ± 14% +788.8% 70183 ± 14% softirqs.CPU57.SCHED
8628 ± 5% +281.9% 32951 ± 37% softirqs.CPU58.RCU
8711 ± 6% +701.7% 69841 ± 14% softirqs.CPU58.SCHED
9119 ± 6% +271.0% 33834 ± 40% softirqs.CPU59.RCU
7599 ± 22% +831.7% 70802 ± 14% softirqs.CPU59.SCHED
8794 ± 3% +257.0% 31390 ± 50% softirqs.CPU6.RCU
7587 ± 15% +806.2% 68752 ± 15% softirqs.CPU6.SCHED
8794 ± 7% +280.6% 33472 ± 38% softirqs.CPU60.RCU
8915 ± 12% +683.9% 69888 ± 14% softirqs.CPU60.SCHED
8736 ± 6% +272.0% 32497 ± 39% softirqs.CPU61.RCU
7830 ± 20% +789.0% 69619 ± 14% softirqs.CPU61.SCHED
8438 ± 2% +289.9% 32903 ± 38% softirqs.CPU62.RCU
8923 ± 3% +682.3% 69806 ± 14% softirqs.CPU62.SCHED
8514 ± 4% +236.4% 28638 ± 37% softirqs.CPU63.RCU
8927 ± 4% +683.3% 69927 ± 14% softirqs.CPU63.SCHED
8684 ± 5% +260.8% 31333 ± 34% softirqs.CPU64.RCU
8740 ± 9% +700.0% 69919 ± 14% softirqs.CPU64.SCHED
8657 ± 9% +285.0% 33334 ± 35% softirqs.CPU65.RCU
8775 ± 7% +643.6% 65250 ± 25% softirqs.CPU65.SCHED
8601 ± 7% +284.4% 33061 ± 36% softirqs.CPU66.RCU
7824 ± 25% +796.5% 70146 ± 14% softirqs.CPU66.SCHED
8926 ± 6% +261.6% 32278 ± 35% softirqs.CPU67.RCU
7495 ± 17% +817.6% 68774 ± 12% softirqs.CPU67.SCHED
8306 ± 4% +298.9% 33131 ± 37% softirqs.CPU68.RCU
9084 ± 4% +674.4% 70346 ± 14% softirqs.CPU68.SCHED
8712 ± 4% +276.1% 32765 ± 36% softirqs.CPU69.RCU
7531 ± 19% +829.7% 70023 ± 15% softirqs.CPU69.SCHED
8949 ± 6% +273.5% 33428 ± 37% softirqs.CPU7.RCU
8804 ± 15% +676.2% 68346 ± 16% softirqs.CPU7.SCHED
8292 ± 4% +301.4% 33290 ± 37% softirqs.CPU70.RCU
8676 ± 9% +710.8% 70353 ± 14% softirqs.CPU70.SCHED
8563 +282.7% 32769 ± 35% softirqs.CPU71.RCU
8678 ± 8% +707.5% 70081 ± 14% softirqs.CPU71.SCHED
8123 ± 3% +279.7% 30844 ± 37% softirqs.CPU72.RCU
8626 ± 10% +717.3% 70498 ± 16% softirqs.CPU72.SCHED
7931 ± 2% +277.6% 29949 ± 38% softirqs.CPU73.RCU
8313 ± 9% +668.6% 63900 ± 27% softirqs.CPU73.SCHED
7884 ± 6% +282.8% 30180 ± 36% softirqs.CPU74.RCU
8014 ± 16% +778.3% 70389 ± 16% softirqs.CPU74.SCHED
8094 ± 5% +282.6% 30968 ± 36% softirqs.CPU75.RCU
8563 ± 9% +721.0% 70309 ± 16% softirqs.CPU75.SCHED
7935 ± 2% +292.7% 31159 ± 35% softirqs.CPU76.RCU
8914 ± 5% +694.1% 70791 ± 15% softirqs.CPU76.SCHED
8066 ± 3% +296.0% 31941 ± 41% softirqs.CPU77.RCU
8786 ± 6% +665.6% 67270 ± 26% softirqs.CPU77.SCHED
8060 ± 2% +283.3% 30894 ± 37% softirqs.CPU78.RCU
8524 ± 9% +728.1% 70589 ± 16% softirqs.CPU78.SCHED
7895 ± 3% +288.4% 30666 ± 36% softirqs.CPU79.RCU
9026 ± 2% +677.7% 70198 ± 16% softirqs.CPU79.SCHED
8299 ± 9% +305.4% 33649 ± 38% softirqs.CPU8.RCU
8638 ± 8% +628.8% 62951 ± 27% softirqs.CPU8.SCHED
8113 ± 3% +295.0% 32048 ± 36% softirqs.CPU80.RCU
8343 ± 8% +744.1% 70426 ± 16% softirqs.CPU80.SCHED
8513 ± 5% +267.0% 31241 ± 36% softirqs.CPU81.RCU
8074 ± 14% +767.3% 70027 ± 16% softirqs.CPU81.SCHED
8183 ± 3% +291.7% 32057 ± 37% softirqs.CPU82.RCU
8472 ± 11% +733.9% 70651 ± 16% softirqs.CPU82.SCHED
8534 ± 3% +249.6% 29836 ± 44% softirqs.CPU83.RCU
8262 ± 7% +750.6% 70283 ± 16% softirqs.CPU83.SCHED
8470 ± 7% +267.7% 31146 ± 35% softirqs.CPU84.RCU
7999 ± 16% +775.9% 70062 ± 16% softirqs.CPU84.SCHED
8495 ± 5% +272.5% 31646 ± 36% softirqs.CPU85.RCU
7740 ± 16% +805.9% 70119 ± 16% softirqs.CPU85.SCHED
8248 ± 4% +269.8% 30507 ± 33% softirqs.CPU86.RCU
9214 ± 4% +662.6% 70274 ± 16% softirqs.CPU86.SCHED
8240 ± 2% +278.6% 31196 ± 36% softirqs.CPU87.RCU
8462 ± 10% +727.6% 70035 ± 16% softirqs.CPU87.SCHED
8201 ± 4% +295.1% 32406 ± 37% softirqs.CPU88.RCU
8727 ± 11% +708.8% 70592 ± 16% softirqs.CPU88.SCHED
8665 ± 3% +275.1% 32503 ± 36% softirqs.CPU89.RCU
8501 ± 5% +732.1% 70743 ± 16% softirqs.CPU89.SCHED
8602 ± 7% +287.1% 33299 ± 38% softirqs.CPU9.RCU
8684 ± 8% +688.1% 68440 ± 16% softirqs.CPU9.SCHED
8018 ± 3% +268.1% 29515 ± 34% softirqs.CPU90.RCU
8549 ± 8% +710.1% 69266 ± 17% softirqs.CPU90.SCHED
8327 ± 3% +280.5% 31684 ± 37% softirqs.CPU91.RCU
7884 ± 18% +787.3% 69959 ± 16% softirqs.CPU91.SCHED
8110 +291.6% 31763 ± 36% softirqs.CPU92.RCU
9117 ± 3% +667.6% 69986 ± 16% softirqs.CPU92.SCHED
8112 ± 2% +295.0% 32044 ± 37% softirqs.CPU93.RCU
8179 ± 12% +759.1% 70263 ± 16% softirqs.CPU93.SCHED
8804 ± 9% +267.0% 32313 ± 36% softirqs.CPU94.RCU
7897 ± 14% +793.6% 70567 ± 16% softirqs.CPU94.SCHED
8547 ± 5% +284.9% 32900 ± 36% softirqs.CPU95.RCU
7890 ± 16% +633.5% 57881 ± 24% softirqs.CPU95.SCHED
5318 ±154% +353.4% 24111 ± 60% softirqs.NET_RX
827812 +274.8% 3102713 ± 36% softirqs.RCU
811580 +699.7% 6489945 ± 16% softirqs.SCHED
17742 ± 4% +117.8% 38644 ± 17% softirqs.TIMER
18.86 ± 2% -18.9 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
16.79 ± 3% -16.8 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
13.83 ± 5% -13.8 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.17 ± 3% -12.1 0.11 ±200% perf-profile.calltrace.cycles-pp.ret_from_fork
12.15 ± 3% -12.0 0.11 ±200% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
11.77 ± 3% -11.8 0.00 perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
11.69 ± 3% -11.7 0.00 perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
11.10 ± 3% -11.1 0.00 perf-profile.calltrace.cycles-pp.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread.ret_from_fork
10.71 ± 3% -10.7 0.00 perf-profile.calltrace.cycles-pp.memcpy_toio.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread
10.30 ± 8% -10.3 0.00 perf-profile.calltrace.cycles-pp.block_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.30 ± 8% -10.3 0.00 perf-profile.calltrace.cycles-pp.blkdev_ioctl.block_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.29 ± 8% -10.3 0.00 perf-profile.calltrace.cycles-pp.lo_ioctl.blkdev_ioctl.block_ioctl.__x64_sys_ioctl.do_syscall_64
8.22 ± 8% -8.2 0.00 perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.printk.set_capacity_and_notify.cold.loop_set_size
7.64 ± 8% -7.6 0.00 perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.printk.set_capacity_and_notify.cold
7.29 ± 8% -7.0 0.33 ±124% perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.printk
6.73 ± 8% -6.4 0.31 ±123% perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
6.74 ± 8% -6.4 0.31 ±123% perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
5.37 ± 5% -5.4 0.00 perf-profile.calltrace.cycles-pp.loop_configure.lo_ioctl.blkdev_ioctl.block_ioctl.__x64_sys_ioctl
5.32 ± 5% -5.3 0.00 perf-profile.calltrace.cycles-pp.printk.set_capacity_and_notify.cold.loop_set_size.loop_configure.lo_ioctl
5.32 ± 5% -5.3 0.00 perf-profile.calltrace.cycles-pp.vprintk_emit.printk.set_capacity_and_notify.cold.loop_set_size.loop_configure
5.32 ± 5% -5.3 0.00 perf-profile.calltrace.cycles-pp.loop_set_size.loop_configure.lo_ioctl.blkdev_ioctl.block_ioctl
5.32 ± 5% -5.3 0.00 perf-profile.calltrace.cycles-pp.set_capacity_and_notify.cold.loop_set_size.loop_configure.lo_ioctl.blkdev_ioctl
5.57 ± 10% -5.3 0.26 ±123% perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
0.66 ± 7% +0.5 1.19 ± 15% perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.57 ± 4% +0.5 1.11 ± 17% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.72 ± 18% +0.6 1.30 ± 13% perf-profile.calltrace.cycles-pp.ktime_get.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.86 ± 7% +0.6 1.45 ± 22% perf-profile.calltrace.cycles-pp.lapic_next_deadline.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.00 +0.7 0.72 ± 11% perf-profile.calltrace.cycles-pp.load_balance.rebalance_domains.__softirqentry_text_start.irq_exit_rcu.sysvec_apic_timer_interrupt
0.75 ± 5% +0.8 1.55 ± 9% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.96 ± 14% +0.8 1.78 ± 13% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
1.02 ± 13% +0.8 1.87 ± 14% perf-profile.calltrace.cycles-pp.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
1.17 ± 7% +0.9 2.03 ± 11% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.71 ± 5% +1.1 2.84 ± 15% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
2.19 ± 5% +1.6 3.75 ± 16% perf-profile.calltrace.cycles-pp.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
2.16 ± 5% +1.7 3.82 ± 16% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
2.22 ± 5% +1.8 4.07 ± 19% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
2.08 ± 7% +2.0 4.04 ± 26% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
2.56 ± 5% +2.2 4.74 ± 15% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
3.80 ± 3% +3.4 7.16 ± 18% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
7.75 ± 2% +6.2 13.98 ± 10% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
7.89 ± 2% +6.4 14.24 ± 10% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
12.29 ± 2% +9.7 22.01 ± 10% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
5.35 ± 15% +10.1 15.48 ± 30% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
57.16 +16.8 73.99 ± 8% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
17.95 ± 4% +18.3 36.22 ± 18% perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
59.09 +20.6 79.73 ± 5% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
65.46 +31.4 96.82 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
65.49 +31.4 96.89 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
65.49 +31.4 96.89 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
66.20 +31.9 98.14 perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
20.79 ± 2% -19.8 1.00 ± 21% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
18.43 ± 3% -17.5 0.97 ± 21% perf-profile.children.cycles-pp.do_syscall_64
13.86 ± 5% -13.9 0.00 perf-profile.children.cycles-pp.__x64_sys_ioctl
12.17 ± 3% -11.8 0.40 ± 31% perf-profile.children.cycles-pp.ret_from_fork
12.15 ± 3% -11.8 0.40 ± 32% perf-profile.children.cycles-pp.kthread
11.77 ± 3% -11.5 0.22 ± 40% perf-profile.children.cycles-pp.worker_thread
11.69 ± 3% -11.5 0.21 ± 44% perf-profile.children.cycles-pp.process_one_work
11.10 ± 3% -10.9 0.19 ± 51% perf-profile.children.cycles-pp.drm_fb_helper_damage_work
11.08 ± 3% -10.9 0.19 ± 51% perf-profile.children.cycles-pp.memcpy_toio
10.33 ± 8% -10.3 0.00 perf-profile.children.cycles-pp.block_ioctl
10.33 ± 8% -10.3 0.00 perf-profile.children.cycles-pp.blkdev_ioctl
10.33 ± 8% -10.3 0.00 perf-profile.children.cycles-pp.lo_ioctl
9.80 ± 9% -9.8 0.00 perf-profile.children.cycles-pp.loop_set_size
9.80 ± 9% -9.8 0.00 perf-profile.children.cycles-pp.set_capacity_and_notify.cold
10.12 ± 9% -9.3 0.77 ± 47% perf-profile.children.cycles-pp.printk
10.12 ± 9% -9.3 0.77 ± 47% perf-profile.children.cycles-pp.vprintk_emit
8.50 ± 8% -7.7 0.77 ± 47% perf-profile.children.cycles-pp.console_unlock
7.91 ± 8% -7.2 0.74 ± 47% perf-profile.children.cycles-pp.serial8250_console_write
7.55 ± 8% -6.8 0.71 ± 47% perf-profile.children.cycles-pp.uart_console_write
7.33 ± 8% -6.6 0.69 ± 47% perf-profile.children.cycles-pp.wait_for_xmitr
6.52 ± 12% -6.5 0.00 perf-profile.children.cycles-pp.__mutex_lock
6.99 ± 8% -6.3 0.67 ± 47% perf-profile.children.cycles-pp.serial8250_console_putchar
6.03 ± 9% -5.4 0.59 ± 43% perf-profile.children.cycles-pp.io_serial_in
5.37 ± 5% -5.4 0.00 perf-profile.children.cycles-pp.loop_configure
3.14 ± 16% -3.0 0.13 ± 31% perf-profile.children.cycles-pp.do_sys_open
3.13 ± 16% -3.0 0.13 ± 31% perf-profile.children.cycles-pp.do_sys_openat2
2.99 ± 16% -2.9 0.11 ± 27% perf-profile.children.cycles-pp.do_filp_open
2.98 ± 16% -2.9 0.11 ± 27% perf-profile.children.cycles-pp.path_openat
1.29 ± 16% -1.2 0.10 ± 76% perf-profile.children.cycles-pp.delay_tsc
0.58 ± 16% -0.5 0.04 ± 85% perf-profile.children.cycles-pp.io_serial_out
0.47 ± 16% -0.3 0.16 ± 45% perf-profile.children.cycles-pp.__schedule
0.37 ± 17% -0.3 0.11 ± 52% perf-profile.children.cycles-pp.schedule
0.30 ± 16% -0.2 0.06 ± 52% perf-profile.children.cycles-pp.walk_component
0.26 ± 15% -0.2 0.07 ± 50% perf-profile.children.cycles-pp.link_path_walk
0.23 ± 14% -0.2 0.07 ± 77% perf-profile.children.cycles-pp.newidle_balance
0.30 ± 13% -0.2 0.15 ± 30% perf-profile.children.cycles-pp.ksys_read
0.30 ± 15% -0.2 0.14 ± 29% perf-profile.children.cycles-pp.vfs_read
0.20 ± 12% -0.1 0.07 ± 55% perf-profile.children.cycles-pp.new_sync_read
0.01 ±223% +0.1 0.06 ± 18% perf-profile.children.cycles-pp.unmap_page_range
0.03 ±102% +0.1 0.09 ± 27% perf-profile.children.cycles-pp.__do_sys_clone
0.01 ±223% +0.1 0.07 ± 11% perf-profile.children.cycles-pp.unmap_vmas
0.05 ± 75% +0.1 0.11 ± 22% perf-profile.children.cycles-pp.__libc_fork
0.02 ±141% +0.1 0.08 ± 23% perf-profile.children.cycles-pp.rcu_needs_cpu
0.14 ± 19% +0.1 0.21 ± 14% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.04 ± 77% +0.1 0.11 ± 29% perf-profile.children.cycles-pp.do_fault
0.08 ± 22% +0.1 0.15 ± 23% perf-profile.children.cycles-pp.rcu_dynticks_eqs_enter
0.02 ±144% +0.1 0.09 ± 38% perf-profile.children.cycles-pp.begin_new_exec
0.02 ±143% +0.1 0.10 ± 27% perf-profile.children.cycles-pp.filemap_map_pages
0.07 ± 47% +0.1 0.14 ± 28% perf-profile.children.cycles-pp.mmput
0.07 ± 47% +0.1 0.14 ± 28% perf-profile.children.cycles-pp.exit_mmap
0.09 ± 17% +0.1 0.17 ± 34% perf-profile.children.cycles-pp.trigger_load_balance
0.11 ± 23% +0.1 0.20 ± 21% perf-profile.children.cycles-pp.__handle_mm_fault
0.12 ± 23% +0.1 0.21 ± 19% perf-profile.children.cycles-pp.handle_mm_fault
0.13 ± 21% +0.1 0.22 ± 19% perf-profile.children.cycles-pp.exc_page_fault
0.13 ± 21% +0.1 0.22 ± 19% perf-profile.children.cycles-pp.do_user_addr_fault
0.08 ± 23% +0.1 0.17 ± 37% perf-profile.children.cycles-pp.exec_binprm
0.07 ± 26% +0.1 0.17 ± 37% perf-profile.children.cycles-pp.load_elf_binary
0.14 ± 21% +0.1 0.25 ± 23% perf-profile.children.cycles-pp.asm_exc_page_fault
0.13 ± 12% +0.1 0.24 ± 22% perf-profile.children.cycles-pp.rcu_eqs_enter
0.10 ± 14% +0.1 0.21 ± 61% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.09 ± 25% +0.1 0.21 ± 28% perf-profile.children.cycles-pp.bprm_execve
0.13 ± 22% +0.1 0.25 ± 48% perf-profile.children.cycles-pp.__hrtimer_get_next_event
0.12 ± 18% +0.1 0.24 ± 27% perf-profile.children.cycles-pp.execve
0.12 ± 18% +0.1 0.24 ± 27% perf-profile.children.cycles-pp.__x64_sys_execve
0.12 ± 18% +0.1 0.24 ± 27% perf-profile.children.cycles-pp.do_execveat_common
0.14 ± 9% +0.2 0.31 ± 40% perf-profile.children.cycles-pp.timerqueue_add
0.13 ± 6% +0.2 0.30 ± 65% perf-profile.children.cycles-pp.timerqueue_del
0.00 +0.2 0.19 ± 91% perf-profile.children.cycles-pp.run_posix_cpu_timers
0.61 ± 7% +0.2 0.80 ± 16% perf-profile.children.cycles-pp.load_balance
0.20 ± 12% +0.2 0.39 ± 28% perf-profile.children.cycles-pp.update_irq_load_avg
0.17 ± 12% +0.2 0.36 ± 33% perf-profile.children.cycles-pp.enqueue_hrtimer
0.17 ± 12% +0.2 0.37 ± 64% perf-profile.children.cycles-pp.__remove_hrtimer
0.30 ± 13% +0.2 0.51 ± 13% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.19 ± 22% +0.2 0.42 ± 23% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.19 ± 21% +0.2 0.43 ± 25% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.24 ± 18% +0.2 0.48 ± 34% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.23 ± 10% +0.2 0.48 ± 40% perf-profile.children.cycles-pp.hrtimer_update_next_event
0.34 ± 4% +0.2 0.58 ± 19% perf-profile.children.cycles-pp.update_rq_clock
0.32 ± 14% +0.3 0.57 ± 9% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.52 ± 6% +0.4 0.91 ± 14% perf-profile.children.cycles-pp.native_sched_clock
0.54 ± 5% +0.4 0.96 ± 17% perf-profile.children.cycles-pp.sched_clock
0.32 ± 29% +0.4 0.77 ± 47% perf-profile.children.cycles-pp.irq_work_single
0.61 ± 7% +0.4 1.06 ± 11% perf-profile.children.cycles-pp.read_tsc
0.32 ± 31% +0.5 0.77 ± 47% perf-profile.children.cycles-pp.asm_sysvec_irq_work
0.32 ± 31% +0.5 0.77 ± 47% perf-profile.children.cycles-pp.sysvec_irq_work
0.32 ± 31% +0.5 0.77 ± 47% perf-profile.children.cycles-pp.__sysvec_irq_work
0.32 ± 30% +0.5 0.77 ± 47% perf-profile.children.cycles-pp.irq_work_run
0.34 ± 28% +0.5 0.81 ± 46% perf-profile.children.cycles-pp.irq_work_run_list
0.61 ± 9% +0.5 1.12 ± 16% perf-profile.children.cycles-pp.sched_clock_cpu
0.60 ± 4% +0.5 1.13 ± 17% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.69 ± 8% +0.5 1.23 ± 14% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.72 ± 8% +0.6 1.34 ± 24% perf-profile.children.cycles-pp.irqtime_account_irq
0.89 ± 7% +0.7 1.54 ± 23% perf-profile.children.cycles-pp.lapic_next_deadline
0.77 ± 4% +0.8 1.58 ± 8% perf-profile.children.cycles-pp.rebalance_domains
0.98 ± 14% +0.9 1.84 ± 14% perf-profile.children.cycles-pp.tick_irq_enter
1.23 ± 6% +0.9 2.09 ± 11% perf-profile.children.cycles-pp.scheduler_tick
1.02 ± 13% +0.9 1.91 ± 14% perf-profile.children.cycles-pp.irq_enter_rcu
1.81 ± 3% +1.1 2.90 ± 15% perf-profile.children.cycles-pp.__softirqentry_text_start
2.24 ± 4% +1.6 3.82 ± 16% perf-profile.children.cycles-pp.irq_exit_rcu
2.25 ± 4% +1.6 3.89 ± 16% perf-profile.children.cycles-pp.update_process_times
2.29 ± 4% +1.8 4.11 ± 19% perf-profile.children.cycles-pp.tick_sched_handle
2.12 ± 7% +2.0 4.10 ± 25% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
2.65 ± 5% +2.2 4.82 ± 15% perf-profile.children.cycles-pp.tick_sched_timer
3.94 ± 3% +3.3 7.26 ± 18% perf-profile.children.cycles-pp.__hrtimer_run_queues
7.97 ± 2% +6.2 14.18 ± 10% perf-profile.children.cycles-pp.hrtimer_interrupt
8.09 ± 2% +6.3 14.43 ± 10% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
12.58 ± 2% +9.7 22.32 ± 10% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
5.43 ± 16% +10.3 15.69 ± 29% perf-profile.children.cycles-pp.menu_select
15.95 ± 3% +13.7 29.69 ± 13% perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
59.71 +21.0 80.76 ± 5% perf-profile.children.cycles-pp.cpuidle_enter_state
59.73 +21.1 80.81 ± 5% perf-profile.children.cycles-pp.cpuidle_enter
65.49 +31.4 96.89 perf-profile.children.cycles-pp.start_secondary
66.20 +31.9 98.14 perf-profile.children.cycles-pp.secondary_startup_64_no_verify
66.20 +31.9 98.14 perf-profile.children.cycles-pp.cpu_startup_entry
66.20 +31.9 98.14 perf-profile.children.cycles-pp.do_idle
10.98 ± 3% -10.8 0.19 ± 51% perf-profile.self.cycles-pp.memcpy_toio
6.03 ± 9% -5.4 0.59 ± 43% perf-profile.self.cycles-pp.io_serial_in
1.29 ± 16% -1.2 0.10 ± 76% perf-profile.self.cycles-pp.delay_tsc
0.58 ± 16% -0.5 0.04 ± 85% perf-profile.self.cycles-pp.io_serial_out
0.06 ± 52% +0.1 0.11 ± 29% perf-profile.self.cycles-pp.tick_sched_timer
0.02 ±141% +0.1 0.08 ± 23% perf-profile.self.cycles-pp.rcu_needs_cpu
0.08 ± 12% +0.1 0.15 ± 21% perf-profile.self.cycles-pp.load_balance
0.08 ± 22% +0.1 0.15 ± 23% perf-profile.self.cycles-pp.rcu_dynticks_eqs_enter
0.08 ± 13% +0.1 0.15 ± 32% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.00 +0.1 0.09 ± 23% perf-profile.self.cycles-pp.sched_clock
0.07 ± 14% +0.1 0.16 ± 13% perf-profile.self.cycles-pp.tick_irq_enter
0.07 ± 23% +0.1 0.17 ± 63% perf-profile.self.cycles-pp.sysvec_apic_timer_interrupt
0.16 ± 19% +0.1 0.26 ± 18% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.08 ± 14% +0.1 0.19 ± 33% perf-profile.self.cycles-pp.timerqueue_add
0.07 ± 59% +0.1 0.19 ± 49% perf-profile.self.cycles-pp.scheduler_tick
0.14 ± 20% +0.1 0.27 ± 9% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.15 ± 12% +0.1 0.29 ± 39% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.17 ± 16% +0.2 0.33 ± 14% perf-profile.self.cycles-pp.hrtimer_interrupt
0.09 ± 23% +0.2 0.25 ± 20% perf-profile.self.cycles-pp.rebalance_domains
0.00 +0.2 0.19 ± 91% perf-profile.self.cycles-pp.run_posix_cpu_timers
0.20 ± 12% +0.2 0.39 ± 29% perf-profile.self.cycles-pp.update_irq_load_avg
0.19 ± 22% +0.2 0.40 ± 27% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.32 ± 13% +0.2 0.53 ± 8% perf-profile.self.cycles-pp.do_idle
0.19 ± 20% +0.2 0.42 ± 33% perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.42 ± 10% +0.3 0.71 ± 29% perf-profile.self.cycles-pp.tick_nohz_next_event
0.50 ± 5% +0.3 0.85 ± 16% perf-profile.self.cycles-pp.native_sched_clock
0.44 ± 12% +0.4 0.82 ± 26% perf-profile.self.cycles-pp.irqtime_account_irq
0.42 ± 13% +0.4 0.81 ± 13% perf-profile.self.cycles-pp.update_process_times
0.60 ± 7% +0.4 1.02 ± 12% perf-profile.self.cycles-pp.read_tsc
0.50 ± 7% +0.5 0.96 ± 20% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.89 ± 7% +0.6 1.54 ± 23% perf-profile.self.cycles-pp.lapic_next_deadline
3.09 ± 27% +8.1 11.17 ± 35% perf-profile.self.cycles-pp.menu_select
perf-sched.total_wait_time.max.ms
  (empty plot: y axis 0 - 10000 ms, no data points rendered)
perf-sched.total_wait_time.average.ms
300 +---------------------------------------------------------------------+
| |
250 |-+ O O O O OOOO |
| O OO O OO OO OO O O OO OO O O |
|OO O O O |
200 |-+ O O O |
| |
150 |-+ |
| |
100 |-+ |
| |
| |
50 |-+ |
| |
0 +---------------------------------------------------------------------+
perf-sched.total_wait_and_delay.count.ms
8000 +--------------------------------------------------------------------+
| O |
7000 |OO O O OOO |
6000 |-+OOOOO OO OO OO OO OO O OO O |
| O O O O O |
5000 |-+ |
| |
4000 |-+ |
| |
3000 |-+ |
2000 |-+ |
| |
1000 |-+ |
|+ +++ ++.+++ +++ ++++++.+++ +++ +++ +++ .++++ ++++++++ +.++++++++|
0 +--------------------------------------------------------------------+
perf-sched.total_wait_and_delay.max.ms
  (empty plot: y axis 0 - 10000 ms, no data points rendered)
perf-sched.total_wait_and_delay.average.ms
300 +---------------------------------------------------------------------+
| |
250 |-+ O O O O O O OOOO |
| O OO O O O OO O O OO OO O O |
|OO O O O |
200 |-+ O O O |
| |
150 |-+ |
| |
100 |-+ |
| |
| |
50 |-+ |
| |
0 +---------------------------------------------------------------------+
[ASCII plot] perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read -- bisect-bad samples near 1000 ms
[ASCII plot] perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read -- bisect-bad samples around 70-85 ms
[ASCII plot] perf-sched.wait_and_delay.count.pipe_read.new_sync_read.vfs_read.ksys_read -- bisect-bad samples around 1800-2300; bisect-good samples near the bottom of the scale
[ASCII plot] perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read -- bisect-bad samples near 1000 ms
[ASCII plot] perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read -- bisect-bad samples around 70-85 ms
[ASCII plot] perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork -- bisect-bad samples mostly near 1000 ms, with outliers up to about 5000 ms
[ASCII plot] perf-sched.wait_time.avg.ms.smpboot_thread_fn.kthread.ret_from_fork -- bisect-bad samples around 650-780 ms
[ASCII plot] perf-sched.wait_and_delay.count.smpboot_thread_fn.kthread.ret_from_fork -- bisect-bad samples around 1150-1350; bisect-good samples near the bottom of the scale
[ASCII plot] perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork -- bisect-bad samples mostly near 1000 ms, with outliers up to about 5000 ms
[ASCII plot] perf-sched.wait_and_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork -- bisect-bad samples around 700-780 ms
[ASCII plot] perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork -- bisect-bad samples around 8000-9000 ms
[ASCII plot] perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork -- bisect-bad samples around 400-500 ms
[ASCII plot] perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork -- bisect-bad samples around 550-650
[ASCII plot] perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork -- bisect-bad samples around 8000-9000 ms
[ASCII plot] perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork -- bisect-bad samples around 400-500 ms
[ASCII plot] perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 -- bisect-bad samples around 12-15 ms
[ASCII plot] perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 -- bisect-bad samples around 0.7-0.85 ms
[ASCII plot] perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64 -- bisect-bad samples around 200-230
[ASCII plot] perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork -- bisect-bad samples scattered between about 0.01 and 0.045 ms
[ASCII plot] perf-sched.wait_and_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork -- bisect-bad samples near 500 ms
[ASCII plot] perf-sched.wait_and_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork -- bisect-bad samples near 450 ms
[ASCII plot] stress-ng.time.system_time -- bisect-good samples around 20-26 s; no bisect-bad samples rendered
[ASCII plot] stress-ng.time.elapsed_time -- bisect-good samples near the bottom of the scale (60 s test); bisect-bad samples scattered between roughly 300 and 750 s
[ASCII plot] stress-ng.time.elapsed_time.max -- same pattern as stress-ng.time.elapsed_time
[ASCII plot] stress-ng.time.voluntary_context_switches -- bisect-good samples around 45000-85000; bisect-bad samples near the bottom of the scale
[ASCII plot] stress-ng.time.involuntary_context_switches -- bisect-good samples around 3500-4500; no bisect-bad samples rendered
[ASCII plot] stress-ng.loop.ops -- bisect-good samples around 1300-1750; no bisect-bad samples rendered
[ASCII plot] stress-ng.loop.ops_per_sec -- bisect-good samples around 22-28; no bisect-bad samples rendered
[ASCII plot] stress-ng.time.percent_of_cpu_this_job_got -- bisect-good samples around 35-41%; no bisect-bad samples rendered
[*] bisect-good sample
[O] bisect-bad sample
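For context, the stress-ng "loop" stressor spends its time attaching and detaching loop devices; detaching tears the disk down via del_gendisk(), which the blamed commit serializes under bd_mutex. A minimal shell sketch of one such attach/detach cycle (an illustration, not the stress-ng source; losetup needs root, so the cycle is skipped when run unprivileged):

```shell
#!/bin/sh
# Illustrative loop-device attach/detach cycle resembling what the
# stress-ng loop stressor exercises; not taken from stress-ng itself.
img=$(mktemp)
truncate -s 1M "$img"            # small backing file for the loop device
if [ "$(id -u)" -eq 0 ] && command -v losetup >/dev/null 2>&1; then
    dev=$(losetup --find --show "$img")   # attach: allocates /dev/loopN
    losetup -d "$dev"                     # detach: exercises the del_gendisk() path
fi
rm -f "$img"
```

Each completed cycle counts as one op, so anything that slows down the detach path shows up directly in stress-ng.loop.ops_per_sec.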
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org Intel Corporation
Thanks,
Oliver Sang
View attachment "config-5.12.0-rc4-00024-gc76f48eb5c08" of type "text/plain" (172837 bytes)
View attachment "job-script" of type "text/plain" (8116 bytes)
View attachment "job.yaml" of type "text/plain" (5668 bytes)
View attachment "reproduce" of type "text/plain" (532 bytes)