Message-ID: <202311061616.cd495695-oliver.sang@intel.com>
Date: Tue, 7 Nov 2023 09:40:41 +0800
From: kernel test robot <oliver.sang@...el.com>
To: David Howells <dhowells@...hat.com>
CC: <oe-lkp@...ts.linux.dev>, <lkp@...el.com>,
<linux-kernel@...r.kernel.org>,
Christian Brauner <brauner@...nel.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
Jens Axboe <axboe@...nel.dk>, Christoph Hellwig <hch@....de>,
Christian Brauner <christian@...uner.io>,
Matthew Wilcox <willy@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"David Laight" <David.Laight@...lab.com>, <ying.huang@...el.com>,
<feng.tang@...el.com>, <fengwei.yin@...el.com>,
<oliver.sang@...el.com>
Subject: [linus:master] [iov_iter] c9eec08bac: vm-scalability.throughput
-16.9% regression

Hello,

kernel test robot noticed a -16.9% regression of vm-scalability.throughput on:
commit: c9eec08bac96898573c236af9cb0ccee765684fc ("iov_iter: Don't deal with iter->copy_mc in memcpy_from_iter_mc()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
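
[Editorial context, not part of the robot's report: per its title, the flagged commit stops testing iter->copy_mc inside memcpy_from_iter_mc(), presumably dispatching the machine-check copy case once up front instead of re-checking the flag for every copied segment. The self-contained C sketch below only illustrates that general before/after shape; all names in it (demo_iter, copy_segment_checked(), copy_from_iter_demo(), copy_mc_demo(), etc.) are hypothetical and are not the kernel's actual identifiers. The call-trace data further down is at least consistent with such a reshuffle: memcpy_from_iter_mc drops out of the profile while copy_page_from_iter_atomic absorbs the copy time.]

/*
 * Illustrative sketch only (not kernel code): contrast a per-segment
 * flag check with a single up-front dispatch, the general shape of
 * change the commit title describes.  All names are hypothetical.
 */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct demo_iter {
	int copy_mc;              /* stand-in for iter->copy_mc */
	size_t count;
	const char *buf;
};

/* stand-in for a hardened, machine-check-aware copy */
static size_t copy_mc_demo(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);    /* the real helper tolerates poisoned memory */
	return 0;                 /* bytes NOT copied */
}

/* "Before" shape: the flag is re-tested for every segment copied. */
static size_t copy_segment_checked(struct demo_iter *it, void *to,
				   const void *from, size_t len)
{
	if (it->copy_mc)
		return copy_mc_demo(to, from, len);
	memcpy(to, from, len);
	return 0;
}

/* "After" shape: the rare copy_mc case gets its own entry point... */
static size_t copy_from_iter_mc_demo(struct demo_iter *it, void *to, size_t len)
{
	return copy_mc_demo(to, it->buf, len);
}

/* ...and the common path decides once, then never looks at the flag again. */
static size_t copy_from_iter_demo(struct demo_iter *it, void *to, size_t len)
{
	if (it->copy_mc)
		return copy_from_iter_mc_demo(it, to, len);
	memcpy(to, it->buf, len); /* plain fast path */
	return 0;
}

int main(void)
{
	char dst[8];
	struct demo_iter it = { .copy_mc = 0, .count = 5, .buf = "hello" };

	copy_segment_checked(&it, dst, it.buf, it.count);  /* old shape */
	copy_from_iter_demo(&it, dst, it.count);           /* new shape */
	printf("%.5s\n", dst);
	return 0;
}
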
testcase: vm-scalability
test machine: 224 threads 4 sockets Intel(R) Xeon(R) Platinum 8380H CPU @ 2.90GHz (Cooper Lake) with 192G memory
parameters:
runtime: 300s
size: 256G
test: msync
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel that are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@...el.com>
| Closes: https://lore.kernel.org/oe-lkp/202311061616.cd495695-oliver.sang@intel.com
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20231106/202311061616.cd495695-oliver.sang@intel.com
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-12/performance/x86_64-rhel-8.3/debian-11.1-x86_64-20220510.cgz/300s/256G/lkp-cpl-4sp2/msync/vm-scalability
commit:
f1982740f5 ("iov_iter: Convert iterate*() to inline funcs")
c9eec08bac ("iov_iter: Don't deal with iter->copy_mc in memcpy_from_iter_mc()")
f1982740f5e77090 c9eec08bac96898573c236af9cb
---------------- ---------------------------
                 %stddev     %change         %stddev
                     \          |                \
17367 -16.8% 14444 vm-scalability.median
6.13 ± 26% +4.3 10.39 ± 18% vm-scalability.stddev%
6319269 -16.9% 5252011 vm-scalability.throughput
309.64 +6.3% 329.22 vm-scalability.time.elapsed_time
309.64 +6.3% 329.22 vm-scalability.time.elapsed_time.max
1.77e+09 -11.1% 1.574e+09 vm-scalability.time.file_system_outputs
2.357e+08 -11.1% 2.095e+08 vm-scalability.time.minor_page_faults
595.33 -15.1% 505.50 vm-scalability.time.percent_of_cpu_this_job_got
1452 -9.9% 1308 vm-scalability.time.system_time
392.26 ± 2% -8.9% 357.20 ± 3% vm-scalability.time.user_time
1369046 -4.4% 1308968 vm-scalability.time.voluntary_context_switches
9.952e+08 -11.1% 8.846e+08 vm-scalability.workload
95.00 ± 5% +37.9% 131.00 ± 4% perf-c2c.DRAM.local
2.17 -0.3 1.90 mpstat.cpu.all.sys%
0.39 ± 2% -0.1 0.34 ± 2% mpstat.cpu.all.usr%
264231 ± 7% +20.1% 317338 ± 9% numa-meminfo.node1.Writeback
3161539 ± 6% +11.1% 3513478 ± 3% numa-meminfo.node2.Inactive(anon)
2798749 -16.1% 2347030 vmstat.io.bo
7.61 ± 4% -12.7% 6.64 ± 6% vmstat.procs.r
16971 -10.4% 15204 vmstat.system.cs
1378902 ± 38% +389.9% 6755495 ± 45% numa-numastat.node0.numa_foreign
4177825 ± 40% -64.1% 1500551 ±107% numa-numastat.node0.numa_miss
4264407 ± 39% -63.4% 1562785 ±101% numa-numastat.node0.other_node
5590015 ± 78% -62.7% 2085869 ±113% numa-numastat.node3.numa_foreign
169.33 ± 2% -10.4% 151.67 turbostat.Avg_MHz
4.47 ± 2% -0.5 4.00 turbostat.Busy%
435.99 -1.9% 427.64 turbostat.PkgWatt
21.64 -3.5% 20.88 turbostat.RAMWatt
124185 ± 13% -32.3% 84070 ± 10% sched_debug.cfs_rq:/.avg_vruntime.min
124185 ± 13% -32.3% 84070 ± 10% sched_debug.cfs_rq:/.min_vruntime.min
105.08 ± 37% -61.2% 40.75 ± 7% sched_debug.cfs_rq:/.runnable_avg.avg
164.08 ± 14% -32.5% 110.71 ± 4% sched_debug.cfs_rq:/.runnable_avg.stddev
104.27 ± 38% -61.2% 40.49 ± 7% sched_debug.cfs_rq:/.util_avg.avg
162.41 ± 15% -32.4% 109.74 ± 4% sched_debug.cfs_rq:/.util_avg.stddev
2781 ± 24% -44.3% 1549 ± 14% sched_debug.cpu.curr->pid.stddev
0.59 ± 7% +12.4% 0.66 ± 4% sched_debug.cpu.nr_uninterruptible.avg
55282809 ± 6% -13.7% 47726748 ± 6% numa-vmstat.node0.nr_dirtied
1145143 ± 9% -20.4% 912102 ± 17% numa-vmstat.node0.nr_free_pages
55282809 ± 6% -13.7% 47726748 ± 6% numa-vmstat.node0.nr_written
1378902 ± 38% +389.9% 6755495 ± 45% numa-vmstat.node0.numa_foreign
4177825 ± 40% -64.1% 1500551 ±107% numa-vmstat.node0.numa_miss
4264407 ± 39% -63.4% 1562785 ±101% numa-vmstat.node0.numa_other
65521 ± 8% +20.8% 79128 ± 10% numa-vmstat.node1.nr_writeback
56237202 ± 6% -11.8% 49599462 ± 4% numa-vmstat.node2.nr_dirtied
789922 ± 6% +11.2% 878674 ± 3% numa-vmstat.node2.nr_inactive_anon
7363130 ± 14% -25.5% 5486565 ± 13% numa-vmstat.node2.nr_vmscan_immediate_reclaim
56237202 ± 6% -11.8% 49599462 ± 4% numa-vmstat.node2.nr_written
789923 ± 6% +11.2% 878673 ± 3% numa-vmstat.node2.nr_zone_inactive_anon
56312677 ± 5% -13.8% 48559691 ± 7% numa-vmstat.node3.nr_dirtied
56312677 ± 5% -13.8% 48559691 ± 7% numa-vmstat.node3.nr_written
5590015 ± 78% -62.7% 2085869 ±113% numa-vmstat.node3.numa_foreign
10311 ± 35% -68.9% 3204 ± 72% proc-vmstat.compact_success
14371045 ± 2% +3.7% 14896466 proc-vmstat.nr_active_file
98005 -1.9% 96096 proc-vmstat.nr_anon_pages
2.213e+08 -11.1% 1.967e+08 proc-vmstat.nr_dirtied
3160899 +9.1% 3447334 ± 2% proc-vmstat.nr_inactive_anon
27214352 ± 2% -14.1% 23369413 ± 2% proc-vmstat.nr_vmscan_immediate_reclaim
2.213e+08 -11.1% 1.967e+08 proc-vmstat.nr_written
14371387 ± 2% +3.7% 14896745 proc-vmstat.nr_zone_active_file
3160900 +9.1% 3447332 ± 2% proc-vmstat.nr_zone_inactive_anon
19216 ± 11% -71.2% 5539 ± 4% proc-vmstat.numa_hint_faults
9725 ± 31% -70.0% 2917 ± 59% proc-vmstat.numa_hint_faults_local
1996 ± 71% -76.8% 462.83 ± 80% proc-vmstat.numa_pages_migrated
3.344e+08 -11.4% 2.964e+08 proc-vmstat.pgactivate
2.646e+08 -10.0% 2.382e+08 proc-vmstat.pgalloc_normal
1.158e+08 ± 4% -13.7% 99968494 ± 3% proc-vmstat.pgdeactivate
2.374e+08 -11.1% 2.111e+08 proc-vmstat.pgfault
2.673e+08 -10.1% 2.402e+08 proc-vmstat.pgfree
3693 ± 3% -12.6% 3227 ± 3% proc-vmstat.pgmajfault
8.853e+08 -11.1% 7.869e+08 proc-vmstat.pgpgout
1.158e+08 ± 4% -13.7% 99968494 ± 3% proc-vmstat.pgrefill
127095 ± 2% -10.4% 113877 proc-vmstat.pgreuse
28337584 ± 2% -14.4% 24247338 proc-vmstat.pgrotated
61733485 ± 2% -11.9% 54360975 ± 4% proc-vmstat.pgscan_file
45323460 ± 5% -9.5% 40999529 ± 8% proc-vmstat.pgscan_kswapd
6262367 ± 4% -20.7% 4965325 ± 11% proc-vmstat.pgsteal_direct
31649958 ± 3% -11.4% 28049294 ± 6% proc-vmstat.pgsteal_file
11627904 -6.8% 10841344 proc-vmstat.unevictable_pgs_scanned
6985805 ± 2% -18.0% 5728058 ± 10% proc-vmstat.workingset_activate_file
7061865 ± 3% -12.0% 6214389 ± 5% proc-vmstat.workingset_refault_file
6985038 ± 2% -18.0% 5727368 ± 10% proc-vmstat.workingset_restore_file
9.17 -9.3% 8.31 perf-stat.i.MPKI
5.443e+09 -17.6% 4.486e+09 perf-stat.i.branch-instructions
13498005 ± 4% -13.9% 11626468 ± 2% perf-stat.i.branch-misses
80.16 -2.8 77.38 perf-stat.i.cache-miss-rate%
1.985e+08 -26.1% 1.467e+08 perf-stat.i.cache-misses
2.439e+08 -24.0% 1.854e+08 perf-stat.i.cache-references
16944 -10.7% 15132 perf-stat.i.context-switches
1.41 +12.8% 1.60 ± 5% perf-stat.i.cpi
3.482e+10 ± 2% -11.4% 3.086e+10 perf-stat.i.cpu-cycles
360.77 -6.1% 338.94 perf-stat.i.cpu-migrations
0.02 ± 5% +0.0 0.02 ± 3% perf-stat.i.dTLB-load-miss-rate%
826726 -9.6% 747639 ± 3% perf-stat.i.dTLB-load-misses
5.721e+09 ± 2% -21.9% 4.467e+09 perf-stat.i.dTLB-loads
0.13 +0.0 0.15 perf-stat.i.dTLB-store-miss-rate%
3828019 ± 2% -17.2% 3168599 perf-stat.i.dTLB-store-misses
2.814e+09 -26.4% 2.071e+09 perf-stat.i.dTLB-stores
55.13 -0.7 54.40 perf-stat.i.iTLB-load-miss-rate%
4113741 -10.8% 3670976 ± 2% perf-stat.i.iTLB-load-misses
3278251 -8.8% 2989968 perf-stat.i.iTLB-loads
2.171e+10 -20.0% 1.736e+10 perf-stat.i.instructions
5084 -10.0% 4578 perf-stat.i.instructions-per-iTLB-miss
0.85 -13.1% 0.74 ± 3% perf-stat.i.ipc
1448 ± 5% -14.8% 1233 ± 9% perf-stat.i.major-faults
0.15 ± 2% -11.4% 0.14 perf-stat.i.metric.GHz
925.09 -11.4% 819.35 perf-stat.i.metric.K/sec
62.72 -21.4% 49.31 perf-stat.i.metric.M/sec
703868 -16.4% 588183 perf-stat.i.minor-faults
80.84 -5.4 75.47 perf-stat.i.node-load-miss-rate%
10635123 +18.7% 12623931 perf-stat.i.node-load-misses
2610552 +59.3% 4158136 ± 2% perf-stat.i.node-loads
76.21 -2.7 73.54 perf-stat.i.node-store-miss-rate%
46019985 ± 2% -36.6% 29187579 perf-stat.i.node-store-misses
16991220 ± 3% -26.7% 12448792 perf-stat.i.node-stores
705316 -16.4% 589415 perf-stat.i.page-faults
9.21 -7.2% 8.54 perf-stat.overall.MPKI
81.37 -2.2 79.20 perf-stat.overall.cache-miss-rate%
1.65 +10.7% 1.82 perf-stat.overall.cpi
178.71 +19.4% 213.30 perf-stat.overall.cycles-between-cache-misses
0.01 ± 2% +0.0 0.02 ± 4% perf-stat.overall.dTLB-load-miss-rate%
0.14 +0.0 0.15 perf-stat.overall.dTLB-store-miss-rate%
5329 -10.1% 4789 perf-stat.overall.instructions-per-iTLB-miss
0.61 -9.7% 0.55 perf-stat.overall.ipc
80.15 -5.0 75.17 perf-stat.overall.node-load-miss-rate%
72.82 -2.9 69.89 perf-stat.overall.node-store-miss-rate%
6918 -4.2% 6625 perf-stat.overall.path-length
5.505e+09 -17.4% 4.549e+09 perf-stat.ps.branch-instructions
13649952 ± 4% -13.7% 11775243 ± 2% perf-stat.ps.branch-misses
2.023e+08 -25.6% 1.506e+08 perf-stat.ps.cache-misses
2.487e+08 -23.6% 1.901e+08 perf-stat.ps.cache-references
16976 -10.7% 15156 perf-stat.ps.context-switches
3.616e+10 ± 2% -11.2% 3.211e+10 perf-stat.ps.cpu-cycles
364.05 -6.1% 341.97 perf-stat.ps.cpu-migrations
836446 -9.5% 756895 ± 3% perf-stat.ps.dTLB-load-misses
5.773e+09 ± 2% -21.7% 4.521e+09 perf-stat.ps.dTLB-loads
3873256 ± 2% -16.9% 3216923 perf-stat.ps.dTLB-store-misses
2.862e+09 -26.0% 2.119e+09 perf-stat.ps.dTLB-stores
4124868 -10.7% 3682495 ± 2% perf-stat.ps.iTLB-load-misses
3267583 -8.7% 2983240 perf-stat.ps.iTLB-loads
2.198e+10 -19.8% 1.763e+10 perf-stat.ps.instructions
1468 ± 4% -15.2% 1245 ± 9% perf-stat.ps.major-faults
711215 -16.2% 596020 perf-stat.ps.minor-faults
10655899 +18.5% 12627905 perf-stat.ps.node-load-misses
2638928 +58.0% 4170441 ± 2% perf-stat.ps.node-loads
47247465 ± 2% -35.7% 30389920 perf-stat.ps.node-store-misses
17634674 ± 3% -25.7% 13094211 ± 2% perf-stat.ps.node-stores
712683 -16.2% 597265 perf-stat.ps.page-faults
6.885e+12 -14.9% 5.861e+12 perf-stat.total.instructions
0.00 ± 35% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__cond_resched.__get_user_pages.populate_vma_page_range.__mm_populate.vm_mmap_pgoff
0.02 ± 52% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__cond_resched.cgroup_rstat_flush_locked.cgroup_rstat_flush.do_flush_stats.mem_cgroup_wb_stats
0.00 ± 17% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__cond_resched.down_read.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff
0.00 ± 62% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__cond_resched.down_read.page_cache_ra_order.filemap_fault.__do_fault
0.00 ± 28% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__cond_resched.down_read.xfs_ilock_for_iomap.xfs_read_iomap_begin.iomap_iter
0.00 ± 31% -100.0% 0.00 perf-sched.sch_delay.avg.ms.__cond_resched.shrink_active_list.shrink_lruvec.shrink_node_memcgs.shrink_node
0.02 ± 43% -78.5% 0.00 ± 22% perf-sched.sch_delay.avg.ms.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
0.02 ± 24% -91.6% 0.00 ±100% perf-sched.sch_delay.avg.ms.__cond_resched.stop_one_cpu.sched_exec.bprm_execve.part
3.27 ± 12% -51.4% 1.59 ± 73% perf-sched.sch_delay.avg.ms.__cond_resched.ww_mutex_lock.drm_gem_vunmap_unlocked.drm_gem_fb_vunmap.drm_atomic_helper_cleanup_planes
0.01 ± 21% -61.2% 0.01 ± 7% perf-sched.sch_delay.avg.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.01 ± 12% -100.0% 0.00 perf-sched.sch_delay.avg.ms.d_alloc_parallel.__lookup_slow.walk_component.link_path_walk.part
0.01 ± 9% -41.5% 0.01 ± 7% perf-sched.sch_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.01 ± 32% -54.5% 0.01 ± 11% perf-sched.sch_delay.avg.ms.do_nanosleep.hrtimer_nanosleep.common_nsleep.__x64_sys_clock_nanosleep
0.00 ± 10% -50.0% 0.00 ± 17% perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.02 ± 7% -73.9% 0.01 perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.01 ± 26% -67.9% 0.00 ±108% perf-sched.sch_delay.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault
0.01 ± 58% -100.0% 0.00 perf-sched.sch_delay.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single
0.00 ± 56% -100.0% 0.00 perf-sched.sch_delay.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi
0.01 ± 27% -82.2% 0.00 ±141% perf-sched.sch_delay.avg.ms.io_schedule.folio_wait_bit_common.filemap_fault.__do_fault
0.01 ± 11% -36.8% 0.00 perf-sched.sch_delay.avg.ms.io_schedule.rq_qos_wait.wbt_wait.__rq_qos_throttle
0.01 ± 13% -68.8% 0.00 ±141% perf-sched.sch_delay.avg.ms.kswapd_try_to_sleep.kswapd.kthread.ret_from_fork
0.01 ± 15% -69.3% 0.00 ± 9% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
0.02 ± 9% -62.6% 0.01 ± 8% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.do_select.core_sys_select.kern_select
0.01 ± 8% -44.6% 0.01 ± 9% perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.01 ± 82% -77.3% 0.00 ± 50% perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.xfs_map_blocks
0.00 ± 63% +3966.7% 0.06 ± 68% perf-sched.sch_delay.avg.ms.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
0.01 ± 10% -31.7% 0.00 ± 10% perf-sched.sch_delay.avg.ms.schedule_timeout.io_schedule_timeout.balance_dirty_pages.balance_dirty_pages_ratelimited_flags
0.01 ± 10% -58.9% 0.00 ± 9% perf-sched.sch_delay.avg.ms.schedule_timeout.xfsaild.kthread.ret_from_fork
0.01 ± 9% -25.6% 0.01 ± 8% perf-sched.sch_delay.avg.ms.sigsuspend.__x64_sys_rt_sigsuspend.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.03 ± 7% -65.2% 0.01 ± 5% perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
0.01 ± 12% -45.2% 0.01 ± 7% perf-sched.sch_delay.avg.ms.syslog_print.do_syslog.kmsg_read.vfs_read
0.00 ± 11% -40.7% 0.00 ± 17% perf-sched.sch_delay.avg.ms.wait_for_partner.fifo_open.do_dentry_open.do_open
0.25 ± 11% -82.4% 0.04 ± 35% perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
0.01 ± 51% -100.0% 0.00 perf-sched.sch_delay.avg.ms.xlog_wait_on_iclog.xfs_extent_busy_flush.xfs_alloc_ag_vextent_near.xfs_alloc_vextent_near_bno
0.01 ± 23% -33.3% 0.00 ± 14% perf-sched.sch_delay.avg.ms.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.__do_sys_msync
0.01 ± 50% -49.2% 0.01 ± 7% perf-sched.sch_delay.avg.ms.xlog_wait_on_iclog.xlog_cil_push_work.process_one_work.worker_thread
0.01 ± 34% -100.0% 0.00 perf-sched.sch_delay.max.ms.__cond_resched.__alloc_pages_slowpath.constprop.0.__alloc_pages
0.01 ± 58% -100.0% 0.00 perf-sched.sch_delay.max.ms.__cond_resched.__get_user_pages.populate_vma_page_range.__mm_populate.vm_mmap_pgoff
0.03 ± 39% -79.6% 0.01 ± 17% perf-sched.sch_delay.max.ms.__cond_resched.__wait_for_common.affine_move_task.__set_cpus_allowed_ptr.__sched_setaffinity
0.03 ± 60% -95.7% 0.00 ±141% perf-sched.sch_delay.max.ms.__cond_resched.__xfs_filemap_fault.do_page_mkwrite.do_fault.__handle_mm_fault
0.01 ± 68% -100.0% 0.00 perf-sched.sch_delay.max.ms.__cond_resched.balance_pgdat.kswapd.kthread.ret_from_fork
0.02 ± 45% -100.0% 0.00 perf-sched.sch_delay.max.ms.__cond_resched.cgroup_rstat_flush_locked.cgroup_rstat_flush.do_flush_stats.mem_cgroup_wb_stats
0.05 ± 49% -100.0% 0.00 perf-sched.sch_delay.max.ms.__cond_resched.down_read.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff
0.00 ± 20% -100.0% 0.00 perf-sched.sch_delay.max.ms.__cond_resched.down_read.page_cache_ra_order.filemap_fault.__do_fault
0.01 ± 31% -100.0% 0.00 perf-sched.sch_delay.max.ms.__cond_resched.down_read.xfs_ilock_for_iomap.xfs_read_iomap_begin.iomap_iter
0.03 ± 38% -65.9% 0.01 ± 27% perf-sched.sch_delay.max.ms.__cond_resched.generic_perform_write.shmem_file_write_iter.do_iter_readv_writev.do_iter_write
0.01 ± 51% -100.0% 0.00 perf-sched.sch_delay.max.ms.__cond_resched.shrink_active_list.shrink_lruvec.shrink_node_memcgs.shrink_node
0.03 ± 31% -100.0% 0.00 perf-sched.sch_delay.max.ms.__cond_resched.shrink_node_memcgs.shrink_node.shrink_zones.constprop
0.79 ± 77% -96.4% 0.03 ±169% perf-sched.sch_delay.max.ms.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
0.57 ± 29% -99.7% 0.00 ±100% perf-sched.sch_delay.max.ms.__cond_resched.stop_one_cpu.sched_exec.bprm_execve.part
6.22 ± 36% -70.0% 1.87 ± 70% perf-sched.sch_delay.max.ms.__cond_resched.ww_mutex_lock.drm_gem_vunmap_unlocked.drm_gem_fb_vunmap.drm_atomic_helper_cleanup_planes
0.04 ± 45% -82.6% 0.01 ± 38% perf-sched.sch_delay.max.ms.__x64_sys_pause.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
0.02 ± 18% -100.0% 0.00 perf-sched.sch_delay.max.ms.d_alloc_parallel.__lookup_slow.walk_component.link_path_walk.part
0.10 ± 58% -82.7% 0.02 ± 7% perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
0.03 ± 53% -78.9% 0.01 ± 25% perf-sched.sch_delay.max.ms.do_nanosleep.hrtimer_nanosleep.common_nsleep.__x64_sys_clock_nanosleep
0.51 ± 36% -94.9% 0.03 ±129% perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.89 ± 90% -98.6% 0.01 ± 17% perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.06 ± 41% -94.9% 0.00 ±108% perf-sched.sch_delay.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_exc_page_fault
0.06 ± 48% -90.4% 0.01 ± 22% perf-sched.sch_delay.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
0.04 ± 84% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single
0.02 ± 38% -100.0% 0.00 perf-sched.sch_delay.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi
0.03 ± 57% -95.4% 0.00 ±141% perf-sched.sch_delay.max.ms.io_schedule.folio_wait_bit_common.filemap_fault.__do_fault
0.01 ± 43% -80.7% 0.00 ±142% perf-sched.sch_delay.max.ms.kswapd_try_to_sleep.kswapd.kthread.ret_from_fork
0.28 ± 94% -97.0% 0.01 ± 16% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
0.73 ±161% -98.3% 0.01 ± 32% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.do_select.core_sys_select.kern_select
0.18 ±119% -91.8% 0.02 ± 13% perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
0.06 ± 65% -95.5% 0.00 ± 46% perf-sched.sch_delay.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.xfs_map_blocks
0.01 ± 48% +6086.4% 0.45 ± 32% perf-sched.sch_delay.max.ms.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
0.04 ± 47% -81.9% 0.01 ± 20% perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
0.07 ± 35% -78.7% 0.01 ± 87% perf-sched.sch_delay.max.ms.schedule_timeout.xfsaild.kthread.ret_from_fork
13.32 ± 13% -71.3% 3.83 ± 4% perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
0.10 ± 46% -82.0% 0.02 ± 23% perf-sched.sch_delay.max.ms.syslog_print.do_syslog.kmsg_read.vfs_read
0.02 ± 40% -71.1% 0.01 ± 45% perf-sched.sch_delay.max.ms.wait_for_partner.fifo_open.do_dentry_open.do_open
20.07 ± 28% -76.3% 4.76 ± 58% perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
0.00 ±145% +320.0% 0.00 ± 27% perf-sched.sch_delay.max.ms.xlog_force_lsn.xfs_log_force_seq.xfs_file_fsync.__do_sys_msync
0.01 ± 51% -100.0% 0.00 perf-sched.sch_delay.max.ms.xlog_wait_on_iclog.xfs_extent_busy_flush.xfs_alloc_ag_vextent_near.xfs_alloc_vextent_near_bno
0.03 ± 62% -74.2% 0.01 ± 22% perf-sched.sch_delay.max.ms.xlog_wait_on_iclog.xlog_cil_push_work.process_one_work.worker_thread
0.03 ± 3% -75.6% 0.01 ± 14% perf-sched.total_sch_delay.average.ms
52.36 ± 2% +16.5% 61.01 ± 2% perf-sched.total_wait_and_delay.average.ms
56658 ± 4% -21.4% 44529 ± 5% perf-sched.total_wait_and_delay.count.ms
52.33 ± 2% +16.6% 61.01 ± 2% perf-sched.total_wait_time.average.ms
0.15 ± 15% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__cond_resched.down_read.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff
0.50 ± 30% +92.1% 0.95 ± 20% perf-sched.wait_and_delay.avg.ms.__cond_resched.loop_process_work.process_one_work.worker_thread.kthread
0.95 ± 50% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.__cond_resched.shrink_node_memcgs.shrink_node.shrink_zones.constprop
26.03 ± 10% +166.5% 69.36 ± 14% perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
1.09 ± 31% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
0.06 ± 31% +675.6% 0.44 ± 59% perf-sched.wait_and_delay.avg.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
0.72 ± 65% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.io_schedule.folio_wait_bit_common.write_cache_pages.iomap_writepages
67.31 ± 6% +771.1% 586.35 ± 10% perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
8.78 ± 21% +1848.5% 171.11 ± 33% perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.do_select.core_sys_select.kern_select
3.98 ± 2% -100.0% 0.00 perf-sched.wait_and_delay.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
43.48 ± 3% +12.9% 49.09 ± 2% perf-sched.wait_and_delay.avg.ms.schedule_timeout.io_schedule_timeout.balance_dirty_pages.balance_dirty_pages_ratelimited_flags
111.19 ± 16% -89.7% 11.50 ±223% perf-sched.wait_and_delay.avg.ms.schedule_timeout.kswapd_try_to_sleep.kswapd.kthread
4.66 ± 5% +171.8% 12.67 ± 16% perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
194.78 ± 8% +71.6% 334.26 ± 6% perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
369.50 ± 7% -100.0% 0.00 perf-sched.wait_and_delay.count.__cond_resched.down_read.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff
1208 ± 22% -100.0% 0.00 perf-sched.wait_and_delay.count.__cond_resched.shrink_node_memcgs.shrink_node.shrink_zones.constprop
9.00 ± 40% -83.3% 1.50 ±120% perf-sched.wait_and_delay.count.__cond_resched.ww_mutex_lock.drm_gem_vunmap_unlocked.drm_gem_fb_vunmap.drm_atomic_helper_cleanup_planes
421.00 ± 7% -89.9% 42.67 ± 49% perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64
848.50 -73.3% 226.17 ± 20% perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
1048 -100.0% 0.00 perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
1875 ± 36% -100.0% 0.00 perf-sched.wait_and_delay.count.io_schedule.folio_wait_bit_common.write_cache_pages.iomap_writepages
411.67 ± 13% +379.1% 1972 ± 91% perf-sched.wait_and_delay.count.io_schedule.rq_qos_wait.wbt_wait.__rq_qos_throttle
9.67 ± 22% -91.4% 0.83 ±175% perf-sched.wait_and_delay.count.kswapd_try_to_sleep.kswapd.kthread.ret_from_fork
188.33 ± 2% -87.9% 22.83 ± 6% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
1268 -92.1% 99.83 ± 56% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.do_select.core_sys_select.kern_select
433.67 ± 6% -87.1% 56.00 ± 46% perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
341.50 -100.0% 0.00 perf-sched.wait_and_delay.count.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
39.33 ± 17% -94.9% 2.00 ±223% perf-sched.wait_and_delay.count.schedule_timeout.kswapd_try_to_sleep.kswapd.kthread
1003 ± 7% -61.6% 385.67 ± 21% perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
422.17 ± 6% -90.0% 42.33 ± 49% perf-sched.wait_and_delay.count.syslog_print.do_syslog.kmsg_read.vfs_read
4337 ± 7% -47.9% 2261 ± 6% perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
2.86 ± 48% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__cond_resched.down_read.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff
206.38 ±172% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.__cond_resched.shrink_node_memcgs.shrink_node.shrink_zones.constprop
799.81 ± 24% -62.8% 297.34 ±108% perf-sched.wait_and_delay.max.ms.__cond_resched.ww_mutex_lock.drm_gem_vunmap_unlocked.drm_gem_fb_vunmap.drm_atomic_helper_cleanup_planes
2688 ± 42% -62.8% 1000 perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
198.42 ±181% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
87.70 ± 27% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.io_schedule.folio_wait_bit_common.write_cache_pages.iomap_writepages
108.24 -67.7% 34.93 ±141% perf-sched.wait_and_delay.max.ms.kswapd_try_to_sleep.kswapd.kthread.ret_from_fork
1386 ± 21% -26.7% 1015 perf-sched.wait_and_delay.max.ms.pipe_read.vfs_read.ksys_read.do_syscall_64
20.54 ± 18% -100.0% 0.00 perf-sched.wait_and_delay.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
59.91 ± 3% -7.5% 55.43 perf-sched.wait_and_delay.max.ms.schedule_timeout.io_schedule_timeout.balance_dirty_pages.balance_dirty_pages_ratelimited_flags
484.94 ± 15% -96.4% 17.50 ±223% perf-sched.wait_and_delay.max.ms.schedule_timeout.kswapd_try_to_sleep.kswapd.kthread
20.82 ± 4% +1402.3% 312.82 ± 16% perf-sched.wait_and_delay.max.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
2686 ± 28% +45.1% 3897 ± 11% perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
9.41 ±165% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.__alloc_pages.folio_alloc.page_cache_ra_order.filemap_fault
6.01 ±143% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.__alloc_pages.folio_alloc.page_cache_ra_unbounded.filemap_fault
0.48 ± 49% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.__alloc_pages_slowpath.constprop.0.__alloc_pages
0.14 ± 12% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.__get_user_pages.populate_vma_page_range.__mm_populate.vm_mmap_pgoff
11.89 ±142% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.__kmem_cache_alloc_node.__kmalloc.ifs_alloc.isra
28.12 ± 52% -100.0% 0.00 ±223% perf-sched.wait_time.avg.ms.__cond_resched.balance_pgdat.kswapd.kthread.ret_from_fork
1.65 ± 83% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.cgroup_rstat_flush.do_flush_stats.prepare_scan_count.shrink_node
31.34 ± 44% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.cgroup_rstat_flush_locked.cgroup_rstat_flush.do_flush_stats.mem_cgroup_wb_stats
0.15 ± 15% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.down_read.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff
0.14 ± 41% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.down_read.page_cache_ra_order.filemap_fault.__do_fault
0.45 ± 98% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.down_read.xfs_ilock_for_iomap.xfs_read_iomap_begin.iomap_iter
0.49 ± 30% +92.1% 0.95 ± 20% perf-sched.wait_time.avg.ms.__cond_resched.loop_process_work.process_one_work.worker_thread.kthread
16.59 ± 84% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.shrink_active_list.shrink_lruvec.shrink_node_memcgs.shrink_node
3.71 ± 98% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.shrink_folio_list.shrink_inactive_list.shrink_lruvec.shrink_node_memcgs
22.01 ±144% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.shrink_lruvec.shrink_node_memcgs.shrink_node.balance_pgdat
2.88 ± 70% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.shrink_lruvec.shrink_node_memcgs.shrink_node.shrink_zones
0.95 ± 50% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.shrink_node_memcgs.shrink_node.shrink_zones.constprop
1.75 ±217% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.shrink_slab.shrink_node_memcgs.shrink_node.shrink_zones
0.10 ± 25% -100.0% 0.00 perf-sched.wait_time.avg.ms.__cond_resched.stop_one_cpu.sched_exec.bprm_execve.part
0.02 ± 63% -100.0% 0.00 perf-sched.wait_time.avg.ms.d_alloc_parallel.__lookup_slow.walk_component.link_path_walk.part
11.83 ± 6% +614.1% 84.45 ± 33% perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
26.02 ± 10% +166.5% 69.36 ± 14% perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
1.07 ± 32% -63.2% 0.40 ± 3% perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
5.52 ± 36% +867.7% 53.45 ± 96% perf-sched.wait_time.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
6.45 ±157% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single
6.30 ± 91% -100.0% 0.00 perf-sched.wait_time.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi
0.66 ± 9% +26159.1% 172.08 ± 32% perf-sched.wait_time.avg.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
0.06 ± 33% +704.8% 0.44 ± 59% perf-sched.wait_time.avg.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
3.29 ± 6% +24.0% 4.08 ± 8% perf-sched.wait_time.avg.ms.rcu_gp_kthread.kthread.ret_from_fork.ret_from_fork_asm
67.30 ± 6% +771.3% 586.35 ± 10% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
8.77 ± 21% +1851.8% 171.10 ± 33% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.do_select.core_sys_select.kern_select
22.02 ± 5% +469.1% 125.31 ± 28% perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
3.90 ±143% -100.0% 0.00 perf-sched.wait_time.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.xfs_ilock
0.23 ±144% +1155.4% 2.93 ± 70% perf-sched.wait_time.avg.ms.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
3.97 ± 2% -87.0% 0.52 ± 2% perf-sched.wait_time.avg.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
43.47 ± 3% +12.9% 49.09 ± 2% perf-sched.wait_time.avg.ms.schedule_timeout.io_schedule_timeout.balance_dirty_pages.balance_dirty_pages_ratelimited_flags
111.18 ± 16% -89.7% 11.50 ±223% perf-sched.wait_time.avg.ms.schedule_timeout.kswapd_try_to_sleep.kswapd.kthread
4.65 ± 5% +172.1% 12.66 ± 16% perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
11.82 ± 6% +611.7% 84.14 ± 33% perf-sched.wait_time.avg.ms.syslog_print.do_syslog.kmsg_read.vfs_read
194.53 ± 8% +71.8% 334.21 ± 6% perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
1.73 ± 35% -100.0% 0.00 perf-sched.wait_time.avg.ms.xlog_wait_on_iclog.xfs_extent_busy_flush.xfs_alloc_ag_vextent_near.xfs_alloc_vextent_near_bno
2.84 ±141% +460.6% 15.94 ± 71% perf-sched.wait_time.avg.ms.xlog_wait_on_iclog.xfs_file_fsync.__do_sys_msync.do_syscall_64
14.04 ±140% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.__alloc_pages.folio_alloc.page_cache_ra_order.filemap_fault
14.40 ±140% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.__alloc_pages.folio_alloc.page_cache_ra_unbounded.filemap_fault
43.33 ± 5% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.__alloc_pages_slowpath.constprop.0.__alloc_pages
0.89 ± 26% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.__get_user_pages.populate_vma_page_range.__mm_populate.vm_mmap_pgoff
14.41 ±139% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.__kmem_cache_alloc_node.__kmalloc.ifs_alloc.isra
335.74 ± 32% -100.0% 0.00 ±223% perf-sched.wait_time.max.ms.__cond_resched.balance_pgdat.kswapd.kthread.ret_from_fork
28.27 ± 70% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.cgroup_rstat_flush.do_flush_stats.prepare_scan_count.shrink_node
44.68 ± 7% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.cgroup_rstat_flush_locked.cgroup_rstat_flush.do_flush_stats.mem_cgroup_wb_stats
2.83 ± 48% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.down_read.__mm_populate.vm_mmap_pgoff.ksys_mmap_pgoff
0.23 ± 54% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.down_read.page_cache_ra_order.filemap_fault.__do_fault
15.76 ±131% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.down_read.xfs_ilock_for_iomap.xfs_read_iomap_begin.iomap_iter
261.27 ± 84% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.shrink_active_list.shrink_lruvec.shrink_node_memcgs.shrink_node
20.86 ± 98% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.shrink_folio_list.shrink_inactive_list.shrink_lruvec.shrink_node_memcgs
114.79 ±142% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.shrink_lruvec.shrink_node_memcgs.shrink_node.balance_pgdat
35.78 ± 44% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.shrink_lruvec.shrink_node_memcgs.shrink_node.shrink_zones
206.37 ±172% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.shrink_node_memcgs.shrink_node.shrink_zones.constprop
7.04 ±218% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.shrink_slab.shrink_node_memcgs.shrink_node.shrink_zones
3.32 ± 38% -100.0% 0.00 perf-sched.wait_time.max.ms.__cond_resched.stop_one_cpu.sched_exec.bprm_execve.part
794.55 ± 24% -62.7% 296.03 ±108% perf-sched.wait_time.max.ms.__cond_resched.ww_mutex_lock.drm_gem_vunmap_unlocked.drm_gem_fb_vunmap.drm_atomic_helper_cleanup_planes
0.09 ± 49% -100.0% 0.00 perf-sched.wait_time.max.ms.d_alloc_parallel.__lookup_slow.walk_component.link_path_walk.part
2688 ± 42% -62.8% 1000 perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
198.19 ±181% -87.0% 25.83 ± 2% perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
47.05 ± 5% +48.0% 69.64 ± 61% perf-sched.wait_time.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt
22.53 ± 99% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single
38.60 ± 34% -100.0% 0.00 perf-sched.wait_time.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi
16.45 ± 39% +4001.0% 674.63 ± 19% perf-sched.wait_time.max.ms.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64
108.23 -67.7% 34.92 ±141% perf-sched.wait_time.max.ms.kswapd_try_to_sleep.kswapd.kthread.ret_from_fork
1386 ± 21% -26.7% 1015 perf-sched.wait_time.max.ms.pipe_read.vfs_read.ksys_read.do_syscall_64
121.93 ±134% -82.0% 21.99 ± 84% perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.xfs_map_blocks
8.42 ±177% -100.0% 0.00 perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.xfs_ilock
2.00 ±173% +1142.8% 24.86 ± 44% perf-sched.wait_time.max.ms.schedule_timeout.__wait_for_common.__flush_workqueue.xlog_cil_push_now.isra
20.49 ± 18% -94.7% 1.09 ± 5% perf-sched.wait_time.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
59.90 ± 3% -7.5% 55.43 perf-sched.wait_time.max.ms.schedule_timeout.io_schedule_timeout.balance_dirty_pages.balance_dirty_pages_ratelimited_flags
484.93 ± 15% -96.4% 17.50 ±223% perf-sched.wait_time.max.ms.schedule_timeout.kswapd_try_to_sleep.kswapd.kthread
20.82 ± 4% +1402.6% 312.82 ± 16% perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
2686 ± 28% +45.1% 3897 ± 11% perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
1.73 ± 35% -100.0% 0.00 perf-sched.wait_time.max.ms.xlog_wait_on_iclog.xfs_extent_busy_flush.xfs_alloc_ag_vextent_near.xfs_alloc_vextent_near_bno
3.34 ±142% +804.0% 30.21 ± 72% perf-sched.wait_time.max.ms.xlog_wait_on_iclog.xfs_file_fsync.__do_sys_msync.do_syscall_64
11.14 ± 5% -11.1 0.00 perf-profile.calltrace.cycles-pp.memcpy_from_iter_mc.copy_page_from_iter_atomic.generic_perform_write.shmem_file_write_iter.do_iter_readv_writev
11.12 ± 5% -11.1 0.00 perf-profile.calltrace.cycles-pp.memcpy_orig.memcpy_from_iter_mc.copy_page_from_iter_atomic.generic_perform_write.shmem_file_write_iter
27.39 ± 2% -5.6 21.82 ± 12% perf-profile.calltrace.cycles-pp.do_access
17.96 ± 2% -3.7 14.22 ± 12% perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
13.46 ± 3% -3.1 10.38 ± 10% perf-profile.calltrace.cycles-pp.do_rw_once
12.32 ± 2% -2.4 9.88 ± 11% perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
12.22 ± 2% -2.4 9.78 ± 11% perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
10.75 ± 2% -2.2 8.60 ± 12% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
9.73 ± 2% -2.0 7.78 ± 12% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
8.89 ± 2% -1.8 7.14 ± 12% perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
3.51 ± 2% -0.7 2.81 ± 12% perf-profile.calltrace.cycles-pp.do_page_mkwrite.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
3.45 ± 2% -0.7 2.78 ± 12% perf-profile.calltrace.cycles-pp.__xfs_filemap_fault.do_page_mkwrite.do_fault.__handle_mm_fault.handle_mm_fault
1.10 ± 25% -0.6 0.51 ± 72% perf-profile.calltrace.cycles-pp.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback
1.10 ± 25% -0.6 0.51 ± 72% perf-profile.calltrace.cycles-pp.xfs_vm_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb
1.10 ± 25% -0.6 0.51 ± 72% perf-profile.calltrace.cycles-pp.wb_workfn.process_one_work.worker_thread.kthread.ret_from_fork
1.10 ± 25% -0.6 0.51 ± 72% perf-profile.calltrace.cycles-pp.wb_do_writeback.wb_workfn.process_one_work.worker_thread.kthread
1.10 ± 25% -0.6 0.51 ± 72% perf-profile.calltrace.cycles-pp.wb_writeback.wb_do_writeback.wb_workfn.process_one_work.worker_thread
1.10 ± 25% -0.6 0.51 ± 72% perf-profile.calltrace.cycles-pp.__writeback_inodes_wb.wb_writeback.wb_do_writeback.wb_workfn.process_one_work
1.10 ± 25% -0.6 0.51 ± 72% perf-profile.calltrace.cycles-pp.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_do_writeback.wb_workfn
1.10 ± 25% -0.6 0.51 ± 72% perf-profile.calltrace.cycles-pp.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_do_writeback
1.10 ± 25% -0.6 0.51 ± 72% perf-profile.calltrace.cycles-pp.iomap_writepages.xfs_vm_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
1.10 ± 25% -0.6 0.51 ± 72% perf-profile.calltrace.cycles-pp.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages.__writeback_single_inode
0.86 ± 12% -0.5 0.32 ±100% perf-profile.calltrace.cycles-pp.io_schedule_timeout.balance_dirty_pages.balance_dirty_pages_ratelimited_flags.fault_dirty_shared_page.do_fault
0.86 ± 12% -0.5 0.32 ±100% perf-profile.calltrace.cycles-pp.schedule_timeout.io_schedule_timeout.balance_dirty_pages.balance_dirty_pages_ratelimited_flags.fault_dirty_shared_page
2.56 ± 2% -0.5 2.02 ± 11% perf-profile.calltrace.cycles-pp.iomap_page_mkwrite.__xfs_filemap_fault.do_page_mkwrite.do_fault.__handle_mm_fault
0.83 ± 13% -0.5 0.31 ±100% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.io_schedule_timeout.balance_dirty_pages.balance_dirty_pages_ratelimited_flags
0.83 ± 13% -0.5 0.31 ±100% perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.io_schedule_timeout.balance_dirty_pages
1.55 ± 7% -0.4 1.18 ± 15% perf-profile.calltrace.cycles-pp.balance_dirty_pages.balance_dirty_pages_ratelimited_flags.fault_dirty_shared_page.do_fault.__handle_mm_fault
0.62 ± 5% -0.3 0.28 ±100% perf-profile.calltrace.cycles-pp.filemap_get_entry.__filemap_get_folio.filemap_fault.__do_fault.do_fault
0.82 ± 5% -0.3 0.56 ± 46% perf-profile.calltrace.cycles-pp.sync_regs.asm_exc_page_fault.do_access
1.26 -0.3 1.01 ± 11% perf-profile.calltrace.cycles-pp.iomap_iter.iomap_page_mkwrite.__xfs_filemap_fault.do_page_mkwrite.do_fault
0.98 -0.2 0.79 ± 13% perf-profile.calltrace.cycles-pp.xfs_buffered_write_iomap_begin.iomap_iter.iomap_page_mkwrite.__xfs_filemap_fault.do_page_mkwrite
0.66 ± 5% -0.2 0.46 ± 45% perf-profile.calltrace.cycles-pp.__filemap_get_folio.filemap_fault.__do_fault.do_fault.__handle_mm_fault
0.66 ± 5% -0.2 0.47 ± 45% perf-profile.calltrace.cycles-pp.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
1.30 ± 9% +0.2 1.54 ± 10% perf-profile.calltrace.cycles-pp.rebalance_domains.__do_softirq.__irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.37 ± 70% +0.3 0.65 ± 4% perf-profile.calltrace.cycles-pp.__intel_pmu_enable_all.perf_adjust_freq_unthr_context.perf_event_task_tick.scheduler_tick.update_process_times
1.60 ± 4% +0.3 1.94 ± 4% perf-profile.calltrace.cycles-pp.perf_adjust_freq_unthr_context.perf_event_task_tick.scheduler_tick.update_process_times.tick_sched_handle
1.64 ± 4% +0.3 1.98 ± 4% perf-profile.calltrace.cycles-pp.perf_event_task_tick.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
3.00 ± 6% +0.5 3.48 ± 7% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.09 ±223% +0.5 0.58 ± 10% perf-profile.calltrace.cycles-pp.load_balance.rebalance_domains.__do_softirq.__irq_exit_rcu.sysvec_apic_timer_interrupt
0.46 ± 74% +0.5 1.00 ± 25% perf-profile.calltrace.cycles-pp.folio_mkclean.folio_clear_dirty_for_io.write_cache_pages.iomap_writepages.xfs_vm_writepages
0.44 ± 74% +0.5 0.97 ± 25% perf-profile.calltrace.cycles-pp.page_mkclean_one.rmap_walk_file.folio_mkclean.folio_clear_dirty_for_io.write_cache_pages
0.42 ± 74% +0.5 0.96 ± 25% perf-profile.calltrace.cycles-pp.page_vma_mkclean_one.page_mkclean_one.rmap_walk_file.folio_mkclean.folio_clear_dirty_for_io
0.44 ± 74% +0.5 0.98 ± 25% perf-profile.calltrace.cycles-pp.rmap_walk_file.folio_mkclean.folio_clear_dirty_for_io.write_cache_pages.iomap_writepages
0.59 ± 50% +0.6 1.17 ± 21% perf-profile.calltrace.cycles-pp.folio_clear_dirty_for_io.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages
4.49 ± 5% +0.7 5.15 ± 8% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
0.36 ± 70% +0.7 1.02 ± 25% perf-profile.calltrace.cycles-pp.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread.ret_from_fork
0.36 ± 70% +0.7 1.02 ± 25% perf-profile.calltrace.cycles-pp.drm_fbdev_generic_helper_fb_dirty.drm_fb_helper_damage_work.process_one_work.worker_thread.kthread
0.09 ±223% +0.7 0.80 ± 27% perf-profile.calltrace.cycles-pp.ptep_clear_flush.page_vma_mkclean_one.page_mkclean_one.rmap_walk_file.folio_mkclean
0.17 ±141% +0.7 0.90 ± 40% perf-profile.calltrace.cycles-pp.serial8250_console_write.console_flush_all.console_unlock.vprintk_emit.devkmsg_emit
0.18 ±141% +0.8 0.95 ± 39% perf-profile.calltrace.cycles-pp.console_flush_all.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write
0.18 ±141% +0.8 0.95 ± 39% perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.vfs_write
0.18 ±141% +0.8 0.96 ± 38% perf-profile.calltrace.cycles-pp.vprintk_emit.devkmsg_emit.devkmsg_write.vfs_write.ksys_write
0.18 ±141% +0.8 0.96 ± 38% perf-profile.calltrace.cycles-pp.devkmsg_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.18 ±141% +0.8 0.96 ± 38% perf-profile.calltrace.cycles-pp.devkmsg_emit.devkmsg_write.vfs_write.ksys_write.do_syscall_64
0.19 ±141% +0.8 0.97 ± 37% perf-profile.calltrace.cycles-pp.write
0.18 ±141% +0.8 0.97 ± 37% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.18 ±141% +0.8 0.97 ± 37% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.18 ±141% +0.8 0.97 ± 37% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
0.18 ±141% +0.8 0.97 ± 37% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.09 ±223% +0.8 0.93 ± 25% perf-profile.calltrace.cycles-pp.memcpy_toio.drm_fb_memcpy.ast_primary_plane_helper_atomic_update.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail_rpm
0.09 ±223% +0.9 0.94 ± 25% perf-profile.calltrace.cycles-pp.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail_rpm.ast_mode_config_helper_atomic_commit_tail.commit_tail.drm_atomic_helper_commit
0.09 ±223% +0.9 0.94 ± 25% perf-profile.calltrace.cycles-pp.ast_primary_plane_helper_atomic_update.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail_rpm.ast_mode_config_helper_atomic_commit_tail.commit_tail
0.09 ±223% +0.9 0.94 ± 25% perf-profile.calltrace.cycles-pp.drm_fb_memcpy.ast_primary_plane_helper_atomic_update.drm_atomic_helper_commit_planes.drm_atomic_helper_commit_tail_rpm.ast_mode_config_helper_atomic_commit_tail
0.09 ±223% +0.9 0.94 ± 25% perf-profile.calltrace.cycles-pp.drm_atomic_helper_commit_tail_rpm.ast_mode_config_helper_atomic_commit_tail.commit_tail.drm_atomic_helper_commit.drm_atomic_commit
0.09 ±223% +0.9 0.95 ± 25% perf-profile.calltrace.cycles-pp.commit_tail.drm_atomic_helper_commit.drm_atomic_commit.drm_atomic_helper_dirtyfb.drm_fbdev_generic_helper_fb_dirty
0.09 ±223% +0.9 0.95 ± 25% perf-profile.calltrace.cycles-pp.ast_mode_config_helper_atomic_commit_tail.commit_tail.drm_atomic_helper_commit.drm_atomic_commit.drm_atomic_helper_dirtyfb
0.09 ±223% +0.9 0.96 ± 24% perf-profile.calltrace.cycles-pp.drm_atomic_helper_commit.drm_atomic_commit.drm_atomic_helper_dirtyfb.drm_fbdev_generic_helper_fb_dirty.drm_fb_helper_damage_work
0.09 ±223% +0.9 0.96 ± 24% perf-profile.calltrace.cycles-pp.drm_atomic_helper_dirtyfb.drm_fbdev_generic_helper_fb_dirty.drm_fb_helper_damage_work.process_one_work.worker_thread
0.09 ±223% +0.9 0.96 ± 24% perf-profile.calltrace.cycles-pp.drm_atomic_commit.drm_atomic_helper_dirtyfb.drm_fbdev_generic_helper_fb_dirty.drm_fb_helper_damage_work.process_one_work
0.18 ±141% +1.1 1.26 ± 36% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
0.18 ±141% +1.1 1.26 ± 36% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.1 1.12 ± 38% perf-profile.calltrace.cycles-pp.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas.exit_mmap
0.00 +1.1 1.15 ± 38% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.__mmput.exit_mm.do_exit
0.00 +1.1 1.15 ± 38% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.__mmput.exit_mm
0.00 +1.1 1.15 ± 38% perf-profile.calltrace.cycles-pp.zap_pmd_range.unmap_page_range.unmap_vmas.exit_mmap.__mmput
0.00 +1.2 1.18 ± 37% perf-profile.calltrace.cycles-pp.__mmput.exit_mm.do_exit.do_group_exit.__x64_sys_exit_group
0.00 +1.2 1.18 ± 37% perf-profile.calltrace.cycles-pp.exit_mmap.__mmput.exit_mm.do_exit.do_group_exit
0.00 +1.2 1.18 ± 37% perf-profile.calltrace.cycles-pp.exit_mm.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.00 +1.2 1.21 ± 37% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.2 1.21 ± 37% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.2 1.21 ± 37% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.62 ± 7% +1.2 7.85 ± 8% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
9.43 ± 6% +1.7 11.11 ± 6% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.acpi_safe_halt
9.45 ± 6% +1.7 11.13 ± 6% perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.acpi_safe_halt.acpi_idle_enter
0.00 +1.9 1.95 ± 28% perf-profile.calltrace.cycles-pp.iomap_writepages.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range
0.00 +1.9 1.95 ± 28% perf-profile.calltrace.cycles-pp.write_cache_pages.iomap_writepages.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc
0.00 +2.0 1.98 ± 30% perf-profile.calltrace.cycles-pp.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync
0.00 +2.0 1.98 ± 30% perf-profile.calltrace.cycles-pp.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range
0.00 +2.0 1.98 ± 30% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.__do_sys_msync.do_syscall_64
0.00 +2.0 1.98 ± 30% perf-profile.calltrace.cycles-pp.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.__do_sys_msync
0.00 +2.0 2.01 ± 30% perf-profile.calltrace.cycles-pp.file_write_and_wait_range.xfs_file_fsync.__do_sys_msync.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +2.0 2.02 ± 30% perf-profile.calltrace.cycles-pp.__do_sys_msync.do_syscall_64.entry_SYSCALL_64_after_hwframe.msync
0.00 +2.0 2.02 ± 30% perf-profile.calltrace.cycles-pp.xfs_file_fsync.__do_sys_msync.do_syscall_64.entry_SYSCALL_64_after_hwframe.msync
0.00 +2.0 2.02 ± 30% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.msync
0.00 +2.0 2.02 ± 30% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.msync
0.00 +2.0 2.02 ± 30% perf-profile.calltrace.cycles-pp.msync
13.56 ± 4% +2.1 15.67 ± 6% perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.acpi_safe_halt.acpi_idle_enter.cpuidle_enter_state
15.08 ± 4% +2.5 17.59 ± 7% perf-profile.calltrace.cycles-pp.acpi_safe_halt.acpi_idle_enter.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
16.61 ± 3% +2.7 19.34 ± 5% perf-profile.calltrace.cycles-pp.ret_from_fork_asm
16.61 ± 3% +2.7 19.34 ± 5% perf-profile.calltrace.cycles-pp.ret_from_fork.ret_from_fork_asm
16.61 ± 3% +2.7 19.34 ± 5% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork.ret_from_fork_asm
14.27 ± 5% +2.8 17.05 ± 5% perf-profile.calltrace.cycles-pp.loop_process_work.process_one_work.worker_thread.kthread.ret_from_fork
14.06 ± 5% +2.8 16.85 ± 5% perf-profile.calltrace.cycles-pp.do_iter_write.lo_write_simple.loop_process_work.process_one_work.worker_thread
14.20 ± 5% +2.8 16.99 ± 5% perf-profile.calltrace.cycles-pp.lo_write_simple.loop_process_work.process_one_work.worker_thread.kthread
16.33 ± 3% +2.8 19.12 ± 5% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
13.91 ± 5% +2.8 16.71 ± 5% perf-profile.calltrace.cycles-pp.do_iter_readv_writev.do_iter_write.lo_write_simple.loop_process_work.process_one_work
16.19 ± 3% +2.8 19.00 ± 4% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
13.30 ± 5% +2.8 16.12 ± 5% perf-profile.calltrace.cycles-pp.generic_perform_write.shmem_file_write_iter.do_iter_readv_writev.do_iter_write.lo_write_simple
13.74 ± 4% +2.8 16.56 ± 5% perf-profile.calltrace.cycles-pp.shmem_file_write_iter.do_iter_readv_writev.do_iter_write.lo_write_simple.loop_process_work
11.27 ± 5% +3.7 14.93 ± 5% perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.generic_perform_write.shmem_file_write_iter.do_iter_readv_writev.do_iter_write
11.14 ± 5% -11.1 0.00 perf-profile.children.cycles-pp.memcpy_from_iter_mc
11.20 ± 5% -11.1 0.10 ± 16% perf-profile.children.cycles-pp.memcpy_orig
26.79 ± 2% -5.6 21.19 ± 12% perf-profile.children.cycles-pp.do_access
14.94 -3.4 11.59 ± 11% perf-profile.children.cycles-pp.do_rw_once
15.77 ± 2% -3.2 12.54 ± 11% perf-profile.children.cycles-pp.asm_exc_page_fault
12.39 ± 2% -2.4 9.94 ± 11% perf-profile.children.cycles-pp.exc_page_fault
12.30 ± 2% -2.4 9.87 ± 11% perf-profile.children.cycles-pp.do_user_addr_fault
10.82 ± 2% -2.1 8.67 ± 12% perf-profile.children.cycles-pp.handle_mm_fault
9.79 ± 2% -1.9 7.84 ± 12% perf-profile.children.cycles-pp.__handle_mm_fault
8.93 ± 2% -1.8 7.17 ± 12% perf-profile.children.cycles-pp.do_fault
1.06 ± 10% -0.9 0.20 ± 12% perf-profile.children.cycles-pp.shmem_write_end
1.06 ± 10% -0.8 0.23 ± 17% perf-profile.children.cycles-pp.folio_unlock
3.52 ± 2% -0.7 2.82 ± 12% perf-profile.children.cycles-pp.do_page_mkwrite
3.51 -0.7 2.82 ± 12% perf-profile.children.cycles-pp.__xfs_filemap_fault
2.58 ± 2% -0.5 2.03 ± 11% perf-profile.children.cycles-pp.iomap_page_mkwrite
1.10 ± 25% -0.4 0.66 ± 26% perf-profile.children.cycles-pp.wb_workfn
1.10 ± 25% -0.4 0.66 ± 26% perf-profile.children.cycles-pp.wb_do_writeback
1.10 ± 25% -0.4 0.66 ± 26% perf-profile.children.cycles-pp.wb_writeback
1.10 ± 25% -0.4 0.66 ± 26% perf-profile.children.cycles-pp.__writeback_inodes_wb
1.10 ± 25% -0.4 0.66 ± 26% perf-profile.children.cycles-pp.writeback_sb_inodes
1.10 ± 25% -0.4 0.66 ± 26% perf-profile.children.cycles-pp.__writeback_single_inode
1.55 ± 7% -0.4 1.19 ± 15% perf-profile.children.cycles-pp.balance_dirty_pages
0.89 ± 13% -0.3 0.55 ± 24% perf-profile.children.cycles-pp.schedule_timeout
0.86 ± 12% -0.3 0.52 ± 24% perf-profile.children.cycles-pp.io_schedule_timeout
1.27 -0.3 1.02 ± 11% perf-profile.children.cycles-pp.iomap_iter
0.86 ± 5% -0.2 0.65 ± 16% perf-profile.children.cycles-pp.sync_regs
0.99 -0.2 0.80 ± 13% perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
0.69 ± 8% -0.1 0.55 ± 7% perf-profile.children.cycles-pp.iomap_dirty_folio
0.64 ± 6% -0.1 0.51 ± 8% perf-profile.children.cycles-pp.__perf_sw_event
0.56 ± 4% -0.1 0.43 ± 12% perf-profile.children.cycles-pp.filemap_dirty_folio
0.66 ± 5% -0.1 0.54 ± 13% perf-profile.children.cycles-pp.__filemap_get_folio
0.49 ± 8% -0.1 0.37 ± 6% perf-profile.children.cycles-pp.ifs_set_range_dirty
0.68 ± 4% -0.1 0.56 ± 9% perf-profile.children.cycles-pp.finish_fault
0.42 ± 6% -0.1 0.33 ± 13% perf-profile.children.cycles-pp.lock_mm_and_find_vma
0.52 ± 7% -0.1 0.43 ± 6% perf-profile.children.cycles-pp.___perf_sw_event
0.48 ± 6% -0.1 0.40 ± 9% perf-profile.children.cycles-pp.lock_vma_under_rcu
0.34 ± 7% -0.1 0.26 ± 15% perf-profile.children.cycles-pp.xfs_ilock
0.27 ± 6% -0.1 0.21 ± 15% perf-profile.children.cycles-pp.handle_pte_fault
0.19 ± 4% -0.0 0.14 ± 17% perf-profile.children.cycles-pp.down_read_trylock
0.22 ± 4% -0.0 0.17 ± 12% perf-profile.children.cycles-pp.iomap_iter_advance
0.17 ± 7% -0.0 0.13 ± 17% perf-profile.children.cycles-pp.pte_offset_map_nolock
0.17 ± 13% -0.0 0.13 ± 16% perf-profile.children.cycles-pp.__folio_end_writeback
0.15 ± 10% -0.0 0.11 ± 13% perf-profile.children.cycles-pp.run_timer_softirq
0.19 ± 7% -0.0 0.16 ± 11% perf-profile.children.cycles-pp.__pte_offset_map_lock
0.14 ± 11% -0.0 0.10 ± 13% perf-profile.children.cycles-pp.call_timer_fn
0.14 ± 12% -0.0 0.10 ± 10% perf-profile.children.cycles-pp.down_read
0.09 ± 10% -0.0 0.07 ± 16% perf-profile.children.cycles-pp.dequeue_task_fair
0.06 ± 11% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.update_rq_clock
0.06 ± 17% +0.0 0.09 ± 10% perf-profile.children.cycles-pp.x86_pmu_disable
0.05 ± 46% +0.0 0.08 ± 7% perf-profile.children.cycles-pp.timerqueue_add
0.07 ± 16% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.enqueue_hrtimer
0.18 ± 6% +0.0 0.22 ± 7% perf-profile.children.cycles-pp.read_tsc
0.29 ± 7% +0.0 0.33 ± 8% perf-profile.children.cycles-pp.lapic_next_deadline
0.04 ± 75% +0.1 0.10 ± 18% perf-profile.children.cycles-pp.__folio_start_writeback
0.18 ± 10% +0.1 0.23 ± 20% perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.10 ± 14% +0.1 0.16 ± 28% perf-profile.children.cycles-pp._find_next_bit
0.10 ± 16% +0.1 0.19 ± 14% perf-profile.children.cycles-pp.folio_mark_accessed
0.00 +0.1 0.09 ± 41% perf-profile.children.cycles-pp.free_swap_cache
0.07 ± 12% +0.1 0.17 ± 37% perf-profile.children.cycles-pp._compound_head
0.00 +0.1 0.10 ± 46% perf-profile.children.cycles-pp.free_pages_and_swap_cache
0.03 ± 70% +0.1 0.14 ± 31% perf-profile.children.cycles-pp.release_pages
0.08 ± 16% +0.1 0.19 ± 33% perf-profile.children.cycles-pp.wait_for_xmitr
0.00 +0.1 0.13 ± 43% perf-profile.children.cycles-pp.io_schedule
0.06 ± 50% +0.2 0.22 ± 39% perf-profile.children.cycles-pp.tlb_batch_pages_flush
0.10 ± 9% +0.2 0.26 ± 29% perf-profile.children.cycles-pp.asm_sysvec_call_function_single
1.31 ± 9% +0.2 1.55 ± 10% perf-profile.children.cycles-pp.rebalance_domains
0.15 ± 22% +0.3 0.48 ± 39% perf-profile.children.cycles-pp.page_remove_rmap
1.65 ± 4% +0.3 2.00 ± 3% perf-profile.children.cycles-pp.perf_adjust_freq_unthr_context
1.67 ± 3% +0.3 2.02 ± 4% perf-profile.children.cycles-pp.perf_event_task_tick
0.16 ± 33% +0.4 0.52 ± 35% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.08 ± 19% +0.4 0.46 ± 26% perf-profile.children.cycles-pp.native_flush_tlb_one_user
0.00 +0.4 0.39 ± 49% perf-profile.children.cycles-pp.prepare_to_wait_exclusive
0.10 ± 16% +0.4 0.52 ± 27% perf-profile.children.cycles-pp.flush_tlb_func
0.54 ± 13% +0.4 0.98 ± 36% perf-profile.children.cycles-pp.wait_for_lsr
0.43 ± 22% +0.5 0.88 ± 19% perf-profile.children.cycles-pp.flush_tlb_mm_range
0.56 ± 14% +0.5 1.03 ± 37% perf-profile.children.cycles-pp.serial8250_console_write
0.47 ± 9% +0.5 0.94 ± 25% perf-profile.children.cycles-pp.drm_atomic_helper_commit_planes
0.47 ± 9% +0.5 0.94 ± 25% perf-profile.children.cycles-pp.ast_primary_plane_helper_atomic_update
0.47 ± 9% +0.5 0.94 ± 25% perf-profile.children.cycles-pp.drm_fb_memcpy
0.47 ± 9% +0.5 0.94 ± 25% perf-profile.children.cycles-pp.memcpy_toio
0.47 ± 9% +0.5 0.94 ± 25% perf-profile.children.cycles-pp.drm_atomic_helper_commit_tail_rpm
0.48 ± 8% +0.5 0.95 ± 25% perf-profile.children.cycles-pp.commit_tail
0.48 ± 8% +0.5 0.95 ± 25% perf-profile.children.cycles-pp.ast_mode_config_helper_atomic_commit_tail
0.48 ± 8% +0.5 0.96 ± 24% perf-profile.children.cycles-pp.drm_atomic_helper_commit
0.48 ± 8% +0.5 0.96 ± 24% perf-profile.children.cycles-pp.drm_atomic_helper_dirtyfb
0.48 ± 8% +0.5 0.96 ± 24% perf-profile.children.cycles-pp.drm_atomic_commit
0.49 ± 12% +0.5 0.97 ± 37% perf-profile.children.cycles-pp.write
0.48 ± 11% +0.5 0.97 ± 37% perf-profile.children.cycles-pp.vfs_write
0.48 ± 11% +0.5 0.98 ± 37% perf-profile.children.cycles-pp.ksys_write
0.47 ± 11% +0.5 0.96 ± 38% perf-profile.children.cycles-pp.devkmsg_write
0.47 ± 11% +0.5 0.96 ± 38% perf-profile.children.cycles-pp.devkmsg_emit
3.06 ± 6% +0.5 3.56 ± 7% perf-profile.children.cycles-pp.scheduler_tick
0.59 ± 14% +0.5 1.09 ± 36% perf-profile.children.cycles-pp.console_unlock
0.59 ± 14% +0.5 1.09 ± 36% perf-profile.children.cycles-pp.console_flush_all
0.52 ± 22% +0.5 1.02 ± 19% perf-profile.children.cycles-pp.ptep_clear_flush
0.59 ± 14% +0.5 1.10 ± 35% perf-profile.children.cycles-pp.vprintk_emit
0.00 +0.5 0.51 ± 47% perf-profile.children.cycles-pp.__rq_qos_throttle
0.00 +0.5 0.51 ± 47% perf-profile.children.cycles-pp.wbt_wait
0.00 +0.5 0.51 ± 47% perf-profile.children.cycles-pp.rq_qos_wait
0.51 ± 8% +0.5 1.02 ± 25% perf-profile.children.cycles-pp.drm_fb_helper_damage_work
0.51 ± 8% +0.5 1.02 ± 25% perf-profile.children.cycles-pp.drm_fbdev_generic_helper_fb_dirty
0.21 ± 24% +0.5 0.73 ± 31% perf-profile.children.cycles-pp.iomap_add_to_ioend
0.00 +0.5 0.52 ± 47% perf-profile.children.cycles-pp.blk_mq_get_new_requests
0.73 ± 23% +0.6 1.28 ± 16% perf-profile.children.cycles-pp.page_vma_mkclean_one
0.74 ± 23% +0.6 1.29 ± 17% perf-profile.children.cycles-pp.page_mkclean_one
0.00 +0.6 0.56 ± 46% perf-profile.children.cycles-pp.submit_bio_noacct_nocheck
0.00 +0.6 0.56 ± 46% perf-profile.children.cycles-pp.blk_mq_submit_bio
0.78 ± 24% +0.6 1.34 ± 17% perf-profile.children.cycles-pp.folio_mkclean
0.76 ± 23% +0.6 1.33 ± 17% perf-profile.children.cycles-pp.rmap_walk_file
0.41 ± 23% +0.6 1.02 ± 24% perf-profile.children.cycles-pp.iomap_writepage_map
0.85 ± 23% +0.6 1.46 ± 17% perf-profile.children.cycles-pp.folio_clear_dirty_for_io
4.57 ± 5% +0.7 5.25 ± 8% perf-profile.children.cycles-pp.tick_sched_timer
0.35 ± 20% +0.8 1.15 ± 38% perf-profile.children.cycles-pp.unmap_vmas
0.35 ± 20% +0.8 1.15 ± 38% perf-profile.children.cycles-pp.unmap_page_range
0.34 ± 20% +0.8 1.15 ± 38% perf-profile.children.cycles-pp.zap_pmd_range
0.34 ± 20% +0.8 1.15 ± 38% perf-profile.children.cycles-pp.zap_pte_range
0.38 ± 19% +0.8 1.19 ± 37% perf-profile.children.cycles-pp.__mmput
0.38 ± 19% +0.8 1.19 ± 37% perf-profile.children.cycles-pp.exit_mmap
0.37 ± 20% +0.8 1.18 ± 37% perf-profile.children.cycles-pp.exit_mm
0.39 ± 19% +0.8 1.21 ± 37% perf-profile.children.cycles-pp.__x64_sys_exit_group
0.39 ± 19% +0.8 1.21 ± 37% perf-profile.children.cycles-pp.do_group_exit
0.39 ± 19% +0.8 1.21 ± 37% perf-profile.children.cycles-pp.do_exit
6.74 ± 6% +1.3 8.00 ± 8% perf-profile.children.cycles-pp.__hrtimer_run_queues
1.35 ± 23% +1.3 2.61 ± 19% perf-profile.children.cycles-pp.iomap_writepages
1.35 ± 23% +1.3 2.61 ± 19% perf-profile.children.cycles-pp.write_cache_pages
1.35 ± 23% +1.3 2.64 ± 20% perf-profile.children.cycles-pp.do_writepages
1.35 ± 23% +1.3 2.64 ± 20% perf-profile.children.cycles-pp.xfs_vm_writepages
9.59 ± 6% +1.7 11.30 ± 6% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
9.57 ± 6% +1.7 11.28 ± 6% perf-profile.children.cycles-pp.hrtimer_interrupt
0.25 ± 18% +1.7 1.98 ± 30% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
0.25 ± 18% +1.7 1.98 ± 30% perf-profile.children.cycles-pp.filemap_fdatawrite_wbc
0.27 ± 17% +1.7 2.02 ± 30% perf-profile.children.cycles-pp.__do_sys_msync
0.27 ± 17% +1.7 2.02 ± 30% perf-profile.children.cycles-pp.xfs_file_fsync
0.26 ± 16% +1.8 2.01 ± 30% perf-profile.children.cycles-pp.file_write_and_wait_range
0.27 ± 17% +1.8 2.02 ± 30% perf-profile.children.cycles-pp.msync
13.72 ± 4% +2.1 15.87 ± 6% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
16.61 ± 3% +2.7 19.34 ± 5% perf-profile.children.cycles-pp.kthread
16.61 ± 3% +2.7 19.34 ± 5% perf-profile.children.cycles-pp.ret_from_fork_asm
16.61 ± 3% +2.7 19.34 ± 5% perf-profile.children.cycles-pp.ret_from_fork
14.27 ± 5% +2.8 17.05 ± 5% perf-profile.children.cycles-pp.loop_process_work
14.06 ± 5% +2.8 16.85 ± 5% perf-profile.children.cycles-pp.do_iter_write
14.20 ± 5% +2.8 16.99 ± 5% perf-profile.children.cycles-pp.lo_write_simple
16.33 ± 3% +2.8 19.12 ± 5% perf-profile.children.cycles-pp.worker_thread
13.92 ± 5% +2.8 16.72 ± 5% perf-profile.children.cycles-pp.do_iter_readv_writev
16.19 ± 3% +2.8 19.00 ± 4% perf-profile.children.cycles-pp.process_one_work
13.32 ± 5% +2.8 16.13 ± 5% perf-profile.children.cycles-pp.generic_perform_write
13.76 ± 4% +2.8 16.57 ± 5% perf-profile.children.cycles-pp.shmem_file_write_iter
1.50 ± 9% +3.1 4.60 ± 29% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
1.50 ± 9% +3.1 4.60 ± 29% perf-profile.children.cycles-pp.do_syscall_64
11.28 ± 5% +3.7 14.94 ± 5% perf-profile.children.cycles-pp.copy_page_from_iter_atomic
11.10 ± 5% -11.0 0.10 ± 16% perf-profile.self.cycles-pp.memcpy_orig
12.38 -2.8 9.61 ± 11% perf-profile.self.cycles-pp.do_rw_once
7.73 ± 4% -1.8 5.95 ± 11% perf-profile.self.cycles-pp.do_access
1.05 ± 10% -0.8 0.21 ± 11% perf-profile.self.cycles-pp.folio_unlock
0.85 ± 5% -0.2 0.65 ± 15% perf-profile.self.cycles-pp.sync_regs
0.56 ± 5% -0.1 0.42 ± 10% perf-profile.self.cycles-pp.__handle_mm_fault
0.38 ± 10% -0.1 0.30 ± 11% perf-profile.self.cycles-pp.handle_mm_fault
0.36 ± 3% -0.1 0.28 ± 17% perf-profile.self.cycles-pp.iomap_page_mkwrite
0.46 ± 7% -0.1 0.38 ± 6% perf-profile.self.cycles-pp.___perf_sw_event
0.26 ± 3% -0.1 0.20 ± 15% perf-profile.self.cycles-pp.filemap_fault
0.14 ± 9% -0.1 0.09 ± 21% perf-profile.self.cycles-pp.ifs_set_range_dirty
0.18 ± 8% -0.0 0.14 ± 12% perf-profile.self.cycles-pp.asm_exc_page_fault
0.11 ± 4% -0.0 0.07 ± 20% perf-profile.self.cycles-pp.do_fault
0.23 ± 7% -0.0 0.18 ± 11% perf-profile.self.cycles-pp.xfs_buffered_write_iomap_begin
0.21 ± 4% -0.0 0.17 ± 14% perf-profile.self.cycles-pp.iomap_iter_advance
0.19 ± 5% -0.0 0.14 ± 17% perf-profile.self.cycles-pp.down_read_trylock
0.17 ± 8% -0.0 0.13 ± 7% perf-profile.self.cycles-pp.filemap_dirty_folio
0.16 ± 12% -0.0 0.12 ± 11% perf-profile.self.cycles-pp.error_entry
0.10 ± 15% -0.0 0.07 ± 11% perf-profile.self.cycles-pp.down_read
0.11 ± 10% -0.0 0.09 ± 15% perf-profile.self.cycles-pp.generic_perform_write
0.06 ± 47% +0.0 0.08 ± 8% perf-profile.self.cycles-pp.x86_pmu_disable
0.08 ± 19% +0.0 0.11 ± 11% perf-profile.self.cycles-pp.folio_mark_accessed
0.18 ± 7% +0.0 0.21 ± 8% perf-profile.self.cycles-pp.read_tsc
0.08 ± 47% +0.1 0.14 ± 18% perf-profile.self.cycles-pp.ptep_clear_flush
0.08 ± 12% +0.1 0.14 ± 25% perf-profile.self.cycles-pp._find_next_bit
0.18 ± 7% +0.1 0.26 ± 17% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.00 +0.1 0.09 ± 40% perf-profile.self.cycles-pp.free_swap_cache
0.06 ± 11% +0.1 0.16 ± 38% perf-profile.self.cycles-pp._compound_head
0.02 ± 99% +0.1 0.13 ± 32% perf-profile.self.cycles-pp.release_pages
0.06 ± 46% +0.1 0.17 ± 46% perf-profile.self.cycles-pp.zap_pte_range
0.91 ± 5% +0.2 1.09 ± 4% perf-profile.self.cycles-pp.perf_adjust_freq_unthr_context
0.12 ± 21% +0.3 0.39 ± 42% perf-profile.self.cycles-pp.page_remove_rmap
0.16 ± 33% +0.4 0.52 ± 35% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.08 ± 19% +0.4 0.46 ± 26% perf-profile.self.cycles-pp.native_flush_tlb_one_user
0.47 ± 8% +0.4 0.90 ± 28% perf-profile.self.cycles-pp.memcpy_toio
0.14 ± 10% +14.7 14.83 ± 5% perf-profile.self.cycles-pp.copy_page_from_iter_atomic
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki