Message-ID: <20171225024608.GC31543@yexl-desktop>
Date: Mon, 25 Dec 2017 10:46:08 +0800
From: kernel test robot <xiaolong.ye@...el.com>
To: Jeff Layton <jlayton@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Jeff Layton <jlayton@...hat.com>, lkp@...org
Subject: [lkp-robot] [fs] 62231fd3ed: aim7.jobs-per-min 229.5% improvement
Greetings,
FYI, we noticed a 229.5% improvement of aim7.jobs-per-min due to commit:
commit: 62231fd3ed94c9d40b4e58ab14fe173f09e252da ("fs: handle inode->i_version more efficiently")
https://git.kernel.org/cgit/linux/kernel/git/jlayton/linux.git iversion-next
in testcase: aim7
on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
with following parameters:
disk: 4BRD_12G
md: RAID0
fs: xfs
test: disk_cp
load: 3000
cpufreq_governor: performance
test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
In addition, the commit also has a significant impact on the following test:
+------------------+-----------------------------------------------------------------------+
| testcase: change | aim7: aim7.jobs-per-min 202.6% improvement |
| test machine | 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory |
| test parameters | cpufreq_governor=performance |
| | disk=4BRD_12G |
| | fs=xfs |
| | load=3000 |
| | md=RAID1 |
| | test=disk_rr |
+------------------+-----------------------------------------------------------------------+
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase:
gcc-7/performance/4BRD_12G/xfs/x86_64-rhel-7.2/3000/RAID0/debian-x86_64-2016-08-31.cgz/lkp-ivb-ep01/disk_cp/aim7
commit:
269503cada ("btrfs: only dirty the inode in btrfs_update_time if something was changed")
62231fd3ed ("fs: handle inode->i_version more efficiently")
269503cada9f3e17 62231fd3ed94c9d40b4e58ab14
---------------- --------------------------
%stddev %change %stddev
\ | \
86992 +229.5% 286610 ± 4% aim7.jobs-per-min
207.43 -69.5% 63.28 ± 3% aim7.time.elapsed_time
207.43 -69.5% 63.28 ± 3% aim7.time.elapsed_time.max
449153 ± 7% -84.7% 68560 ± 6% aim7.time.involuntary_context_switches
456322 ± 8% -67.0% 150364 aim7.time.minor_page_faults
7526 -74.9% 1890 ± 5% aim7.time.system_time
28.02 ± 8% -27.2% 20.39 aim7.time.user_time
981140 ± 5% -39.6% 592973 ± 2% aim7.time.voluntary_context_switches
565790 ± 7% -93.7% 35389 ± 3% interrupts.CAL:Function_call_interrupts
6592 ± 19% -69.9% 1984 ±168% numa-numastat.node0.other_node
12.66 ± 2% +3.6% 13.12 boot-time.dhcp
12.85 ± 2% +3.5% 13.30 boot-time.kernel_boot
552697 ± 5% -47.3% 291516 softirqs.RCU
288353 ± 3% -27.6% 208732 ± 2% softirqs.SCHED
3045613 -70.8% 888561 ± 4% softirqs.TIMER
4.50 ± 11% -100.0% 0.00 vmstat.procs.b
61.25 ± 9% -54.3% 28.00 ± 9% vmstat.procs.r
10861 ± 5% +70.0% 18460 ± 3% vmstat.system.cs
7.93 ± 2% +22.4 30.34 ± 18% mpstat.cpu.idle%
1.99 ± 15% -2.0 0.01 ± 36% mpstat.cpu.iowait%
89.72 -20.9 68.86 ± 7% mpstat.cpu.sys%
0.37 ± 7% +0.4 0.79 ± 4% mpstat.cpu.usr%
296776 ± 4% -10.4% 265931 ± 2% meminfo.Active
240797 ± 5% -12.6% 210345 ± 3% meminfo.Active(anon)
5509632 ± 9% -23.4% 4218880 ± 10% meminfo.DirectMap2M
31481 ± 8% -52.8% 14850 ± 11% meminfo.Dirty
51320 ± 20% -35.0% 33366 ± 13% meminfo.Shmem
58841383 ± 10% -14.6% 50257608 ± 5% cpuidle.C1.time
1.192e+08 ± 14% -94.2% 6965057 ± 2% cpuidle.C1E.time
300054 ± 12% -77.9% 66189 ± 3% cpuidle.C1E.usage
23556029 ± 4% -15.3% 19957273 ± 2% cpuidle.C3.time
82726 +34.9% 111602 ± 3% cpuidle.C3.usage
6400 ± 12% -89.9% 644.00 ± 6% cpuidle.POLL.usage
15407 ± 6% -49.4% 7789 ± 11% numa-meminfo.node0.Dirty
3688 ± 98% +417.0% 19072 ± 33% numa-meminfo.node0.Shmem
155500 ± 4% -22.5% 120500 ± 12% numa-meminfo.node1.Active
127527 ± 4% -27.9% 91903 ± 17% numa-meminfo.node1.Active(anon)
15790 ± 8% -52.5% 7508 ± 10% numa-meminfo.node1.Dirty
41433 ± 12% -27.0% 30258 ± 11% numa-meminfo.node1.SReclaimable
47611 ± 26% -69.9% 14313 ± 65% numa-meminfo.node1.Shmem
18119808 -9.7% 16362111 ± 6% numa-vmstat.node0.nr_dirtied
3800 ± 5% -48.2% 1967 ± 8% numa-vmstat.node0.nr_dirty
922.00 ± 98% +415.2% 4750 ± 33% numa-vmstat.node0.nr_shmem
3605 ± 4% -53.3% 1683 ± 12% numa-vmstat.node0.nr_zone_write_pending
18679990 -10.0% 16818181 ± 6% numa-vmstat.node0.numa_hit
18673388 -9.9% 16816061 ± 6% numa-vmstat.node0.numa_local
31851 ± 4% -27.6% 23059 ± 16% numa-vmstat.node1.nr_active_anon
3950 ± 7% -51.1% 1930 ± 8% numa-vmstat.node1.nr_dirty
11897 ± 26% -69.9% 3580 ± 65% numa-vmstat.node1.nr_shmem
10363 ± 12% -27.0% 7561 ± 11% numa-vmstat.node1.nr_slab_reclaimable
31851 ± 4% -27.6% 23059 ± 16% numa-vmstat.node1.nr_zone_active_anon
3740 ± 8% -54.4% 1705 ± 9% numa-vmstat.node1.nr_zone_write_pending
60214 ± 5% -12.6% 52654 ± 3% proc-vmstat.nr_active_anon
8014 ± 7% -51.9% 3851 ± 9% proc-vmstat.nr_dirty
12824 ± 20% -35.0% 8336 ± 13% proc-vmstat.nr_shmem
60214 ± 5% -12.6% 52654 ± 3% proc-vmstat.nr_zone_active_anon
7635 ± 7% -56.5% 3322 ± 8% proc-vmstat.nr_zone_write_pending
19816 ± 5% -99.9% 14.00 ±101% proc-vmstat.numa_hint_faults
2832 ± 12% -99.8% 6.25 ±138% proc-vmstat.numa_hint_faults_local
8891 -11.9% 7836 proc-vmstat.numa_other
13349 ± 5% -99.9% 7.25 ±150% proc-vmstat.numa_pages_migrated
115643 ± 4% -100.0% 35.50 ±100% proc-vmstat.numa_pte_updates
932798 ± 4% -65.0% 326824 proc-vmstat.pgfault
13349 ± 5% -99.9% 7.25 ±150% proc-vmstat.pgmigrate_success
2708 -24.1% 2055 ± 7% turbostat.Avg_MHz
90.76 -19.7 71.11 ± 7% turbostat.Busy%
0.70 ± 10% +1.1 1.82 ± 5% turbostat.C1%
299987 ± 12% -78.0% 66069 ± 3% turbostat.C1E
1.43 ± 13% -1.2 0.25 ± 3% turbostat.C1E%
82600 +35.0% 111470 ± 3% turbostat.C3
0.28 ± 4% +0.4 0.73 ± 4% turbostat.C3%
6.88 ± 3% +19.4 26.25 ± 20% turbostat.C6%
4.65 ± 7% +134.7% 10.90 ± 11% turbostat.CPU%c1
0.01 ± 34% +580.0% 0.09 ± 5% turbostat.CPU%c3
4.58 ± 10% +291.0% 17.91 ± 29% turbostat.CPU%c6
127.25 -20.8% 100.72 ± 6% turbostat.CorWatt
10195212 ± 2% -67.6% 3306880 ± 3% turbostat.IRQ
2.61 ± 41% +296.2% 10.35 ± 27% turbostat.Pkg%pc2
154.68 -17.4% 127.72 ± 5% turbostat.PkgWatt
39.62 -5.9% 37.29 ± 3% turbostat.RAMWatt
17030 -67.7% 5500 ± 3% turbostat.SMI
1.327e+12 -75.3% 3.276e+11 perf-stat.branch-instructions
0.26 ± 3% +0.1 0.40 ± 2% perf-stat.branch-miss-rate%
3.514e+09 ± 3% -62.4% 1.32e+09 ± 2% perf-stat.branch-misses
25.04 -13.0 12.08 ± 4% perf-stat.cache-miss-rate%
9.306e+09 -75.3% 2.298e+09 ± 7% perf-stat.cache-misses
3.716e+10 -48.9% 1.899e+10 ± 3% perf-stat.cache-references
2280553 ± 5% -43.7% 1284205 ± 2% perf-stat.context-switches
3.79 -13.1% 3.29 ± 5% perf-stat.cpi
2.254e+13 -75.2% 5.594e+12 ± 5% perf-stat.cpu-cycles
444640 ± 5% -65.7% 152686 perf-stat.cpu-migrations
1.357e+10 ± 8% -47.0% 7.193e+09 ± 28% perf-stat.dTLB-load-misses
1.642e+12 -67.6% 5.323e+11 perf-stat.dTLB-loads
0.27 ± 17% -0.2 0.07 ± 26% perf-stat.dTLB-store-miss-rate%
1.452e+09 ± 17% -82.7% 2.515e+08 ± 26% perf-stat.dTLB-store-misses
5.373e+11 -33.7% 3.564e+11 perf-stat.dTLB-stores
1.492e+08 ± 69% -54.6% 67749320 ± 19% perf-stat.iTLB-load-misses
8.861e+08 ± 57% -85.4% 1.29e+08 ± 5% perf-stat.iTLB-loads
5.949e+12 -71.4% 1.699e+12 perf-stat.instructions
0.26 +15.4% 0.30 ± 5% perf-stat.ipc
916839 ± 4% -65.6% 315501 ± 2% perf-stat.minor-faults
4.383e+09 ± 2% -77.0% 1.008e+09 ± 15% perf-stat.node-load-misses
4.996e+09 ± 2% -76.6% 1.171e+09 ± 15% perf-stat.node-loads
3.029e+09 ± 2% -75.7% 7.349e+08 ± 2% perf-stat.node-store-misses
4.055e+09 ± 3% -75.6% 9.91e+08 ± 2% perf-stat.node-stores
916844 ± 4% -65.6% 315502 ± 2% perf-stat.page-faults
1365 ± 4% -20.4% 1087 ± 3% slabinfo.btrfs_ordered_extent.active_objs
1365 ± 4% -20.4% 1087 ± 3% slabinfo.btrfs_ordered_extent.num_objs
23854 -20.1% 19055 slabinfo.buffer_head.active_objs
613.25 -20.2% 489.50 slabinfo.buffer_head.active_slabs
23944 -20.1% 19123 slabinfo.buffer_head.num_objs
613.25 -20.2% 489.50 slabinfo.buffer_head.num_slabs
4419 ± 9% -17.8% 3633 slabinfo.kmalloc-128.active_objs
4426 ± 8% -17.9% 3633 slabinfo.kmalloc-128.num_objs
21091 -11.6% 18647 slabinfo.kmalloc-16.active_objs
21091 -11.6% 18647 slabinfo.kmalloc-16.num_objs
3152 -14.1% 2706 ± 4% slabinfo.kmalloc-4096.active_objs
3235 -14.7% 2758 ± 4% slabinfo.kmalloc-4096.num_objs
2705 ± 3% -10.2% 2430 slabinfo.nsproxy.active_objs
2705 ± 3% -10.2% 2430 slabinfo.nsproxy.num_objs
1327 -11.1% 1180 slabinfo.posix_timers_cache.active_objs
1327 -11.1% 1180 slabinfo.posix_timers_cache.num_objs
25562 -17.3% 21132 slabinfo.proc_inode_cache.active_objs
25978 -16.9% 21587 slabinfo.proc_inode_cache.num_objs
266.25 ± 21% -28.2% 191.25 ± 6% slabinfo.request_queue.num_objs
1348 -9.4% 1221 slabinfo.xfs_buf_item.active_objs
1348 -9.4% 1221 slabinfo.xfs_buf_item.num_objs
1262 -11.8% 1113 slabinfo.xfs_da_state.active_objs
1262 -11.8% 1113 slabinfo.xfs_da_state.num_objs
1428 ± 5% +40.6% 2007 slabinfo.xfs_inode.active_objs
1479 ± 9% +41.5% 2093 ± 2% slabinfo.xfs_inode.num_objs
1668 -10.9% 1486 slabinfo.xfs_log_ticket.active_objs
1668 -10.9% 1486 slabinfo.xfs_log_ticket.num_objs
88.68 -87.6 1.04 ± 5% perf-profile.calltrace.cycles-pp.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter
88.84 -87.5 1.35 ± 3% perf-profile.calltrace.cycles-pp.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write
88.95 -86.9 2.08 ± 2% perf-profile.calltrace.cycles-pp.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write
85.91 -85.1 0.78 ± 5% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write
85.42 -84.7 0.76 ± 5% perf-profile.calltrace.cycles-pp.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks
80.48 -80.1 0.42 ± 57% perf-profile.calltrace.cycles-pp._raw_spin_lock.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time.file_update_time
79.94 -79.5 0.40 ± 57% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time
92.30 -73.4 18.93 ± 6% perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write.sys_write
92.41 -73.0 19.37 ± 6% perf-profile.calltrace.cycles-pp.xfs_file_write_iter.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
92.49 -72.8 19.66 ± 6% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
92.82 -71.6 21.20 ± 5% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
92.87 -71.4 21.46 ± 5% perf-profile.calltrace.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
96.84 -7.0 89.88 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath
1.35 ± 3% +5.2 6.54 ± 6% perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
0.00 +5.4 5.39 ± 46% perf-profile.calltrace.cycles-pp.__atime_needs_update.touch_atime.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter
0.00 +5.5 5.45 ± 46% perf-profile.calltrace.cycles-pp.touch_atime.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read
2.43 ± 2% +10.0 12.43 ± 6% perf-profile.calltrace.cycles-pp.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.xfs_file_write_iter
3.15 ± 3% +12.0 15.15 ± 6% perf-profile.calltrace.cycles-pp.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write
3.17 ± 3% +12.1 15.23 ± 6% perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write
0.00 +13.5 13.45 ± 14% perf-profile.calltrace.cycles-pp.down_read.xfs_ilock.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read
0.00 +13.5 13.49 ± 14% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read.vfs_read
0.00 +15.4 15.37 ± 5% perf-profile.calltrace.cycles-pp.up_read.xfs_iunlock.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read
0.00 +15.4 15.38 ± 5% perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read.vfs_read
1.23 ± 3% +15.6 16.79 ± 25% perf-profile.calltrace.cycles-pp.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read.vfs_read
1.72 ± 4% +44.0 45.70 ± 3% perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read.vfs_read.sys_read
1.80 ± 4% +45.7 47.51 ± 4% perf-profile.calltrace.cycles-pp.xfs_file_read_iter.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.12 ± 4% +56.0 58.16 ± 2% perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.49 ± 4% +60.8 63.25 ± 3% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.58 ± 4% +60.9 63.52 ± 3% perf-profile.calltrace.cycles-pp.sys_read.entry_SYSCALL_64_fastpath
88.68 -87.6 1.04 ± 5% perf-profile.children.cycles-pp.xfs_vn_update_time
88.84 -87.5 1.38 ± 3% perf-profile.children.cycles-pp.file_update_time
88.97 -86.8 2.17 perf-profile.children.cycles-pp.xfs_file_aio_write_checks
86.42 -85.2 1.25 ± 3% perf-profile.children.cycles-pp.__xfs_trans_commit
85.96 -84.7 1.21 ± 2% perf-profile.children.cycles-pp.xfs_log_commit_cil
80.96 -79.9 1.07 ± 5% perf-profile.children.cycles-pp._raw_spin_lock
80.33 -79.7 0.62 ± 5% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
92.31 -73.3 18.99 ± 6% perf-profile.children.cycles-pp.xfs_file_buffered_aio_write
92.42 -73.0 19.40 ± 6% perf-profile.children.cycles-pp.xfs_file_write_iter
92.52 -72.8 19.75 ± 6% perf-profile.children.cycles-pp.__vfs_write
92.85 -71.6 21.27 ± 5% perf-profile.children.cycles-pp.vfs_write
92.90 -71.3 21.56 ± 5% perf-profile.children.cycles-pp.sys_write
96.88 -6.9 89.93 perf-profile.children.cycles-pp.entry_SYSCALL_64_fastpath
0.17 ± 7% +5.2 5.40 ± 46% perf-profile.children.cycles-pp.__atime_needs_update
1.37 ± 3% +5.3 6.63 ± 6% perf-profile.children.cycles-pp.iomap_write_begin
0.19 ± 6% +5.3 5.47 ± 46% perf-profile.children.cycles-pp.touch_atime
1.15 ± 4% +5.9 7.10 ± 3% perf-profile.children.cycles-pp.pagecache_get_page
2.46 ± 2% +10.1 12.58 ± 6% perf-profile.children.cycles-pp.iomap_write_actor
3.17 ± 3% +12.1 15.24 ± 6% perf-profile.children.cycles-pp.iomap_apply
3.18 ± 3% +12.1 15.29 ± 6% perf-profile.children.cycles-pp.iomap_file_buffered_write
0.84 ± 7% +12.7 13.52 ± 14% perf-profile.children.cycles-pp.down_read
0.37 ± 6% +14.4 14.78 ± 13% perf-profile.children.cycles-pp.xfs_ilock
0.65 ± 4% +14.7 15.39 ± 5% perf-profile.children.cycles-pp.up_read
1.25 ± 3% +15.6 16.85 ± 25% perf-profile.children.cycles-pp.generic_file_read_iter
0.48 ± 5% +15.9 16.35 ± 5% perf-profile.children.cycles-pp.xfs_iunlock
1.75 ± 4% +44.1 45.85 ± 3% perf-profile.children.cycles-pp.xfs_file_buffered_aio_read
1.80 ± 4% +45.7 47.52 ± 4% perf-profile.children.cycles-pp.xfs_file_read_iter
2.14 ± 4% +56.1 58.21 ± 2% perf-profile.children.cycles-pp.__vfs_read
2.51 ± 4% +60.8 63.31 ± 3% perf-profile.children.cycles-pp.vfs_read
2.60 ± 4% +61.0 63.61 ± 3% perf-profile.children.cycles-pp.sys_read
79.99 -79.4 0.62 ± 5% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.12 ± 3% +5.1 5.18 ± 48% perf-profile.self.cycles-pp.__atime_needs_update
0.27 ± 4% +5.9 6.12 ± 31% perf-profile.self.cycles-pp.generic_file_read_iter
0.32 ± 5% +10.2 10.53 ± 6% perf-profile.self.cycles-pp.__vfs_read
0.76 ± 6% +12.5 13.26 ± 14% perf-profile.self.cycles-pp.down_read
0.65 ± 4% +14.7 15.32 ± 5% perf-profile.self.cycles-pp.up_read
78108 -71.8% 22028 sched_debug.cfs_rq:/.exec_clock.avg
78381 -71.6% 22252 sched_debug.cfs_rq:/.exec_clock.max
77700 -72.0% 21783 sched_debug.cfs_rq:/.exec_clock.min
134.07 ± 13% -24.0% 101.86 ± 11% sched_debug.cfs_rq:/.exec_clock.stddev
3042129 -73.9% 794721 sched_debug.cfs_rq:/.min_vruntime.avg
3167769 -73.2% 848465 sched_debug.cfs_rq:/.min_vruntime.max
2976915 -73.9% 776601 sched_debug.cfs_rq:/.min_vruntime.min
36725 ± 17% -65.6% 12636 ± 21% sched_debug.cfs_rq:/.min_vruntime.stddev
0.75 -16.2% 0.63 ± 3% sched_debug.cfs_rq:/.nr_running.avg
0.28 ± 6% +20.6% 0.34 ± 7% sched_debug.cfs_rq:/.nr_running.stddev
0.88 ± 3% -37.8% 0.55 ± 8% sched_debug.cfs_rq:/.nr_spread_over.avg
2.31 ± 20% -56.8% 1.00 sched_debug.cfs_rq:/.nr_spread_over.max
0.75 -33.3% 0.50 sched_debug.cfs_rq:/.nr_spread_over.min
0.37 ± 23% -65.0% 0.13 ± 41% sched_debug.cfs_rq:/.nr_spread_over.stddev
19.78 ± 7% -27.3% 14.38 ± 4% sched_debug.cfs_rq:/.runnable_load_avg.avg
69.12 ± 14% -43.6% 39.00 ± 22% sched_debug.cfs_rq:/.runnable_load_avg.max
15.00 ± 10% -38.5% 9.22 ± 15% sched_debug.cfs_rq:/.runnable_load_avg.stddev
118889 ± 31% -53.6% 55144 ± 24% sched_debug.cfs_rq:/.spread0.max
-72304 -77.0% -16642 sched_debug.cfs_rq:/.spread0.min
36743 ± 17% -65.6% 12631 ± 21% sched_debug.cfs_rq:/.spread0.stddev
904.02 ± 2% -21.7% 707.67 ± 10% sched_debug.cfs_rq:/.util_avg.avg
2151 ± 7% -26.8% 1575 ± 13% sched_debug.cfs_rq:/.util_avg.max
423.44 ± 5% -26.6% 311.00 ± 10% sched_debug.cfs_rq:/.util_avg.stddev
497178 ± 2% -22.5% 385267 ± 10% sched_debug.cpu.avg_idle.avg
127739 ± 17% -63.0% 47277 ± 17% sched_debug.cpu.avg_idle.min
116003 -49.0% 59128 ± 8% sched_debug.cpu.clock.avg
116026 -49.0% 59137 ± 8% sched_debug.cpu.clock.max
115978 -49.0% 59118 ± 8% sched_debug.cpu.clock.min
14.40 ± 15% -61.6% 5.54 ± 47% sched_debug.cpu.clock.stddev
116003 -49.0% 59128 ± 8% sched_debug.cpu.clock_task.avg
116026 -49.0% 59137 ± 8% sched_debug.cpu.clock_task.max
115978 -49.0% 59118 ± 8% sched_debug.cpu.clock_task.min
14.40 ± 15% -61.6% 5.54 ± 47% sched_debug.cpu.clock_task.stddev
20.25 ± 7% -28.1% 14.56 ± 5% sched_debug.cpu.cpu_load[0].avg
68.56 ± 15% -32.5% 46.25 ± 30% sched_debug.cpu.cpu_load[0].max
15.17 ± 11% -35.3% 9.82 ± 20% sched_debug.cpu.cpu_load[0].stddev
21.01 ± 5% -26.4% 15.47 ± 7% sched_debug.cpu.cpu_load[1].avg
73.81 ± 9% -44.3% 41.12 ± 32% sched_debug.cpu.cpu_load[1].max
15.20 ± 12% -43.8% 8.54 ± 24% sched_debug.cpu.cpu_load[1].stddev
21.56 ± 5% -22.6% 16.68 ± 10% sched_debug.cpu.cpu_load[2].avg
76.00 ± 10% -39.5% 46.00 ± 42% sched_debug.cpu.cpu_load[2].max
14.87 ± 12% -40.7% 8.82 ± 38% sched_debug.cpu.cpu_load[2].stddev
22.19 ± 7% -22.2% 17.26 ± 9% sched_debug.cpu.cpu_load[3].avg
85.25 ± 33% -45.6% 46.38 ± 30% sched_debug.cpu.cpu_load[3].max
15.36 ± 24% -43.0% 8.75 ± 34% sched_debug.cpu.cpu_load[3].stddev
22.53 ± 14% -26.9% 16.46 ± 9% sched_debug.cpu.cpu_load[4].avg
115.44 ± 85% -63.7% 41.88 ± 24% sched_debug.cpu.cpu_load[4].max
19.94 ± 72% -59.8% 8.02 ± 26% sched_debug.cpu.cpu_load[4].stddev
2376 ± 8% -24.9% 1783 ± 9% sched_debug.cpu.curr->pid.avg
5595 ± 7% -33.0% 3749 ± 9% sched_debug.cpu.curr->pid.max
1108 -18.7% 901.58 ± 2% sched_debug.cpu.curr->pid.stddev
86474 -66.0% 29373 sched_debug.cpu.nr_load_updates.avg
90842 -63.8% 32895 sched_debug.cpu.nr_load_updates.max
85649 -66.5% 28665 sched_debug.cpu.nr_load_updates.min
1.26 ± 9% -46.0% 0.68 ± 2% sched_debug.cpu.nr_running.avg
4.19 ± 11% -58.2% 1.75 ± 14% sched_debug.cpu.nr_running.max
0.90 ± 10% -53.5% 0.42 ± 4% sched_debug.cpu.nr_running.stddev
26972 ± 2% -38.2% 16677 ± 6% sched_debug.cpu.nr_switches.avg
36396 ± 7% -29.3% 25723 ± 5% sched_debug.cpu.nr_switches.max
23604 ± 3% -39.4% 14307 ± 5% sched_debug.cpu.nr_switches.min
52.70 -64.5% 18.69 ± 56% sched_debug.cpu.nr_uninterruptible.avg
234.94 ± 5% -61.1% 91.50 ± 11% sched_debug.cpu.nr_uninterruptible.max
-156.12 -64.9% -54.88 sched_debug.cpu.nr_uninterruptible.min
95.53 ± 16% -63.0% 35.32 ± 13% sched_debug.cpu.nr_uninterruptible.stddev
25576 ± 3% -41.9% 14862 ± 7% sched_debug.cpu.sched_count.avg
31339 ± 7% -40.8% 18552 ± 4% sched_debug.cpu.sched_count.max
23353 ± 3% -41.8% 13594 ± 6% sched_debug.cpu.sched_count.min
12755 ± 2% -40.8% 7556 ± 7% sched_debug.cpu.ttwu_count.avg
17351 ± 5% -39.0% 10586 ± 12% sched_debug.cpu.ttwu_count.max
10212 -32.0% 6949 ± 7% sched_debug.cpu.ttwu_count.min
1712 ± 15% -53.0% 805.03 ± 17% sched_debug.cpu.ttwu_count.stddev
2128 ± 2% -71.6% 605.08 sched_debug.cpu.ttwu_local.avg
4887 ± 16% -82.0% 878.25 ± 6% sched_debug.cpu.ttwu_local.max
1406 ± 7% -65.8% 481.00 ± 4% sched_debug.cpu.ttwu_local.min
707.25 ± 24% -87.8% 86.41 ± 13% sched_debug.cpu.ttwu_local.stddev
115976 -49.0% 59118 ± 8% sched_debug.cpu_clk
115976 -49.0% 59118 ± 8% sched_debug.ktime
0.05 ± 10% +85.5% 0.08 ± 39% sched_debug.rt_rq:/.rt_time.avg
1.49 ± 11% +111.4% 3.14 ± 39% sched_debug.rt_rq:/.rt_time.max
0.00 ± 24% -100.0% 0.00 sched_debug.rt_rq:/.rt_time.min
0.23 ± 11% +110.4% 0.49 ± 39% sched_debug.rt_rq:/.rt_time.stddev
116590 -48.9% 59525 ± 8% sched_debug.sched_clk
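The %change column in the tables above is the relative difference between the two commit columns (parent on the left, patched on the right). As a minimal sanity check, the headline aim7.jobs-per-min number can be recomputed from the raw per-commit values:

```python
# Recompute the reported %change for aim7.jobs-per-min from the
# per-commit averages shown in the first comparison table.
base = 86992      # 269503cada (parent commit)
patched = 286610  # 62231fd3ed ("fs: handle inode->i_version more efficiently")

pct_change = (patched - base) / base * 100
print(f"{pct_change:+.1f}%")  # matches the reported +229.5%
```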
aim7.jobs-per-min
350000 +-+----------------------------------------------------------------+
| |
300000 +-+ O O |
O O O O O O O O O O O O O O O O O O O O
| O |
250000 +-+ |
| |
200000 +-+ |
| |
150000 +-+ |
| |
| |
100000 +-++..+..+..+..+..+..+..+..+..+...+..+..+..+..+..+..+..+..+..+..+..|
| |
50000 +-+----------------------------------------------------------------+
vmstat.system.cs
20000 +-+-----------------------------------------------------------------+
19000 +-+ O O O O O |
O O O O O O O O O O O O O
18000 +-+ O O O |
17000 +-+ O O |
| |
16000 +-+ |
15000 +-+ |
14000 +-+ |
| |
13000 +-+ |
12000 +-+ .+ |
| +.. +.. +.. +..+.. ..+. + +.. |
11000 +-+ +..+.. .. .. .. +..+. + .. .|
10000 +-+-----------------------------------------------------------------+
interrupts.CAL:Function_call_interrupts
700000 +-+----------------------------------------------------------------+
| +.. |
600000 +-++.. .+..+.. .+.. .+..+.. .. |
|. .+..+..+. +.. ..+.. .+. +. +..+ +..|
500000 +-+ +..+. +. +. |
| |
400000 +-+ |
| |
300000 +-+ |
| |
200000 +-+ |
| |
100000 +-+ |
O O O O O O O O O O O O O O O O O O O O O O O
0 +-+----------------------------------------------------------------+
perf-stat.cpu-cycles
2.4e+13 +-+---------------------------------------------------------------+
2.2e+13 +-++..+..+..+..+..+..+..+..+. +. +..+..+..+..+..+..+..+..+..|
| |
2e+13 +-+ |
1.8e+13 +-+ |
| |
1.6e+13 +-+ |
1.4e+13 +-+ |
1.2e+13 +-+ |
| |
1e+13 +-+ |
8e+12 +-+ |
| |
6e+12 O-+O O O O O O O O O O O O O O O O O O O O O O
4e+12 +-+---------------------------------------------------------------+
perf-stat.instructions
6.5e+12 +-+---------------------------------------------------------------+
6e+12 +-+ .+..+..+..+..+..+..+..+..+..+..+..+..+..+..+.. .+..|
| +..+..+. +..+. |
5.5e+12 +-+ |
5e+12 +-+ |
| |
4.5e+12 +-+ |
4e+12 +-+ |
3.5e+12 +-+ |
| |
3e+12 +-+ |
2.5e+12 +-+ |
| |
2e+12 O-+O O O O O O O O O O O O O O O O O O O O O O
1.5e+12 +-+---------------------------------------------------------------+
perf-stat.cache-references
4e+10 +-+---------------------------------------------------------------+
| .+..+..+..+.. .+.. .+.. .+.. .+.. |
|..+..+..+..+. +..+. +. +..+. +..+. +..+..|
3.5e+10 +-+ |
| |
| |
3e+10 +-+ |
| |
2.5e+10 +-+ |
| |
| |
2e+10 O-+O O O O O O O O O O O O O O O O O O
| O O O O |
| |
1.5e+10 +-+---------------------------------------------------------------+
perf-stat.cache-misses
1e+10 +-+-----------------------------------------------------------------+
|..+..+..+..+.. .+..+.. .. .. +..+..+...+. .+..+..|
9e+09 +-+ +...+. +..+ + +. |
8e+09 +-+ |
| |
7e+09 +-+ |
6e+09 +-+ |
| |
5e+09 +-+ |
4e+09 +-+ |
| |
3e+09 O-+O O O O O |
2e+09 +-+ O O O O O O O O O O O O O O O O O
| |
1e+09 +-+-----------------------------------------------------------------+
perf-stat.branch-instructions
1.4e+12 +-+---------------------------------------------------------------+
|..+..+..+..+. +. +..+. +. +..+..+..+..+..|
1.2e+12 +-+ |
| |
| |
1e+12 +-+ |
| |
8e+11 +-+ |
| |
6e+11 +-+ |
| |
| |
4e+11 +-+ |
O O O O O O O O O O O O O O O O O O O O O O O
2e+11 +-+---------------------------------------------------------------+
perf-stat.branch-misses
4.5e+09 +-+---------------------------------------------------------------+
| .+ |
4e+09 +-++..+..+. + |
| + .+.. |
3.5e+09 +-+ +..+. +..+..+..+..+..+..+.. .+..+..+..+..+..+..|
| +. |
3e+09 +-+ |
| |
2.5e+09 +-+ |
| |
2e+09 +-+ |
| O |
1.5e+09 +-+O O O O O O O O O O O |
O O O O O O O O O O O
1e+09 +-+---------------------------------------------------------------+
perf-stat.dTLB-loads
1.8e+12 +-+---------------------------------------------------------------+
|.. .+..+..+..+..+..+..+..+..+..+.. .+..+.. .+..|
1.6e+12 +-++..+..+. +. +..+..+..+. |
| |
1.4e+12 +-+ |
| |
1.2e+12 +-+ |
| |
1e+12 +-+ |
| |
8e+11 +-+ |
| |
6e+11 +-+ |
O O O O O O O O O O O O O O O O O O O O O O O
4e+11 +-+---------------------------------------------------------------+
perf-stat.dTLB-stores
5.6e+11 +-+---------------------------------------------------------------+
5.4e+11 +-++..+.. .+..+..+..+..+..+..+..+..+..+.. .+..+.. .+..+..|
|. +. +. +..+..+. |
5.2e+11 +-+ |
5e+11 +-+ |
4.8e+11 +-+ |
4.6e+11 +-+ |
| |
4.4e+11 +-+ |
4.2e+11 +-+ |
4e+11 +-+ |
3.8e+11 +-+ |
| |
3.6e+11 O-+O O O O O O O O O O O O O O O O O O O O O O
3.4e+11 +-+---------------------------------------------------------------+
perf-stat.node-loads
5.5e+09 +-+---------------------------------------------------------------+
5e+09 +-+ .+.. .+.. .+..+.. .+.. .+..|
| +. +..+..+..+..+..+.. .+. +. +..+..+. +..+. |
4.5e+09 +-+ +. |
4e+09 +-+ |
| |
3.5e+09 +-+ |
3e+09 +-+ |
2.5e+09 +-+ |
| |
2e+09 +-+ |
1.5e+09 +-+ |
O O O O O O O O O O O O O O O O O O O O O
1e+09 +-+ O O |
5e+08 +-+---------------------------------------------------------------+
perf-stat.node-load-misses
5e+09 +-+---------------------------------------------------------------+
|.. .+..|
4.5e+09 +-++..+..+..+..+.. .+..+.. .+.. .+..+..+..+..+..+..+..+. |
4e+09 +-+ +. +..+. +. |
| |
3.5e+09 +-+ |
3e+09 +-+ |
| |
2.5e+09 +-+ |
2e+09 +-+ |
| |
1.5e+09 +-+ |
1e+09 O-+O O O O O O O O O O O O O O O O O O O O
| O O |
5e+08 +-+---------------------------------------------------------------+
perf-stat.node-stores
4.5e+09 +-+---------------------------------------------------------------+
| .+.. .+.. .+..+.. .+.. .+..+..+..+..+..+.. .+.. |
4e+09 +-+ +. +..+..+. +..+. +. +. +..|
3.5e+09 +-+ |
| |
3e+09 +-+ |
| |
2.5e+09 +-+ |
| |
2e+09 +-+ |
1.5e+09 O-+O O O O O |
| |
1e+09 +-+ O O O O O O O O O O O O O O O O O
| |
5e+08 +-+---------------------------------------------------------------+
perf-stat.node-store-misses
3.5e+09 +-+---------------------------------------------------------------+
| .+.. +.. +..+.. |
3e+09 +-++.. .+.. .+. .. .. +..+..+..+.. .+.. .|
| +. +..+..+. +..+ + +. +. |
| |
2.5e+09 +-+ |
| |
2e+09 +-+ |
| |
1.5e+09 +-+ |
| |
O O O O O O |
1e+09 +-+ |
| O O O O O O O O O O O O O O O O O
5e+08 +-+---------------------------------------------------------------+
perf-stat.page-faults
1.1e+06 +-+---------------------------------------------------------------+
| +.. |
1e+06 +-++.. + +.. +.. +..+.. .+.. .+.. |
|.. .+.. + .. .. +. +. .+.. |
900000 +-+ +. +..+..+ +..+ + +. +..|
800000 +-+ |
| |
700000 +-+ |
| |
600000 +-+ |
500000 +-+ |
| |
400000 +-+ |
| |
300000 O-+O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O
perf-stat.context-switches
2.6e+06 +-+---------------------------------------------------------------+
| + .+ + +..+.. .+ |
2.4e+06 +-+ + +. + + + + +..+..+. : + |
|+ + .+.. .. + + + + : + + |
2.2e+06 +-+ +. +..+..+ +..+ + : + + .|
| + +. |
2e+06 +-+ |
| |
1.8e+06 +-+ |
| |
1.6e+06 +-+ |
| |
1.4e+06 +-+ |
O O O O O O O O O O O O O O O
1.2e+06 +-+---O--O--O--O-----------O-----O--------------O-----------O-----+
perf-stat.cpu-migrations
500000 +-+----------------------------------------------------------------+
| .+.. .+. + .+.. +..+..+. +..+.. +.. |
450000 +-+ +..+.. .+..+. + .. .. .. |
| +. +..+ + + +..|
400000 +-+ |
350000 +-+ |
| |
300000 +-+ |
| |
250000 +-+ |
200000 +-+ |
| |
150000 O-+O O O O O O O O O O O O O O O O O O O O O O
| |
100000 +-+----------------------------------------------------------------+
perf-stat.minor-faults
1.1e+06 +-+---------------------------------------------------------------+
| +.. |
1e+06 +-++.. + +.. +.. +..+.. .+.. .+.. |
|.. .+.. + .. .. +. +. .+.. |
900000 +-+ +. +..+..+ +..+ + +. +..|
800000 +-+ |
| |
700000 +-+ |
| |
600000 +-+ |
500000 +-+ |
| |
400000 +-+ |
| |
300000 O-+O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O--O
perf-stat.cache-miss-rate_
28 +-+--------------------------------------------------------------------+
| .+.. |
26 +-++.. ..+.. .+.. ..+.. .+..+..+...+..+. ..+..+..|
24 +-+ +. +.. ..+. +..+. +. +. |
| +..+. |
22 +-+ |
20 +-+ |
| |
18 +-+ |
16 +-+ |
O O O O O O |
14 +-+ |
12 +-+ O O O O O O O O O O O O
| O O O O O |
10 +-+--------------------------------------------------------------------+
perf-stat.ipc
0.34 +-+------------------------------------------------------------------+
| O O |
0.33 +-+ |
0.32 +-+ |
| |
0.31 +-+ |
| |
0.3 +-+ O O O O O O O O O |
| O O O O O O O O O
0.29 O-+ O |
0.28 +-+ O |
| |
0.27 +-+ |
|..+..+..+...+..+..+..+..+..+..+...+..+..+..+..+..+..+..+...+..+..+..|
0.26 +-+------------------------------------------------------------------+
perf-stat.cpi
3.9 +-+-------------------------------------------------------------------+
3.8 +-+ .+.. .+..+.. .+..|
|..+..+...+. +..+..+..+...+..+..+..+..+...+..+..+. +...+. |
3.7 +-+ |
3.6 +-+ |
| O |
3.5 O-+ O |
3.4 +-+O O O O O O O O O
3.3 +-+ O O O O O O O O O |
| |
3.2 +-+ |
3.1 +-+ |
| |
3 +-+ O O |
2.9 +-+-------------------------------------------------------------------+
aim7.time.system_time
8000 +-+------------------------------------------------------------------+
|..+..+..+...+..+..+..+..+..+. +..+. +..+...+..+..+..|
7000 +-+ |
| |
6000 +-+ |
| |
5000 +-+ |
| |
4000 +-+ |
| |
3000 +-+ |
| |
2000 O-+O O O O O O O O O O O O O O O O O O O O
| O O |
1000 +-+------------------------------------------------------------------+
aim7.time.elapsed_time
220 +-+-------------------------------------------------------------------+
|..+..+...+..+..+..+..+..+...+..+. +. +..+. +..+..+...+..+..|
200 +-+ |
180 +-+ |
| |
160 +-+ |
140 +-+ |
| |
120 +-+ |
100 +-+ |
| |
80 +-+ |
60 O-+O O O O O O O O O O O O O O O O O O O O
| O O |
40 +-+-------------------------------------------------------------------+
aim7.time.elapsed_time.max
220 +-+-------------------------------------------------------------------+
|..+..+...+..+..+..+..+..+...+..+. +. +..+. +..+..+...+..+..|
200 +-+ |
180 +-+ |
| |
160 +-+ |
140 +-+ |
| |
120 +-+ |
100 +-+ |
| |
80 +-+ |
60 O-+O O O O O O O O O O O O O O O O O O O O
| O O |
40 +-+-------------------------------------------------------------------+
aim7.time.minor_page_faults
550000 +-+----------------------------------------------------------------+
| + .+ + +..+.. .+ |
500000 +-+ + +. + .. + + +..+..+. + +.. |
450000 +-+ + .+.. .. + . + + + .. |
| +. +..+..+ +..+ + + +..|
400000 +-+ |
350000 +-+ |
| |
300000 +-+ |
250000 +-+ |
| |
200000 +-+ |
150000 O-+ O O O O O O O O O O O O O O O O
| O O O O O O |
100000 +-+----------------------------------------------------------------+
aim7.time.voluntary_context_switches
1.1e+06 +-+---------------------------------------------------------------+
| + .. : + +..+.. .+.. .. : |
1e+06 +-+ + +.. + : + + + +. + : +.. |
|+ + .. .. : + + + : .. |
| + +..+..+ +..+ + + +..|
900000 +-+ |
| |
800000 +-+ |
| |
700000 +-+ |
| |
| O O |
600000 O-+O O O O O O O O O O O O O O O O O O
| O O |
500000 +-+---------------------------------------------------------------+
aim7.time.involuntary_context_switches
600000 +-+----------------------------------------------------------------+
550000 +-+ + +.. + |
| .+ .. + + +.. .. : |
500000 +-++.. +.. +. + . + + +..+..+ : |
450000 +-+ .. .+.. .. + .+ + : .+.. .|
400000 +-+ + +. + +. +. +. |
350000 +-+ |
| |
300000 +-+ |
250000 +-+ |
200000 +-+ |
150000 +-+ |
| |
100000 O-+O O O O O O O O O O O O O O O O
50000 +-+---------------------O--O--O------------O--------O-----------O--+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-ivb-ep01: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase:
gcc-7/performance/4BRD_12G/xfs/x86_64-rhel-7.2/3000/RAID1/debian-x86_64-2016-08-31.cgz/lkp-ivb-ep01/disk_rr/aim7
commit:
269503cada ("btrfs: only dirty the inode in btrfs_update_time if something was changed")
62231fd3ed ("fs: handle inode->i_version more efficiently")
269503cada9f3e17 62231fd3ed94c9d40b4e58ab14
---------------- --------------------------
         %stddev      %change          %stddev
          (base)         |            (patched)
82623 ± 2% +202.6% 250001 aim7.jobs-per-min
218.41 ± 2% -66.8% 72.47 aim7.time.elapsed_time
218.41 ± 2% -66.8% 72.47 aim7.time.elapsed_time.max
515283 ± 8% -97.6% 12328 ± 8% aim7.time.involuntary_context_switches
566723 ± 11% -73.9% 148022 aim7.time.minor_page_faults
7813 -79.1% 1636 aim7.time.system_time
40.87 ± 4% +99.6% 81.56 aim7.time.user_time
1386479 ± 11% -27.6% 1003760 aim7.time.voluntary_context_switches
382548 ± 10% -90.2% 37413 interrupts.CAL:Function_call_interrupts
7108 ± 16% -72.5% 1956 ±108% numa-numastat.node0.other_node
11615 ± 16% -47.3% 6123 ± 8% softirqs.NET_RX
626099 ± 7% -32.1% 425130 softirqs.RCU
3142806 -66.7% 1045757 softirqs.TIMER
67.00 ± 9% -66.4% 22.50 ± 2% vmstat.procs.r
12932 ± 9% +116.7% 28024 vmstat.system.cs
46440 -7.9% 42761 vmstat.system.in
311541 -13.5% 269562 ± 2% meminfo.Active
251665 -19.5% 202511 ± 3% meminfo.Active(anon)
59875 +12.0% 67050 meminfo.Active(file)
10442 -9.7% 9433 meminfo.Inactive(anon)
58919 ± 5% -69.1% 18229 ± 9% meminfo.Shmem
8.04 ± 7% +32.2 40.28 mpstat.cpu.idle%
2.61 ± 16% -0.9 1.71 ± 19% mpstat.cpu.iowait%
0.00 ± 36% +0.0 0.01 ± 22% mpstat.cpu.soft%
88.85 -33.7 55.19 mpstat.cpu.sys%
0.50 ± 4% +2.3 2.81 mpstat.cpu.usr%
75242891 ± 18% +24.9% 94005590 ± 6% cpuidle.C1.time
343278 ± 13% +33.5% 458346 ± 2% cpuidle.C1.usage
1.51e+08 ± 15% -67.6% 48861073 ± 5% cpuidle.C1E.time
366639 ± 13% -47.4% 192734 cpuidle.C1E.usage
30899136 ± 11% +63.6% 50545366 cpuidle.C3.time
100741 ± 8% +170.7% 272682 cpuidle.C3.usage
6.368e+08 ± 9% +58.6% 1.01e+09 ± 2% cpuidle.C6.time
756348 ± 8% +80.6% 1365829 cpuidle.C6.usage
4897473 ± 15% -45.7% 2660288 ± 35% cpuidle.POLL.time
7396 ± 15% -84.6% 1141 ± 16% cpuidle.POLL.usage
29984 ± 4% +13.5% 34036 ± 3% numa-meminfo.node0.Active(file)
38389 ± 50% -60.1% 15312 ±124% numa-meminfo.node0.KernelStack
3002292 ± 42% -35.9% 1925514 ± 25% numa-meminfo.node0.MemUsed
77571 ± 41% -64.4% 27645 ±117% numa-meminfo.node0.PageTables
100325 ± 34% -41.9% 58259 ± 57% numa-meminfo.node0.SUnreclaim
3702 ± 89% +248.2% 12890 ± 42% numa-meminfo.node0.Shmem
29838 ± 5% +14.1% 34046 ± 4% numa-meminfo.node1.Active(file)
7604 ± 42% -65.7% 2609 ±138% numa-meminfo.node1.Inactive(anon)
12832 ± 14% -20.5% 10195 ± 18% numa-meminfo.node1.Mapped
29986 ± 99% +139.1% 71688 ± 43% numa-meminfo.node1.PageTables
70770 ± 7% -26.2% 52216 ± 11% numa-meminfo.node1.SReclaimable
55205 ± 2% -90.3% 5341 ± 85% numa-meminfo.node1.Shmem
62909 -19.6% 50569 ± 3% proc-vmstat.nr_active_anon
14978 +14.6% 17167 ± 5% proc-vmstat.nr_active_file
2616 -10.6% 2338 proc-vmstat.nr_inactive_anon
14729 ± 5% -69.0% 4561 ± 9% proc-vmstat.nr_shmem
62909 -19.6% 50569 ± 3% proc-vmstat.nr_zone_active_anon
14978 +14.6% 17168 ± 5% proc-vmstat.nr_zone_active_file
2616 -10.6% 2338 proc-vmstat.nr_zone_inactive_anon
18863 ± 5% -99.9% 22.25 ± 59% proc-vmstat.numa_hint_faults
4076 ± 7% -99.8% 7.50 ±135% proc-vmstat.numa_hint_faults_local
9057 ± 2% -13.9% 7798 proc-vmstat.numa_other
12294 ± 5% -99.9% 13.75 ± 81% proc-vmstat.numa_pages_migrated
120145 ± 4% -100.0% 54.00 ± 58% proc-vmstat.numa_pte_updates
1067004 ± 6% -69.0% 330915 proc-vmstat.pgfault
12294 ± 5% -99.9% 13.75 ± 81% proc-vmstat.pgmigrate_success
4485126 ± 36% -78.3% 975433 ± 19% proc-vmstat.pgpgout
38415 ± 50% -60.4% 15220 ±124% numa-vmstat.node0.nr_kernel_stack
19399 ± 41% -64.6% 6872 ±116% numa-vmstat.node0.nr_page_table_pages
925.25 ± 89% +249.1% 3229 ± 42% numa-vmstat.node0.nr_shmem
25092 ± 34% -42.0% 14550 ± 57% numa-vmstat.node0.nr_slab_unreclaimable
238193 ± 35% -75.6% 58231 ± 31% numa-vmstat.node0.nr_written
18962193 -10.2% 17025019 ± 2% numa-vmstat.node0.numa_hit
18955274 -10.2% 17022869 numa-vmstat.node0.numa_local
6922 ± 17% -68.6% 2171 ± 93% numa-vmstat.node0.numa_other
18003465 -9.7% 16256991 ± 2% numa-vmstat.node1.nr_dirtied
1911 ± 42% -65.9% 652.00 ±138% numa-vmstat.node1.nr_inactive_anon
3249 ± 15% -20.8% 2574 ± 20% numa-vmstat.node1.nr_mapped
7478 ± 98% +140.9% 18018 ± 43% numa-vmstat.node1.nr_page_table_pages
13799 ± 2% -90.3% 1343 ± 85% numa-vmstat.node1.nr_shmem
17690 ± 7% -26.2% 13064 ± 11% numa-vmstat.node1.nr_slab_reclaimable
239974 ± 35% -75.4% 59045 ± 34% numa-vmstat.node1.nr_written
1911 ± 42% -65.9% 652.00 ±138% numa-vmstat.node1.nr_zone_inactive_anon
19048270 -10.5% 17042096 numa-vmstat.node1.numa_hit
18873726 -10.6% 16863698 numa-vmstat.node1.numa_local
2662 -73.0% 720.25 turbostat.Avg_MHz
89.91 -30.0 59.87 turbostat.Busy%
2961 -59.4% 1203 turbostat.Bzy_MHz
334518 ± 13% +34.3% 449342 ± 2% turbostat.C1
0.86 ± 19% +2.3 3.15 ± 6% turbostat.C1%
366561 ± 13% -47.4% 192676 turbostat.C1E
100631 ± 8% +170.8% 272551 turbostat.C3
0.35 ± 10% +1.3 1.70 turbostat.C3%
755296 ± 8% +80.7% 1364504 turbostat.C6
7.22 ± 7% +26.7 33.88 ± 2% turbostat.C6%
5.19 ± 11% +356.9% 23.69 ± 2% turbostat.CPU%c1
0.04 ± 31% +1007.1% 0.39 ± 3% turbostat.CPU%c3
4.87 ± 10% +229.9% 16.05 ± 3% turbostat.CPU%c6
125.05 -66.7% 41.68 turbostat.CorWatt
10544150 ± 2% -69.5% 3212728 turbostat.IRQ
2.78 ± 18% +145.7% 6.84 ± 28% turbostat.Pkg%pc2
1.09 ± 48% +372.9% 5.14 ± 42% turbostat.Pkg%pc6
152.52 -55.4% 67.98 turbostat.PkgWatt
39.95 -11.1% 35.51 turbostat.RAMWatt
16200 ± 6% -63.3% 5950 turbostat.SMI
1.387e+12 -74.5% 3.531e+11 perf-stat.branch-instructions
0.28 +0.2 0.48 perf-stat.branch-miss-rate%
3.906e+09 -57.0% 1.68e+09 perf-stat.branch-misses
30.69 -11.3 19.41 perf-stat.cache-miss-rate%
1.13e+10 ± 2% -71.7% 3.202e+09 perf-stat.cache-misses
3.683e+10 -55.2% 1.65e+10 perf-stat.cache-references
2858743 ± 9% -25.7% 2122810 perf-stat.context-switches
3.76 -69.2% 1.16 perf-stat.cpi
2.331e+13 -90.9% 2.111e+12 perf-stat.cpu-cycles
662934 ± 8% -75.4% 162861 perf-stat.cpu-migrations
0.78 ± 14% +0.4 1.15 ± 23% perf-stat.dTLB-load-miss-rate%
1.35e+10 ± 14% -50.9% 6.625e+09 ± 23% perf-stat.dTLB-load-misses
1.712e+12 -66.7% 5.694e+11 perf-stat.dTLB-loads
0.22 ± 15% -0.2 0.07 ± 15% perf-stat.dTLB-store-miss-rate%
1.254e+09 ± 15% -80.3% 2.466e+08 ± 16% perf-stat.dTLB-store-misses
5.743e+11 ± 2% -34.4% 3.767e+11 perf-stat.dTLB-stores
18.06 ± 48% +14.6 32.62 ± 11% perf-stat.iTLB-load-miss-rate%
1.092e+08 ± 20% -28.0% 78621807 ± 13% perf-stat.iTLB-load-misses
6.911e+08 ± 61% -76.5% 1.622e+08 ± 6% perf-stat.iTLB-loads
6.205e+12 -70.6% 1.823e+12 perf-stat.instructions
59120 ± 18% -60.1% 23560 ± 12% perf-stat.instructions-per-iTLB-miss
0.27 +224.3% 0.86 perf-stat.ipc
1050580 ± 6% -69.6% 319027 perf-stat.minor-faults
39.90 -21.3 18.55 perf-stat.node-load-miss-rate%
4.32e+09 -92.2% 3.364e+08 perf-stat.node-load-misses
6.506e+09 -77.3% 1.477e+09 perf-stat.node-loads
40.82 -25.5 15.30 ± 2% perf-stat.node-store-miss-rate%
3.077e+09 ± 2% -91.1% 2.729e+08 ± 2% perf-stat.node-store-misses
4.46e+09 ± 2% -66.1% 1.51e+09 perf-stat.node-stores
1050570 ± 6% -69.6% 319033 perf-stat.page-faults
1472 -11.7% 1300 slabinfo.Acpi-ParseExt.active_objs
1472 -11.7% 1300 slabinfo.Acpi-ParseExt.num_objs
1959 ± 34% -65.7% 671.75 ± 22% slabinfo.bio-3.active_objs
1959 ± 34% -65.7% 671.75 ± 22% slabinfo.bio-3.num_objs
3120 ± 19% -45.8% 1692 ± 9% slabinfo.bsg_cmd.active_objs
3120 ± 19% -45.8% 1692 ± 9% slabinfo.bsg_cmd.num_objs
1646 ± 6% -22.9% 1269 ± 8% slabinfo.btrfs_ordered_extent.active_objs
1786 ± 3% -28.9% 1269 ± 8% slabinfo.btrfs_ordered_extent.num_objs
9210 -9.4% 8344 slabinfo.buffer_head.active_slabs
359223 -9.4% 325443 slabinfo.buffer_head.num_objs
9210 -9.4% 8344 slabinfo.buffer_head.num_slabs
840.00 +15.5% 970.00 ± 6% slabinfo.file_lock_cache.active_objs
840.00 +15.5% 970.00 ± 6% slabinfo.file_lock_cache.num_objs
3914 ± 6% -10.6% 3498 ± 5% slabinfo.kmalloc-1024.num_objs
5033 ± 6% -22.9% 3879 ± 4% slabinfo.kmalloc-128.active_objs
5267 ± 6% -25.1% 3947 ± 4% slabinfo.kmalloc-128.num_objs
10390 ± 16% -33.5% 6909 ± 6% slabinfo.kmalloc-192.active_objs
248.75 ± 16% -34.1% 164.00 ± 6% slabinfo.kmalloc-192.active_slabs
10453 ± 16% -33.9% 6910 ± 6% slabinfo.kmalloc-192.num_objs
248.75 ± 16% -34.1% 164.00 ± 6% slabinfo.kmalloc-192.num_slabs
10509 ± 2% -46.6% 5607 ± 13% slabinfo.kmalloc-256.active_objs
331.00 ± 2% -45.2% 181.25 ± 13% slabinfo.kmalloc-256.active_slabs
10607 ± 2% -45.2% 5807 ± 13% slabinfo.kmalloc-256.num_objs
331.00 ± 2% -45.2% 181.25 ± 13% slabinfo.kmalloc-256.num_slabs
1638 -10.8% 1461 ± 2% slabinfo.mnt_cache.active_objs
1638 -10.8% 1461 ± 2% slabinfo.mnt_cache.num_objs
1329 -9.8% 1199 slabinfo.posix_timers_cache.active_objs
1329 -9.8% 1199 slabinfo.posix_timers_cache.num_objs
25748 -14.0% 22154 slabinfo.proc_inode_cache.active_objs
26108 -13.9% 22489 slabinfo.proc_inode_cache.num_objs
34378 ± 3% -21.6% 26968 ± 6% slabinfo.radix_tree_node.active_objs
1248 ± 3% -21.8% 976.75 ± 6% slabinfo.radix_tree_node.active_slabs
34982 ± 3% -21.8% 27358 ± 6% slabinfo.radix_tree_node.num_objs
1248 ± 3% -21.8% 976.75 ± 6% slabinfo.radix_tree_node.num_slabs
938.00 ± 4% +13.9% 1068 ± 5% slabinfo.task_group.active_objs
938.00 ± 4% +13.9% 1068 ± 5% slabinfo.task_group.num_objs
1278 -10.8% 1139 slabinfo.xfs_buf_item.active_objs
1278 -10.8% 1139 slabinfo.xfs_buf_item.num_objs
1277 -10.7% 1140 slabinfo.xfs_da_state.active_objs
1277 -10.7% 1140 slabinfo.xfs_da_state.num_objs
3069 ± 3% -15.1% 2606 ± 4% slabinfo.xfs_ili.num_objs
2537 ± 2% -11.2% 2253 ± 3% slabinfo.xfs_inode.num_objs
3375 ± 18% -42.7% 1935 ± 7% slabinfo.xfs_log_ticket.active_objs
3375 ± 18% -42.7% 1935 ± 7% slabinfo.xfs_log_ticket.num_objs
76837 -77.9% 16951 ± 2% sched_debug.cfs_rq:/.exec_clock.avg
77254 -77.8% 17169 ± 2% sched_debug.cfs_rq:/.exec_clock.max
76371 -78.3% 16601 ± 2% sched_debug.cfs_rq:/.exec_clock.min
158.47 ± 11% -27.0% 115.68 ± 7% sched_debug.cfs_rq:/.exec_clock.stddev
2948116 -82.7% 511186 ± 3% sched_debug.cfs_rq:/.min_vruntime.avg
3037745 -81.2% 570721 ± 3% sched_debug.cfs_rq:/.min_vruntime.max
2883559 -82.8% 494832 ± 3% sched_debug.cfs_rq:/.min_vruntime.min
30847 ± 19% -59.3% 12564 ± 6% sched_debug.cfs_rq:/.min_vruntime.stddev
0.81 -34.1% 0.53 ± 11% sched_debug.cfs_rq:/.nr_running.avg
0.23 +92.8% 0.45 sched_debug.cfs_rq:/.nr_running.stddev
0.96 ± 3% -47.2% 0.51 sched_debug.cfs_rq:/.nr_spread_over.avg
2.81 ± 34% -68.9% 0.88 ± 24% sched_debug.cfs_rq:/.nr_spread_over.max
0.75 -33.3% 0.50 sched_debug.cfs_rq:/.nr_spread_over.min
0.49 ± 17% -88.0% 0.06 ± 57% sched_debug.cfs_rq:/.nr_spread_over.stddev
17.13 ± 9% -31.6% 11.71 ± 9% sched_debug.cfs_rq:/.runnable_load_avg.avg
51.19 ± 12% -40.2% 30.62 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.max
-61362 -78.6% -13132 sched_debug.cfs_rq:/.spread0.min
30836 ± 20% -59.3% 12544 ± 6% sched_debug.cfs_rq:/.spread0.stddev
914.38 ± 2% -22.1% 712.63 ± 2% sched_debug.cfs_rq:/.util_avg.avg
1901 ± 7% -20.5% 1512 ± 4% sched_debug.cfs_rq:/.util_avg.max
369.90 ± 6% -31.3% 253.99 ± 4% sched_debug.cfs_rq:/.util_avg.stddev
511341 ± 5% -17.1% 423760 ± 5% sched_debug.cpu.avg_idle.avg
148496 ± 17% -59.4% 60266 ± 31% sched_debug.cpu.avg_idle.min
117128 ± 2% -51.7% 56592 sched_debug.cpu.clock.avg
117149 ± 2% -51.7% 56613 sched_debug.cpu.clock.max
117103 ± 2% -51.7% 56570 sched_debug.cpu.clock.min
117128 ± 2% -51.7% 56592 sched_debug.cpu.clock_task.avg
117149 ± 2% -51.7% 56613 sched_debug.cpu.clock_task.max
117103 ± 2% -51.7% 56570 sched_debug.cpu.clock_task.min
17.48 ± 10% -31.7% 11.94 ± 4% sched_debug.cpu.cpu_load[0].avg
48.25 ± 15% -33.2% 32.25 ± 7% sched_debug.cpu.cpu_load[0].max
1.75 ± 67% -100.0% 0.00 sched_debug.cpu.cpu_load[0].min
3.75 ± 23% -93.3% 0.25 ±173% sched_debug.cpu.cpu_load[1].min
4.94 ± 9% -74.7% 1.25 ±128% sched_debug.cpu.cpu_load[2].min
18.25 ± 11% -14.3% 15.63 ± 10% sched_debug.cpu.cpu_load[4].avg
6.12 ± 8% -42.9% 3.50 ± 39% sched_debug.cpu.cpu_load[4].min
2664 ± 7% -42.7% 1527 ± 15% sched_debug.cpu.curr->pid.avg
5071 ± 10% -22.0% 3954 sched_debug.cpu.curr->pid.max
85741 -66.1% 29073 sched_debug.cpu.nr_load_updates.avg
90853 -62.3% 34267 ± 2% sched_debug.cpu.nr_load_updates.max
84665 -66.7% 28171 sched_debug.cpu.nr_load_updates.min
1.05 ± 15% -49.6% 0.53 ± 12% sched_debug.cpu.nr_running.avg
2.75 ± 23% -54.5% 1.25 ± 20% sched_debug.cpu.nr_running.max
31415 ± 5% -27.7% 22706 sched_debug.cpu.nr_switches.avg
40139 ± 5% -24.1% 30467 ± 2% sched_debug.cpu.nr_switches.max
28596 ± 6% -28.4% 20479 sched_debug.cpu.nr_switches.min
2571 ± 6% -15.6% 2169 ± 4% sched_debug.cpu.nr_switches.stddev
54.60 -33.1% 36.55 sched_debug.cpu.nr_uninterruptible.avg
308.62 ± 7% -78.0% 67.75 ± 6% sched_debug.cpu.nr_uninterruptible.max
-190.62 -102.5% 4.75 ± 67% sched_debug.cpu.nr_uninterruptible.min
118.54 ± 10% -87.5% 14.85 ± 11% sched_debug.cpu.nr_uninterruptible.stddev
29880 ± 5% -30.2% 20845 sched_debug.cpu.sched_count.avg
34021 ± 7% -33.4% 22650 sched_debug.cpu.sched_count.max
28141 ± 6% -29.8% 19763 sched_debug.cpu.sched_count.min
1287 ± 10% -49.8% 646.28 ± 5% sched_debug.cpu.sched_count.stddev
6708 ± 4% +51.9% 10190 sched_debug.cpu.sched_goidle.avg
8099 ± 3% +36.9% 11085 sched_debug.cpu.sched_goidle.max
6190 ± 4% +56.0% 9655 sched_debug.cpu.sched_goidle.min
450.38 ± 3% -28.6% 321.43 ± 5% sched_debug.cpu.sched_goidle.stddev
16726 ± 6% -37.5% 10448 sched_debug.cpu.ttwu_count.avg
21484 ± 5% -41.2% 12623 ± 6% sched_debug.cpu.ttwu_count.max
13123 ± 9% -24.3% 9937 sched_debug.cpu.ttwu_count.min
2232 ± 5% -72.4% 617.19 ± 8% sched_debug.cpu.ttwu_count.stddev
2892 ± 3% -91.6% 242.14 sched_debug.cpu.ttwu_local.avg
4724 ± 5% -83.4% 785.12 ± 11% sched_debug.cpu.ttwu_local.max
2305 ± 4% -92.2% 179.00 ± 5% sched_debug.cpu.ttwu_local.min
476.08 ± 10% -80.7% 91.88 ± 14% sched_debug.cpu.ttwu_local.stddev
117103 ± 2% -51.7% 56570 sched_debug.cpu_clk
117103 ± 2% -51.7% 56570 sched_debug.ktime
0.04 ± 6% +55.9% 0.07 ± 5% sched_debug.rt_rq:/.rt_time.avg
1.42 ± 6% +77.4% 2.53 ± 5% sched_debug.rt_rq:/.rt_time.max
0.00 ± 26% -100.0% 0.00 sched_debug.rt_rq:/.rt_time.min
0.22 ± 6% +76.6% 0.39 ± 5% sched_debug.rt_rq:/.rt_time.stddev
117509 ± 2% -51.5% 56978 sched_debug.sched_clk
88.63 -88.6 0.00 perf-profile.calltrace.cycles-pp.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter
88.81 -87.8 1.05 ± 7% perf-profile.calltrace.cycles-pp.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write
88.91 -87.0 1.94 ± 5% perf-profile.calltrace.cycles-pp.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write
85.42 -85.4 0.00 perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write
84.97 -85.0 0.00 perf-profile.calltrace.cycles-pp.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks
79.71 -79.7 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time.file_update_time
79.19 -79.2 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time
92.00 -63.2 28.80 perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write.sys_write
92.12 -62.4 29.71 perf-profile.calltrace.cycles-pp.xfs_file_write_iter.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
92.19 -61.9 30.30 perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
92.54 -59.2 33.35 ± 2% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
92.61 -58.7 33.87 ± 2% perf-profile.calltrace.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
96.53 -33.0 63.58 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath
0.68 ± 2% +4.5 5.21 ± 2% perf-profile.calltrace.cycles-pp.copy_page_to_iter.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read
0.53 ± 4% +4.9 5.48 perf-profile.calltrace.cycles-pp.truncate_inode_pages_range.evict.__dentry_kill.dput.__fput
0.53 ± 3% +5.0 5.50 ± 2% perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dput.__fput.task_work_run
0.91 ± 3% +5.5 6.41 perf-profile.calltrace.cycles-pp.__dentry_kill.dput.__fput.task_work_run.exit_to_usermode_loop
0.92 ± 3% +5.5 6.44 perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_usermode_loop.syscall_return_slowpath
1.10 ± 2% +5.6 6.65 perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
1.10 ± 2% +5.6 6.67 perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
1.10 ± 2% +5.6 6.67 perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
1.10 ± 2% +5.6 6.68 perf-profile.calltrace.cycles-pp.syscall_return_slowpath.entry_SYSCALL_64_fastpath
0.72 ± 2% +5.8 6.52 perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply
0.76 ± 3% +6.1 6.84 ± 2% perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.27 ± 2% +10.3 11.54 perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
1.57 ± 2% +11.2 12.74 perf-profile.calltrace.cycles-pp.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read.vfs_read
1.70 ± 2% +12.2 13.94 perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read.vfs_read.sys_read
1.81 ± 2% +13.0 14.82 perf-profile.calltrace.cycles-pp.xfs_file_read_iter.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
1.86 ± 2% +13.5 15.38 perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.08 ± 2% +15.4 17.43 perf-profile.calltrace.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.13 ± 2% +15.7 17.84 perf-profile.calltrace.cycles-pp.sys_read.entry_SYSCALL_64_fastpath
2.26 +18.7 20.92 perf-profile.calltrace.cycles-pp.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.xfs_file_write_iter
0.93 ± 10% +19.4 20.37 ± 3% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.98 ± 12% +20.2 21.15 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.00 ± 11% +20.7 21.70 ± 4% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.00 ± 11% +20.7 21.70 ± 4% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
1.00 ± 11% +20.7 21.70 ± 4% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
1.02 ± 12% +21.1 22.14 ± 4% perf-profile.calltrace.cycles-pp.secondary_startup_64
2.90 +22.2 25.11 perf-profile.calltrace.cycles-pp.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write
2.92 +22.3 25.27 perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write
88.81 -88.3 0.54 ± 4% perf-profile.children.cycles-pp.xfs_vn_update_time
88.81 -87.7 1.10 ± 7% perf-profile.children.cycles-pp.file_update_time
88.92 -86.8 2.10 ± 5% perf-profile.children.cycles-pp.xfs_file_aio_write_checks
86.30 -84.9 1.37 ± 3% perf-profile.children.cycles-pp.__xfs_trans_commit
85.86 -84.5 1.33 ± 3% perf-profile.children.cycles-pp.xfs_log_commit_cil
81.64 -80.5 1.10 ± 6% perf-profile.children.cycles-pp._raw_spin_lock
81.03 -79.3 1.75 ± 4% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
92.01 -63.1 28.91 perf-profile.children.cycles-pp.xfs_file_buffered_aio_write
92.13 -62.4 29.77 perf-profile.children.cycles-pp.xfs_file_write_iter
92.23 -61.8 30.42 perf-profile.children.cycles-pp.__vfs_write
92.58 -59.1 33.49 ± 2% perf-profile.children.cycles-pp.vfs_write
92.65 -58.6 34.02 ± 2% perf-profile.children.cycles-pp.sys_write
96.57 -32.9 63.68 perf-profile.children.cycles-pp.entry_SYSCALL_64_fastpath
0.70 ± 3% +4.7 5.35 ± 2% perf-profile.children.cycles-pp.copy_page_to_iter
0.53 ± 4% +5.0 5.49 perf-profile.children.cycles-pp.truncate_inode_pages_range
0.53 ± 3% +5.0 5.50 ± 2% perf-profile.children.cycles-pp.evict
0.91 ± 3% +5.5 6.41 perf-profile.children.cycles-pp.__dentry_kill
0.92 ± 3% +5.5 6.46 perf-profile.children.cycles-pp.dput
1.10 ± 2% +5.6 6.66 perf-profile.children.cycles-pp.__fput
1.10 ± 2% +5.6 6.67 perf-profile.children.cycles-pp.task_work_run
1.10 ± 2% +5.6 6.68 perf-profile.children.cycles-pp.syscall_return_slowpath
1.10 ± 2% +5.6 6.68 perf-profile.children.cycles-pp.exit_to_usermode_loop
0.82 ± 5% +5.7 6.50 ± 2% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.76 ± 2% +6.1 6.89 ± 2% perf-profile.children.cycles-pp.grab_cache_page_write_begin
0.98 +8.0 8.99 perf-profile.children.cycles-pp.pagecache_get_page
1.29 ± 2% +10.4 11.72 perf-profile.children.cycles-pp.iomap_write_begin
1.59 ± 2% +11.3 12.94 perf-profile.children.cycles-pp.generic_file_read_iter
1.71 ± 2% +12.4 14.07 perf-profile.children.cycles-pp.xfs_file_buffered_aio_read
1.81 ± 2% +13.0 14.84 perf-profile.children.cycles-pp.xfs_file_read_iter
1.89 ± 2% +13.6 15.54 perf-profile.children.cycles-pp.__vfs_read
2.12 ± 2% +15.5 17.59 perf-profile.children.cycles-pp.vfs_read
2.16 ± 2% +15.9 18.03 perf-profile.children.cycles-pp.sys_read
2.29 +18.9 21.20 perf-profile.children.cycles-pp.iomap_write_actor
0.96 ± 11% +19.8 20.75 ± 3% perf-profile.children.cycles-pp.intel_idle
1.00 ± 12% +20.6 21.60 ± 4% perf-profile.children.cycles-pp.cpuidle_enter_state
1.00 ± 11% +20.7 21.70 ± 4% perf-profile.children.cycles-pp.start_secondary
1.02 ± 12% +21.1 22.14 ± 4% perf-profile.children.cycles-pp.secondary_startup_64
1.02 ± 12% +21.1 22.14 ± 4% perf-profile.children.cycles-pp.cpu_startup_entry
1.02 ± 12% +21.1 22.14 ± 4% perf-profile.children.cycles-pp.do_idle
2.92 +22.4 25.29 perf-profile.children.cycles-pp.iomap_apply
2.94 +22.5 25.40 perf-profile.children.cycles-pp.iomap_file_buffered_write
80.69 -78.9 1.75 ± 4% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.81 ± 5% +5.6 6.43 ± 2% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.96 ± 11% +19.8 20.74 ± 3% perf-profile.self.cycles-pp.intel_idle
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong