Message-ID: <20171225061759.GJ31543@yexl-desktop>
Date: Mon, 25 Dec 2017 14:17:59 +0800
From: kernel test robot <xiaolong.ye@...el.com>
To: Jeff Layton <jlayton@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Jeff Layton <jlayton@...hat.com>, lkp@...org
Subject: [lkp-robot] [fs] 8e4a22b1c4: fio.read_bw_MBps +244.4% improvement
Greetings,
FYI, we noticed a +244.4% improvement of fio.read_bw_MBps due to commit:
commit: 8e4a22b1c4cfd0cf20b11c4d7e2003516a8fb198 ("fs: handle inode->i_version more efficiently")
https://git.kernel.org/cgit/linux/kernel/git/jlayton/linux.git iversion
in testcase: fio-basic
on test machine: 56 threads Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory
with the following parameters:
disk: 2pmem
fs: xfs
mount_option: dax
runtime: 200s
nr_task: 50%
time_based: tb
rw: rw
bs: 4k
ioengine: sync
test_size: 200G
cpufreq_governor: performance
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
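For reference, the parameters above translate to roughly the fio job below. This is an illustrative sketch only: the real job file is generated by LKP and attached as job.yaml, numjobs=28 assumes nr_task=50% of the 56 hardware threads, and the pmem mount point is a placeholder (with disk=2pmem the jobs are spread over two DAX-mounted xfs filesystems).

    [global]
    # sketch of the parameters listed above; paths and numjobs are assumptions
    ioengine=sync
    rw=rw
    bs=4k
    size=200G
    runtime=200s
    time_based
    numjobs=28
    directory=/fs/pmem0

    [rw-mixed]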
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
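For background on where the win comes from: the commit under test ("fs: handle inode->i_version more efficiently") lets filesystems skip bumping i_version, and the inode dirtying/logging that goes with it, when nobody has queried the value since it last changed. That is consistent with the xfs_vn_update_time/xfs_log_commit_cil path all but disappearing from the profile further down. Below is a minimal userspace model of that scheme, assuming the "queried" flag lives in the low bit of a 64-bit counter; the real in-kernel helpers (inode_maybe_inc_iversion() and friends) differ in detail.

    /* Conceptual sketch only: a userspace model of "bump i_version only
     * if it was queried since the last change". */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define I_VERSION_QUERIED 1ULL  /* low bit: "someone has read the value" */

    struct fake_inode {
    	_Atomic uint64_t i_version;  /* (counter << 1) | queried bit */
    };

    /* Writer side (called on every change).  Returns true only if the
     * counter actually changed, i.e. if the inode would need to be
     * marked dirty and logged. */
    static bool maybe_inc_iversion(struct fake_inode *inode, bool force)
    {
    	uint64_t cur = atomic_load(&inode->i_version);

    	for (;;) {
    		/* Nobody has looked since the last bump: skip the update
    		 * (and, in the kernel, the journal transaction). */
    		if (!force && !(cur & I_VERSION_QUERIED))
    			return false;

    		/* Bump the counter part and clear the queried bit. */
    		uint64_t next = (cur + (1ULL << 1)) & ~I_VERSION_QUERIED;

    		if (atomic_compare_exchange_weak(&inode->i_version, &cur, next))
    			return true;
    	}
    }

    /* Reader side (e.g. an NFS GETATTR): fetch the value and mark it
     * queried, so the next change is guaranteed to bump the counter. */
    static uint64_t query_iversion(struct fake_inode *inode)
    {
    	return atomic_fetch_or(&inode->i_version, I_VERSION_QUERIED) >> 1;
    }

    int main(void)
    {
    	struct fake_inode ino;

    	atomic_init(&ino.i_version, 0);

    	maybe_inc_iversion(&ino, false);   /* skipped: never queried */
    	printf("after write, never queried: counter=%llu\n",
    	       (unsigned long long)(atomic_load(&ino.i_version) >> 1));

    	query_iversion(&ino);              /* a reader samples it */
    	maybe_inc_iversion(&ino, false);   /* now the bump happens */
    	printf("after query then write:    counter=%llu\n",
    	       (unsigned long long)(atomic_load(&ino.i_version) >> 1));
    	return 0;
    }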
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/mount_option/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based:
4k/gcc-7/performance/2pmem/xfs/sync/x86_64-rhel-7.2/dax/50%/debian-x86_64-2016-08-31.cgz/200s/rw/lkp-hsw-ep6/200G/fio-basic/tb
commit:
fb2cd41ff5 ("btrfs: only dirty the inode in btrfs_update_time if something was changed")
8e4a22b1c4 ("fs: handle inode->i_version more efficiently")
fb2cd41ff55762ae            8e4a22b1c4cfd0cf20b11c4d7e
----------------            --------------------------
         %stddev     %change         %stddev
             \          |                \
28.45 ± 9% -28.0 0.43 ± 14% fio.latency_10us%
21.94 ± 12% -21.7 0.23 ± 2% fio.latency_20us%
40.09 ± 3% +31.5 71.56 ± 4% fio.latency_2us%
9.26 ± 13% +18.5 27.76 ± 11% fio.latency_4us%
0.26 ± 25% -0.2 0.02 fio.latency_50us%
9605 +244.4% 33078 ± 2% fio.read_bw_MBps
7.25 ± 5% -58.6% 3.00 fio.read_clat_99%_us
1.26 ± 2% +6.4% 1.34 ± 3% fio.read_clat_mean_us
2459003 +244.4% 8468007 ± 2% fio.read_iops
40805 +4.3% 42567 fio.time.minor_page_faults
5184 -18.5% 4226 fio.time.system_time
422.71 ± 2% +226.7% 1380 ± 2% fio.time.user_time
9607 +244.4% 33085 ± 2% fio.write_bw_MBps
11.75 ± 3% -83.0% 2.00 fio.write_clat_90%_us
12.75 ± 3% -79.1% 2.67 ± 17% fio.write_clat_95%_us
17.50 ± 2% -82.9% 3.00 fio.write_clat_99%_us
9.43 ± 2% -86.2% 1.30 ± 3% fio.write_clat_mean_us
2459557 +244.4% 8469963 ± 2% fio.write_iops
1430 ± 2% -2.3% 1398 boot-time.idle
117883 +2.2% 120435 interrupts.CAL:Function_call_interrupts
5189 ± 6% +12.1% 5818 proc-vmstat.numa_hint_faults
101108 -36.8% 63904 ± 6% cpuidle.C1.usage
2126 ± 2% -39.1% 1294 ± 6% cpuidle.POLL.usage
45.77 -8.4 37.40 mpstat.cpu.sys%
3.75 ± 2% +8.5 12.22 ± 2% mpstat.cpu.usr%
84.25 ± 2% +15.1% 97.00 ± 2% vmstat.io.bo
2005 -16.5% 1674 vmstat.system.cs
983.00 ± 35% -51.2% 479.67 ± 67% numa-meminfo.node1.Mlocked
4083 ± 49% -51.9% 1963 ± 13% numa-meminfo.node1.PageTables
993.00 ± 35% -50.8% 489.00 ± 65% numa-meminfo.node1.Unevictable
261271 +95.6% 510975 ± 3% softirqs.RCU
408363 -22.4% 316959 softirqs.SCHED
4012938 -8.5% 3671838 softirqs.TIMER
245.75 ± 35% -51.6% 119.00 ± 68% numa-vmstat.node1.nr_mlock
1015 ± 49% -51.9% 488.67 ± 13% numa-vmstat.node1.nr_page_table_pages
248.25 ± 35% -51.1% 121.33 ± 66% numa-vmstat.node1.nr_unevictable
248.25 ± 35% -51.1% 121.33 ± 66% numa-vmstat.node1.nr_zone_unevictable
1382 -3.9% 1328 turbostat.Avg_MHz
94552 ± 2% -37.2% 59417 ± 4% turbostat.C1
216.64 +9.1% 236.38 turbostat.PkgWatt
34.23 +30.9% 44.81 turbostat.RAMWatt
1.678e+12 ± 4% +99.8% 3.352e+12 ± 5% perf-stat.branch-instructions
0.38 ± 2% -0.1 0.23 perf-stat.branch-miss-rate%
6.38e+09 ± 5% +22.1% 7.79e+09 ± 4% perf-stat.branch-misses
38.55 +3.9 42.43 perf-stat.cache-miss-rate%
2.752e+10 ± 6% +72.2% 4.738e+10 ± 4% perf-stat.cache-misses
7.135e+10 ± 5% +56.5% 1.117e+11 ± 5% perf-stat.cache-references
404574 -17.1% 335235 perf-stat.context-switches
2.10 -55.7% 0.93 ± 2% perf-stat.cpi
1.756e+13 ± 5% -8.0% 1.616e+13 ± 2% perf-stat.cpu-cycles
0.00 ± 30% -0.0 0.00 ± 2% perf-stat.dTLB-load-miss-rate%
2.376e+12 ± 14% +118.4% 5.19e+12 ± 2% perf-stat.dTLB-loads
0.00 ± 10% -0.0 0.00 ± 6% perf-stat.dTLB-store-miss-rate%
17948274 ± 15% -53.0% 8439900 ± 4% perf-stat.dTLB-store-misses
1.183e+12 ± 6% +166.6% 3.154e+12 ± 2% perf-stat.dTLB-stores
3.02 ± 11% +0.7 3.69 ± 3% perf-stat.iTLB-load-miss-rate%
8.356e+12 ± 4% +107.9% 1.737e+13 ± 5% perf-stat.instructions
379199 ± 9% +79.0% 678757 ± 4% perf-stat.instructions-per-iTLB-miss
0.48 +125.8% 1.07 ± 2% perf-stat.ipc
70.88 ± 4% -31.3 39.59 ± 7% perf-stat.node-load-miss-rate%
1.412e+10 ± 8% +24.9% 1.763e+10 ± 4% perf-stat.node-load-misses
5.789e+09 ± 11% +366.2% 2.699e+10 ± 7% perf-stat.node-loads
67.95 -8.4 59.58 perf-stat.node-store-miss-rate%
3.175e+09 ± 7% -96.2% 1.221e+08 ± 11% perf-stat.node-store-misses
1.494e+09 ± 3% -94.4% 83102289 ± 14% perf-stat.node-stores
921349 ± 13% -16.8% 766310 ± 5% sched_debug.cfs_rq:/.load.max
2.62 ± 14% -40.7% 1.56 ± 42% sched_debug.cfs_rq:/.nr_spread_over.max
0.36 ± 15% -53.2% 0.17 ± 73% sched_debug.cfs_rq:/.nr_spread_over.stddev
311476 ± 2% -12.6% 272380 ± 4% sched_debug.cpu.clock.avg
311479 ± 2% -12.6% 272383 ± 4% sched_debug.cpu.clock.max
311471 ± 2% -12.6% 272375 ± 4% sched_debug.cpu.clock.min
311476 ± 2% -12.6% 272380 ± 4% sched_debug.cpu.clock_task.avg
311479 ± 2% -12.6% 272383 ± 4% sched_debug.cpu.clock_task.max
311471 ± 2% -12.6% 272375 ± 4% sched_debug.cpu.clock_task.min
921349 ± 13% -16.8% 766234 ± 5% sched_debug.cpu.load.max
37654 ± 11% -31.0% 25986 ± 7% sched_debug.cpu.nr_load_updates.min
22549 ± 13% -24.0% 17133 ± 5% sched_debug.cpu.nr_switches.max
5104 ± 10% -25.3% 3815 ± 9% sched_debug.cpu.nr_switches.stddev
3239 -24.9% 2432 ± 19% sched_debug.cpu.sched_count.avg
17859 ± 16% -28.3% 12809 ± 10% sched_debug.cpu.sched_count.max
4257 ± 10% -32.8% 2860 ± 13% sched_debug.cpu.sched_count.stddev
1557 -25.4% 1161 ± 19% sched_debug.cpu.sched_goidle.avg
8863 ± 16% -28.3% 6358 ± 10% sched_debug.cpu.sched_goidle.max
2124 ± 10% -32.9% 1424 ± 13% sched_debug.cpu.sched_goidle.stddev
1598 -25.1% 1197 ± 19% sched_debug.cpu.ttwu_count.avg
7867 ± 9% -31.4% 5395 ± 28% sched_debug.cpu.ttwu_count.max
81.56 ± 7% +53.8% 125.44 ± 27% sched_debug.cpu.ttwu_count.min
1991 ± 2% -34.8% 1297 ± 23% sched_debug.cpu.ttwu_count.stddev
311473 ± 2% -12.6% 272377 ± 4% sched_debug.cpu_clk
311473 ± 2% -12.6% 272377 ± 4% sched_debug.ktime
312094 ± 2% -12.5% 272998 ± 4% sched_debug.sched_clk
53.32 ± 4% -53.2 0.17 ±141% perf-profile.calltrace.cycles-pp.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_dax_write.xfs_file_write_iter
53.43 ± 4% -52.7 0.71 ± 6% perf-profile.calltrace.cycles-pp.file_update_time.xfs_file_aio_write_checks.xfs_file_dax_write.xfs_file_write_iter.__vfs_write
53.54 ± 4% -52.4 1.11 ± 5% perf-profile.calltrace.cycles-pp.xfs_file_aio_write_checks.xfs_file_dax_write.xfs_file_write_iter.__vfs_write.vfs_write
46.27 ± 5% -46.3 0.00 perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_dax_write
45.26 ± 4% -45.3 0.00 perf-profile.calltrace.cycles-pp.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks
59.15 ± 4% -33.3 25.86 ± 7% perf-profile.calltrace.cycles-pp.xfs_file_dax_write.xfs_file_write_iter.__vfs_write.vfs_write.sys_write
59.32 ± 4% -33.2 26.16 ± 7% perf-profile.calltrace.cycles-pp.xfs_file_write_iter.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
59.37 ± 4% -33.0 26.34 ± 7% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
59.66 ± 4% -32.2 27.50 ± 7% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
59.72 ± 4% -32.0 27.70 ± 7% perf-profile.calltrace.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
31.88 ± 5% -31.9 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time.file_update_time
30.25 ± 5% -30.2 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time
66.91 ± 4% -9.5 57.44 ± 7% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath
6.05 ± 7% -6.1 0.00 perf-profile.calltrace.cycles-pp.xfs_trans_alloc.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_dax_write
4.72 ± 8% -4.7 0.00 perf-profile.calltrace.cycles-pp.xfs_trans_reserve.xfs_trans_alloc.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks
31.16 ± 9% +5.6 36.80 ± 11% perf-profile.calltrace.cycles-pp.secondary_startup_64
30.32 ± 10% +5.7 36.01 ± 13% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
30.32 ± 10% +5.7 36.01 ± 13% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
30.32 ± 10% +5.7 36.01 ± 13% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
30.32 ± 10% +5.7 36.01 ± 13% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
28.86 ± 10% +6.5 35.41 ± 13% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.78 ± 12% +6.9 8.67 ± 9% perf-profile.calltrace.cycles-pp.__srcu_read_unlock.dax_iomap_actor.iomap_apply.dax_iomap_rw.xfs_file_dax_write
2.17 ± 8% +9.7 11.85 ± 8% perf-profile.calltrace.cycles-pp.__copy_user_nocache.__copy_user_flushcache._copy_from_iter_flushcache.dax_iomap_actor.iomap_apply
2.20 ± 8% +9.7 11.89 ± 8% perf-profile.calltrace.cycles-pp.__copy_user_flushcache._copy_from_iter_flushcache.dax_iomap_actor.iomap_apply.dax_iomap_rw
2.25 ± 8% +10.0 12.20 ± 8% perf-profile.calltrace.cycles-pp._copy_from_iter_flushcache.dax_iomap_actor.iomap_apply.dax_iomap_rw.xfs_file_dax_write
4.27 ± 9% +17.5 21.81 ± 8% perf-profile.calltrace.cycles-pp.dax_iomap_actor.iomap_apply.dax_iomap_rw.xfs_file_dax_write.xfs_file_write_iter
4.14 ± 10% +18.0 22.12 ± 7% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout._copy_to_iter.dax_iomap_actor.iomap_apply
4.14 ± 10% +18.0 22.15 ± 7% perf-profile.calltrace.cycles-pp.copyout._copy_to_iter.dax_iomap_actor.iomap_apply.dax_iomap_rw
4.23 ± 10% +18.3 22.52 ± 7% perf-profile.calltrace.cycles-pp._copy_to_iter.dax_iomap_actor.iomap_apply.dax_iomap_rw.xfs_file_dax_read
5.41 ± 8% +18.5 23.95 ± 8% perf-profile.calltrace.cycles-pp.iomap_apply.dax_iomap_rw.xfs_file_dax_write.xfs_file_write_iter.__vfs_write
5.43 ± 8% +18.6 24.06 ± 8% perf-profile.calltrace.cycles-pp.dax_iomap_rw.xfs_file_dax_write.xfs_file_write_iter.__vfs_write.vfs_write
4.49 ± 9% +19.2 23.67 ± 7% perf-profile.calltrace.cycles-pp.dax_iomap_actor.iomap_apply.dax_iomap_rw.xfs_file_dax_read.xfs_file_read_iter
5.59 ± 9% +20.2 25.80 ± 7% perf-profile.calltrace.cycles-pp.iomap_apply.dax_iomap_rw.xfs_file_dax_read.xfs_file_read_iter.__vfs_read
5.62 ± 9% +20.3 25.92 ± 7% perf-profile.calltrace.cycles-pp.dax_iomap_rw.xfs_file_dax_read.xfs_file_read_iter.__vfs_read.vfs_read
6.39 ± 8% +21.0 27.42 ± 6% perf-profile.calltrace.cycles-pp.xfs_file_dax_read.xfs_file_read_iter.__vfs_read.vfs_read.sys_read
6.55 ± 8% +21.1 27.66 ± 6% perf-profile.calltrace.cycles-pp.xfs_file_read_iter.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
6.62 ± 8% +21.3 27.94 ± 6% perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
6.83 ± 8% +21.9 28.77 ± 6% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath
6.88 ± 8% +22.1 28.97 ± 6% perf-profile.calltrace.cycles-pp.sys_read.entry_SYSCALL_64_fastpath
53.83 ± 4% -52.9 0.91 ± 7% perf-profile.children.cycles-pp.xfs_vn_update_time
53.44 ± 4% -52.7 0.72 ± 6% perf-profile.children.cycles-pp.file_update_time
53.56 ± 4% -52.4 1.16 ± 4% perf-profile.children.cycles-pp.xfs_file_aio_write_checks
46.71 ± 5% -46.0 0.72 ± 7% perf-profile.children.cycles-pp.__xfs_trans_commit
45.71 ± 4% -45.0 0.70 ± 8% perf-profile.children.cycles-pp.xfs_log_commit_cil
59.17 ± 4% -33.2 25.93 ± 7% perf-profile.children.cycles-pp.xfs_file_dax_write
59.33 ± 4% -33.1 26.19 ± 7% perf-profile.children.cycles-pp.xfs_file_write_iter
59.39 ± 4% -33.0 26.40 ± 7% perf-profile.children.cycles-pp.__vfs_write
59.69 ± 4% -32.1 27.56 ± 7% perf-profile.children.cycles-pp.vfs_write
59.74 ± 4% -32.0 27.76 ± 7% perf-profile.children.cycles-pp.sys_write
32.23 ± 5% -31.6 0.63 ± 6% perf-profile.children.cycles-pp._raw_spin_lock
30.51 ± 5% -30.1 0.36 ± 7% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
66.94 ± 4% -9.5 57.47 ± 7% perf-profile.children.cycles-pp.entry_SYSCALL_64_fastpath
6.14 ± 7% -6.0 0.17 ± 11% perf-profile.children.cycles-pp.xfs_trans_alloc
4.78 ± 8% -4.7 0.13 ± 10% perf-profile.children.cycles-pp.xfs_trans_reserve
31.16 ± 9% +5.6 36.79 ± 11% perf-profile.children.cycles-pp.cpuidle_enter_state
31.16 ± 9% +5.6 36.80 ± 11% perf-profile.children.cycles-pp.secondary_startup_64
31.16 ± 9% +5.6 36.80 ± 11% perf-profile.children.cycles-pp.cpu_startup_entry
31.16 ± 9% +5.6 36.80 ± 11% perf-profile.children.cycles-pp.do_idle
30.32 ± 10% +5.7 36.01 ± 13% perf-profile.children.cycles-pp.start_secondary
29.70 ± 9% +6.5 36.20 ± 11% perf-profile.children.cycles-pp.intel_idle
1.84 ± 12% +7.1 8.90 ± 9% perf-profile.children.cycles-pp.__srcu_read_unlock
2.17 ± 8% +9.7 11.86 ± 8% perf-profile.children.cycles-pp.__copy_user_nocache
2.21 ± 8% +9.8 11.96 ± 8% perf-profile.children.cycles-pp.__copy_user_flushcache
2.26 ± 8% +10.0 12.26 ± 8% perf-profile.children.cycles-pp._copy_from_iter_flushcache
4.15 ± 10% +18.0 22.16 ± 7% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
4.14 ± 10% +18.0 22.16 ± 7% perf-profile.children.cycles-pp.copyout
4.25 ± 10% +18.3 22.59 ± 7% perf-profile.children.cycles-pp._copy_to_iter
6.40 ± 8% +21.1 27.48 ± 6% perf-profile.children.cycles-pp.xfs_file_dax_read
6.55 ± 8% +21.1 27.69 ± 6% perf-profile.children.cycles-pp.xfs_file_read_iter
6.64 ± 8% +21.4 28.00 ± 6% perf-profile.children.cycles-pp.__vfs_read
6.85 ± 8% +22.0 28.84 ± 6% perf-profile.children.cycles-pp.vfs_read
6.90 ± 8% +22.1 29.04 ± 6% perf-profile.children.cycles-pp.sys_read
8.82 ± 9% +36.9 45.70 ± 8% perf-profile.children.cycles-pp.dax_iomap_actor
11.02 ± 8% +38.8 49.85 ± 7% perf-profile.children.cycles-pp.iomap_apply
11.07 ± 8% +39.0 50.12 ± 7% perf-profile.children.cycles-pp.dax_iomap_rw
30.41 ± 5% -30.0 0.36 ± 7% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
29.70 ± 9% +6.5 36.20 ± 11% perf-profile.self.cycles-pp.intel_idle
1.84 ± 12% +7.0 8.87 ± 9% perf-profile.self.cycles-pp.__srcu_read_unlock
2.17 ± 8% +9.7 11.82 ± 8% perf-profile.self.cycles-pp.__copy_user_nocache
4.13 ± 10% +17.9 22.08 ± 7% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
[Per-sample trend plots for fio.read_bw_MBps, fio.read_iops, fio.read_clat_99%_us,
 fio.write_bw_MBps, fio.write_iops, fio.write_clat_mean_us, fio.write_clat_90%_us,
 fio.write_clat_95%_us, fio.write_clat_99%_us, fio.latency_2us%, fio.latency_10us%
 and fio.latency_20us%, comparing the two commits sample by sample]
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
View attachment "config-4.15.0-rc3-00022-g8e4a22b" of type "text/plain" (163796 bytes)
View attachment "job-script" of type "text/plain" (7393 bytes)
View attachment "job.yaml" of type "text/plain" (4951 bytes)
View attachment "reproduce" of type "text/plain" (860 bytes)