Message-ID: <20180620065243.GD11011@yexl-desktop>
Date: Wed, 20 Jun 2018 14:52:43 +0800
From: kernel test robot <xiaolong.ye@...el.com>
To: "J. Bruce Fields" <bfields@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Stephen Rothwell <sfr@...b.auug.org.au>, lkp@...org
Subject: [lkp-robot] [nfsd4] 517dc52baa: fsmark.files_per_sec 32.4% improvement
Greetings,
FYI, we noticed a 32.4% improvement in fsmark.files_per_sec due to commit:
commit: 517dc52baa2a508c82f68bbc7219b48169e6b29f ("nfsd4: shortern default lease period")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
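For context, a shorter default lease also means a shorter grace period at server startup, so clients wait less before normal operation resumes; notably, the ~47s drop in elapsed time reported below is close to the reduction in the grace period, which may explain most of the gain. A minimal sketch of the change's magnitude, assuming old/new defaults of 90s and 45s (these values are not stated in this report; they are taken from the commit itself, so treat them as an assumption):

```shell
# Assumed defaults: 90s before the patch, 45s after (check the commit
# for the authoritative values).
old=90
new=45
echo "lease shortened by $(( (old - new) * 100 / old ))%"
# On a running server, the active lease period is exposed via the nfsd
# procfs knob (requires the nfsd module to be loaded):
#   cat /proc/fs/nfsd/nfs4leasetime
```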
in testcase: fsmark
on test machine: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 64G memory
with following parameters:
iterations: 1x
nr_threads: 1t
disk: 1BRD_48G
fs: f2fs
fs2: nfsv4
filesize: 4M
test_size: 40G
sync_method: fsyncBeforeClose
cpufreq_governor: performance
test-description: fs_mark is a file system benchmark that tests synchronous write workloads, such as those of mail servers.
test-url: https://sourceforge.net/projects/fsmark/
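The parameters above translate to roughly the following fs_mark invocation. This is a sketch only: the flag names follow fs_mark's usual options, the mount path is illustrative, and the attached job script is authoritative.

```shell
# 40G of data written as 4M files by one thread, fsync() before close.
FILESIZE=$((4 * 1024 * 1024))            # 4M per file
TESTSIZE=$((40 * 1024 * 1024 * 1024))    # 40G total
NFILES=$((TESTSIZE / FILESIZE))
echo "$NFILES files of 4M each"
# Illustrative invocation (-S 1 is fs_mark's fsync-before-close mode):
#   fs_mark -d /mnt/nfs -t 1 -s $FILESIZE -n $NFILES -S 1
```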
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-7/performance/1BRD_48G/4M/nfsv4/f2fs/1x/x86_64-rhel-7.2/1t/debian-x86_64-2016-08-31.cgz/fsyncBeforeClose/ivb44/40G/fsmark
commit:
c2993a1d7d ("nfsd4: extend reclaim period for reclaiming clients")
517dc52baa ("nfsd4: shortern default lease period")
c2993a1d7d6687fd 517dc52baa2a508c82f68bbc72
---------------- --------------------------
%stddev %change %stddev
\ | \
53.60 +32.4% 70.95 fsmark.files_per_sec
191.89 -24.4% 145.16 fsmark.time.elapsed_time
191.89 -24.4% 145.16 fsmark.time.elapsed_time.max
17.75 ± 2% +31.0% 23.25 ± 3% fsmark.time.percent_of_cpu_this_job_got
1.43 ± 2% +0.4 1.85 ± 3% mpstat.cpu.sys%
0.03 ± 3% +0.0 0.04 ± 3% mpstat.cpu.usr%
1333968 ± 3% -24.4% 1008280 ± 2% softirqs.SCHED
4580860 ± 3% -25.0% 3433796 ± 4% softirqs.TIMER
49621514 ± 50% -33.8% 32838257 ± 4% cpuidle.C3.time
8.87e+09 ± 3% -25.7% 6.588e+09 cpuidle.C6.time
9796946 ± 3% -24.8% 7369851 cpuidle.C6.usage
212766 ± 3% +33.3% 283568 vmstat.io.bo
13605317 +27.6% 17354458 vmstat.memory.cache
41139824 ± 2% -15.6% 34707711 vmstat.memory.free
0.00 +1e+102% 1.00 vmstat.procs.r
16158 ± 9% +33.0% 21495 ± 12% vmstat.system.cs
28279 ± 10% +23.0% 34796 ± 4% meminfo.Active(file)
13485862 +27.9% 17253726 meminfo.Cached
20655 ± 10% +24.7% 25748 meminfo.Dirty
12246598 +30.7% 16008540 meminfo.Inactive
12237146 +30.7% 15999087 meminfo.Inactive(file)
41246576 ± 2% -15.3% 34917557 meminfo.MemFree
123641 ± 2% +15.8% 143144 meminfo.SReclaimable
233101 +10.4% 257273 meminfo.Slab
13275 ± 28% +54.6% 20527 ± 15% numa-meminfo.node0.Active(file)
9394 ± 33% +76.6% 16592 ± 14% numa-meminfo.node0.Dirty
20060196 ± 8% -18.6% 16336481 ± 10% numa-meminfo.node0.MemFree
5768180 ± 2% +67.5% 9661137 ± 22% numa-meminfo.node1.FilePages
5162181 ± 3% +74.9% 9029558 ± 23% numa-meminfo.node1.Inactive
5158148 ± 3% +75.0% 9027345 ± 23% numa-meminfo.node1.Inactive(file)
21163215 ± 5% -12.3% 18564891 ± 6% numa-meminfo.node1.MemFree
367.00 ± 27% +82.6% 670.25 ± 40% numa-meminfo.node1.NFS_Unstable
624.00 ± 21% +95.3% 1218 ± 35% numa-meminfo.node1.Writeback
2.236e+09 ± 6% -21.0% 1.767e+09 ± 12% perf-stat.branch-misses
3.553e+09 ± 6% -12.3% 3.115e+09 ± 10% perf-stat.cache-misses
9.503e+09 ± 4% -15.0% 8.074e+09 ± 9% perf-stat.cache-references
8701 ± 13% -38.3% 5367 ± 26% perf-stat.cpu-migrations
1.037e+08 ± 5% -20.6% 82303955 ± 7% perf-stat.dTLB-store-misses
86.00 -1.2 84.80 perf-stat.iTLB-load-miss-rate%
1.33e+08 ± 5% -20.2% 1.062e+08 ± 11% perf-stat.iTLB-load-misses
543566 ± 3% -24.5% 410527 perf-stat.minor-faults
543567 ± 3% -24.5% 410533 perf-stat.page-faults
98.50 +15.0% 113.25 turbostat.Avg_MHz
2081 +10.2% 2292 ± 3% turbostat.Bzy_MHz
0.59 +0.1 0.73 ± 2% turbostat.C1%
0.15 ± 2% +0.1 0.20 ± 6% turbostat.C1E%
9795281 ± 3% -24.8% 7368291 turbostat.C6
58.70 +9.0% 64.01 ± 3% turbostat.CorWatt
9631901 ± 3% -24.9% 7237299 turbostat.IRQ
4.29 ± 3% -26.1% 3.17 ± 5% turbostat.Pkg%pc6
87.05 +6.4% 92.61 ± 2% turbostat.PkgWatt
8.29 +2.9% 8.54 turbostat.RAMWatt
3296 ± 29% +55.4% 5124 ± 16% numa-vmstat.node0.nr_active_file
2342 ± 33% +76.4% 4131 ± 15% numa-vmstat.node0.nr_dirty
5021740 ± 8% -18.7% 4085154 ± 9% numa-vmstat.node0.nr_free_pages
764.00 ± 57% +124.9% 1718 ± 28% numa-vmstat.node0.nr_vmscan_immediate_reclaim
3297 ± 29% +55.4% 5124 ± 16% numa-vmstat.node0.nr_zone_active_file
2472 ± 31% +72.1% 4254 ± 14% numa-vmstat.node0.nr_zone_write_pending
2418556 ± 8% +51.7% 3669187 ± 18% numa-vmstat.node1.nr_dirtied
1442200 ± 2% +67.4% 2413905 ± 22% numa-vmstat.node1.nr_file_pages
5299408 ± 5% -12.3% 4649246 ± 6% numa-vmstat.node1.nr_free_pages
1289719 ± 3% +74.9% 2255484 ± 23% numa-vmstat.node1.nr_inactive_file
85499 ± 21% +39.3% 119063 ± 22% numa-vmstat.node1.nr_indirectly_reclaimable
92.00 ± 23% +82.1% 167.50 ± 33% numa-vmstat.node1.nr_unstable
136.50 ± 25% +92.3% 262.50 ± 25% numa-vmstat.node1.nr_writeback
2415645 ± 8% +51.8% 3666735 ± 18% numa-vmstat.node1.nr_written
1289715 ± 3% +74.9% 2255489 ± 23% numa-vmstat.node1.nr_zone_inactive_file
7109 ± 10% +22.6% 8718 ± 4% proc-vmstat.nr_active_file
5188 ± 10% +24.3% 6450 proc-vmstat.nr_dirty
1326117 -4.8% 1262303 proc-vmstat.nr_dirty_background_threshold
2655481 -4.8% 2527699 proc-vmstat.nr_dirty_threshold
3372021 +27.9% 4311617 proc-vmstat.nr_file_pages
10302256 ± 2% -15.3% 8722920 proc-vmstat.nr_free_pages
3059627 +30.7% 3997741 proc-vmstat.nr_inactive_file
191536 ± 2% +31.6% 252107 proc-vmstat.nr_indirectly_reclaimable
3508 -6.4% 3283 proc-vmstat.nr_shmem
30984 ± 2% +15.6% 35811 proc-vmstat.nr_slab_reclaimable
27320 +4.5% 28553 ± 2% proc-vmstat.nr_slab_unreclaimable
275.25 ± 13% +63.5% 450.00 ± 20% proc-vmstat.nr_writeback
7109 ± 10% +22.6% 8716 ± 4% proc-vmstat.nr_zone_active_file
3059632 +30.7% 3997758 proc-vmstat.nr_zone_inactive_file
5414 ± 9% +25.8% 6812 proc-vmstat.nr_zone_write_pending
561847 ± 3% -24.2% 425887 ± 2% proc-vmstat.pgfault
951.53 ± 4% -15.3% 805.83 ± 7% sched_debug.cfs_rq:/.exec_clock.avg
195.00 ± 3% +37.1% 267.35 ± 7% sched_debug.cfs_rq:/.load_avg.avg
0.78 -11.5% 0.69 ± 2% sched_debug.cfs_rq:/.nr_spread_over.avg
0.75 -11.1% 0.67 sched_debug.cfs_rq:/.nr_spread_over.min
129.41 ± 10% -23.9% 98.49 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.stddev
187.51 ± 4% +23.1% 230.86 sched_debug.cfs_rq:/.util_avg.avg
115822 -26.1% 85630 sched_debug.cpu.clock.avg
115824 -26.1% 85633 sched_debug.cpu.clock.max
115818 -26.1% 85627 sched_debug.cpu.clock.min
115822 -26.1% 85630 sched_debug.cpu.clock_task.avg
115824 -26.1% 85633 sched_debug.cpu.clock_task.max
115818 -26.1% 85627 sched_debug.cpu.clock_task.min
16.83 ± 13% +118.0% 36.67 ± 52% sched_debug.cpu.cpu_load[0].avg
18.88 ± 9% +71.7% 32.43 ± 20% sched_debug.cpu.cpu_load[1].avg
498.88 ± 9% +83.7% 916.50 ± 39% sched_debug.cpu.cpu_load[1].max
82.58 ± 7% +70.8% 141.02 ± 33% sched_debug.cpu.cpu_load[1].stddev
4240 -19.7% 3403 sched_debug.cpu.curr->pid.max
95563 -31.3% 65668 sched_debug.cpu.nr_load_updates.avg
100959 -30.6% 70040 sched_debug.cpu.nr_load_updates.max
93175 -32.1% 63230 sched_debug.cpu.nr_load_updates.min
600.39 ± 4% -16.6% 500.69 ± 2% sched_debug.cpu.ttwu_local.avg
115820 -26.1% 85628 sched_debug.cpu_clk
115820 -26.1% 85628 sched_debug.ktime
0.05 ± 10% +48.4% 0.08 ± 20% sched_debug.rt_rq:/.rt_time.avg
2.39 ± 10% +48.9% 3.56 ± 20% sched_debug.rt_rq:/.rt_time.max
0.34 ± 10% +48.9% 0.51 ± 20% sched_debug.rt_rq:/.rt_time.stddev
116302 -26.0% 86097 sched_debug.sched_clk
1036 ± 9% +27.7% 1322 ± 8% slabinfo.biovec-64.active_objs
1036 ± 9% +27.7% 1322 ± 8% slabinfo.biovec-64.num_objs
712.50 ± 8% +27.5% 908.50 ± 10% slabinfo.ext4_extent_status.active_objs
712.50 ± 8% +27.5% 908.50 ± 10% slabinfo.ext4_extent_status.num_objs
2640 ± 8% +25.5% 3313 ± 6% slabinfo.ext4_io_end.active_objs
2674 ± 8% +25.4% 3354 ± 6% slabinfo.ext4_io_end.num_objs
2709 ± 9% +34.4% 3641 ± 5% slabinfo.f2fs_extent_tree.active_objs
2750 ± 9% +32.8% 3651 ± 5% slabinfo.f2fs_extent_tree.num_objs
2255 ± 5% +31.6% 2969 ± 8% slabinfo.f2fs_inode_cache.active_objs
2286 ± 4% +31.2% 3000 ± 7% slabinfo.f2fs_inode_cache.num_objs
590.00 ± 8% +24.5% 734.75 ± 3% slabinfo.file_lock_cache.active_objs
590.00 ± 8% +24.5% 734.75 ± 3% slabinfo.file_lock_cache.num_objs
7627 +12.3% 8568 ± 5% slabinfo.free_nid.active_objs
7627 +12.3% 8568 ± 5% slabinfo.free_nid.num_objs
6251 ± 5% +26.0% 7875 ± 5% slabinfo.fscrypt_info.active_objs
6251 ± 5% +26.0% 7875 ± 5% slabinfo.fscrypt_info.num_objs
13894 ± 6% +16.3% 16158 slabinfo.kmalloc-128.active_objs
13901 ± 6% +16.3% 16161 slabinfo.kmalloc-128.num_objs
2184 ± 2% +30.5% 2851 slabinfo.nfs_inode_cache.active_objs
2209 ± 3% +30.1% 2874 ± 2% slabinfo.nfs_inode_cache.num_objs
636.50 ± 14% +46.5% 932.25 ± 24% slabinfo.nfsd4_stateids.active_objs
636.50 ± 14% +46.5% 932.25 ± 24% slabinfo.nfsd4_stateids.num_objs
10514 ± 8% +25.0% 13138 ± 2% slabinfo.nfsd_drc.active_objs
10514 ± 8% +25.0% 13138 ± 2% slabinfo.nfsd_drc.num_objs
984.00 ± 11% +38.0% 1358 ± 22% slabinfo.numa_policy.active_objs
984.00 ± 11% +38.0% 1358 ± 22% slabinfo.numa_policy.num_objs
2062 ± 15% +17.2% 2417 ± 8% slabinfo.pool_workqueue.active_objs
2062 ± 15% +18.1% 2435 ± 7% slabinfo.pool_workqueue.num_objs
121248 ± 4% +25.6% 152261 slabinfo.radix_tree_node.active_objs
2173 ± 4% +25.3% 2722 slabinfo.radix_tree_node.active_slabs
121725 ± 4% +25.3% 152514 slabinfo.radix_tree_node.num_objs
2173 ± 4% +25.3% 2722 slabinfo.radix_tree_node.num_slabs
2126 ± 6% +15.2% 2449 ± 4% slabinfo.sock_inode_cache.active_objs
2126 ± 6% +15.2% 2449 ± 4% slabinfo.sock_inode_cache.num_objs
18605 +19.2% 22180 ± 3% slabinfo.vm_area_struct.active_objs
18668 +18.9% 22193 ± 3% slabinfo.vm_area_struct.num_objs
7.75 ± 5% -22.6% 6.00 nfsstat.Client.nfs.v3.commit.percent
7.00 +14.3% 8.00 nfsstat.Client.nfs.v3.getattr.percent
3.00 -33.3% 2.00 nfsstat.Client.nfs.v3.write
8.00 -25.0% 6.00 nfsstat.Client.nfs.v3.write.percent
4.50 ± 11% -33.3% 3.00 nfsstat.Client.nfs.v4.access.percent
2546 ± 8% +24.2% 3164 nfsstat.Client.nfs.v4.close
5.00 +40.0% 7.00 nfsstat.Client.nfs.v4.close.percent
2546 ± 8% +24.3% 3164 nfsstat.Client.nfs.v4.commit
5.00 +40.0% 7.00 nfsstat.Client.nfs.v4.commit.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v4.confirm.percent
3.00 -33.3% 2.00 nfsstat.Client.nfs.v4.getattr.percent
2551 ± 8% +24.2% 3169 nfsstat.Client.nfs.v4.lookup
1.00 -100.0% 0.00 nfsstat.Client.nfs.v4.lookup_root.percent
1.00 -100.0% 0.00 nfsstat.Client.nfs.v4.null.percent
2558 ± 8% +24.1% 3174 nfsstat.Client.nfs.v4.open
17.25 ± 2% -14.5% 14.75 ± 2% nfsstat.Client.nfs.v4.open.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v4.pathconf.percent
6.00 -25.0% 4.50 ± 11% nfsstat.Client.nfs.v4.server_caps.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v4.setclntid.percent
2.00 -50.0% 1.00 nfsstat.Client.nfs.v4.statfs.percent
10189 ± 8% +24.3% 12662 nfsstat.Client.nfs.v4.write
22.50 ± 3% +26.7% 28.50 nfsstat.Client.nfs.v4.write.percent
20460 ± 8% +24.1% 25401 nfsstat.Client.rpc.authrefrsh
20460 ± 8% +24.1% 25401 nfsstat.Client.rpc.calls
20422 ± 8% +24.2% 25365 nfsstat.Server.nfs.v4.compound
1.00 -100.0% 0.00 nfsstat.Server.nfs.v4.null.percent
2552 ± 8% +24.2% 3170 nfsstat.Server.nfs.v4.operations.access
2546 ± 8% +24.2% 3164 nfsstat.Server.nfs.v4.operations.close
2546 ± 8% +24.2% 3164 nfsstat.Server.nfs.v4.operations.commit
17858 ± 8% +24.2% 22185 nfsstat.Server.nfs.v4.operations.getattr
5101 ± 8% +24.2% 6337 nfsstat.Server.nfs.v4.operations.getfh
2551 ± 8% +24.2% 3169 nfsstat.Server.nfs.v4.operations.lookup
4.00 -25.0% 3.00 nfsstat.Server.nfs.v4.operations.lookup.percent
2558 ± 8% +24.1% 3174 nfsstat.Server.nfs.v4.operations.open
6.00 -16.7% 5.00 nfsstat.Server.nfs.v4.operations.open.percent
20417 ± 8% +24.2% 25359 nfsstat.Server.nfs.v4.operations.putfh
1.00 -100.0% 0.00 nfsstat.Server.nfs.v4.operations.setcltid.percent
1.00 -100.0% 0.00 nfsstat.Server.nfs.v4.operations.setcltidconf.percent
10189 ± 8% +24.3% 12662 nfsstat.Server.nfs.v4.operations.write
6.25 ± 6% +36.0% 8.50 ± 5% nfsstat.Server.nfs.v4.operations.write.percent
20423 ± 8% +24.2% 25366 nfsstat.Server.packet.packets
20423 ± 8% +24.2% 25366 nfsstat.Server.packet.tcp
10194 ± 8% +24.3% 12667 nfsstat.Server.reply.cache.misses
10229 ± 8% +24.1% 12699 nfsstat.Server.reply.cache.nocache
20423 ± 8% +24.2% 25366 nfsstat.Server.rpc.calls
19.63 ± 6% -1.8 17.83 ± 8% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
10.22 ± 10% -1.8 8.43 ± 13% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
7.83 ± 11% -1.5 6.33 ± 14% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
4.69 ± 14% -1.0 3.70 ± 19% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
4.20 ± 14% -0.9 3.29 ± 18% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
1.23 ± 20% -0.3 0.89 ± 23% perf-profile.calltrace.cycles-pp.rcu_check_callbacks.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.43 ± 8% -0.3 1.16 ± 10% perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.04 ± 7% -0.1 0.90 ± 8% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.08 ± 4% -0.1 0.94 ± 7% perf-profile.calltrace.cycles-pp.run_timer_softirq.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.19 ± 9% +0.2 1.38 ± 7% perf-profile.calltrace.cycles-pp.__next_timer_interrupt.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select
1.90 ± 5% +0.2 2.14 ± 6% perf-profile.calltrace.cycles-pp.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
0.99 ± 11% +0.2 1.23 ± 11% perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
0.99 ± 11% +0.2 1.23 ± 11% perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
0.93 ± 7% +0.3 1.18 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_trylock.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt
1.00 ± 10% +0.3 1.26 ± 12% perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.printk
1.00 ± 10% +0.3 1.26 ± 11% perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.printk.irq_work_run_list
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter_state
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.irq_work_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter_state.do_idle
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.printk.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.vprintk_emit.printk.irq_work_run_list.irq_work_run.smp_irq_work_interrupt
1.03 ± 10% +0.3 1.29 ± 11% perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.printk.irq_work_run_list.irq_work_run
2.88 ± 3% +0.4 3.29 ± 4% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
3.66 ± 3% +0.4 4.11 ± 3% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
3.05 ± 2% +0.5 3.54 ± 5% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
91.67 +0.7 92.33 perf-profile.calltrace.cycles-pp.secondary_startup_64
7.46 ± 5% +1.0 8.46 ± 4% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
56.78 +1.7 58.52 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
10.53 ± 10% -1.9 8.60 ± 13% perf-profile.children.cycles-pp.hrtimer_interrupt
8.14 ± 11% -1.6 6.51 ± 14% perf-profile.children.cycles-pp.__hrtimer_run_queues
4.88 ± 13% -1.1 3.79 ± 18% perf-profile.children.cycles-pp.tick_sched_timer
4.32 ± 13% -1.0 3.35 ± 18% perf-profile.children.cycles-pp.tick_sched_handle
1.82 ± 12% -0.4 1.44 ± 20% perf-profile.children.cycles-pp.scheduler_tick
1.30 ± 17% -0.4 0.93 ± 22% perf-profile.children.cycles-pp.rcu_check_callbacks
1.54 ± 9% -0.3 1.25 ± 9% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
1.25 ± 6% -0.2 1.03 ± 13% perf-profile.children.cycles-pp.ktime_get
1.15 ± 7% -0.2 0.97 ± 6% perf-profile.children.cycles-pp.clockevents_program_event
0.88 ± 6% -0.2 0.70 ± 6% perf-profile.children.cycles-pp.native_write_msr
1.17 ± 4% -0.2 1.01 ± 6% perf-profile.children.cycles-pp.run_timer_softirq
0.56 ± 9% -0.1 0.42 ± 18% perf-profile.children.cycles-pp.enqueue_hrtimer
0.16 ± 23% -0.1 0.08 ± 10% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.29 ± 15% -0.1 0.21 ± 29% perf-profile.children.cycles-pp.update_rq_clock
0.16 ± 18% -0.1 0.10 ± 27% perf-profile.children.cycles-pp.raise_softirq
0.08 ± 10% -0.0 0.04 ± 59% perf-profile.children.cycles-pp.calc_global_load_tick
0.22 ± 8% +0.1 0.27 ± 14% perf-profile.children.cycles-pp.pm_qos_read_value
1.42 ± 7% +0.2 1.59 ± 8% perf-profile.children.cycles-pp.__next_timer_interrupt
0.97 ± 6% +0.2 1.21 ± 3% perf-profile.children.cycles-pp._raw_spin_trylock
3.11 ± 5% +0.4 3.49 ± 4% perf-profile.children.cycles-pp.tick_nohz_next_event
3.81 ± 3% +0.4 4.23 ± 3% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
3.17 +0.5 3.62 ± 5% perf-profile.children.cycles-pp.rebalance_domains
91.87 +0.7 92.53 perf-profile.children.cycles-pp.do_idle
91.67 +0.7 92.33 perf-profile.children.cycles-pp.secondary_startup_64
91.67 +0.7 92.33 perf-profile.children.cycles-pp.cpu_startup_entry
7.71 ± 5% +0.9 8.65 ± 4% perf-profile.children.cycles-pp.menu_select
0.89 ± 16% -0.3 0.58 ± 23% perf-profile.self.cycles-pp.rcu_check_callbacks
0.88 ± 6% -0.2 0.70 ± 6% perf-profile.self.cycles-pp.native_write_msr
0.91 ± 6% -0.1 0.78 ± 5% perf-profile.self.cycles-pp.run_timer_softirq
0.42 ± 15% -0.1 0.29 ± 21% perf-profile.self.cycles-pp.timerqueue_add
0.39 ± 7% -0.1 0.28 ± 17% perf-profile.self.cycles-pp.scheduler_tick
0.29 ± 18% -0.1 0.23 ± 6% perf-profile.self.cycles-pp.clockevents_program_event
0.16 ± 18% -0.1 0.10 ± 27% perf-profile.self.cycles-pp.raise_softirq
0.08 ± 10% -0.0 0.04 ± 59% perf-profile.self.cycles-pp.calc_global_load_tick
0.10 ± 13% -0.0 0.06 ± 20% perf-profile.self.cycles-pp.rcu_bh_qs
0.07 ± 24% +0.0 0.11 ± 15% perf-profile.self.cycles-pp.rcu_nmi_enter
0.22 ± 8% +0.1 0.27 ± 14% perf-profile.self.cycles-pp.pm_qos_read_value
0.38 ± 3% +0.1 0.47 ± 6% perf-profile.self.cycles-pp.rebalance_domains
0.59 ± 11% +0.1 0.69 ± 5% perf-profile.self.cycles-pp.tick_nohz_next_event
0.78 ± 6% +0.1 0.92 ± 4% perf-profile.self.cycles-pp.__next_timer_interrupt
0.97 ± 6% +0.2 1.21 ± 3% perf-profile.self.cycles-pp._raw_spin_trylock
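As a sanity check on the headline numbers in the table above (a trivial recomputation of the reported figures, not new data):

```shell
# files_per_sec: 53.60 -> 70.95; elapsed_time: 191.89s -> 145.16s.
awk 'BEGIN {
    printf "throughput: +%.1f%%\n", (70.95 - 53.60) / 53.60 * 100
    printf "elapsed:    %.1f%%\n", (145.16 - 191.89) / 191.89 * 100
}'
```

The two numbers are consistent: a 24.4% shorter run over the same 40G of data corresponds to roughly 1/0.756 = 1.32x the per-second throughput.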
fsmark.files_per_sec
74 +-+--------------------------------------------------------------------+
72 +-O O O O O O O O O |
O O O O O |
70 +-+ O O O |
68 +-+ |
66 +-+ |
64 +-+ |
| |
62 +-+ |
60 +-+ |
58 +-+ |
56 +-+ |
|.+..+. .+..+. .+. .+.. .+. .+. |
54 +-+ +.+..+ +.+..+ +..+ + +..+.+.+..+.+.+. +..+.+.+..+.|
52 +-+--------------------------------------------------------------------+
fsmark.time.elapsed_time
200 +-+-------------------------------------------------------------------+
| .+.. |
190 +-+ .+.+..+. .+.+.+.. .+.+.. .+. .+.+.+..+.+.+..+. .+..+.+ +.|
| +..+ +.+. + + +. + |
| |
180 +-+ |
| |
170 +-+ |
| |
160 +-+ |
| |
| |
150 +-+ O O O O O O |
O O O O O O O O O |
140 +-O----------------O--------------------------------------------------+
fsmark.time.elapsed_time.max
200 +-+-------------------------------------------------------------------+
| .+.. |
190 +-+ .+.+..+. .+.+.+.. .+.+.. .+. .+.+.+..+.+.+..+. .+..+.+ +.|
| +..+ +.+. + + +. + |
| |
180 +-+ |
| |
170 +-+ |
| |
160 +-+ |
| |
| |
150 +-+ O O O O O O |
O O O O O O O O O |
140 +-O----------------O--------------------------------------------------+
nfsstat.Client.nfs.v4.write.percent
29 O-+--O-O----O-O------O--O-O-O----O--O----------------------------------+
| |
28 +-O O O O O O |
27 +-+ |
| |
26 +-+ |
| |
25 +-+ |
| |
24 +-+ |
23 +-+ :|
| :|
22 +-+ +.+..+ +.+..+ +..+ + +..+.+.+..+.+.+.. +..+.+.+..+ |
|+ + + + + + + .. + + + |
21 +-+--------------------------------------------------------------------+
nfsstat.Client.nfs.v4.commit.percent
7 O-O--O-O-O--O-O-O--O-O-O--O-O-O--O-O-O--------------------------------+
| |
| |
6.5 +-+ |
| |
| |
| |
6 +-+ |
| |
| |
5.5 +-+ |
| |
| |
| |
5 +-+-------------------------------------------------------------------+
nfsstat.Client.nfs.v4.open.percent
18 +-+------------------------------------------------------------------+
|: : : : : : : + : : : + : : |
17.5 +-+ : : : : : : + : : : + : : |
17 +-+ +.+..+ +..+.+ + + +.+..+.+.+..+.+ +.+ +..+.|
| |
16.5 +-+ |
| |
16 +-+ |
| |
15.5 +-+ |
15 O-O O O O O O O O O O O O O |
| |
14.5 +-+ |
| |
14 +-+--O--------O--------------------O---------------------------------+
nfsstat.Client.nfs.v4.close.percent
7 O-O--O-O-O--O-O-O--O-O-O--O-O-O--O-O-O--------------------------------+
| |
| |
6.5 +-+ |
| |
| |
| |
6 +-+ |
| |
| |
5.5 +-+ |
| |
| |
| |
5 +-+-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
View attachment "config-4.17.0-rc4-00028-g517dc52" of type "text/plain" (164390 bytes)
View attachment "job-script" of type "text/plain" (7666 bytes)
View attachment "job.yaml" of type "text/plain" (5220 bytes)
View attachment "reproduce" of type "text/plain" (874 bytes)