Message-ID: <87fv2kjllw.fsf@yhuang-dev.intel.com>
Date: Sat, 12 Sep 2015 12:40:59 +0800
From: kernel test robot <ying.huang@...el.com>
To: Jaegeuk Kim <jaegeuk@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>
Subject: [lkp] [f2fs] 2286c0205d: -7.2% fileio.requests_per_sec
FYI, we noticed the changes below on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 2286c0205d1478d4bece6e733cbaf15535fba09d ("f2fs: fix to cover lock_op for update_inode_page")
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/period/nr_threads/disk/fs/size/filenum/rwmode/iomode:
lkp-ws02/fileio/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/600s/100%/1HDD/f2fs/64G/1024f/rndwr/sync
commit:
268344664603706b6f156548f9d7482665222f87
2286c0205d1478d4bece6e733cbaf15535fba09d
268344664603706b 2286c0205d1478d4bece6e733c
---------------- --------------------------
         %stddev      %change          %stddev
             \           |                \
699.08 ± 0% -7.2% 649.00 ± 5% fileio.requests_per_sec
19781972 ± 0% -8.7% 18061954 ± 5% fileio.time.file_system_outputs
65887 ± 0% +234.3% 220273 ± 3% fileio.time.involuntary_context_switches
15.00 ± 0% +283.3% 57.50 ± 6% fileio.time.percent_of_cpu_this_job_got
93.91 ± 0% +273.4% 350.70 ± 6% fileio.time.system_time
11889 ± 0% -29.3% 8410 ± 6% uptime.idle
15868 ± 0% -7.0% 14756 ± 5% vmstat.io.bo
22.75 ± 1% -17.6% 18.75 ± 10% vmstat.procs.b
65887 ± 0% +234.3% 220273 ± 3% time.involuntary_context_switches
15.00 ± 0% +283.3% 57.50 ± 6% time.percent_of_cpu_this_job_got
93.91 ± 0% +273.4% 350.70 ± 6% time.system_time
576945 ± 6% +62.3% 936106 ± 3% numa-numastat.node0.local_node
576946 ± 6% +62.3% 936108 ± 3% numa-numastat.node0.numa_hit
2341596 ± 1% -21.9% 1828172 ± 4% numa-numastat.node1.local_node
2341598 ± 1% -21.9% 1828174 ± 4% numa-numastat.node1.numa_hit
665.00 ± 0% -69.5% 202.50 ± 0% slabinfo.kmalloc-8192.active_objs
171.25 ± 0% -70.7% 50.25 ± 0% slabinfo.kmalloc-8192.active_slabs
686.50 ± 0% -70.5% 202.50 ± 0% slabinfo.kmalloc-8192.num_objs
171.25 ± 0% -70.7% 50.25 ± 0% slabinfo.kmalloc-8192.num_slabs
404426 ± 0% -10.0% 363943 ± 5% softirqs.BLOCK
72716 ± 1% +50.2% 109219 ± 4% softirqs.RCU
107842 ± 1% +57.8% 170224 ± 2% softirqs.SCHED
170570 ± 0% +71.5% 292500 ± 3% softirqs.TIMER
478.50 ± 4% +11.7% 534.25 ± 1% proc-vmstat.nr_alloc_batch
1325 ± 7% +72.8% 2290 ± 18% proc-vmstat.numa_hint_faults
1272 ± 8% +37.5% 1749 ± 16% proc-vmstat.numa_hint_faults_local
1430 ± 7% +69.2% 2421 ± 17% proc-vmstat.numa_pte_updates
80007 ± 0% -16.1% 67115 ± 10% proc-vmstat.pgactivate
217896 ± 6% +63.9% 357161 ± 4% proc-vmstat.pgalloc_dma32
1.16 ± 0% +144.4% 2.83 ± 6% turbostat.%Busy
22.00 ± 0% +227.3% 72.00 ± 6% turbostat.Avg_MHz
1883 ± 0% +34.6% 2535 ± 0% turbostat.Bzy_MHz
4.18 ± 4% +35.9% 5.68 ± 7% turbostat.CPU%c1
43.17 ± 1% -47.9% 22.49 ± 13% turbostat.CPU%c3
51.49 ± 0% +34.0% 68.99 ± 5% turbostat.CPU%c6
51565242 ± 21% +69.2% 87229163 ± 5% cpuidle.C1-NHM.time
170174 ± 2% +247.0% 590538 ± 4% cpuidle.C1-NHM.usage
1.3e+08 ± 5% -27.1% 94768923 ± 16% cpuidle.C1E-NHM.time
3.608e+09 ± 1% -49.6% 1.82e+09 ± 12% cpuidle.C3-NHM.time
1240566 ± 0% -40.9% 733339 ± 10% cpuidle.C3-NHM.usage
1.095e+10 ± 0% +11.6% 1.222e+10 ± 3% cpuidle.C6-NHM.time
849672 ± 0% +61.4% 1371364 ± 2% cpuidle.C6-NHM.usage
258.50 ± 4% +25.0% 323.25 ± 13% cpuidle.POLL.usage
101334 ± 5% +165.4% 268986 ± 5% numa-vmstat.node0.nr_file_pages
73777 ± 7% +228.0% 242006 ± 3% numa-vmstat.node0.nr_inactive_file
6493 ± 1% +68.5% 10942 ± 4% numa-vmstat.node0.nr_slab_reclaimable
514529 ± 3% +36.2% 700771 ± 5% numa-vmstat.node0.numa_hit
453701 ± 3% +41.0% 639782 ± 5% numa-vmstat.node0.numa_local
855116 ± 0% -25.0% 641216 ± 5% numa-vmstat.node1.nr_file_pages
1158679 ± 0% +18.8% 1376964 ± 2% numa-vmstat.node1.nr_free_pages
807542 ± 0% -26.1% 596561 ± 4% numa-vmstat.node1.nr_inactive_file
25802 ± 0% -20.3% 20565 ± 4% numa-vmstat.node1.nr_slab_reclaimable
1372279 ± 1% -17.6% 1130200 ± 2% numa-vmstat.node1.numa_hit
1368512 ± 1% -17.7% 1126414 ± 2% numa-vmstat.node1.numa_local
405342 ± 5% +165.4% 1075959 ± 5% numa-meminfo.node0.FilePages
301627 ± 7% +223.1% 974619 ± 3% numa-meminfo.node0.Inactive
295111 ± 7% +228.0% 968042 ± 3% numa-meminfo.node0.Inactive(file)
532154 ± 2% +130.0% 1224000 ± 5% numa-meminfo.node0.MemUsed
25976 ± 1% +68.5% 43769 ± 4% numa-meminfo.node0.SReclaimable
52874 ± 12% +35.5% 71658 ± 12% numa-meminfo.node0.Slab
3420463 ± 0% -25.0% 2564919 ± 5% numa-meminfo.node1.FilePages
3232635 ± 0% -26.1% 2388706 ± 4% numa-meminfo.node1.Inactive
3230169 ± 0% -26.1% 2386296 ± 4% numa-meminfo.node1.Inactive(file)
4634679 ± 0% +18.8% 5507699 ± 2% numa-meminfo.node1.MemFree
3604024 ± 0% -24.2% 2730987 ± 5% numa-meminfo.node1.MemUsed
103213 ± 0% -20.3% 82262 ± 4% numa-meminfo.node1.SReclaimable
135440 ± 5% -14.8% 115352 ± 8% numa-meminfo.node1.Slab
43978 ±109% -89.8% 4494 ±173% latency_stats.avg.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_get_page2_descriptor.[ses].ses_get_power_status.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013
10579 ± 71% -100.0% 0.00 ± -1% latency_stats.avg.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 15476 ± 1% latency_stats.avg.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_inode.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
129.00 ± 17% +6922.7% 9059 ± 10% latency_stats.hits.futex_wait_queue_me.futex_wait.do_futex.SyS_futex.entry_SYSCALL_64_fastpath
131013 ± 4% +288.9% 509526 ± 3% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
11058 ± 70% -100.0% 0.00 ± -1% latency_stats.max.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 20874 ± 3% latency_stats.max.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_inode.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
81215 ± 24% -36.3% 51703 ± 4% latency_stats.max.wait_on_page_bit.filemap_fdatawait_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
989.25 ± 18% +1411.6% 14953 ± 12% latency_stats.sum.call_rwsem_down_read_failed.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].wait_on_node_pages_writeback.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
1949 ± 13% +562.2% 12911 ± 8% latency_stats.sum.call_rwsem_down_write_failed.f2fs_submit_page_mbio.[f2fs].do_write_page.[f2fs].write_node_page.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
523.00 ± 56% +1349.3% 7579 ± 34% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
38283 ± 7% +445.2% 208718 ± 8% latency_stats.sum.f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
29017 ± 22% +169.7% 78269 ± 12% latency_stats.sum.futex_wait_queue_me.futex_wait.do_futex.SyS_futex.entry_SYSCALL_64_fastpath
15583 ± 78% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 123338 ± 25% latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_inode.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
1985002 ±137% -80.2% 393404 ± 21% latency_stats.sum.wait_on_page_bit.filemap_fdatawait_range.filemap_write_and_wait_range.nfs4_file_fsync.[nfsv4].vfs_fsync_range.vfs_fsync.nfs_file_flush.filp_close.do_dup2.SyS_dup2.entry_SYSCALL_64_fastpath
472.25 ± 40% +530.4% 2977 ± 7% sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
4131 ± 6% +344.6% 18367 ± 6% sched_debug.cfs_rq[0]:/.exec_clock
8836 ± 14% +318.3% 36966 ± 5% sched_debug.cfs_rq[0]:/.min_vruntime
9.50 ± 26% -44.7% 5.25 ± 20% sched_debug.cfs_rq[0]:/.nr_spread_over
675.50 ± 20% +44.5% 976.25 ± 4% sched_debug.cfs_rq[0]:/.tg->runnable_avg
9.50 ± 46% +568.4% 63.50 ± 8% sched_debug.cfs_rq[0]:/.tg_runnable_contrib
1877 ± 28% +59.4% 2992 ± 7% sched_debug.cfs_rq[10]:/.avg->runnable_avg_sum
9519 ± 29% +84.4% 17554 ± 7% sched_debug.cfs_rq[10]:/.exec_clock
22051 ± 12% +71.1% 37739 ± 6% sched_debug.cfs_rq[10]:/.min_vruntime
13214 ± 26% -94.2% 773.06 ±170% sched_debug.cfs_rq[10]:/.spread0
665.75 ± 18% +46.6% 976.25 ± 4% sched_debug.cfs_rq[10]:/.tg->runnable_avg
39.75 ± 29% +61.6% 64.25 ± 7% sched_debug.cfs_rq[10]:/.tg_runnable_contrib
1890 ± 20% +68.8% 3190 ± 7% sched_debug.cfs_rq[11]:/.avg->runnable_avg_sum
9359 ± 20% +108.3% 19500 ± 11% sched_debug.cfs_rq[11]:/.exec_clock
22503 ± 10% +75.1% 39410 ± 4% sched_debug.cfs_rq[11]:/.min_vruntime
13667 ± 8% -82.1% 2444 ± 57% sched_debug.cfs_rq[11]:/.spread0
670.00 ± 18% +46.3% 980.00 ± 4% sched_debug.cfs_rq[11]:/.tg->runnable_avg
40.25 ± 22% +68.9% 68.00 ± 8% sched_debug.cfs_rq[11]:/.tg_runnable_contrib
-6514 ±-18% +423.6% -34108 ± -4% sched_debug.cfs_rq[12]:/.spread0
666.00 ± 18% +47.2% 980.50 ± 4% sched_debug.cfs_rq[12]:/.tg->runnable_avg
4831 ± 10% -33.8% 3196 ± 35% sched_debug.cfs_rq[13]:/.exec_clock
2.75 ± 39% +136.4% 6.50 ± 44% sched_debug.cfs_rq[13]:/.nr_spread_over
7506 ± 27% -434.1% -25078 ±-18% sched_debug.cfs_rq[13]:/.spread0
645.25 ± 18% +52.6% 984.50 ± 4% sched_debug.cfs_rq[13]:/.tg->runnable_avg
-3911 ±-45% +702.5% -31395 ± -8% sched_debug.cfs_rq[14]:/.spread0
635.75 ± 15% +55.2% 986.75 ± 4% sched_debug.cfs_rq[14]:/.tg->runnable_avg
-4356 ±-47% +660.4% -33129 ± -6% sched_debug.cfs_rq[15]:/.spread0
639.25 ± 15% +55.1% 991.25 ± 4% sched_debug.cfs_rq[15]:/.tg->runnable_avg
440.96 ± 22% +55.9% 687.40 ± 10% sched_debug.cfs_rq[16]:/.exec_clock
-5972 ±-12% +467.7% -33901 ± -5% sched_debug.cfs_rq[16]:/.spread0
641.75 ± 15% +54.7% 992.75 ± 3% sched_debug.cfs_rq[16]:/.tg->runnable_avg
304.00 ± 33% +46.4% 445.00 ± 30% sched_debug.cfs_rq[17]:/.avg->runnable_avg_sum
386.59 ± 23% +57.8% 610.08 ± 8% sched_debug.cfs_rq[17]:/.exec_clock
2213 ± 17% +55.2% 3435 ± 13% sched_debug.cfs_rq[17]:/.min_vruntime
-6623 ±-18% +406.3% -33532 ± -4% sched_debug.cfs_rq[17]:/.spread0
643.75 ± 15% +54.4% 994.25 ± 3% sched_debug.cfs_rq[17]:/.tg->runnable_avg
1794 ± 34% +232.8% 5969 ± 7% sched_debug.cfs_rq[18]:/.exec_clock
7335 ± 16% +114.2% 15711 ± 6% sched_debug.cfs_rq[18]:/.min_vruntime
-1501 ±-30% +1315.4% -21255 ± -5% sched_debug.cfs_rq[18]:/.spread0
644.75 ± 15% +54.6% 997.00 ± 3% sched_debug.cfs_rq[18]:/.tg->runnable_avg
513.00 ± 36% -63.3% 188.50 ± 85% sched_debug.cfs_rq[19]:/.blocked_load_avg
1702 ± 26% +242.2% 5826 ± 9% sched_debug.cfs_rq[19]:/.exec_clock
5779 ± 6% +159.0% 14969 ± 11% sched_debug.cfs_rq[19]:/.min_vruntime
-3056 ±-46% +619.6% -21997 ± -8% sched_debug.cfs_rq[19]:/.spread0
650.00 ± 15% +54.2% 1002 ± 3% sched_debug.cfs_rq[19]:/.tg->runnable_avg
539.00 ± 36% -62.2% 204.00 ± 73% sched_debug.cfs_rq[19]:/.tg_load_contrib
4460 ± 6% +206.3% 13663 ± 12% sched_debug.cfs_rq[1]:/.exec_clock
13459 ± 3% +126.3% 30453 ± 10% sched_debug.cfs_rq[1]:/.min_vruntime
4623 ± 33% -240.9% -6513 ±-59% sched_debug.cfs_rq[1]:/.spread0
678.25 ± 20% +44.3% 978.75 ± 4% sched_debug.cfs_rq[1]:/.tg->runnable_avg
497.75 ± 48% +276.3% 1873 ± 43% sched_debug.cfs_rq[20]:/.avg->runnable_avg_sum
1677 ± 31% +268.7% 6185 ± 9% sched_debug.cfs_rq[20]:/.exec_clock
6148 ± 18% +159.2% 15935 ± 9% sched_debug.cfs_rq[20]:/.min_vruntime
-2688 ±-69% +682.4% -21031 ± -5% sched_debug.cfs_rq[20]:/.spread0
653.25 ± 15% +54.4% 1008 ± 3% sched_debug.cfs_rq[20]:/.tg->runnable_avg
10.00 ± 50% +300.0% 40.00 ± 43% sched_debug.cfs_rq[20]:/.tg_runnable_contrib
1366 ± 5% +345.8% 6092 ± 3% sched_debug.cfs_rq[21]:/.exec_clock
5854 ± 12% +186.1% 16750 ± 3% sched_debug.cfs_rq[21]:/.min_vruntime
6.67 ±141% +526.2% 41.75 ± 78% sched_debug.cfs_rq[21]:/.runnable_load_avg
-2982 ±-28% +577.8% -20216 ± -9% sched_debug.cfs_rq[21]:/.spread0
658.75 ± 15% +52.1% 1001 ± 3% sched_debug.cfs_rq[21]:/.tg->runnable_avg
31.00 ±141% +355.6% 141.25 ± 55% sched_debug.cfs_rq[21]:/.utilization_load_avg
1593 ± 8% +266.0% 5832 ± 8% sched_debug.cfs_rq[22]:/.exec_clock
6959 ± 12% +123.6% 15557 ± 8% sched_debug.cfs_rq[22]:/.min_vruntime
-1877 ±-106% +1040.2% -21409 ± -3% sched_debug.cfs_rq[22]:/.spread0
662.75 ± 15% +51.5% 1004 ± 3% sched_debug.cfs_rq[22]:/.tg->runnable_avg
407.50 ± 37% -57.2% 174.25 ± 64% sched_debug.cfs_rq[23]:/.blocked_load_avg
1972 ± 12% +211.4% 6141 ± 11% sched_debug.cfs_rq[23]:/.exec_clock
7501 ± 9% +111.7% 15883 ± 12% sched_debug.cfs_rq[23]:/.min_vruntime
6.50 ± 35% -73.1% 1.75 ± 84% sched_debug.cfs_rq[23]:/.nr_spread_over
-1335 ±-88% +1478.8% -21083 ± -5% sched_debug.cfs_rq[23]:/.spread0
667.25 ± 15% +51.0% 1007 ± 3% sched_debug.cfs_rq[23]:/.tg->runnable_avg
424.00 ± 33% -51.9% 203.75 ± 52% sched_debug.cfs_rq[23]:/.tg_load_contrib
994.75 ± 14% +92.5% 1914 ± 17% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
2973 ± 25% +244.5% 10245 ± 6% sched_debug.cfs_rq[2]:/.exec_clock
7000 ± 19% +225.4% 22775 ± 7% sched_debug.cfs_rq[2]:/.min_vruntime
-1836 ±-118% +672.8% -14191 ±-20% sched_debug.cfs_rq[2]:/.spread0
681.00 ± 20% +44.2% 982.25 ± 4% sched_debug.cfs_rq[2]:/.tg->runnable_avg
20.75 ± 14% +96.4% 40.75 ± 16% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
826.50 ± 25% +110.6% 1741 ± 24% sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
2765 ± 16% +228.6% 9086 ± 11% sched_debug.cfs_rq[3]:/.exec_clock
5354 ± 20% +251.7% 18828 ± 8% sched_debug.cfs_rq[3]:/.min_vruntime
-3482 ±-65% +420.9% -18138 ±-16% sched_debug.cfs_rq[3]:/.spread0
670.25 ± 21% +46.8% 984.25 ± 4% sched_debug.cfs_rq[3]:/.tg->runnable_avg
17.00 ± 24% +116.2% 36.75 ± 25% sched_debug.cfs_rq[3]:/.tg_runnable_contrib
1175 ± 9% +431.6% 6251 ± 11% sched_debug.cfs_rq[4]:/.exec_clock
3802 ± 16% +277.6% 14358 ± 11% sched_debug.cfs_rq[4]:/.min_vruntime
-5034 ±-26% +349.1% -22607 ±-13% sched_debug.cfs_rq[4]:/.spread0
669.00 ± 21% +47.1% 984.25 ± 4% sched_debug.cfs_rq[4]:/.tg->runnable_avg
483.75 ± 32% +155.0% 1233 ± 14% sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
977.63 ± 8% +373.1% 4625 ± 13% sched_debug.cfs_rq[5]:/.exec_clock
3096 ± 18% +250.4% 10851 ± 10% sched_debug.cfs_rq[5]:/.min_vruntime
-5739 ±-27% +355.0% -26115 ±-11% sched_debug.cfs_rq[5]:/.spread0
659.75 ± 19% +48.4% 979.00 ± 3% sched_debug.cfs_rq[5]:/.tg->runnable_avg
9.75 ± 33% +164.1% 25.75 ± 15% sched_debug.cfs_rq[5]:/.tg_runnable_contrib
1679 ± 9% +94.3% 3263 ± 7% sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
8528 ± 6% +108.1% 17746 ± 4% sched_debug.cfs_rq[6]:/.exec_clock
78.75 ± 43% -65.4% 27.25 ± 71% sched_debug.cfs_rq[6]:/.load
21660 ± 4% +73.1% 37488 ± 4% sched_debug.cfs_rq[6]:/.min_vruntime
12823 ± 6% -95.9% 522.02 ±196% sched_debug.cfs_rq[6]:/.spread0
658.25 ± 19% +49.0% 980.50 ± 3% sched_debug.cfs_rq[6]:/.tg->runnable_avg
36.00 ± 9% +94.4% 70.00 ± 8% sched_debug.cfs_rq[6]:/.tg_runnable_contrib
1506 ± 14% +121.6% 3338 ± 14% sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
158.75 ± 53% -72.0% 44.50 ± 61% sched_debug.cfs_rq[7]:/.blocked_load_avg
7851 ± 0% +126.1% 17752 ± 4% sched_debug.cfs_rq[7]:/.exec_clock
19974 ± 1% +90.1% 37978 ± 3% sched_debug.cfs_rq[7]:/.min_vruntime
11137 ± 12% -90.9% 1011 ±106% sched_debug.cfs_rq[7]:/.spread0
655.25 ± 19% +48.7% 974.50 ± 3% sched_debug.cfs_rq[7]:/.tg->runnable_avg
31.75 ± 15% +126.8% 72.00 ± 14% sched_debug.cfs_rq[7]:/.tg_runnable_contrib
1923 ± 27% +91.3% 3680 ± 10% sched_debug.cfs_rq[8]:/.avg->runnable_avg_sum
9291 ± 29% +125.0% 20907 ± 12% sched_debug.cfs_rq[8]:/.exec_clock
21507 ± 11% +90.4% 40943 ± 6% sched_debug.cfs_rq[8]:/.min_vruntime
12670 ± 21% -68.6% 3976 ± 92% sched_debug.cfs_rq[8]:/.spread0
657.25 ± 19% +47.3% 968.25 ± 4% sched_debug.cfs_rq[8]:/.tg->runnable_avg
40.75 ± 27% +93.3% 78.75 ± 10% sched_debug.cfs_rq[8]:/.tg_runnable_contrib
1301 ± 9% +159.5% 3377 ± 14% sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
8568 ± 21% +124.3% 19216 ± 19% sched_debug.cfs_rq[9]:/.exec_clock
21.00 ±100% +403.6% 105.75 ± 72% sched_debug.cfs_rq[9]:/.load
20751 ± 11% +87.1% 38829 ± 11% sched_debug.cfs_rq[9]:/.min_vruntime
11915 ± 26% -84.4% 1863 ±188% sched_debug.cfs_rq[9]:/.spread0
658.75 ± 19% +47.4% 971.00 ± 4% sched_debug.cfs_rq[9]:/.tg->runnable_avg
27.50 ± 10% +163.6% 72.50 ± 14% sched_debug.cfs_rq[9]:/.tg_runnable_contrib
2.33 ± 72% +296.4% 9.25 ± 40% sched_debug.cpu#0.cpu_load[0]
1.67 ± 28% +275.0% 6.25 ± 38% sched_debug.cpu#0.cpu_load[1]
0.75 ± 57% +466.7% 4.25 ± 34% sched_debug.cpu#0.cpu_load[2]
0.25 ±173% +900.0% 2.50 ± 44% sched_debug.cpu#0.cpu_load[3]
33174 ± 9% +85.6% 61571 ± 5% sched_debug.cpu#0.nr_load_updates
94470 ± 10% +59.4% 150596 ± 10% sched_debug.cpu#0.nr_switches
-533.75 ± -3% -55.3% -238.50 ±-94% sched_debug.cpu#0.nr_uninterruptible
97265 ± 9% +57.7% 153392 ± 10% sched_debug.cpu#0.sched_count
44229 ± 11% +44.4% 63868 ± 11% sched_debug.cpu#0.sched_goidle
49887 ± 14% +168.1% 133726 ± 5% sched_debug.cpu#0.ttwu_count
29737 ± 12% +81.6% 54010 ± 4% sched_debug.cpu#0.ttwu_local
24295 ± 7% +118.2% 53014 ± 4% sched_debug.cpu#1.nr_load_updates
60856 ± 18% +64.5% 100098 ± 15% sched_debug.cpu#1.nr_switches
60896 ± 18% +64.4% 100143 ± 15% sched_debug.cpu#1.sched_count
27601 ± 20% +52.1% 41976 ± 18% sched_debug.cpu#1.sched_goidle
10.00 ± 74% -180.0% -8.00 ±-137% sched_debug.cpu#10.nr_uninterruptible
85442 ± 5% -22.4% 66307 ± 12% sched_debug.cpu#10.ttwu_count
71405 ± 1% -33.8% 47238 ± 6% sched_debug.cpu#10.ttwu_local
20.50 ± 29% -181.7% -16.75 ±-54% sched_debug.cpu#11.nr_uninterruptible
87543 ± 5% -22.5% 67884 ± 5% sched_debug.cpu#11.ttwu_count
69444 ± 1% -32.4% 46938 ± 8% sched_debug.cpu#11.ttwu_local
7031 ± 3% +179.8% 19676 ± 2% sched_debug.cpu#12.nr_load_updates
2882 ± 13% +117.5% 6269 ± 11% sched_debug.cpu#12.nr_switches
2911 ± 13% +116.4% 6298 ± 11% sched_debug.cpu#12.sched_count
1253 ± 13% +110.7% 2640 ± 10% sched_debug.cpu#12.sched_goidle
1065 ± 13% +250.7% 3737 ± 83% sched_debug.cpu#12.ttwu_count
539.50 ± 8% +94.3% 1048 ± 13% sched_debug.cpu#12.ttwu_local
14942 ± 1% +62.2% 24234 ± 4% sched_debug.cpu#13.nr_load_updates
245.00 ± 9% -53.4% 114.25 ± 84% sched_debug.cpu#13.nr_uninterruptible
8825 ± 2% +136.2% 20846 ± 1% sched_debug.cpu#14.nr_load_updates
1372 ± 5% +44.5% 1983 ± 9% sched_debug.cpu#14.ttwu_local
8287 ± 5% +139.9% 19880 ± 3% sched_debug.cpu#15.nr_load_updates
5370 ± 20% +75.4% 9422 ± 40% sched_debug.cpu#15.nr_switches
5399 ± 20% +75.0% 9451 ± 40% sched_debug.cpu#15.sched_count
2339 ± 21% +78.2% 4168 ± 46% sched_debug.cpu#15.sched_goidle
8015 ± 5% +146.3% 19742 ± 4% sched_debug.cpu#16.nr_load_updates
7838 ± 10% +146.1% 19286 ± 3% sched_debug.cpu#17.nr_load_updates
11080 ± 6% +197.3% 32940 ± 4% sched_debug.cpu#18.nr_load_updates
15374 ± 26% +137.1% 36453 ± 10% sched_debug.cpu#18.nr_switches
15407 ± 26% +136.8% 36483 ± 10% sched_debug.cpu#18.sched_count
6539 ± 31% +115.9% 14122 ± 12% sched_debug.cpu#18.sched_goidle
3691 ± 20% +287.0% 14284 ± 6% sched_debug.cpu#18.ttwu_local
10910 ± 7% +197.0% 32401 ± 5% sched_debug.cpu#19.nr_load_updates
14473 ± 31% +178.9% 40372 ± 15% sched_debug.cpu#19.nr_switches
14504 ± 31% +178.6% 40403 ± 15% sched_debug.cpu#19.sched_count
6056 ± 36% +166.9% 16162 ± 20% sched_debug.cpu#19.sched_goidle
3809 ± 18% +264.6% 13890 ± 10% sched_debug.cpu#19.ttwu_local
17549 ± 5% +176.9% 48592 ± 5% sched_debug.cpu#2.nr_load_updates
4616 ± 28% +116.1% 9977 ± 8% sched_debug.cpu#2.ttwu_local
1.00 ±173% +875.0% 9.75 ± 54% sched_debug.cpu#20.cpu_load[3]
0.50 ±173% +1000.0% 5.50 ± 48% sched_debug.cpu#20.cpu_load[4]
10289 ± 7% +215.1% 32420 ± 4% sched_debug.cpu#20.nr_load_updates
13451 ± 25% +199.7% 40307 ± 25% sched_debug.cpu#20.nr_switches
13482 ± 25% +199.2% 40338 ± 24% sched_debug.cpu#20.sched_count
5661 ± 30% +183.9% 16070 ± 30% sched_debug.cpu#20.sched_goidle
3056 ± 23% +355.9% 13936 ± 5% sched_debug.cpu#20.ttwu_local
6.67 ±141% +526.2% 41.75 ± 78% sched_debug.cpu#21.cpu_load[0]
3.25 ±173% +1053.8% 37.50 ± 81% sched_debug.cpu#21.cpu_load[1]
2.00 ±173% +1200.0% 26.00 ± 83% sched_debug.cpu#21.cpu_load[2]
1.00 ±173% +1450.0% 15.50 ± 83% sched_debug.cpu#21.cpu_load[3]
0.50 ±173% +1600.0% 8.50 ± 81% sched_debug.cpu#21.cpu_load[4]
10448 ± 3% +209.5% 32339 ± 4% sched_debug.cpu#21.nr_load_updates
22993 ± 39% +82.3% 41915 ± 11% sched_debug.cpu#21.nr_switches
23024 ± 39% +82.2% 41951 ± 11% sched_debug.cpu#21.sched_count
3465 ± 16% +301.8% 13924 ± 7% sched_debug.cpu#21.ttwu_local
11042 ± 3% +195.4% 32622 ± 5% sched_debug.cpu#22.nr_load_updates
16043 ± 25% +133.2% 37417 ± 10% sched_debug.cpu#22.nr_switches
62.75 ± 14% -66.9% 20.75 ± 67% sched_debug.cpu#22.nr_uninterruptible
16073 ± 24% +133.0% 37448 ± 10% sched_debug.cpu#22.sched_count
6704 ± 29% +117.2% 14560 ± 12% sched_debug.cpu#22.sched_goidle
4136 ± 8% +248.0% 14395 ± 7% sched_debug.cpu#22.ttwu_local
11379 ± 4% +184.5% 32372 ± 6% sched_debug.cpu#23.nr_load_updates
14354 ± 15% +144.4% 35083 ± 13% sched_debug.cpu#23.nr_switches
72.00 ± 36% -72.9% 19.50 ±120% sched_debug.cpu#23.nr_uninterruptible
14383 ± 15% +144.1% 35113 ± 13% sched_debug.cpu#23.sched_count
5850 ± 18% +132.7% 13614 ± 15% sched_debug.cpu#23.sched_goidle
4041 ± 6% +239.4% 13714 ± 11% sched_debug.cpu#23.ttwu_local
18034 ± 1% +156.8% 46318 ± 5% sched_debug.cpu#3.nr_load_updates
3851 ± 4% +140.7% 9271 ± 10% sched_debug.cpu#3.ttwu_local
16506 ± 2% +151.3% 41479 ± 5% sched_debug.cpu#4.nr_load_updates
11950 ± 22% +48.9% 17796 ± 12% sched_debug.cpu#4.ttwu_count
3584 ± 5% +117.7% 7803 ± 9% sched_debug.cpu#4.ttwu_local
17189 ± 12% +119.2% 37680 ± 7% sched_debug.cpu#5.nr_load_updates
82636 ± 1% -18.9% 67000 ± 6% sched_debug.cpu#6.ttwu_count
72105 ± 1% -34.8% 47002 ± 4% sched_debug.cpu#6.ttwu_local
1.00 ±141% +225.0% 3.25 ± 59% sched_debug.cpu#7.cpu_load[3]
0.33 ±141% +575.0% 2.25 ± 57% sched_debug.cpu#7.cpu_load[4]
217272 ± 13% -19.8% 174287 ± 2% sched_debug.cpu#7.nr_switches
217327 ± 13% -19.8% 174344 ± 2% sched_debug.cpu#7.sched_count
103235 ± 13% -25.5% 76891 ± 2% sched_debug.cpu#7.sched_goidle
81833 ± 3% -19.6% 65831 ± 8% sched_debug.cpu#7.ttwu_count
71336 ± 3% -34.1% 47033 ± 9% sched_debug.cpu#7.ttwu_local
0.00 ± 0% +Inf% 18.00 ± 48% sched_debug.cpu#8.cpu_load[1]
0.00 ± 0% +Inf% 10.75 ± 40% sched_debug.cpu#8.cpu_load[2]
0.67 ± 70% +837.5% 6.25 ± 41% sched_debug.cpu#8.cpu_load[3]
0.67 ±141% +387.5% 3.25 ± 59% sched_debug.cpu#8.cpu_load[4]
68999 ± 2% -33.0% 46255 ± 5% sched_debug.cpu#8.ttwu_local
1.67 ±141% +1475.0% 26.25 ± 87% sched_debug.cpu#9.cpu_load[0]
0.67 ±141% +2262.5% 15.75 ± 85% sched_debug.cpu#9.cpu_load[1]
0.33 ±141% +2600.0% 9.00 ± 84% sched_debug.cpu#9.cpu_load[2]
0.00 ± 0% +Inf% 5.00 ± 78% sched_debug.cpu#9.cpu_load[3]
0.00 ± 0% +Inf% 2.75 ± 74% sched_debug.cpu#9.cpu_load[4]
114.00 ±141% +1252.9% 1542 ± 91% sched_debug.cpu#9.curr->pid
5.33 ±141% +1882.8% 105.75 ± 72% sched_debug.cpu#9.load
68882 ± 2% -33.8% 45624 ± 6% sched_debug.cpu#9.ttwu_local
lkp-ws02: Westmere-EP
Memory: 16G
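For reference, the %change column in the tables above is the relative difference between the comparison commit (2286c0205d) and its parent (2683446646); entries rendered as "+Inf%" correspond to a zero baseline sample. A minimal sketch of that arithmetic (the function name is illustrative, not part of lkp-tests):

```python
def pct_change(base, new):
    """Relative change of the comparison commit vs. the base commit, in percent."""
    if base == 0:
        # a zero baseline makes the ratio unbounded; shown as +Inf% in the tables
        return float("inf") if new > 0 else 0.0
    return (new - base) / base * 100.0

# e.g. fileio.requests_per_sec: 699.08 -> 649.00 gives the reported -7.2%
```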
fileio.time.system_time
450 ++--------------------------------------------------------------------+
| O O O |
400 ++ O |
350 ++ O O O O O O O O O O O O O |
| O O O O O |
300 O+ O O |
250 ++ |
| |
200 ++ |
150 ++ |
| |
100 ++ *..*.*..*..*.*..*.*..*..*.*..*.*..*.*..*..*.*..*.*..*..*.*..*.*..*
50 ++ : |
| : |
0 *+-*------------------------------------------------------------------+
fileio.time.percent_of_cpu_this_job_got
70 ++-----------------------------------O----O--O-------------------------+
| |
60 O+ O O O O O O O O |
| O O O O O O O O |
50 ++ O O O O |
| O |
40 ++ |
| |
30 ++ |
| |
20 ++ |
| *..*..*.*..*.*..*..*.*..*..*.*..*.*..*..*.*..*..*.*..*.*..*..*.*..*
10 ++ : |
| : |
0 *+-*-------------------------------------------------------------------+
fileio.time.involuntary_context_switches
300000 ++-----------------------------------------------------------------+
| |
250000 ++ O O O |
| O O |
| O O O O O O O O O O O O O O O O O |
200000 O+ O O |
| |
150000 ++ |
| |
100000 ++ |
| |
| *.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*
50000 ++ + |
| + |
0 *+*----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
View attachment "job.yaml" of type "text/plain" (3470 bytes)
View attachment "reproduce" of type "text/plain" (37012 bytes)