Message-ID: <20160311020316.GB13081@yexl-desktop>
Date: Fri, 11 Mar 2016 10:03:16 +0800
From: kernel test robot <xiaolong.ye@...el.com>
To: Shaohua Li <shli@...com>
Cc: Yuanhan Liu <yuanhan.liu@...ux.intel.com>,
NeilBrown <neilb@...e.de>, LKML <linux-kernel@...r.kernel.org>,
lkp@...org
Subject: [lkp] [RAID5] 6ab2a4b806: 91.9% fsmark.app_overhead
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 6ab2a4b806ae21b6c3e47c5ff1285ec06d505325 ("RAID5: revert e9e4c377e2f563 to fix a livelock")
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/md/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/8BRD_12G/4M/xfs/1x/x86_64-rhel/RAID5/64t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-hsx02/60G/fsmark
commit:
  27a353c026a879a1001e5eac4bda75b16262c44a
  6ab2a4b806ae21b6c3e47c5ff1285ec06d505325

27a353c026a879a1 6ab2a4b806ae21b6c3e47c5ff1
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
592411 ± 2% +91.9% 1136931 ± 11% fsmark.app_overhead
98.37 ± 1% -21.9% 76.80 ± 2% fsmark.files_per_sec
188.95 ± 0% +59.9% 302.16 ± 3% fsmark.time.elapsed_time
188.95 ± 0% +59.9% 302.16 ± 3% fsmark.time.elapsed_time.max
20813 ± 4% +290.1% 81196 ± 4% fsmark.time.minor_page_faults
165.00 ± 2% +553.8% 1078 ± 7% fsmark.time.percent_of_cpu_this_job_got
312.06 ± 3% +947.5% 3269 ± 10% fsmark.time.system_time
10132990 ± 2% +465.7% 57325950 ± 7% fsmark.time.voluntary_context_switches
171460 ± 2% -24.4% 129549 ± 6% meminfo.Writeback
830948 ± 85% +339.1% 3649060 ± 89% numa-numastat.node3.numa_foreign
2814 ± 1% -10.4% 2523 ± 0% slabinfo.kmalloc-4096.active_objs
2870 ± 1% -12.1% 2523 ± 0% slabinfo.kmalloc-4096.num_objs
285.82 ± 0% +39.4% 398.30 ± 2% uptime.boot
40413 ± 0% +33.4% 53898 ± 2% uptime.idle
1.92 ± 1% +345.0% 8.53 ± 6% turbostat.%Busy
55.25 ± 1% +347.1% 247.00 ± 6% turbostat.Avg_MHz
356.24 ± 0% +5.5% 375.77 ± 0% turbostat.PkgWatt
7994 ± 3% +26.5% 10115 ± 5% softirqs.NET_RX
308842 ± 48% +222.4% 995686 ± 12% softirqs.RCU
227016 ± 26% +477.0% 1309870 ± 16% softirqs.SCHED
743051 ± 31% +195.6% 2196488 ± 8% softirqs.TIMER
329656 ± 0% -37.0% 207541 ± 3% vmstat.io.bo
1.75 ± 24% +500.0% 10.50 ± 4% vmstat.procs.r
108684 ± 1% +249.1% 379378 ± 3% vmstat.system.cs
3836 ± 2% +257.4% 13711 ± 5% vmstat.system.in
188.95 ± 0% +59.9% 302.16 ± 3% time.elapsed_time
188.95 ± 0% +59.9% 302.16 ± 3% time.elapsed_time.max
20813 ± 4% +290.1% 81196 ± 4% time.minor_page_faults
165.00 ± 2% +553.8% 1078 ± 7% time.percent_of_cpu_this_job_got
312.06 ± 3% +947.5% 3269 ± 10% time.system_time
10132990 ± 2% +465.7% 57325950 ± 7% time.voluntary_context_switches
2497 ±142% -87.3% 316.75 ± 15% numa-meminfo.node1.Inactive(anon)
2591 ±137% -84.5% 401.50 ± 11% numa-meminfo.node1.Shmem
16695 ± 1% -16.9% 13875 ± 9% numa-meminfo.node1.Unevictable
16915 ± 1% -17.4% 13969 ± 8% numa-meminfo.node2.Unevictable
5698 ± 22% +90.1% 10831 ± 24% numa-meminfo.node3.Active(anon)
5660 ± 22% +89.2% 10707 ± 24% numa-meminfo.node3.AnonPages
550.25 ± 22% +1135.0% 6795 ± 50% numa-meminfo.node3.Inactive(anon)
626.25 ± 19% +1002.1% 6902 ± 50% numa-meminfo.node3.Shmem
383.75 ± 3% +11.7% 428.75 ± 2% numa-vmstat.node1.nr_alloc_batch
623.75 ±142% -87.4% 78.75 ± 16% numa-vmstat.node1.nr_inactive_anon
647.25 ±137% -84.6% 100.00 ± 11% numa-vmstat.node1.nr_shmem
4173 ± 1% -16.9% 3468 ± 9% numa-vmstat.node1.nr_unevictable
4228 ± 1% -17.4% 3492 ± 8% numa-vmstat.node2.nr_unevictable
1424 ± 22% +90.1% 2708 ± 24% numa-vmstat.node3.nr_active_anon
1415 ± 22% +89.2% 2677 ± 24% numa-vmstat.node3.nr_anon_pages
137.00 ± 22% +1139.8% 1698 ± 50% numa-vmstat.node3.nr_inactive_anon
156.00 ± 19% +1005.8% 1725 ± 50% numa-vmstat.node3.nr_shmem
36.67 ±130% +52543.9% 19302 ±139% proc-vmstat.kswapd_high_wmark_hit_quickly
3.75 ± 11% +6480.0% 246.75 ±101% proc-vmstat.nr_pages_scanned
42860 ± 2% -24.4% 32415 ± 6% proc-vmstat.nr_writeback
10144 ± 8% +593.7% 70377 ± 4% proc-vmstat.numa_hint_faults
5990 ± 14% +608.3% 42426 ± 4% proc-vmstat.numa_hint_faults_local
1310 ± 14% +365.9% 6105 ± 11% proc-vmstat.numa_pages_migrated
13530 ± 6% +452.3% 74730 ± 4% proc-vmstat.numa_pte_updates
4444 ± 9% +476.1% 25606 ±101% proc-vmstat.pageoutrun
523627 ± 1% +67.6% 877482 ± 3% proc-vmstat.pgfault
1310 ± 14% +365.9% 6105 ± 11% proc-vmstat.pgmigrate_success
16384 ± 4% +72.5% 28257 ± 15% proc-vmstat.slabs_scanned
2452516 ± 62% +165.0% 6499322 ±123% latency_stats.avg.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_alloc_read_agfl.xfs_alloc_fix_freelist.xfs_alloc_vextent.xfs_ialloc_ag_alloc.xfs_dialloc.xfs_ialloc
619509 ±123% -100.0% 0.00 ± -1% latency_stats.avg.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_btree_read_buf_block.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_alloc_lookup_eq.xfs_alloc_fixup_trees.xfs_alloc_ag_vextent_size
668825 ±167% -74.7% 169025 ± 94% latency_stats.avg.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_btree_read_buf_block.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_inobt_insert.xfs_ialloc_ag_alloc.xfs_dialloc
0.00 ± -1% +Inf% 10533842 ±100% latency_stats.avg.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_da_read_buf.xfs_dir3_block_read.xfs_dir2_block_addname.xfs_dir_createname.xfs_create.xfs_generic_create
1434223 ± 33% +205.1% 4375501 ± 39% latency_stats.avg.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_imap_to_bp.xfs_iread.xfs_iget.xfs_ialloc.xfs_dir_ialloc.xfs_create
1134 ±173% +1.6e+05% 1818032 ±153% latency_stats.avg.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agf.xfs_alloc_read_agf.xfs_alloc_fix_freelist.xfs_alloc_vextent.xfs_ialloc_ag_alloc.xfs_dialloc
35260 ± 6% +280.2% 134055 ± 12% latency_stats.hits.raid5_get_active_stripe.[raid456].raid5_make_request.[raid456].md_make_request.generic_make_request.submit_bio._xfs_buf_ioapply.xfs_buf_submit.xlog_bdstrat.xlog_sync.xlog_state_release_iclog._xfs_log_force_lsn.xfs_file_fsync
9385649 ± 3% +505.1% 56794939 ± 7% latency_stats.hits.raid5_get_active_stripe.[raid456].raid5_make_request.[raid456].md_make_request.generic_make_request.submit_bio.xfs_submit_ioend_bio.xfs_submit_ioend.xfs_vm_writepage.__writepage.write_cache_pages.generic_writepages.xfs_vm_writepages
3807550 ± 25% +155.1% 9711263 ± 73% latency_stats.max.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_alloc_read_agfl.xfs_alloc_fix_freelist.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmap_alloc.xfs_bmapi_write
2688649 ± 59% +187.4% 7727816 ±117% latency_stats.max.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_alloc_read_agfl.xfs_alloc_fix_freelist.xfs_alloc_vextent.xfs_ialloc_ag_alloc.xfs_dialloc.xfs_ialloc
673991 ±108% -100.0% 0.00 ± -1% latency_stats.max.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_btree_read_buf_block.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_alloc_lookup_eq.xfs_alloc_fixup_trees.xfs_alloc_ag_vextent_size
668825 ±167% -74.6% 169683 ± 94% latency_stats.max.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_btree_read_buf_block.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_inobt_insert.xfs_ialloc_ag_alloc.xfs_dialloc
0.00 ± -1% +Inf% 14261726 ± 73% latency_stats.max.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_da_read_buf.xfs_dir3_block_read.xfs_dir2_block_addname.xfs_dir_createname.xfs_create.xfs_generic_create
1260860 ± 73% +328.3% 5400244 ±169% latency_stats.max.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_da_read_buf.xfs_dir3_leaf_read.xfs_dir2_leaf_addname.xfs_dir_createname.xfs_create.xfs_generic_create
5058489 ± 54% +148.5% 12570469 ± 76% latency_stats.max.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_da_read_buf.xfs_dir3_leaf_read.xfs_dir2_leaf_lookup_int.xfs_dir2_leaf_lookup.xfs_dir_lookup.xfs_lookup
1134 ±173% +4.8e+05% 5434961 ±154% latency_stats.max.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agf.xfs_alloc_read_agf.xfs_alloc_fix_freelist.xfs_alloc_vextent.xfs_ialloc_ag_alloc.xfs_dialloc
6967127 ± 59% +82.8% 12733733 ±127% latency_stats.sum.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_alloc_read_agfl.xfs_alloc_fix_freelist.xfs_alloc_vextent.xfs_ialloc_ag_alloc.xfs_dialloc.xfs_ialloc
11217182 ± 27% +199.8% 33628397 ± 72% latency_stats.sum.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_alloc_read_agfl.xfs_alloc_fix_freelist.xfs_free_extent.xfs_trans_free_extent.xfs_bmap_finish.xfs_itruncate_extents
674368 ±108% -100.0% 0.00 ± -1% latency_stats.sum.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_btree_read_buf_block.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_alloc_lookup_eq.xfs_alloc_fixup_trees.xfs_alloc_ag_vextent_size
668825 ±167% -50.1% 333621 ± 97% latency_stats.sum.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_btree_read_buf_block.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_inobt_insert.xfs_ialloc_ag_alloc.xfs_dialloc
0.00 ± -1% +Inf% 51551059 ± 88% latency_stats.sum.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_da_read_buf.xfs_dir3_block_read.xfs_dir2_block_addname.xfs_dir_createname.xfs_create.xfs_generic_create
87014211 ± 98% +420.2% 4.527e+08 ± 36% latency_stats.sum.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_da_read_buf.xfs_dir3_block_read.xfs_dir2_block_lookup_int.xfs_dir2_block_lookup.xfs_dir_lookup.xfs_lookup
1318029 ± 69% +573.2% 8872791 ±169% latency_stats.sum.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_da_read_buf.xfs_dir3_leaf_read.xfs_dir2_leaf_addname.xfs_dir_createname.xfs_create.xfs_generic_create
60188444 ± 39% +368.4% 2.819e+08 ± 29% latency_stats.sum.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_imap_to_bp.xfs_iread.xfs_iget.xfs_ialloc.xfs_dir_ialloc.xfs_create
81840534 ± 50% +295.9% 3.24e+08 ± 29% latency_stats.sum.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agf.xfs_alloc_read_agf.xfs_alloc_fix_freelist.xfs_alloc_vextent.xfs_bmap_btalloc.xfs_bmap_alloc
1134 ±173% +4.8e+05% 5435006 ±154% latency_stats.sum.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agf.xfs_alloc_read_agf.xfs_alloc_fix_freelist.xfs_alloc_vextent.xfs_ialloc_ag_alloc.xfs_dialloc
1.69e+08 ± 42% +381.8% 8.144e+08 ± 23% latency_stats.sum.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agi.xfs_ialloc_read_agi.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc.xfs_create
411.75 ±115% +3043.3% 12942 ± 21% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.do_swap_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
70443 ± 38% +284.3% 270746 ± 59% latency_stats.sum.xlog_cil_force_lsn._xfs_log_force_lsn.xfs_file_fsync.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
lkp-hsx02: Brickland Haswell-EX
Memory: 128G
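For orientation, the test matrix above (xfs, 64 threads, 4M files, fsyncBeforeClose, 60G working set) roughly corresponds to an fs_mark invocation along the following lines. This is a hand-written sketch, not the command LKP actually generates -- the authoritative version is produced from the attached job.yaml and "reproduce" files, and the target directory below is made up:

	# Sketch only; the mount point is hypothetical.
	# 60G total / 4M per file = 15360 files, split as -n 240 files per
	# thread across -t 64 threads; -S 1 is fs_mark's fsyncBeforeClose
	# sync method and -s takes the file size in bytes (4M = 4194304).
	fs_mark -d /fs/md0 -t 64 -s 4194304 -n 240 -S 1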
[Per-sample ASCII trend plots, not reproduced here. Panels:

    uptime.boot, uptime.idle, turbostat.Avg_MHz, turbostat.%Busy,
    turbostat.PkgWatt, fsmark.app_overhead, fsmark.time.system_time,
    fsmark.time.percent_of_cpu_this_job_got, fsmark.time.elapsed_time,
    fsmark.time.elapsed_time.max, fsmark.time.minor_page_faults,
    fsmark.time.voluntary_context_switches, and the corresponding
    time.* metrics.

In every panel the bisect-bad samples sit well above the bisect-good
baseline, consistent with the table above.]

[*] bisect-good sample
[O] bisect-bad sample
To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
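As far as I understand, bin/lkp run exercises whichever kernel the machine is currently booted into, so reproducing this comparison also means building and booting the commit under test first, roughly as follows (a sketch; build and install steps depend on your setup):

        git clone https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
        cd linux-next
        git checkout 6ab2a4b806ae21b6c3e47c5ff1285ec06d505325
        # build with the x86_64-rhel kconfig referenced above, install,
        # reboot, and run the lkp job; repeat on the parent commit
        # 27a353c026a879a1001e5eac4bda75b16262c44a to compare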
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong Ye
View attachment "job.yaml" of type "text/plain" (3676 bytes)
View attachment "reproduce" of type "text/plain" (14821 bytes)