Message-ID: <1429490659.7977.23.camel@intel.com>
Date: Mon, 20 Apr 2015 08:44:19 +0800
From: Huang Ying <ying.huang@...el.com>
To: Wanpeng Li <wanpeng.li@...ux.intel.com>
Cc: Jaegeuk Kim <jaegeuk@...nel.org>,
LKML <linux-kernel@...r.kernel.org>, LKP ML <lkp@...org>
Subject: [LKP] [f2fs] 465a05fecc2: +147.3% fsmark.files_per_sec
FYI, we noticed the following changes on
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 465a05fecc2c7921d50f33cd10621abc049cbabf ("f2fs: enable inline data by default")
testbox/testcase/testparams: lkp-st02/fsmark/1x-32t-1HDD-f2fs-9B-400M-fsyncBeforeClose-16d-256fpd
6a2de6eb3f267ba6  465a05fecc2c7921d50f33cd10
----------------  --------------------------
       %stddev      %change          %stddev
           \            |                \
679 ± 1% +147.3% 1681 ± 1% fsmark.files_per_sec
15 ± 3% +177.4% 43 ± 0% fsmark.time.percent_of_cpu_this_job_got
150.90 ± 2% -59.5% 61.13 ± 1% fsmark.time.elapsed_time.max
150.90 ± 2% -59.5% 61.13 ± 1% fsmark.time.elapsed_time
1771558 ± 2% +6.2% 1881064 ± 0% fsmark.time.file_system_outputs
949761 ± 0% +22.2% 1160509 ± 0% fsmark.time.voluntary_context_switches
8240 ± 3% +80.1% 14840 ± 4% fsmark.time.involuntary_context_switches
184.27 ± 1% -48.5% 94.88 ± 0% uptime.boot
525 ± 2% -12.3% 460 ± 1% uptime.idle
89898 ± 0% -60.6% 35417 ± 0% softirqs.BLOCK
30856 ± 3% -19.7% 24774 ± 2% softirqs.RCU
34393 ± 0% -9.4% 31172 ± 0% softirqs.SCHED
5775 ± 1% +44.3% 8335 ± 0% vmstat.io.bo
18 ± 3% -47.2% 9 ± 5% vmstat.procs.b
11960 ± 1% +141.8% 28918 ± 0% vmstat.system.in
21443 ± 1% +152.8% 54201 ± 0% vmstat.system.cs
3093 ± 2% +9.5% 3387 ± 4% slabinfo.kmalloc-192.active_objs
3245 ± 3% +4.4% 3388 ± 4% slabinfo.kmalloc-192.num_objs
3627 ± 1% +16.7% 4233 ± 5% slabinfo.vm_area_struct.active_objs
3681 ± 1% +16.9% 4302 ± 5% slabinfo.vm_area_struct.num_objs
32653 ± 2% +10.9% 36223 ± 2% meminfo.Active(anon)
32637 ± 2% +10.4% 36030 ± 2% meminfo.AnonPages
1517 ± 0% -68.3% 481 ± 4% meminfo.Mlocked
3562 ± 1% +15.9% 4127 ± 0% meminfo.PageTables
1517 ± 0% -68.3% 481 ± 4% meminfo.Unevictable
150.90 ± 2% -59.5% 61.13 ± 1% time.elapsed_time.max
150.90 ± 2% -59.5% 61.13 ± 1% time.elapsed_time
8240 ± 3% +80.1% 14840 ± 4% time.involuntary_context_switches
15 ± 3% +177.4% 43 ± 0% time.percent_of_cpu_this_job_got
949761 ± 0% +22.2% 1160509 ± 0% time.voluntary_context_switches
8162 ± 2% +10.9% 9050 ± 2% proc-vmstat.nr_active_anon
8157 ± 2% +10.4% 9004 ± 2% proc-vmstat.nr_anon_pages
150 ± 6% -46.8% 80 ± 4% proc-vmstat.nr_dirty
379 ± 0% -68.4% 119 ± 4% proc-vmstat.nr_mlock
889 ± 1% +15.8% 1029 ± 0% proc-vmstat.nr_page_table_pages
379 ± 0% -68.4% 119 ± 4% proc-vmstat.nr_unevictable
220865 ± 2% -40.2% 131982 ± 1% proc-vmstat.nr_written
411249 ± 0% -23.1% 316279 ± 0% proc-vmstat.numa_hit
411249 ± 0% -23.1% 316279 ± 0% proc-vmstat.numa_local
26883 ± 1% +45.7% 39168 ± 2% proc-vmstat.pgactivate
169935 ± 1% -22.0% 132534 ± 1% proc-vmstat.pgalloc_dma32
264529 ± 1% -23.3% 202778 ± 1% proc-vmstat.pgalloc_normal
231722 ± 1% -51.0% 113542 ± 0% proc-vmstat.pgfault
193885 ± 2% -49.8% 97263 ± 3% proc-vmstat.pgfree
883675 ± 2% -40.2% 528402 ± 1% proc-vmstat.pgpgout
396 ± 6% +72.6% 683 ± 5% sched_debug.cfs_rq[0]:/.tg->runnable_avg
5121 ± 6% +106.0% 10549 ± 7% sched_debug.cfs_rq[0]:/.tg_load_avg
50 ± 12% +66.8% 84 ± 4% sched_debug.cfs_rq[0]:/.tg_runnable_contrib
3813 ± 10% -18.1% 3122 ± 1% sched_debug.cfs_rq[0]:/.exec_clock
2357 ± 12% +65.8% 3907 ± 4% sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
5066 ± 7% +107.6% 10516 ± 8% sched_debug.cfs_rq[1]:/.tg_load_avg
402 ± 6% +71.6% 690 ± 4% sched_debug.cfs_rq[1]:/.tg->runnable_avg
47 ± 2% +73.8% 83 ± 3% sched_debug.cfs_rq[1]:/.tg_runnable_contrib
2234 ± 2% +71.6% 3832 ± 2% sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
144 ± 15% -34.3% 94 ± 16% sched_debug.cfs_rq[2]:/.load
2213 ± 8% +94.6% 4307 ± 3% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
5026 ± 7% +105.0% 10306 ± 9% sched_debug.cfs_rq[2]:/.tg_load_avg
47 ± 9% +96.3% 93 ± 4% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
404 ± 6% +70.6% 690 ± 4% sched_debug.cfs_rq[2]:/.tg->runnable_avg
25 ± 10% -33.7% 16 ± 12% sched_debug.cfs_rq[2]:/.runnable_load_avg
591 ± 33% +176.1% 1632 ± 31% sched_debug.cfs_rq[3]:/.blocked_load_avg
620 ± 32% +170.6% 1678 ± 31% sched_debug.cfs_rq[3]:/.tg_load_contrib
5004 ± 7% +105.0% 10259 ± 9% sched_debug.cfs_rq[3]:/.tg_load_avg
408 ± 5% +69.8% 693 ± 4% sched_debug.cfs_rq[3]:/.tg->runnable_avg
13 ± 12% -25.0% 9 ± 15% sched_debug.cfs_rq[3]:/.nr_spread_over
2289 ± 4% +64.4% 3763 ± 4% sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
48 ± 4% +67.5% 81 ± 4% sched_debug.cfs_rq[3]:/.tg_runnable_contrib
60 ± 27% +50.8% 90 ± 10% sched_debug.cfs_rq[4]:/.tg_runnable_contrib
412 ± 5% +68.9% 696 ± 4% sched_debug.cfs_rq[4]:/.tg->runnable_avg
2784 ± 26% +49.6% 4166 ± 10% sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
4946 ± 8% +106.3% 10206 ± 8% sched_debug.cfs_rq[4]:/.tg_load_avg
55 ± 8% +55.9% 85 ± 7% sched_debug.cfs_rq[5]:/.tg_runnable_contrib
4907 ± 8% +106.6% 10141 ± 9% sched_debug.cfs_rq[5]:/.tg_load_avg
415 ± 6% +68.1% 698 ± 4% sched_debug.cfs_rq[5]:/.tg->runnable_avg
2564 ± 8% +53.9% 3947 ± 7% sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
191 ± 13% -33.9% 126 ± 31% sched_debug.cfs_rq[5]:/.load
2337 ± 6% +66.3% 3888 ± 8% sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
419 ± 6% +67.2% 700 ± 4% sched_debug.cfs_rq[6]:/.tg->runnable_avg
4900 ± 8% +106.2% 10103 ± 9% sched_debug.cfs_rq[6]:/.tg_load_avg
50 ± 7% +69.5% 84 ± 8% sched_debug.cfs_rq[6]:/.tg_runnable_contrib
4844 ± 7% +106.1% 9983 ± 9% sched_debug.cfs_rq[7]:/.tg_load_avg
2327 ± 7% +79.9% 4185 ± 15% sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
50 ± 7% +81.5% 90 ± 16% sched_debug.cfs_rq[7]:/.tg_runnable_contrib
423 ± 5% +66.7% 705 ± 4% sched_debug.cfs_rq[7]:/.tg->runnable_avg
295623 ± 17% -18.7% 240304 ± 6% sched_debug.cpu#0.nr_switches
107733 ± 24% -37.6% 67259 ± 10% sched_debug.cpu#0.ttwu_local
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#0.clock_task
135495 ± 18% -23.9% 103117 ± 7% sched_debug.cpu#0.sched_goidle
26384 ± 7% -20.4% 20995 ± 11% sched_debug.cpu#0.nr_load_updates
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#0.clock
296932 ± 17% -18.7% 241501 ± 6% sched_debug.cpu#0.sched_count
30451 ± 4% -26.1% 22493 ± 12% sched_debug.cpu#1.nr_load_updates
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#1.clock_task
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#1.clock
3 ± 25% +153.8% 8 ± 48% sched_debug.cpu#1.cpu_load[4]
715589 ± 5% -19.1% 578901 ± 11% sched_debug.cpu#1.avg_idle
142 ± 16% -33.3% 94 ± 16% sched_debug.cpu#2.load
5 ± 31% +114.3% 11 ± 15% sched_debug.cpu#2.cpu_load[3]
126380 ± 10% -18.8% 102610 ± 3% sched_debug.cpu#2.sched_goidle
418 ± 19% -41.2% 245 ± 12% sched_debug.cpu#2.curr->pid
277074 ± 9% -13.8% 238909 ± 3% sched_debug.cpu#2.sched_count
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#2.clock_task
277012 ± 9% -13.8% 238854 ± 3% sched_debug.cpu#2.nr_switches
9 ± 28% +89.5% 18 ± 27% sched_debug.cpu#2.cpu_load[2]
98388 ± 14% -32.3% 66641 ± 5% sched_debug.cpu#2.ttwu_local
740677 ± 2% -21.0% 584865 ± 9% sched_debug.cpu#2.avg_idle
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#2.clock
3 ± 31% +142.9% 8 ± 31% sched_debug.cpu#2.cpu_load[4]
25773 ± 9% -19.9% 20654 ± 8% sched_debug.cpu#2.nr_load_updates
690785 ± 3% -14.3% 591985 ± 6% sched_debug.cpu#3.avg_idle
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#3.clock_task
30028 ± 5% -30.5% 20877 ± 13% sched_debug.cpu#3.nr_load_updates
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#3.clock
113746 ± 18% -37.1% 71517 ± 18% sched_debug.cpu#4.ttwu_local
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#4.clock
27593 ± 7% -36.0% 17656 ± 4% sched_debug.cpu#4.nr_load_updates
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#4.clock_task
141639 ± 14% -24.8% 106524 ± 12% sched_debug.cpu#4.sched_goidle
184 ± 13% -31.3% 126 ± 31% sched_debug.cpu#5.load
28115 ± 9% -26.4% 20695 ± 15% sched_debug.cpu#5.nr_load_updates
106261 ± 1% -40.8% 62863 ± 0% sched_debug.cpu#5.clock_task
106261 ± 1% -40.8% 62863 ± 0% sched_debug.cpu#5.clock
710881 ± 5% -13.0% 618742 ± 11% sched_debug.cpu#5.avg_idle
151901 ± 11% -29.7% 106778 ± 9% sched_debug.cpu#6.sched_goidle
106261 ± 1% -40.8% 62861 ± 0% sched_debug.cpu#6.clock_task
106261 ± 1% -40.8% 62861 ± 0% sched_debug.cpu#6.clock
124231 ± 14% -42.0% 72021 ± 14% sched_debug.cpu#6.ttwu_local
27846 ± 4% -38.4% 17152 ± 3% sched_debug.cpu#6.nr_load_updates
175655 ± 9% -21.3% 138225 ± 7% sched_debug.cpu#6.ttwu_count
328538 ± 10% -25.1% 246230 ± 8% sched_debug.cpu#6.nr_switches
328606 ± 10% -25.1% 246282 ± 8% sched_debug.cpu#6.sched_count
106261 ± 1% -40.8% 62863 ± 0% sched_debug.cpu#7.clock
27346 ± 9% -26.6% 20061 ± 15% sched_debug.cpu#7.nr_load_updates
106261 ± 1% -40.8% 62863 ± 0% sched_debug.cpu#7.clock_task
106261 ± 1% -40.8% 62863 ± 0% sched_debug.cpu_clk
106261 ± 1% -40.8% 62863 ± 0% sched_debug.ktime
107044 ± 1% -40.5% 63646 ± 0% sched_debug.sched_clk
testbox/testcase/testparams: nhm4/fsmark/performance-1x-32t-1HDD-f2fs-9B-400M-fsyncBeforeClose-16d-256fpd
6a2de6eb3f267ba6  465a05fecc2c7921d50f33cd10
----------------  --------------------------
       %stddev      %change          %stddev
           \            |                \
9381393 ± 2% -52.3% 4475975 ± 15% fsmark.app_overhead
559 ± 0% +134.7% 1313 ± 0% fsmark.files_per_sec
12 ± 0% +214.6% 37 ± 1% fsmark.time.percent_of_cpu_this_job_got
183.94 ± 0% -57.3% 78.57 ± 0% fsmark.time.elapsed_time.max
183.94 ± 0% -57.3% 78.57 ± 0% fsmark.time.elapsed_time
1839298 ± 0% +4.3% 1918608 ± 0% fsmark.time.file_system_outputs
22.50 ± 0% +30.0% 29.25 ± 0% fsmark.time.system_time
761103 ± 0% +37.7% 1048153 ± 0% fsmark.time.voluntary_context_switches
19670 ± 0% -30.8% 13615 ± 3% fsmark.time.involuntary_context_switches
206.26 ± 1% -51.7% 99.57 ± 0% uptime.boot
367 ± 5% +42.1% 522 ± 1% uptime.idle
47369 ± 0% -65.4% 16377 ± 0% softirqs.BLOCK
29294 ± 2% -13.9% 25233 ± 2% softirqs.RCU
2.29 ± 0% +151.2% 5.76 ± 0% turbostat.%Busy
68 ± 0% +174.0% 187 ± 0% turbostat.Avg_MHz
40.65 ± 1% +82.9% 74.36 ± 0% turbostat.CPU%c1
48.36 ± 0% -76.3% 11.44 ± 4% turbostat.CPU%c3
4943 ± 0% +37.1% 6778 ± 0% vmstat.io.bo
14 ± 2% -33.9% 9 ± 11% vmstat.procs.b
5720 ± 0% +112.1% 12131 ± 0% vmstat.system.in
18895 ± 0% +135.5% 44500 ± 0% vmstat.system.cs
58365 ± 2% -8.8% 53258 ± 0% meminfo.DirectMap4k
1541 ± 0% -60.0% 616 ± 0% meminfo.Mlocked
1541 ± 0% -60.0% 616 ± 0% meminfo.Unevictable
183.94 ± 0% -57.3% 78.57 ± 0% time.elapsed_time.max
183.94 ± 0% -57.3% 78.57 ± 0% time.elapsed_time
19670 ± 0% -30.8% 13615 ± 3% time.involuntary_context_switches
12 ± 0% +214.6% 37 ± 1% time.percent_of_cpu_this_job_got
22.50 ± 0% +30.0% 29.25 ± 0% time.system_time
1.13 ± 2% -28.6% 0.80 ± 3% time.user_time
761103 ± 0% +37.7% 1048153 ± 0% time.voluntary_context_switches
982895 ± 0% -28.3% 704285 ± 0% cpuidle.C1-NHM.usage
3.274e+08 ± 1% -14.2% 2.81e+08 ± 1% cpuidle.C1-NHM.time
180656 ± 3% +254.8% 641058 ± 0% cpuidle.C1E-NHM.usage
42872379 ± 2% +96.4% 84210125 ± 0% cpuidle.C1E-NHM.time
373512 ± 0% -60.1% 149080 ± 1% cpuidle.C3-NHM.usage
7.652e+08 ± 0% -85.6% 1.099e+08 ± 2% cpuidle.C3-NHM.time
3.122e+08 ± 3% -59.8% 1.254e+08 ± 1% cpuidle.C6-NHM.time
134366 ± 2% -25.7% 99891 ± 1% cpuidle.C6-NHM.usage
2400 ± 1% +80.8% 4339 ± 1% cpuidle.POLL.usage
143 ± 2% -48.3% 74 ± 6% proc-vmstat.nr_dirty
385 ± 0% -60.0% 154 ± 0% proc-vmstat.nr_mlock
385 ± 0% -60.0% 154 ± 0% proc-vmstat.nr_unevictable
230004 ± 0% -40.3% 137234 ± 0% proc-vmstat.nr_written
436867 ± 0% -25.6% 325170 ± 0% proc-vmstat.numa_hit
436867 ± 0% -25.6% 325170 ± 0% proc-vmstat.numa_local
24976 ± 0% +79.6% 44865 ± 0% proc-vmstat.pgactivate
467392 ± 0% -24.9% 350901 ± 0% proc-vmstat.pgalloc_dma32
268016 ± 0% -51.5% 129939 ± 0% proc-vmstat.pgfault
226937 ± 1% -51.6% 109882 ± 1% proc-vmstat.pgfree
919916 ± 0% -40.3% 548911 ± 0% proc-vmstat.pgpgout
283 ± 3% +131.3% 655 ± 5% sched_debug.cfs_rq[0]:/.tg->runnable_avg
7165 ± 5% +85.5% 13292 ± 2% sched_debug.cfs_rq[0]:/.tg_load_avg
6701 ± 7% +11.4% 7465 ± 5% sched_debug.cfs_rq[0]:/.min_vruntime
38 ± 6% +133.6% 88 ± 5% sched_debug.cfs_rq[0]:/.tg_runnable_contrib
1783 ± 6% +129.9% 4100 ± 5% sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
7165 ± 5% +85.2% 13270 ± 2% sched_debug.cfs_rq[1]:/.tg_load_avg
285 ± 3% +130.6% 658 ± 5% sched_debug.cfs_rq[1]:/.tg->runnable_avg
41 ± 24% +103.0% 83 ± 5% sched_debug.cfs_rq[1]:/.tg_runnable_contrib
4678 ± 4% +19.4% 5586 ± 5% sched_debug.cfs_rq[1]:/.min_vruntime
1920 ± 24% +100.2% 3843 ± 5% sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
4979 ± 7% +14.6% 5704 ± 6% sched_debug.cfs_rq[2]:/.min_vruntime
1682 ± 5% +127.1% 3820 ± 6% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
7133 ± 5% +86.0% 13267 ± 2% sched_debug.cfs_rq[2]:/.tg_load_avg
36 ± 6% +129.9% 82 ± 6% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
287 ± 3% +130.2% 662 ± 5% sched_debug.cfs_rq[2]:/.tg->runnable_avg
7125 ± 5% +82.1% 12973 ± 2% sched_debug.cfs_rq[3]:/.tg_load_avg
290 ± 4% +128.4% 663 ± 5% sched_debug.cfs_rq[3]:/.tg->runnable_avg
1624 ± 4% +134.6% 3810 ± 8% sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
34 ± 5% +140.9% 82 ± 9% sched_debug.cfs_rq[3]:/.tg_runnable_contrib
32 ± 10% +149.2% 81 ± 6% sched_debug.cfs_rq[4]:/.tg_runnable_contrib
293 ± 4% +127.2% 665 ± 5% sched_debug.cfs_rq[4]:/.tg->runnable_avg
1172 ± 29% +137.4% 2782 ± 18% sched_debug.cfs_rq[4]:/.blocked_load_avg
3666 ± 4% +20.7% 4426 ± 6% sched_debug.cfs_rq[4]:/.min_vruntime
1214 ± 31% +138.4% 2893 ± 17% sched_debug.cfs_rq[4]:/.tg_load_contrib
1535 ± 10% +143.7% 3741 ± 6% sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
7125 ± 5% +82.3% 12986 ± 2% sched_debug.cfs_rq[4]:/.tg_load_avg
34 ± 17% +133.8% 79 ± 15% sched_debug.cfs_rq[5]:/.tg_runnable_contrib
7121 ± 5% +82.8% 13020 ± 2% sched_debug.cfs_rq[5]:/.tg_load_avg
294 ± 4% +127.2% 669 ± 5% sched_debug.cfs_rq[5]:/.tg->runnable_avg
1034 ± 32% +123.5% 2312 ± 23% sched_debug.cfs_rq[5]:/.tg_load_contrib
3739 ± 2% +28.2% 4792 ± 12% sched_debug.cfs_rq[5]:/.min_vruntime
1589 ± 16% +131.5% 3679 ± 15% sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
1001 ± 32% +128.8% 2292 ± 23% sched_debug.cfs_rq[5]:/.blocked_load_avg
3851 ± 1% +19.0% 4583 ± 5% sched_debug.cfs_rq[6]:/.min_vruntime
1882 ± 5% +94.7% 3666 ± 5% sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
297 ± 4% +126.0% 671 ± 5% sched_debug.cfs_rq[6]:/.tg->runnable_avg
7120 ± 5% +82.7% 13005 ± 2% sched_debug.cfs_rq[6]:/.tg_load_avg
40 ± 5% +95.1% 79 ± 5% sched_debug.cfs_rq[6]:/.tg_runnable_contrib
3826 ± 2% +17.5% 4495 ± 6% sched_debug.cfs_rq[7]:/.min_vruntime
7078 ± 5% +83.5% 12987 ± 1% sched_debug.cfs_rq[7]:/.tg_load_avg
1517 ± 9% +155.4% 3875 ± 14% sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
32 ± 9% +162.5% 84 ± 15% sched_debug.cfs_rq[7]:/.tg_runnable_contrib
299 ± 3% +125.0% 673 ± 5% sched_debug.cfs_rq[7]:/.tg->runnable_avg
331525 ± 3% -34.0% 218740 ± 5% sched_debug.cpu#0.nr_switches
142801 ± 4% -48.4% 73667 ± 5% sched_debug.cpu#0.ttwu_local
678550 ± 7% -32.7% 456720 ± 34% sched_debug.cpu#0.avg_idle
313488 ± 1% -31.1% 215938 ± 4% sched_debug.cpu#0.ttwu_count
111534 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#0.clock_task
155708 ± 3% -38.2% 96199 ± 5% sched_debug.cpu#0.sched_goidle
28188 ± 1% -33.4% 18783 ± 3% sched_debug.cpu#0.nr_load_updates
111534 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#0.clock
331695 ± 3% -34.0% 218808 ± 5% sched_debug.cpu#0.sched_count
446397 ± 5% -18.7% 362850 ± 3% sched_debug.cpu#1.sched_count
208484 ± 6% -13.1% 181078 ± 3% sched_debug.cpu#1.ttwu_count
29358 ± 2% -32.6% 19790 ± 1% sched_debug.cpu#1.nr_load_updates
111537 ± 2% -49.4% 56461 ± 3% sched_debug.cpu#1.clock_task
111537 ± 2% -49.4% 56461 ± 3% sched_debug.cpu#1.clock
446310 ± 5% -18.7% 362793 ± 3% sched_debug.cpu#1.nr_switches
214342 ± 6% -22.4% 166319 ± 3% sched_debug.cpu#1.sched_goidle
166966 ± 8% -26.5% 122663 ± 4% sched_debug.cpu#1.ttwu_local
707893 ± 7% -31.1% 487966 ± 9% sched_debug.cpu#1.avg_idle
198095 ± 3% -11.9% 174593 ± 9% sched_debug.cpu#2.sched_goidle
111536 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#2.clock_task
151594 ± 4% -12.4% 132847 ± 11% sched_debug.cpu#2.ttwu_local
711629 ± 8% -48.7% 364936 ± 30% sched_debug.cpu#2.avg_idle
111536 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#2.clock
28629 ± 4% -32.1% 19433 ± 7% sched_debug.cpu#2.nr_load_updates
164980 ± 11% -32.8% 110931 ± 7% sched_debug.cpu#3.ttwu_local
441918 ± 8% -24.3% 334685 ± 6% sched_debug.cpu#3.sched_count
441832 ± 8% -24.3% 334627 ± 6% sched_debug.cpu#3.nr_switches
718484 ± 5% -34.8% 468267 ± 21% sched_debug.cpu#3.avg_idle
207428 ± 9% -18.2% 169613 ± 6% sched_debug.cpu#3.ttwu_count
111536 ± 2% -49.4% 56462 ± 3% sched_debug.cpu#3.clock_task
29300 ± 4% -33.8% 19394 ± 7% sched_debug.cpu#3.nr_load_updates
111536 ± 2% -49.4% 56462 ± 3% sched_debug.cpu#3.clock
212194 ± 9% -28.0% 152779 ± 6% sched_debug.cpu#3.sched_goidle
2015 ± 1% -36.0% 1290 ± 1% sched_debug.cpu#4.nr_uninterruptible
68219 ± 3% +16.4% 79412 ± 3% sched_debug.cpu#4.ttwu_count
111535 ± 2% -49.4% 56461 ± 3% sched_debug.cpu#4.clock
15884 ± 0% -35.3% 10276 ± 5% sched_debug.cpu#4.nr_load_updates
111535 ± 2% -49.4% 56461 ± 3% sched_debug.cpu#4.clock_task
101874 ± 19% -26.3% 75129 ± 7% sched_debug.cpu#5.sched_goidle
67242 ± 29% -39.0% 41027 ± 11% sched_debug.cpu#5.ttwu_local
16499 ± 7% -36.4% 10496 ± 5% sched_debug.cpu#5.nr_load_updates
218670 ± 17% -19.5% 175923 ± 6% sched_debug.cpu#5.nr_switches
3 ± 25% +123.1% 7 ± 46% sched_debug.cpu#5.cpu_load[4]
111534 ± 2% -49.4% 56454 ± 3% sched_debug.cpu#5.clock_task
111534 ± 2% -49.4% 56454 ± 3% sched_debug.cpu#5.clock
748334 ± 5% -24.6% 563910 ± 15% sched_debug.cpu#5.avg_idle
218737 ± 17% -19.6% 175958 ± 6% sched_debug.cpu#5.sched_count
111534 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#6.clock_task
111534 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#6.clock
15913 ± 1% -31.5% 10903 ± 10% sched_debug.cpu#6.nr_load_updates
697624 ± 9% -19.9% 559027 ± 3% sched_debug.cpu#6.avg_idle
199869 ± 3% -17.1% 165770 ± 8% sched_debug.cpu#7.sched_count
199797 ± 3% -17.0% 165733 ± 8% sched_debug.cpu#7.nr_switches
111533 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#7.clock
16124 ± 2% -36.8% 10191 ± 7% sched_debug.cpu#7.nr_load_updates
57841 ± 6% -37.1% 36364 ± 13% sched_debug.cpu#7.ttwu_local
92125 ± 3% -24.0% 70057 ± 9% sched_debug.cpu#7.sched_goidle
111533 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#7.clock_task
111537 ± 2% -49.4% 56462 ± 3% sched_debug.cpu_clk
111387 ± 2% -49.4% 56313 ± 3% sched_debug.ktime
111537 ± 2% -49.4% 56462 ± 3% sched_debug.sched_clk
lkp-st02: Core2
Memory: 8G
nhm4: Nehalem
Memory: 4G
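
For readers decoding the testparams strings above: 9-byte ("9B") files fit
entirely in the f2fs inode's inline data area, so with the tested commit each
file's fsync can complete without allocating and writing a separate data
block, which is consistent with the files_per_sec gains in both tables. As a
rough sketch only (the authoritative command is in the attached "reproduce"
file; the mount point and file count are placeholders, not taken from the
attachment), the parameters map to an fs_mark invocation along these lines:

        # 32t: 32 threads; 9B: 9-byte files; fsyncBeforeClose: fs_mark
        # sync method 1; 16d: 16 subdirectories; 256fpd: 256 files per
        # directory.  /mnt/f2fs and $NR_FILES (derived from the 400M
        # test size) are assumed placeholders.
        fs_mark -d /mnt/f2fs -t 32 -s 9 -S 1 -D 16 -N 256 -n "$NR_FILES"
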
vmstat.system.cs
60000 ++------------------------------------------------------------------+
| O O O O O O O O O O O O O O O O O O
50000 O+ O O O |
| |
| |
40000 ++ |
| .*. |
30000 ++.*.. *.. .*. *.. |
*. * : *. |
20000 ++ : : *..*..*..*..*.*..*..* |
| : : |
| : : |
10000 ++ : : |
| : : |
0 ++-O-----*-*--O-----O-----------------------------------------------+
softirqs.BLOCK
120000 ++-----------------------------------------------------------------+
| |
100000 *+.*..* *..*.*..*..*.. |
| : : .*..*.. .*. .* |
| : : *..* *. *. |
80000 ++ : : |
| : : |
60000 ++ : : |
| : : |
40000 ++ : : |
O O:O O : O O O O O O O O O O O O O O O O O O
| : : |
20000 ++ : : |
| : : |
0 ++-O----*--*--O-----O----------------------------------------------+
fsmark.files_per_sec
1800 ++-------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O
1600 ++ |
1400 ++ |
| |
1200 ++ |
1000 ++ .*.. |
*..*..* *..*..*. *.. |
800 ++ : : |
600 ++ : : *..*.*..*..*..*..*..* |
| : : |
400 ++ : : |
200 ++ : : |
| : : |
0 ++-O-----*--*-O-----O------------------------------------------------+
fsmark.time.percent_of_cpu_this_job_got
45 ++---------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O
40 ++ |
35 ++ |
| |
30 ++ |
25 ++ |
| *.. .*.. |
20 *+.*..* : *..*. *.. |
15 ++ : : *..*.*..*..*..*..*..* |
| : : |
10 ++ : : |
5 ++ : : |
| : : |
0 ++-O-----*--*--O-----O-------------------------------------------------+
fsmark.time.voluntary_context_switches
1.2e+06 ++--------------------O-------O--------O-----------------------O--O
O O O O O O O O O O O O O O O O O |
1e+06 ++ |
*..*..* *.*..*..*..*.*..*..*..*.*..*..*..* |
| : : |
800000 ++ : : |
| : : |
600000 ++ : : |
| : : |
400000 ++ : : |
| : : |
| : : |
200000 ++ : : |
| : : |
0 ++-O----*--*--O----O----------------------------------------------+
fsmark.time.involuntary_context_switches
16000 ++------------------------------------------------------------------O
O O O O O O O O O O O O O O |
14000 ++ O O O O O O O |
12000 ++ |
| |
10000 ++ *.. |
*..*..* : *..*..*.*.. .*.. |
8000 ++ : : *..*..*..*..*.*. * |
| : : |
6000 ++ : : |
4000 ++ : : |
| : : |
2000 ++ : : |
| : : |
0 ++-O-----*-*--O-----O-----------------------------------------------+
time.system_time
30 ++---------------------------------------------------------------------+
| |
25 O+ O O O O O O O O O O O O O O O O O O O O O
| .*..*.*..*..*..*..*..* |
*..*..* *..*..*..*..*. |
20 ++ : : |
| : : |
15 ++ : : |
| : : |
10 ++ : : |
| : : |
| : : |
5 ++ : : |
| : : |
0 ++-O-----*--*--O-----O-------------------------------------------------+
time.percent_of_cpu_this_job_got
45 ++---------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O
40 ++ |
35 ++ |
| |
30 ++ |
25 ++ |
| *.. .*.. |
20 *+.*..* : *..*. *.. |
15 ++ : : *..*.*..*..*..*..*..* |
| : : |
10 ++ : : |
5 ++ : : |
| : : |
0 ++-O-----*--*--O-----O-------------------------------------------------+
time.voluntary_context_switches
1.2e+06 ++--------------------O-------O--------O-----------------------O--O
O O O O O O O O O O O O O O O O O |
1e+06 ++ |
*..*..* *.*..*..*..*.*..*..*..*.*..*..*..* |
| : : |
800000 ++ : : |
| : : |
600000 ++ : : |
| : : |
400000 ++ : : |
| : : |
| : : |
200000 ++ : : |
| : : |
0 ++-O----*--*--O----O----------------------------------------------+
time.involuntary_context_switches
16000 ++------------------------------------------------------------------O
O O O O O O O O O O O O O O |
14000 ++ O O O O O O O |
12000 ++ |
| |
10000 ++ *.. |
*..*..* : *..*..*.*.. .*.. |
8000 ++ : : *..*..*..*..*.*. * |
| : : |
6000 ++ : : |
4000 ++ : : |
| : : |
2000 ++ : : |
| : : |
0 ++-O-----*-*--O-----O-----------------------------------------------+
proc-vmstat.nr_dirty
180 ++--------------------------------------------------------------------+
| *.. *.. |
160 ++. .*..*.. .*..*..*.. .. .*..* |
140 *+ * *.*. *. * *. |
| : : |
120 ++ : : |
100 ++ : : O O O |
| O: O :O O O O O O |
80 ++ : : O O O O O O O
60 O+ :O : O O |
| : : |
40 ++ : : |
20 ++ : : |
| : : |
0 ++-O-----*--*--O----O-------------------------------------------------+
proc-vmstat.nr_written
250000 ++-----------------------------------------------------------------+
| .* .*.. .*. |
*..*. : *..*.*..*..*..*..*.*. *. *..* |
200000 ++ : : |
| : : |
| : : |
150000 ++ : :O |
O O:O O : O O O O O O O O O O O O O O O O O
100000 ++ : : |
| : : |
| : : |
50000 ++ : : |
| : : |
| : : |
0 ++-O----*--*--O-----O----------------------------------------------+
proc-vmstat.pgpgout
1e+06 ++-----------------------------------------------------------------+
900000 ++ .* .*.. .*. |
*..*. : *..*.*..*..*..*..*.*. *. *..* |
800000 ++ : : |
700000 ++ : : |
| : : |
600000 ++ : :O |
500000 O+ O:O O : O O O O O O O O O O O O O O O O O
400000 ++ : : |
| : : |
300000 ++ : : |
200000 ++ : : |
| : : |
100000 ++ : : |
0 ++-O----*--*--O-----O----------------------------------------------+
proc-vmstat.pgactivate
45000 ++------------------------------------------------------------------+
| |
40000 O+ O O O O O O O O O O O O O O O O O O O
35000 ++ O O |
| |
30000 *+.*..* *..*..*..*.*.. .*.. .*.. |
25000 ++ : : *. *..*..*.*. * |
| : : |
20000 ++ : : |
15000 ++ : : |
| : : |
10000 ++ : : |
5000 ++ : : |
| : : |
0 ++-O-----*-*--O-----O-----------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:

        apt-get install ruby
        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/setup-local job.yaml  # the job file attached in this email
        bin/run-local job.yaml
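
On a kernel containing this commit, the new default can be checked, and the
old behavior restored for an A/B comparison, via the f2fs mount options (a
hedged sketch; /dev/sdb1 is a placeholder, and the noinline_data option is
assumed to be available in this tree):

        mount -t f2fs /dev/sdb1 /mnt/f2fs         # inline_data is now on by default
        grep f2fs /proc/mounts                    # lists the active mount options
        mount -o remount,noinline_data /mnt/f2fs  # disable inline data again
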
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
Attachments: job.yaml (text/plain, 2199 bytes), reproduce (text/plain, 538 bytes)
_______________________________________________
LKP mailing list
LKP@...ux.intel.com