Message-ID: <877fnjp5py.fsf@yhuang-dev.intel.com>
Date: Tue, 22 Sep 2015 10:07:05 +0800
From: kernel test robot <ying.huang@...el.com>
To: Byungchul Park <byungchul.park@....com>
Cc: Ingo Molnar <mingo@...nel.org>
Subject: [lkp] [sched/fair] 50a2a3b246: 6.8% unixbench.score
FYI, we noticed the changes below on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 50a2a3b246149d041065a67ccb3e98145f780a2f ("sched/fair: Have task_move_group_fair() unconditionally add the entity load to the runqueue")
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
nhm-white2/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/shell1
commit:
a05e8c51ff097ff73ec2947631d9102283545f7c
50a2a3b246149d041065a67ccb3e98145f780a2f
a05e8c51ff097ff7 50a2a3b246149d041065a67ccb
---------------- --------------------------
%stddev %change %stddev
\ | \
10017 ± 0% +6.8% 10695 ± 0% unixbench.score
2659497 ± 1% +9.9% 2923213 ± 0% unixbench.time.involuntary_context_switches
1.42e+08 ± 0% +5.4% 1.497e+08 ± 0% unixbench.time.minor_page_faults
359.00 ± 2% +9.7% 394.00 ± 1% unixbench.time.percent_of_cpu_this_job_got
897.18 ± 0% +6.5% 955.67 ± 0% unixbench.time.system_time
495.06 ± 0% +10.6% 547.62 ± 0% unixbench.time.user_time
4510842 ± 0% +7.5% 4847483 ± 0% unixbench.time.voluntary_context_switches
480122 ± 0% +10.3% 529647 ± 0% softirqs.RCU
495.06 ± 0% +10.6% 547.62 ± 0% time.user_time
1853 ± 3% -8.4% 1697 ± 1% uptime.idle
15016 ± 9% +8.6% 16300 ± 1% vmstat.system.in
12355927 ±115% -67.9% 3962132 ±133% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
18212179 ±120% -78.2% 3962132 ±133% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
46.62 ± 2% +9.2% 50.91 ± 1% turbostat.%Busy
1338 ± 2% +9.4% 1463 ± 1% turbostat.Avg_MHz
21.04 ± 2% -13.9% 18.12 ± 0% turbostat.CPU%c1
1.065e+08 ± 0% -24.9% 79962725 ± 0% cpuidle.C1E-NHM.time
1298155 ± 0% -13.4% 1123926 ± 0% cpuidle.C1E-NHM.usage
7.87e+08 ± 0% -9.1% 7.152e+08 ± 0% cpuidle.C3-NHM.time
2381534 ± 0% -25.0% 1786239 ± 0% cpuidle.C3-NHM.usage
1.50 ± 74% +28083.3% 422.75 ± 22% sched_debug.cfs_rq[0]:/.load_avg
1347362 ± 9% -63.5% 491982 ± 0% sched_debug.cfs_rq[0]:/.min_vruntime
122.50 ±129% +1752.2% 2269 ± 4% sched_debug.cfs_rq[0]:/.tg_load_avg
2.25 ± 36% +13633.3% 309.00 ± 18% sched_debug.cfs_rq[0]:/.tg_load_avg_contrib
10.50 ± 65% +7852.4% 835.00 ± 12% sched_debug.cfs_rq[0]:/.util_avg
0.75 ±110% +54333.3% 408.25 ± 22% sched_debug.cfs_rq[1]:/.load_avg
1437995 ± 2% -65.6% 494833 ± 0% sched_debug.cfs_rq[1]:/.min_vruntime
125.50 ±125% +1706.8% 2267 ± 4% sched_debug.cfs_rq[1]:/.tg_load_avg
1.25 ± 66% +24020.0% 301.50 ± 12% sched_debug.cfs_rq[1]:/.tg_load_avg_contrib
8.50 ± 55% +9700.0% 833.00 ± 12% sched_debug.cfs_rq[1]:/.util_avg
72.50 ± 30% +114.1% 155.25 ± 24% sched_debug.cfs_rq[2]:/.load
2.00 ± 70% +15875.0% 319.50 ± 16% sched_debug.cfs_rq[2]:/.load_avg
1382532 ± 4% -64.6% 489518 ± 1% sched_debug.cfs_rq[2]:/.min_vruntime
125.75 ±127% +1704.8% 2269 ± 4% sched_debug.cfs_rq[2]:/.tg_load_avg
2.50 ± 60% +11280.0% 284.50 ± 3% sched_debug.cfs_rq[2]:/.tg_load_avg_contrib
9.75 ± 33% +7459.0% 737.00 ± 6% sched_debug.cfs_rq[2]:/.util_avg
4.25 ± 93% +9800.0% 420.75 ± 25% sched_debug.cfs_rq[3]:/.load_avg
1402802 ± 2% -64.9% 492660 ± 1% sched_debug.cfs_rq[3]:/.min_vruntime
127.75 ±124% +1644.0% 2228 ± 1% sched_debug.cfs_rq[3]:/.tg_load_avg
4.50 ± 83% +6138.9% 280.75 ± 5% sched_debug.cfs_rq[3]:/.tg_load_avg_contrib
14.50 ± 23% +5537.9% 817.50 ± 12% sched_debug.cfs_rq[3]:/.util_avg
52365 ± 5% +16.3% 60890 ± 1% sched_debug.cfs_rq[4]:/.exec_clock
1.50 ±100% +27566.7% 415.00 ± 25% sched_debug.cfs_rq[4]:/.load_avg
662638 ± 5% -23.7% 505740 ± 1% sched_debug.cfs_rq[4]:/.min_vruntime
-684773 ±-13% -102.0% 13746 ± 28% sched_debug.cfs_rq[4]:/.spread0
129.00 ±124% +1626.2% 2226 ± 1% sched_debug.cfs_rq[4]:/.tg_load_avg
2.25 ± 72% +11977.8% 271.75 ± 4% sched_debug.cfs_rq[4]:/.tg_load_avg_contrib
13.25 ± 62% +6213.2% 836.50 ± 12% sched_debug.cfs_rq[4]:/.util_avg
52235 ± 5% +14.6% 59862 ± 2% sched_debug.cfs_rq[5]:/.exec_clock
3.00 ± 52% +13558.3% 409.75 ± 23% sched_debug.cfs_rq[5]:/.load_avg
661534 ± 5% -22.7% 511492 ± 0% sched_debug.cfs_rq[5]:/.min_vruntime
-685906 ±-15% -102.8% 19497 ± 28% sched_debug.cfs_rq[5]:/.spread0
134.25 ±119% +1559.4% 2227 ± 1% sched_debug.cfs_rq[5]:/.tg_load_avg
3.00 ± 52% +8616.7% 261.50 ± 4% sched_debug.cfs_rq[5]:/.tg_load_avg_contrib
15.00 ± 39% +5310.0% 811.50 ± 11% sched_debug.cfs_rq[5]:/.util_avg
51367 ± 5% +18.2% 60717 ± 0% sched_debug.cfs_rq[6]:/.exec_clock
99.25 ±162% +411.3% 507.50 ± 20% sched_debug.cfs_rq[6]:/.load_avg
633721 ± 5% -20.3% 505177 ± 0% sched_debug.cfs_rq[6]:/.min_vruntime
-713722 ±-15% -101.8% 13179 ± 14% sched_debug.cfs_rq[6]:/.spread0
130.00 ±122% +1615.4% 2230 ± 1% sched_debug.cfs_rq[6]:/.tg_load_avg
16.00 ± 44% +5578.1% 908.50 ± 11% sched_debug.cfs_rq[6]:/.util_avg
51705 ± 5% +15.5% 59737 ± 1% sched_debug.cfs_rq[7]:/.exec_clock
9.25 ±118% +3883.8% 368.50 ± 31% sched_debug.cfs_rq[7]:/.load_avg
642655 ± 5% -20.6% 510351 ± 1% sched_debug.cfs_rq[7]:/.min_vruntime
-704795 ±-15% -102.6% 18352 ± 49% sched_debug.cfs_rq[7]:/.spread0
129.25 ±123% +1620.5% 2223 ± 1% sched_debug.cfs_rq[7]:/.tg_load_avg
10.00 ±104% +2460.0% 256.00 ± 1% sched_debug.cfs_rq[7]:/.tg_load_avg_contrib
24.75 ± 57% +3009.1% 769.50 ± 16% sched_debug.cfs_rq[7]:/.util_avg
46.00 ± 17% +96.2% 90.25 ± 69% sched_debug.cpu#0.cpu_load[0]
53.00 ± 12% +34.0% 71.00 ± 22% sched_debug.cpu#0.load
775771 ±114% -68.6% 243664 ± 1% sched_debug.cpu#0.sched_goidle
41.50 ± 13% +32.5% 55.00 ± 13% sched_debug.cpu#1.cpu_load[2]
41.50 ± 9% +30.7% 54.25 ± 9% sched_debug.cpu#1.cpu_load[3]
40.00 ± 8% +33.8% 53.50 ± 6% sched_debug.cpu#1.cpu_load[4]
246556 ± 3% +228.0% 808810 ± 69% sched_debug.cpu#1.ttwu_count
93825 ± 6% +610.3% 666397 ± 86% sched_debug.cpu#1.ttwu_local
36.25 ± 26% +40.7% 51.00 ± 12% sched_debug.cpu#2.cpu_load[1]
41.25 ± 19% +29.1% 53.25 ± 13% sched_debug.cpu#2.cpu_load[2]
0.00 ± 0% +Inf% 1.25 ± 34% sched_debug.cpu#2.nr_running
774754 ± 67% -68.4% 245160 ± 0% sched_debug.cpu#2.sched_goidle
41.50 ± 17% +43.4% 59.50 ± 6% sched_debug.cpu#4.cpu_load[0]
85775 ± 3% +13.1% 97001 ± 1% sched_debug.cpu#4.ttwu_local
28.25 ± 27% +84.1% 52.00 ± 12% sched_debug.cpu#5.cpu_load[0]
100187 ± 3% +15.2% 115465 ± 6% sched_debug.cpu#5.nr_load_updates
501003 ± 4% +416.5% 2587654 ± 80% sched_debug.cpu#5.nr_switches
501188 ± 4% +416.3% 2587851 ± 80% sched_debug.cpu#5.sched_count
194683 ± 6% +538.1% 1242234 ± 83% sched_debug.cpu#5.ttwu_count
80363 ± 6% +1303.0% 1127469 ± 93% sched_debug.cpu#5.ttwu_local
40.50 ± 13% +16.0% 47.00 ± 8% sched_debug.cpu#7.cpu_load[4]
1.28 ±171% -99.5% 0.01 ± 64% sched_debug.rt_rq[4]:/.rt_time
0.00 ± 59% +42561.3% 2.09 ± 99% sched_debug.rt_rq[5]:/.rt_time
nhm-white2: Nehalem
Memory: 4G
cpuidle.C1E-NHM.time
1.1e+08 ++---------------------------------------------------------------+
*.*.. .*.. .*.*..*.*..*.*.*.. .*. .*.. .*.. .*. .*. .*.*
1.05e+08 ++ * * *.*. *..*.* * * *. *. |
| |
1e+08 ++ |
| O O O O |
9.5e+07 O+O O O O O |
| O |
9e+07 ++ |
| O O O |
8.5e+07 ++ O |
| |
8e+07 ++ O O O O |
| |
7.5e+07 ++---------------------------------------------------------------+
cpuidle.C3-NHM.usage
2.4e+06 *+*--*----*-*--*-*----*-*--*-*--*-*----*----*----*---------*----*-*
| *. * * * *. *..*.*. *. |
2.3e+06 ++ |
| |
2.2e+06 ++ |
| |
2.1e+06 ++ |
| |
2e+06 O+O O O O O O O O O |
| O |
1.9e+06 ++ |
| O O O O |
1.8e+06 ++ O O O O |
| |
1.7e+06 ++----------------------------------------------------------------+
unixbench.score
10800 ++------------------------------------------------------------------+
| O O |
10700 ++ O O |
10600 ++ O O O O |
| |
10500 ++ |
10400 ++ |
O O O O O O O O O O O |
10300 ++ |
10200 ++ |
| |
10100 ++ |
10000 *+.*.*..*.*..*.*..*.*..*. .*. .*..*.*..*.*..*
| *. *..*.*..*.*..*.*..*.*..* |
9900 ++------------------------------------------------------------------+
unixbench.time.user_time
550 ++-------------------------------------O------------------------------+
| O O O |
540 ++ O O O |
| O |
| |
530 ++ |
O O O O O O O O O O O |
520 ++ |
| |
510 ++ |
| |
| |
500 ++ .*.. |
*..*.*..*.*..*..*.*..*.*..*..*.*..*.*..*.*..*..*.*..*.*. *.*..*.*..*
490 ++--------------------------------------------------------------------+
unixbench.time.system_time
960 ++--------------------------------------------------------------------+
| O O O O |
950 ++ O O O O |
| |
940 ++ |
O O O O O O O O O O |
930 ++ O |
| |
920 ++ |
| |
910 ++ |
| |
900 *+.*.*..*. .*..*. .*. .*..*.*..*.*..*.*..*..*.*..*.*..*..*. .*
| *. *. *. *..*.*. |
890 ++--------------------------------------------------------------------+
unixbench.time.minor_page_faults
1.5e+08 ++----------------------------------O--O-------------------------+
1.49e+08 ++ O O O |
| O O O |
1.48e+08 ++ |
1.47e+08 ++ |
| |
1.46e+08 O+O O O O O O O O O O |
1.45e+08 ++ |
1.44e+08 ++ |
| |
1.43e+08 ++ |
1.42e+08 *+*..*.*..*.*.*.. .*..*. .*.. .*.*..*.*..*.*
| * * .*.*..*.*.*..*.*..* |
1.41e+08 ++ *.*. |
1.4e+08 ++---------------------------------------------------------------+
unixbench.time.voluntary_context_switches
4.9e+06 ++---------------------------------------------------------------+
| O |
4.85e+06 ++ O O O |
4.8e+06 ++ |
| O O O O |
4.75e+06 ++ |
4.7e+06 ++ |
| |
4.65e+06 O+O O O O O O O O O O |
4.6e+06 ++ |
| |
4.55e+06 ++ |
4.5e+06 *+*..*.*..*.*.*..*.*..*.*.*.. .*.*..*.*..*.*
| *.*..*.*..*.*.*..*.*..* |
4.45e+06 ++---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
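Before attempting a local reproduction, the headline +6.8% can be sanity-checked against the two mean scores reported in the comparison table above (10017 for the parent commit, 10695 for the tested commit). A minimal sketch, using only the means from this report:

```shell
# Recompute the relative change of unixbench.score from the two
# reported means (base = parent commit, new = tested commit).
base=10017
new=10695
delta=$(awk -v b="$base" -v n="$new" 'BEGIN { printf "%.1f", (n - b) / b * 100 }')
echo "unixbench.score change: +${delta}%"   # prints +6.8%, matching the report
```

The same arithmetic applies to any row of the table; the ± columns give the relative standard deviation across repeated runs, so changes well outside the stddev (as here, with ± 0%) are the meaningful ones.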
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
View attachment "job.yaml" of type "text/plain" (3113 bytes)
View attachment "reproduce" of type "text/plain" (14 bytes)