Message-ID: <87d1vzpmsx.fsf@yhuang-dev.intel.com>
Date: Wed, 28 Oct 2015 13:46:54 +0800
From: kernel test robot <ying.huang@...ux.intel.com>
To: Byungchul Park <byungchul.park@....com>
Cc: Ingo Molnar <mingo@...nel.org>
Subject: [lkp] [sched/fair] 1746babbb1: +2.3% unixbench.score
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 1746babbb15594ba2d8d8196589bbbc2b5ff51c9 ("sched/fair: Have task_move_group_fair() also detach entity load from the old runqueue")
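For context: the patch makes the fair-class group-move path detach a task's
load average from the runqueue it is leaving before re-attaching it to the new
one. A minimal C sketch of that idea, assuming the detach/attach helper names
used in kernel/sched/fair.c around this series (a sketch, not the verbatim
diff):

	/*
	 * Sketch only: when a task moves to another task group, subtract its
	 * load/util contribution from the old cfs_rq before the sched_entity
	 * is re-parented, then add it to the new cfs_rq. Without the detach,
	 * the old runqueue keeps accounting load for a task it no longer has.
	 */
	static void task_move_group_fair(struct task_struct *p)
	{
		detach_task_cfs_rq(p);        /* drop p's load from the old cfs_rq */
		set_task_rq(p, task_cpu(p));  /* re-parent p's sched_entity */
		attach_task_cfs_rq(p);        /* account p's load on the new cfs_rq */
	}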
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
lituya/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/shell8
commit:
50a2a3b246149d041065a67ccb3e98145f780a2f
1746babbb15594ba2d8d8196589bbbc2b5ff51c9
          50a2a3b246149d04 (parent)    1746babbb15594ba2d8d819658 (patched)
          -------------------------    -------------------------------------
     old value ±%stddev     %change     new value ±%stddev     metric
25166 ± 0% +2.3% 25756 ± 0% unixbench.score
9026404 ± 0% +3.9% 9376242 ± 0% unixbench.time.involuntary_context_switches
4.683e+08 ± 0% +1.8% 4.765e+08 ± 0% unixbench.time.minor_page_faults
949.00 ± 0% +3.0% 977.25 ± 0% unixbench.time.percent_of_cpu_this_job_got
2286 ± 0% +2.8% 2349 ± 0% unixbench.time.system_time
1305 ± 0% +3.3% 1348 ± 0% unixbench.time.user_time
31346 ± 8% -10.7% 27987 ± 1% cpuidle.POLL.time
2679 ± 8% +179.4% 7486 ± 40% latency_stats.max.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
107334 ± 22% +251.7% 377450 ± 37% latency_stats.sum.walk_component.path_lookupat.filename_lookup.user_path_at_empty.vfs_fstatat.SYSC_newstat.SyS_newstat.entry_SYSCALL_64_fastpath
979.25 ± 1% +17.5% 1150 ± 2% slabinfo.files_cache.active_objs
979.25 ± 1% +17.5% 1150 ± 2% slabinfo.files_cache.num_objs
33.25 ± 1% +14.3% 38.00 ± 0% vmstat.procs.r
25287 ± 0% +5.2% 26600 ± 0% vmstat.system.in
61.09 ± 0% +2.9% 62.85 ± 0% turbostat.%Busy
2011 ± 0% +2.9% 2069 ± 0% turbostat.Avg_MHz
2.58 ± 2% -10.4% 2.32 ± 4% turbostat.CPU%c3
1.16 ± 28% +30.9% 1.52 ± 0% turbostat.RAMWatt
1363139 ± 2% +390.2% 6682436 ± 5% sched_debug.cfs_rq[0]:/.min_vruntime
508.25 ± 35% -30.8% 351.50 ± 8% sched_debug.cfs_rq[0]:/.tg_load_avg_contrib
1483779 ± 3% +248.1% 5164998 ± 0% sched_debug.cfs_rq[10]:/.min_vruntime
120613 ± 52% -1358.2% -1517619 ±-26% sched_debug.cfs_rq[10]:/.spread0
1481440 ± 3% +243.7% 5092246 ± 2% sched_debug.cfs_rq[11]:/.min_vruntime
118271 ± 54% -1444.7% -1590395 ±-26% sched_debug.cfs_rq[11]:/.spread0
1455329 ± 0% +252.8% 5133998 ± 1% sched_debug.cfs_rq[12]:/.min_vruntime
92159 ± 30% -1780.4% -1548645 ±-27% sched_debug.cfs_rq[12]:/.spread0
536.00 ± 18% -24.9% 402.50 ± 18% sched_debug.cfs_rq[13]:/.load_avg
1451734 ± 0% +254.5% 5146739 ± 0% sched_debug.cfs_rq[13]:/.min_vruntime
88561 ± 34% -1834.3% -1535905 ±-24% sched_debug.cfs_rq[13]:/.spread0
326.25 ± 21% +73.2% 565.00 ± 20% sched_debug.cfs_rq[14]:/.load_avg
1479346 ± 3% +246.9% 5132386 ± 0% sched_debug.cfs_rq[14]:/.min_vruntime
47.67 ±141% +472.7% 273.00 ± 42% sched_debug.cfs_rq[14]:/.removed_load_avg
46.00 ±141% +467.9% 261.25 ± 47% sched_debug.cfs_rq[14]:/.removed_util_avg
116171 ± 55% -1434.5% -1550271 ±-25% sched_debug.cfs_rq[14]:/.spread0
777.75 ± 9% +35.8% 1056 ± 14% sched_debug.cfs_rq[14]:/.util_avg
1451847 ± 0% +252.2% 5113656 ± 1% sched_debug.cfs_rq[15]:/.min_vruntime
88669 ± 29% -1869.5% -1569030 ±-28% sched_debug.cfs_rq[15]:/.spread0
1346003 ± 0% +413.1% 6906332 ± 0% sched_debug.cfs_rq[1]:/.min_vruntime
347.50 ± 3% -10.0% 312.75 ± 4% sched_debug.cfs_rq[1]:/.tg_load_avg_contrib
1362040 ± 2% +408.4% 6924492 ± 0% sched_debug.cfs_rq[2]:/.min_vruntime
1363305 ± 2% +394.7% 6744667 ± 4% sched_debug.cfs_rq[3]:/.min_vruntime
1.50 ±100% +400.0% 7.50 ± 22% sched_debug.cfs_rq[3]:/.nr_spread_over
1346931 ± 0% +398.4% 6712511 ± 5% sched_debug.cfs_rq[4]:/.min_vruntime
1345064 ± 0% +416.2% 6942624 ± 0% sched_debug.cfs_rq[5]:/.min_vruntime
36.00 ± 9% -36.1% 23.00 ± 21% sched_debug.cfs_rq[5]:/.runnable_load_avg
-18089 ±-155% -1537.8% 260083 ±142% sched_debug.cfs_rq[5]:/.spread0
1362865 ± 2% +405.4% 6887733 ± 0% sched_debug.cfs_rq[6]:/.min_vruntime
1343531 ± 0% +397.9% 6689576 ± 5% sched_debug.cfs_rq[7]:/.min_vruntime
45.75 ± 22% -51.9% 22.00 ± 50% sched_debug.cfs_rq[8]:/.load
1479661 ± 3% +246.4% 5125934 ± 0% sched_debug.cfs_rq[8]:/.min_vruntime
116500 ± 18% -1436.2% -1556673 ±-22% sched_debug.cfs_rq[8]:/.spread0
37.50 ± 28% -50.7% 18.50 ± 60% sched_debug.cfs_rq[9]:/.load
1453951 ± 0% +254.7% 5156506 ± 0% sched_debug.cfs_rq[9]:/.min_vruntime
90788 ± 30% -1780.9% -1526106 ±-25% sched_debug.cfs_rq[9]:/.spread0
258966 ± 12% +33.3% 345105 ± 12% sched_debug.cpu#12.avg_idle
0.25 ±173% +800.0% 2.25 ± 57% sched_debug.cpu#14.nr_running
216317 ± 17% +28.8% 278519 ± 11% sched_debug.cpu#4.avg_idle
-210.00 ± -3% +15.5% -242.50 ± -8% sched_debug.cpu#5.nr_uninterruptible
182159 ± 15% +54.5% 281450 ± 12% sched_debug.cpu#6.avg_idle
40.25 ± 33% -32.3% 27.25 ± 23% sched_debug.cpu#7.cpu_load[2]
41.50 ± 17% -67.5% 13.50 ± 64% sched_debug.cpu#9.load
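For reference when reading these tables: each row pairs the parent commit's
mean (left) with the patched commit's mean (right), and %change appears to be
the relative delta between the two means. A minimal standalone sketch of that
arithmetic (the helper name is mine, not from lkp-tests):

	#include <stdio.h>

	/* Relative change between two sample means, in percent, matching rows
	 * like "25166 ... +2.3% ... 25756 ... unixbench.score". */
	static double percent_change(double old_mean, double new_mean)
	{
		return (new_mean - old_mean) / old_mean * 100.0;
	}

	int main(void)
	{
		/* 25166 -> 25756 is reported above as +2.3% for unixbench.score. */
		printf("%+.1f%%\n", percent_change(25166.0, 25756.0));
		return 0;
	}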
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
nhm-white2/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/shell8
commit:
50a2a3b246149d041065a67ccb3e98145f780a2f
1746babbb15594ba2d8d8196589bbbc2b5ff51c9
          50a2a3b246149d04 (parent)    1746babbb15594ba2d8d819658 (patched)
          -------------------------    -------------------------------------
     old value ±%stddev     %change     new value ±%stddev     metric
1319 ± 0% +1.4% 1337 ± 0% unixbench.time.system_time
856.00 ± 0% +1.3% 867.52 ± 0% unixbench.time.user_time
31791175 ±141% -34.2% 20924107 ± 51% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
4.97 ± 4% -9.7% 4.48 ± 5% turbostat.CPU%c3
1739900 ± 3% +23.5% 2148246 ± 7% cpuidle.C1-NHM.usage
26162 ± 4% +33.4% 34904 ± 7% cpuidle.POLL.usage
882905 ± 7% +514.3% 5423689 ± 7% sched_debug.cfs_rq[0]:/.min_vruntime
878971 ± 6% +539.3% 5618859 ± 5% sched_debug.cfs_rq[1]:/.min_vruntime
-3940 ±-545% -5052.2% 195127 ±120% sched_debug.cfs_rq[1]:/.spread0
95.00 ± 8% -43.2% 54.00 ± 42% sched_debug.cfs_rq[2]:/.load
878054 ± 6% +507.4% 5333052 ± 6% sched_debug.cfs_rq[2]:/.min_vruntime
878105 ± 5% +524.5% 5483875 ± 6% sched_debug.cfs_rq[3]:/.min_vruntime
883009 ± 7% +423.6% 4623117 ± 5% sched_debug.cfs_rq[4]:/.min_vruntime
87.47 ±4504% -9.2e+05% -800722 ±-23% sched_debug.cfs_rq[4]:/.spread0
880682 ± 7% +432.6% 4690114 ± 5% sched_debug.cfs_rq[5]:/.min_vruntime
6.75 ± 16% +85.2% 12.50 ± 25% sched_debug.cfs_rq[5]:/.nr_spread_over
-2242 ±-737% +32626.7% -733759 ±-40% sched_debug.cfs_rq[5]:/.spread0
883065 ± 7% +416.0% 4556427 ± 7% sched_debug.cfs_rq[6]:/.min_vruntime
138.28 ±14053% -6.3e+05% -867453 ±-44% sched_debug.cfs_rq[6]:/.spread0
884890 ± 5% +419.8% 4599371 ± 5% sched_debug.cfs_rq[7]:/.min_vruntime
7.50 ± 35% +60.0% 12.00 ± 30% sched_debug.cfs_rq[7]:/.nr_spread_over
1960 ±821% -42164.6% -824542 ±-45% sched_debug.cfs_rq[7]:/.spread0
-186.50 ±-16% -40.6% -110.75 ±-13% sched_debug.cpu#0.nr_uninterruptible
91.00 ± 23% -38.7% 55.75 ± 28% sched_debug.cpu#2.load
400614 ± 7% -30.4% 278901 ± 27% sched_debug.cpu#6.avg_idle
323918 ± 45% +96.5% 636444 ± 64% sched_debug.cpu#6.sched_goidle
0.41 ± 69% +3122.5% 13.28 ±163% sched_debug.rt_rq[0]:/.rt_time
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
nhm-white/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/shell8
commit:
50a2a3b246149d041065a67ccb3e98145f780a2f
1746babbb15594ba2d8d8196589bbbc2b5ff51c9
          50a2a3b246149d04 (parent)    1746babbb15594ba2d8d819658 (patched)
          -------------------------    -------------------------------------
     old value ±%stddev     %change     new value ±%stddev     metric
563.50 ± 3% +3.3% 582.25 ± 0% unixbench.time.percent_of_cpu_this_job_got
1318 ± 0% +1.5% 1338 ± 0% unixbench.time.system_time
854.73 ± 0% +1.3% 866.10 ± 0% unixbench.time.user_time
6687472 ± 0% -1.3% 6602598 ± 0% unixbench.time.voluntary_context_switches
5085 ± 2% +143.2% 12369 ± 58% latency_stats.max.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
1878596 ± 4% +13.2% 2127099 ± 6% cpuidle.C1-NHM.usage
191783 ± 4% +38.6% 265815 ± 18% cpuidle.POLL.time
24853 ± 2% +42.7% 35470 ± 8% cpuidle.POLL.usage
72.67 ± 3% +3.5% 75.22 ± 0% turbostat.%Busy
2123 ± 3% +3.5% 2197 ± 0% turbostat.Avg_MHz
5.22 ± 3% -8.9% 4.75 ± 0% turbostat.CPU%c3
441.00 ± 16% -23.0% 339.75 ± 2% sched_debug.cfs_rq[0]:/.load_avg
908244 ± 5% +503.6% 5481834 ± 6% sched_debug.cfs_rq[0]:/.min_vruntime
10.75 ± 13% +53.5% 16.50 ± 20% sched_debug.cfs_rq[0]:/.nr_spread_over
910039 ± 5% +512.9% 5577184 ± 3% sched_debug.cfs_rq[1]:/.min_vruntime
915823 ± 4% +523.4% 5709506 ± 0% sched_debug.cfs_rq[2]:/.min_vruntime
544.25 ± 10% -24.3% 412.00 ± 14% sched_debug.cfs_rq[3]:/.load_avg
911018 ± 5% +495.7% 5427014 ± 5% sched_debug.cfs_rq[3]:/.min_vruntime
8.00 ± 34% +46.9% 11.75 ± 11% sched_debug.cfs_rq[3]:/.nr_spread_over
193.00 ± 31% -65.3% 67.00 ±100% sched_debug.cfs_rq[3]:/.removed_load_avg
190.25 ± 29% -64.8% 67.00 ±100% sched_debug.cfs_rq[3]:/.removed_util_avg
1111 ± 7% -10.0% 1000 ± 5% sched_debug.cfs_rq[3]:/.util_avg
912227 ± 5% +415.3% 4700282 ± 1% sched_debug.cfs_rq[4]:/.min_vruntime
3970 ± 50% -19783.8% -781587 ±-36% sched_debug.cfs_rq[4]:/.spread0
917396 ± 5% +410.3% 4681186 ± 3% sched_debug.cfs_rq[5]:/.min_vruntime
9137 ±125% -8862.6% -800697 ±-52% sched_debug.cfs_rq[5]:/.spread0
923564 ± 4% +417.5% 4779073 ± 0% sched_debug.cfs_rq[6]:/.min_vruntime
15304 ± 75% -4692.4% -702818 ±-46% sched_debug.cfs_rq[6]:/.spread0
916876 ± 5% +405.9% 4638348 ± 2% sched_debug.cfs_rq[7]:/.min_vruntime
8613 ± 95% -9893.4% -843599 ±-48% sched_debug.cfs_rq[7]:/.spread0
91.00 ± 13% +123.9% 203.75 ± 51% sched_debug.cpu#0.load
-194.75 ±-18% -47.5% -102.25 ±-49% sched_debug.cpu#0.nr_uninterruptible
85.50 ± 17% +94.2% 166.00 ± 31% sched_debug.cpu#1.load
1.00 ±122% +200.0% 3.00 ± 33% sched_debug.cpu#1.nr_running
-152.50 ±-19% -43.1% -86.75 ±-39% sched_debug.cpu#1.nr_uninterruptible
13309 ± 24% -42.6% 7642 ± 34% sched_debug.cpu#2.curr->pid
2535607 ± 57% -58.2% 1061081 ± 0% sched_debug.cpu#2.nr_switches
-174.50 ±-29% -63.2% -64.25 ±-29% sched_debug.cpu#2.nr_uninterruptible
2536010 ± 57% -58.1% 1061543 ± 0% sched_debug.cpu#2.sched_count
601197 ± 44% -49.3% 304897 ± 0% sched_debug.cpu#2.sched_goidle
203.75 ± 4% -70.2% 60.75 ± 26% sched_debug.cpu#4.nr_uninterruptible
77.25 ± 5% -16.8% 64.25 ± 9% sched_debug.cpu#5.cpu_load[2]
79.00 ± 6% -19.0% 64.00 ± 7% sched_debug.cpu#5.cpu_load[3]
78.25 ± 7% -16.0% 65.75 ± 6% sched_debug.cpu#5.cpu_load[4]
85.50 ± 12% -28.1% 61.50 ± 9% sched_debug.cpu#7.cpu_load[1]
83.25 ± 10% -26.1% 61.50 ± 13% sched_debug.cpu#7.cpu_load[2]
78.25 ± 7% -20.8% 62.00 ± 12% sched_debug.cpu#7.cpu_load[3]
75.75 ± 4% -15.2% 64.25 ± 7% sched_debug.cpu#7.cpu_load[4]
1390226 ± 60% +56.0% 2168907 ± 57% sched_debug.cpu#7.nr_switches
1390620 ± 60% +56.0% 2169299 ± 57% sched_debug.cpu#7.sched_count
370003 ± 57% +59.8% 591403 ± 56% sched_debug.cpu#7.sched_goidle
614114 ± 68% +62.3% 996936 ± 63% sched_debug.cpu#7.ttwu_count
431156 ± 97% +87.5% 808605 ± 78% sched_debug.cpu#7.ttwu_local
6.61 ±157% -99.9% 0.01 ± 83% sched_debug.rt_rq[6]:/.rt_time
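A reading aid for the sched_debug rows above: min_vruntime is each cfs_rq's
monotonically advancing virtual-time floor, and spread0 is that value relative
to CPU0's, roughly as the scheduler debug code computes it (a sketch of the
expression, not a quote of kernel/sched/debug.c):

	/* spread0: gap between this cfs_rq's virtual clock and CPU0's. The
	 * large negative values above mean CPU0's min_vruntime ran far ahead
	 * of this runqueue's. */
	s64 spread0 = cfs_rq->min_vruntime - cpu_rq(0)->cfs.min_vruntime;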
lituya: Grantley Haswell, 16G memory
nhm-white2: Nehalem, 4G memory
nhm-white: Nehalem, 6G memory
unixbench.time.system_time:

    [Plot: bisect-good samples ([*], parent commit) hold at roughly
    1260-1320 seconds of system time across runs, while bisect-bad samples
    ([O], patched commit) rise to roughly 1340-1390 seconds.]
To reproduce:

	git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
	cd lkp-tests
	bin/lkp install job.yaml  # install dependencies; the job file is attached to this email
	bin/lkp run job.yaml      # run the attached job
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang