Message-ID: <20160513012939.GA24890@yexl-desktop>
Date: Fri, 13 May 2016 09:29:39 +0800
From: kernel test robot <xiaolong.ye@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Mike Galbraith <efault@....de>,
Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [lkp] [sched/core] 2159197d66: netperf.Throughput_Mbps 14.2% improvement
FYI, we noticed a 14.2% improvement of netperf.Throughput_Mbps due to commit:
commit 2159197d66770ec01f75c93fb11dc66df81fd45b ("sched/core: Enable increased load resolution on 64-bit kernels")
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
in testcase: netperf
on test machine: lkp-hsw-d01: 8 threads Haswell with 8G memory
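
For context, the commit under test turns on the scheduler's extra fixed-point
load resolution for 64-bit builds: task weights gain 10 extra bits via the
scale_load()/scale_load_down() helpers in kernel/sched/sched.h, which reduces
rounding error when weight is divided across many tasks or cgroup levels.
Below is a minimal userspace sketch of that mechanism (the helper names and
the 10-bit shift mirror the kernel of that era; this is an illustration, not
the kernel code itself):

#include <stdio.h>

/* 10 extra bits of load resolution, enabled on 64-bit by the commit */
#define SCHED_LOAD_RESOLUTION 10

/* weights are shifted up when stored in the load-tracking machinery ... */
static unsigned long scale_load(unsigned long w)
{
	return w << SCHED_LOAD_RESOLUTION;
}

/* ... and shifted back down where the historical units are expected */
static unsigned long scale_load_down(unsigned long w)
{
	return w >> SCHED_LOAD_RESOLUTION;
}

int main(void)
{
	unsigned long nice_0 = 1024;	/* weight of a nice-0 task, pre-scaling */

	/* stored value grows by 1024x; the round trip is lossless */
	printf("stored:    %lu\n", scale_load(nice_0));			  /* 1048576 */
	printf("recovered: %lu\n", scale_load_down(scale_load(nice_0))); /* 1024 */
	return 0;
}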
=========================================================================================
cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/send_size/tbox_group/test/testcase:
cs-localhost/gcc-4.9/performance/ipv4/x86_64-rhel/200%/debian-x86_64-2015-02-07.cgz/300s/10K/lkp-hsw-d01/SCTP_STREAM_MANY/netperf
commit:
e7904a28f5331c21d17af638cb477c83662e3cb6
2159197d66770ec01f75c93fb11dc66df81fd45b
e7904a28f5331c21 2159197d66770ec01f75c93fb1
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
4657 ± 3% +14.2% 5317 ± 0% netperf.Throughput_Mbps
10987 ± 9% +3.4e+05% 37810767 ± 0% netperf.time.involuntary_context_switches
527.25 ± 0% -25.1% 395.00 ± 0% netperf.time.percent_of_cpu_this_job_got
1547 ± 0% -24.9% 1162 ± 0% netperf.time.system_time
44.45 ± 1% -32.1% 30.16 ± 1% netperf.time.user_time
16045063 ± 3% -82.8% 2757566 ± 6% netperf.time.voluntary_context_switches
10119 ± 41% +203.3% 30688 ± 38% cpuidle.C1-HSW.usage
51528 ± 8% +203.0% 156148 ± 9% softirqs.RCU
55.94 ± 1% +4.2% 58.31 ± 0% turbostat.CorWatt
64.79 ± 1% +2.6% 66.45 ± 0% turbostat.PkgWatt
21.00 ± 0% -8.3% 19.25 ± 2% vmstat.procs.r
105875 ± 3% +155.5% 270467 ± 0% vmstat.system.cs
3.535e+08 ± 3% +13.4% 4.007e+08 ± 0% proc-vmstat.numa_hit
3.535e+08 ± 3% +13.4% 4.007e+08 ± 0% proc-vmstat.numa_local
8.196e+08 ± 3% +22.1% 1.001e+09 ± 0% proc-vmstat.pgalloc_dma32
1.043e+09 ± 3% +23.0% 1.282e+09 ± 0% proc-vmstat.pgalloc_normal
1.862e+09 ± 3% +22.6% 2.283e+09 ± 0% proc-vmstat.pgfree
10987 ± 9% +3.4e+05% 37810767 ± 0% time.involuntary_context_switches
527.25 ± 0% -25.1% 395.00 ± 0% time.percent_of_cpu_this_job_got
1547 ± 0% -24.9% 1162 ± 0% time.system_time
44.45 ± 1% -32.1% 30.16 ± 1% time.user_time
16045063 ± 3% -82.8% 2757566 ± 6% time.voluntary_context_switches
727242 ± 6% -94.3% 41723 ± 17% sched_debug.cfs_rq:/.MIN_vruntime.avg
824178 ± 0% -90.6% 77484 ± 0% sched_debug.cfs_rq:/.MIN_vruntime.max
353474 ± 28% -100.0% 0.00 ± 0% sched_debug.cfs_rq:/.MIN_vruntime.min
181141 ± 29% -80.4% 35590 ± 4% sched_debug.cfs_rq:/.MIN_vruntime.stddev
215.56 ± 4% +6.5e+05% 1404265 ± 3% sched_debug.cfs_rq:/.load.avg
289.96 ± 27% +6.7e+05% 1954288 ± 3% sched_debug.cfs_rq:/.load.max
161.25 ± 6% +5.4e+05% 865368 ± 0% sched_debug.cfs_rq:/.load.min
41.64 ± 61% +1.2e+06% 503455 ± 3% sched_debug.cfs_rq:/.load.stddev
196.06 ± 2% +401.4% 983.07 ± 2% sched_debug.cfs_rq:/.load_avg.avg
241.33 ± 7% +359.5% 1108 ± 7% sched_debug.cfs_rq:/.load_avg.max
161.29 ± 0% +434.5% 862.17 ± 0% sched_debug.cfs_rq:/.load_avg.min
31.50 ± 19% +180.5% 88.36 ± 34% sched_debug.cfs_rq:/.load_avg.stddev
727242 ± 6% -94.3% 41723 ± 17% sched_debug.cfs_rq:/.max_vruntime.avg
824178 ± 0% -90.6% 77484 ± 0% sched_debug.cfs_rq:/.max_vruntime.max
353474 ± 28% -100.0% 0.00 ± 0% sched_debug.cfs_rq:/.max_vruntime.min
181141 ± 29% -80.4% 35590 ± 4% sched_debug.cfs_rq:/.max_vruntime.stddev
819702 ± 0% -90.7% 76071 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
826026 ± 0% -90.4% 79401 ± 1% sched_debug.cfs_rq:/.min_vruntime.max
815229 ± 0% -90.8% 74929 ± 0% sched_debug.cfs_rq:/.min_vruntime.min
3308 ± 26% -57.2% 1415 ± 23% sched_debug.cfs_rq:/.min_vruntime.stddev
1.66 ± 1% -18.9% 1.34 ± 3% sched_debug.cfs_rq:/.nr_running.avg
1.25 ± 6% -33.3% 0.83 ± 0% sched_debug.cfs_rq:/.nr_running.min
0.24 ± 18% +98.5% 0.48 ± 4% sched_debug.cfs_rq:/.nr_running.stddev
151.14 ± 1% +482.7% 880.75 ± 1% sched_debug.cfs_rq:/.runnable_load_avg.avg
168.83 ± 1% +502.6% 1017 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.max
103.42 ± 9% +601.1% 725.04 ± 10% sched_debug.cfs_rq:/.runnable_load_avg.min
21.35 ± 17% +357.0% 97.56 ± 20% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-9567 ±-36% -53.2% -4474 ±-24% sched_debug.cfs_rq:/.spread0.min
3310 ± 26% -57.2% 1416 ± 23% sched_debug.cfs_rq:/.spread0.stddev
536142 ± 16% -39.3% 325631 ± 29% sched_debug.cpu.avg_idle.avg
0.90 ± 14% +150.7% 2.26 ± 5% sched_debug.cpu.clock.stddev
0.90 ± 14% +150.7% 2.26 ± 5% sched_debug.cpu.clock_task.stddev
153.77 ± 0% +477.9% 888.63 ± 1% sched_debug.cpu.cpu_load[0].avg
169.17 ± 1% +503.3% 1020 ± 0% sched_debug.cpu.cpu_load[0].max
122.96 ± 5% +549.6% 798.71 ± 1% sched_debug.cpu.cpu_load[0].min
14.98 ± 13% +408.1% 76.13 ± 2% sched_debug.cpu.cpu_load[0].stddev
153.47 ± 0% +484.2% 896.64 ± 1% sched_debug.cpu.cpu_load[1].avg
166.75 ± 0% +502.6% 1004 ± 1% sched_debug.cpu.cpu_load[1].max
130.62 ± 2% +514.7% 802.92 ± 3% sched_debug.cpu.cpu_load[1].min
11.46 ± 8% +483.7% 66.91 ± 10% sched_debug.cpu.cpu_load[1].stddev
152.94 ± 0% +485.7% 895.80 ± 1% sched_debug.cpu.cpu_load[2].avg
165.04 ± 0% +502.6% 994.50 ± 1% sched_debug.cpu.cpu_load[2].max
136.29 ± 1% +489.4% 803.29 ± 3% sched_debug.cpu.cpu_load[2].min
9.17 ± 12% +589.1% 63.21 ± 12% sched_debug.cpu.cpu_load[2].stddev
153.46 ± 0% +483.7% 895.78 ± 1% sched_debug.cpu.cpu_load[3].avg
163.92 ± 0% +500.3% 984.04 ± 1% sched_debug.cpu.cpu_load[3].max
141.88 ± 1% +468.7% 806.79 ± 2% sched_debug.cpu.cpu_load[3].min
7.18 ± 12% +734.9% 59.98 ± 12% sched_debug.cpu.cpu_load[3].stddev
155.51 ± 0% +475.5% 894.93 ± 1% sched_debug.cpu.cpu_load[4].avg
164.58 ± 0% +489.0% 969.38 ± 0% sched_debug.cpu.cpu_load[4].max
146.58 ± 1% +452.8% 810.38 ± 2% sched_debug.cpu.cpu_load[4].min
5.76 ± 13% +858.1% 55.18 ± 12% sched_debug.cpu.cpu_load[4].stddev
215.88 ± 5% +6.3e+05% 1365964 ± 6% sched_debug.cpu.load.avg
293.25 ± 26% +6.7e+05% 1953269 ± 3% sched_debug.cpu.load.max
166.50 ± 5% +5.2e+05% 865491 ± 0% sched_debug.cpu.load.min
40.05 ± 61% +1.2e+06% 478086 ± 4% sched_debug.cpu.load.stddev
2.51 ± 1% -10.6% 2.24 ± 3% sched_debug.cpu.nr_running.avg
1.96 ± 7% -27.7% 1.42 ± 13% sched_debug.cpu.nr_running.min
0.38 ± 24% +85.2% 0.70 ± 23% sched_debug.cpu.nr_running.stddev
2001135 ± 3% +154.6% 5095769 ± 0% sched_debug.cpu.nr_switches.avg
2026432 ± 4% +158.1% 5231149 ± 0% sched_debug.cpu.nr_switches.max
1975801 ± 3% +152.2% 4983341 ± 1% sched_debug.cpu.nr_switches.min
17661 ± 48% +365.1% 82150 ± 34% sched_debug.cpu.nr_switches.stddev
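
Two reading aids for the table above. First, the very large jumps in the
sched_debug load rows are, at least in part, the new fixed-point scale showing
through the debug interface rather than a like-for-like change in load.
Second, the %change column is the relative delta between the two value
columns; a minimal check against the headline throughput row (column
semantics assumed from the report layout):

#include <stdio.h>

/* relative change of the patched value over the base, in percent */
static double pct_change(double base, double patched)
{
	return (patched - base) / base * 100.0;
}

int main(void)
{
	/* netperf.Throughput_Mbps: base e7904a28f533 vs patched 2159197d6677 */
	printf("%+.1f%%\n", pct_change(4657.0, 5317.0));	/* prints +14.2% */
	return 0;
}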
To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
Attachments: job.yaml (text/plain, 3858 bytes), reproduce (text/plain, 2202 bytes)