Message-ID: <87pp37y4cy.fsf@yhuang-dev.intel.com>
Date: Sat, 01 Aug 2015 13:17:33 +0800
From: kernel test robot <ying.huang@...el.com>
To: Andy Lutomirski <luto@...nel.org>
Cc: Ingo Molnar <mingo@...nel.org>
Subject: [lkp] [x86/build] b2c51106c75: -18.1% will-it-scale.per_process_ops
FYI, we noticed the following changes on
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/asm
commit b2c51106c7581866c37ffc77c5d739f3d4b7cbc9 ("x86/build: Fix detection of GCC -mpreferred-stack-boundary support")
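For context, the build does not apply this flag unconditionally; it first
probes whether the compiler accepts it, and the detail this commit fixes is
that GCC only accepts -mpreferred-stack-boundary=3 on x86-64 when -mno-sse
is also passed, so probing the boundary flag by itself always fails. With
the probe fixed the flag actually takes effect, which changes code
generation and is presumably what moves the numbers below. A minimal
stand-alone sketch of such a probe (an illustration only, not the kernel's
actual detection, which lives in arch/x86/Makefile):

	# Illustration only; not the kernel's actual Makefile logic.
	# GCC rejects -mpreferred-stack-boundary=3 on x86-64 unless
	# -mno-sse is also given, so both flags must be probed together.
	if gcc -mno-sse -mpreferred-stack-boundary=3 -S -x c /dev/null -o /dev/null 2>/dev/null; then
		# stack_align_flags is a made-up variable name for this sketch
		stack_align_flags="-mno-sse -mpreferred-stack-boundary=3"
	fi
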
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
  wsm/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/readseek2

commit:
  f2a50f8b7da45ff2de93a71393e715a2ab9f3b68
  b2c51106c7581866c37ffc77c5d739f3d4b7cbc9

f2a50f8b7da45ff2 b2c51106c7581866c37ffc77c5
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
    879002 ±  0%     -18.1%     720270 ±  7%  will-it-scale.per_process_ops
      0.02 ±  0%     +34.5%       0.02 ±  7%  will-it-scale.scalability
  26153173 ±  0%      +7.0%   27977076 ±  0%  will-it-scale.time.voluntary_context_switches
     15.70 ±  2%     +14.5%      17.98 ±  0%  time.user_time
    370683 ±  0%      +6.2%     393491 ±  0%  vmstat.system.cs
    830343 ± 56%     -54.0%     382128 ± 39%  cpuidle.C1E-NHM.time
    788.25 ± 14%     -21.7%     617.25 ± 16%  cpuidle.C1E-NHM.usage
      3947 ±  2%     +10.6%       4363 ±  3%  slabinfo.kmalloc-192.active_objs
      3947 ±  2%     +10.6%       4363 ±  3%  slabinfo.kmalloc-192.num_objs
   1082762 ±162%    -100.0%       0.00 ± -1%  latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
   1082762 ±162%    -100.0%       0.00 ± -1%  latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
   1082762 ±162%    -100.0%       0.00 ± -1%  latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
      2.58 ±  8%     +19.5%       3.09 ±  3%  perf-profile.cpu-cycles._raw_spin_lock_irqsave.finish_wait.__wait_on_bit_lock.__lock_page.find_lock_entry
      7.02 ±  3%      +9.2%       7.67 ±  2%  perf-profile.cpu-cycles._raw_spin_lock_irqsave.prepare_to_wait_exclusive.__wait_on_bit_lock.__lock_page.find_lock_entry
      3.07 ±  2%     +14.8%       3.53 ±  3%  perf-profile.cpu-cycles.finish_wait.__wait_on_bit_lock.__lock_page.find_lock_entry.shmem_getpage_gfp
      3.05 ±  5%      -8.4%       2.79 ±  4%  perf-profile.cpu-cycles.hrtimer_start_range_ns.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry
      0.98 ±  3%     -25.1%       0.74 ±  7%  perf-profile.cpu-cycles.is_ftrace_trampoline.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
      1.82 ± 18%     +46.6%       2.67 ±  3%  perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.finish_wait.__wait_on_bit_lock.__lock_page
      8.05 ±  3%      +9.5%       8.82 ±  3%  perf-profile.cpu-cycles.prepare_to_wait_exclusive.__wait_on_bit_lock.__lock_page.find_lock_entry.shmem_getpage_gfp
      7.75 ± 34%     -64.5%       2.75 ± 64%  sched_debug.cfs_rq[2]:/.nr_spread_over
      1135 ± 20%     -43.6%     640.75 ± 49%  sched_debug.cfs_rq[3]:/.blocked_load_avg
      1215 ± 21%     -43.1%     691.50 ± 46%  sched_debug.cfs_rq[3]:/.tg_load_contrib
     38.50 ± 21%    +129.9%      88.50 ± 36%  sched_debug.cfs_rq[4]:/.load
     26.00 ± 20%     +98.1%      51.50 ± 46%  sched_debug.cfs_rq[4]:/.runnable_load_avg
    128.25 ± 18%    +227.5%     420.00 ± 43%  sched_debug.cfs_rq[4]:/.utilization_load_avg
      1015 ± 78%    +101.1%       2042 ± 25%  sched_debug.cfs_rq[6]:/.blocked_load_avg
      1069 ± 72%    +100.2%       2140 ± 23%  sched_debug.cfs_rq[6]:/.tg_load_contrib
     88.75 ± 14%     -47.3%      46.75 ± 36%  sched_debug.cfs_rq[9]:/.load
     59.25 ± 23%     -41.4%      34.75 ± 34%  sched_debug.cfs_rq[9]:/.runnable_load_avg
    315.50 ± 45%     -64.6%     111.67 ±  1%  sched_debug.cfs_rq[9]:/.utilization_load_avg
   2246758 ±  7%     +87.6%    4213925 ± 65%  sched_debug.cpu#0.nr_switches
   2249376 ±  7%     +87.4%    4215969 ± 65%  sched_debug.cpu#0.sched_count
   1121438 ±  7%     +81.0%    2030313 ± 61%  sched_debug.cpu#0.sched_goidle
   1151160 ±  7%     +86.5%    2146608 ± 64%  sched_debug.cpu#0.ttwu_count
     33.75 ± 15%     -22.2%      26.25 ±  6%  sched_debug.cpu#1.cpu_load[3]
     33.25 ± 10%     -18.0%      27.25 ±  7%  sched_debug.cpu#1.cpu_load[4]
     40.00 ± 18%     +24.4%      49.75 ± 18%  sched_debug.cpu#10.cpu_load[2]
     39.25 ± 14%     +22.3%      48.00 ± 10%  sched_debug.cpu#10.cpu_load[3]
     39.50 ± 15%     +20.3%      47.50 ±  6%  sched_debug.cpu#10.cpu_load[4]
   5269004 ±  1%     +27.8%    6731790 ± 30%  sched_debug.cpu#10.nr_switches
   5273193 ±  1%     +27.8%    6736526 ± 30%  sched_debug.cpu#10.sched_count
   2633974 ±  1%     +27.8%    3365271 ± 30%  sched_debug.cpu#10.sched_goidle
   2644149 ±  1%     +26.9%    3356318 ± 30%  sched_debug.cpu#10.ttwu_count
     30.75 ± 15%     +66.7%      51.25 ± 31%  sched_debug.cpu#11.cpu_load[1]
   5116609 ±  4%     +34.5%    6884238 ± 33%  sched_debug.cpu#9.nr_switches
   5120531 ±  4%     +34.5%    6889156 ± 33%  sched_debug.cpu#9.sched_count
   2557822 ±  4%     +34.5%    3441428 ± 33%  sched_debug.cpu#9.sched_goidle
      0.00 ±141%  +4.2e+05%       4.76 ±173%  sched_debug.rt_rq[10]:/.rt_time
wsm: Westmere
Memory: 6G
To reproduce:

	git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
	cd lkp-tests
	bin/lkp install job.yaml  # job file is attached in this email
	bin/lkp run     job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang