Message-ID: <1426142140.6711.214.camel@linux.intel.com>
Date: Thu, 12 Mar 2015 14:35:40 +0800
From: Huang Ying <ying.huang@...ux.intel.com>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: LKML <linux-kernel@...r.kernel.org>, LKP ML <lkp@...org>
Subject: [LKP] [locking/rwsem] 1a99367023f: no primary result change, +23.6% will-it-scale.time.system_time
FYI, we noticed the changes below on
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 1a99367023f6ac664365a37fa508b059e31d0e88 ("locking/rwsem: Check for active lock before bailing on spinning")
There is a minor will-it-scale.per_thread_ops change below (-1.8%), but it was not stable enough during bisect.
So in general, there is no user-visible change, just more system time.
testbox/testcase/testparams: ivb42/will-it-scale/performance-brk1
b3fd4f03ca0b9952 1a99367023f6ac664365a37fa5
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 kmsg.Spurious_LAPIC_timer_interrupt_on_cpu
%stddev %change %stddev
\ | \
308 ± 3% +23.6% 381 ± 1% will-it-scale.time.system_time
99 ± 3% +20.2% 119 ± 0% will-it-scale.time.percent_of_cpu_this_job_got
34098838 ± 1% +6.0% 36159517 ± 2% will-it-scale.time.voluntary_context_switches
314 ± 0% +2.5% 322 ± 0% will-it-scale.time.elapsed_time
314 ± 0% +2.5% 322 ± 0% will-it-scale.time.elapsed_time.max
0.61 ± 20% +428.8% 3.21 ± 5% perf-profile.cpu-cycles.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.sys_brk.system_call_fastpath
0.39 ± 23% +127.3% 0.88 ± 14% perf-profile.cpu-cycles.osq_unlock.rwsem_down_write_failed.call_rwsem_down_write_failed.sys_brk.system_call_fastpath
991202 ± 25% -47.8% 517752 ± 41% sched_debug.cpu#5.sched_count
481295 ± 25% -48.0% 250449 ± 42% sched_debug.cpu#5.sched_goidle
963157 ± 25% -47.9% 501898 ± 42% sched_debug.cpu#5.nr_switches
5.03 ± 16% +133.3% 11.73 ± 7% perf-profile.cpu-cycles.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.sys_brk.system_call_fastpath
185603 ± 45% +99.3% 369978 ± 34% sched_debug.cpu#9.ttwu_count
17 ± 20% +75.0% 29 ± 35% sched_debug.cfs_rq[33]:/.load
1.07 ± 13% +88.8% 2.02 ± 17% perf-profile.cpu-cycles.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
2.41 ± 9% +92.7% 4.64 ± 12% perf-profile.cpu-cycles._raw_spin_lock_irqsave.rwsem_wake.call_rwsem_wake.sys_brk.system_call_fastpath
1201 ± 30% -45.8% 651 ± 21% cpuidle.C3-IVT.usage
1.92 ± 3% -39.8% 1.16 ± 19% perf-profile.cpu-cycles._raw_spin_lock.try_to_wake_up.wake_up_process.__rwsem_do_wake.rwsem_wake
1.10 ± 10% +93.6% 2.12 ± 5% perf-profile.cpu-cycles.up_write.vma_adjust.vma_merge.do_brk.sys_brk
6 ± 17% +92.3% 12 ± 22% sched_debug.cfs_rq[6]:/.runnable_load_avg
2.02 ± 13% +95.2% 3.94 ± 20% perf-profile.cpu-cycles.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
6 ± 36% +52.0% 9 ± 17% sched_debug.cpu#6.cpu_load[2]
2.63 ± 14% +95.0% 5.13 ± 18% perf-profile.cpu-cycles.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task
5 ± 20% +66.7% 8 ± 9% sched_debug.cpu#6.cpu_load[3]
2.41 ± 14% +93.1% 4.66 ± 19% perf-profile.cpu-cycles.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task
2.34 ± 14% +94.5% 4.55 ± 18% perf-profile.cpu-cycles.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
0.96 ± 13% +71.0% 1.65 ± 13% perf-profile.cpu-cycles.find_vma.sys_brk.system_call_fastpath
17462 ± 4% +15.1% 20096 ± 6% sched_debug.cfs_rq[4]:/.exec_clock
82 ± 24% +116.2% 177 ± 46% sched_debug.cfs_rq[27]:/.tg_load_contrib
155743 ± 31% +81.1% 281980 ± 34% sched_debug.cpu#14.sched_count
13.98 ± 6% +63.6% 22.87 ± 3% perf-profile.cpu-cycles.call_rwsem_down_write_failed.sys_brk.system_call_fastpath
13.94 ± 6% +63.5% 22.78 ± 3% perf-profile.cpu-cycles.rwsem_down_write_failed.call_rwsem_down_write_failed.sys_brk.system_call_fastpath
702 ± 12% -39.2% 427 ± 16% cpuidle.C1E-IVT.usage
103116 ± 29% +82.1% 187794 ± 22% sched_debug.cpu#41.sched_goidle
206574 ± 29% +82.0% 375906 ± 22% sched_debug.cpu#41.nr_switches
214754 ± 29% +79.4% 385314 ± 22% sched_debug.cpu#41.sched_count
5 ± 8% +52.4% 8 ± 8% sched_debug.cpu#6.cpu_load[4]
67108 ± 40% +86.7% 125260 ± 35% sched_debug.cpu#14.sched_goidle
134740 ± 40% +86.4% 251133 ± 35% sched_debug.cpu#14.nr_switches
1.42 ± 8% -33.0% 0.95 ± 6% perf-profile.cpu-cycles.cpuidle_select.cpu_startup_entry.start_secondary
1.27 ± 6% -34.1% 0.83 ± 7% perf-profile.cpu-cycles.menu_select.cpuidle_select.cpu_startup_entry.start_secondary
1.28 ± 7% +44.9% 1.85 ± 8% perf-profile.cpu-cycles.find_vma.do_munmap.sys_brk.system_call_fastpath
2.69 ± 4% +36.6% 3.68 ± 4% perf-profile.cpu-cycles.vma_adjust.vma_merge.do_brk.sys_brk.system_call_fastpath
40423 ± 2% +11.6% 45108 ± 3% sched_debug.cpu#6.nr_load_updates
1.24 ± 7% -31.6% 0.85 ± 11% perf-profile.cpu-cycles.check_preempt_curr.ttwu_do_wakeup.ttwu_do_activate.try_to_wake_up.wake_up_process
5.67 ± 5% -29.9% 3.98 ± 7% perf-profile.cpu-cycles.perf_event_mmap.do_brk.sys_brk.system_call_fastpath
3.09 ± 1% -30.0% 2.16 ± 3% perf-profile.cpu-cycles.dequeue_task_fair.dequeue_task.deactivate_task.__sched_text_start.schedule
2.08 ± 7% -29.9% 1.46 ± 12% perf-profile.cpu-cycles.ttwu_do_wakeup.ttwu_do_activate.try_to_wake_up.wake_up_process.__rwsem_do_wake
2.52 ± 1% -30.1% 1.76 ± 3% perf-profile.cpu-cycles.dequeue_entity.dequeue_task_fair.dequeue_task.deactivate_task.__sched_text_start
3.20 ± 2% -28.8% 2.28 ± 4% perf-profile.cpu-cycles.pick_next_task_fair.__sched_text_start.schedule.schedule_preempt_disabled.cpu_startup_entry
10.55 ± 6% -24.9% 7.92 ± 4% perf-profile.cpu-cycles.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.start_secondary
1.16 ± 2% -30.1% 0.81 ± 9% perf-profile.cpu-cycles.free_pgtables.unmap_region.do_munmap.sys_brk.system_call_fastpath
1.10 ± 15% -28.9% 0.78 ± 5% perf-profile.cpu-cycles.unmap_page_range.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
18033 ± 2% +10.5% 19932 ± 6% sched_debug.cfs_rq[6]:/.exec_clock
5.61 ± 1% -27.5% 4.07 ± 4% perf-profile.cpu-cycles.__sched_text_start.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
5.74 ± 2% -27.5% 4.16 ± 5% perf-profile.cpu-cycles.schedule_preempt_disabled.cpu_startup_entry.start_secondary
3.30 ± 1% -28.5% 2.36 ± 4% perf-profile.cpu-cycles.dequeue_task.deactivate_task.__sched_text_start.schedule.rwsem_down_write_failed
5.67 ± 1% -27.3% 4.12 ± 4% perf-profile.cpu-cycles.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
3.31 ± 1% -28.5% 2.37 ± 3% perf-profile.cpu-cycles.deactivate_task.__sched_text_start.schedule.rwsem_down_write_failed.call_rwsem_down_write_failed
4.74 ± 5% -29.1% 3.36 ± 6% perf-profile.cpu-cycles.perf_event_aux.perf_event_mmap.do_brk.sys_brk.system_call_fastpath
10.92 ± 5% -24.4% 8.25 ± 4% perf-profile.cpu-cycles.cpuidle_enter.cpu_startup_entry.start_secondary
6.64 ± 2% -28.1% 4.77 ± 3% perf-profile.cpu-cycles.schedule.rwsem_down_write_failed.call_rwsem_down_write_failed.sys_brk.system_call_fastpath
6.51 ± 2% -28.2% 4.67 ± 4% perf-profile.cpu-cycles.__sched_text_start.schedule.rwsem_down_write_failed.call_rwsem_down_write_failed.sys_brk
1.77 ± 3% -29.4% 1.25 ± 9% perf-profile.cpu-cycles.set_next_entity.pick_next_task_fair.__sched_text_start.schedule.schedule_preempt_disabled
1.32 ± 5% +33.2% 1.77 ± 9% perf-profile.cpu-cycles.up_write.sys_brk.system_call_fastpath
205 ± 11% +20.3% 247 ± 13% sched_debug.cpu#33.ttwu_local
5926 ± 3% +38.9% 8234 ± 23% sched_debug.cfs_rq[20]:/.exec_clock
244 ± 9% -26.4% 179 ± 8% sched_debug.cpu#26.ttwu_local
354306 ± 9% -20.2% 282834 ± 2% cpuidle.C6-IVT.usage
0.98 ± 9% -25.8% 0.73 ± 7% perf-profile.cpu-cycles.update_cfs_shares.dequeue_entity.dequeue_task_fair.dequeue_task.deactivate_task
17515 ± 3% +10.5% 19349 ± 3% sched_debug.cfs_rq[10]:/.exec_clock
0.79 ± 15% -27.5% 0.57 ± 13% perf-profile.cpu-cycles.resched_curr.ttwu_do_wakeup.ttwu_do_activate.try_to_wake_up.wake_up_process
3.13 ± 2% +27.9% 4.00 ± 4% perf-profile.cpu-cycles.vma_merge.do_brk.sys_brk.system_call_fastpath
29.84 ± 2% -25.2% 22.32 ± 3% perf-profile.cpu-cycles.start_secondary
29.69 ± 2% -25.2% 22.21 ± 3% perf-profile.cpu-cycles.cpu_startup_entry.start_secondary
1.93 ± 5% -24.9% 1.45 ± 8% perf-profile.cpu-cycles.perf_event_aux_ctx.perf_event_aux.perf_event_mmap.do_brk.sys_brk
3.67 ± 4% -23.4% 2.81 ± 3% perf-profile.cpu-cycles.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
308 ± 3% +23.6% 381 ± 1% time.system_time
1.35 ± 11% -22.7% 1.05 ± 3% perf-profile.cpu-cycles.unmap_single_vma.unmap_vmas.unmap_region.do_munmap.sys_brk
4.09 ± 3% -22.8% 3.16 ± 3% perf-profile.cpu-cycles.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
1.05 ± 8% -19.7% 0.84 ± 2% perf-profile.cpu-cycles.lapic_next_deadline.clockevents_program_event.tick_program_event.__hrtimer_start_range_ns.hrtimer_start_range_ns
4.38 ± 3% -22.9% 3.38 ± 4% perf-profile.cpu-cycles.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
99 ± 3% +20.2% 119 ± 0% time.percent_of_cpu_this_job_got
4.31 ± 4% -22.4% 3.35 ± 3% perf-profile.cpu-cycles.tick_nohz_idle_exit.cpu_startup_entry.start_secondary
5808 ± 4% +26.5% 7349 ± 14% sched_debug.cfs_rq[13]:/.exec_clock
2.05 ± 7% -21.1% 1.61 ± 2% perf-profile.cpu-cycles.hrtimer_start_range_ns.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry.start_secondary
1.58 ± 6% +21.9% 1.92 ± 8% perf-profile.cpu-cycles.anon_vma_clone.__split_vma.do_munmap.sys_brk.system_call_fastpath
1.43 ± 10% -20.5% 1.14 ± 2% perf-profile.cpu-cycles.tick_program_event.__hrtimer_start_range_ns.hrtimer_start_range_ns.tick_nohz_restart.tick_nohz_idle_exit
9.35 ± 1% -19.6% 7.52 ± 5% perf-profile.cpu-cycles.intel_idle.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.start_secondary
6081 ± 7% +14.6% 6970 ± 5% sched_debug.cfs_rq[15]:/.exec_clock
6.83 ± 3% +17.1% 8.00 ± 10% perf-profile.cpu-cycles.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate
5956 ± 10% +17.5% 7000 ± 8% sched_debug.cfs_rq[17]:/.exec_clock
2.02 ± 6% -20.7% 1.60 ± 2% perf-profile.cpu-cycles.__hrtimer_start_range_ns.hrtimer_start_range_ns.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry
2.57 ± 4% -19.3% 2.08 ± 4% perf-profile.cpu-cycles.hrtimer_start.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry
246343 ± 1% +9.9% 270748 ± 3% sched_debug.cfs_rq[14]:/.min_vruntime
1.40 ± 9% -20.3% 1.11 ± 2% perf-profile.cpu-cycles.clockevents_program_event.tick_program_event.__hrtimer_start_range_ns.hrtimer_start_range_ns.tick_nohz_restart
17631 ± 1% +15.3% 20334 ± 6% sched_debug.cfs_rq[8]:/.exec_clock
2.53 ± 4% -19.1% 2.05 ± 4% perf-profile.cpu-cycles.__hrtimer_start_range_ns.hrtimer_start.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_idle_enter
2222 ± 7% +8.8% 2419 ± 7% sched_debug.cpu#35.curr->pid
2.95 ± 7% -18.9% 2.40 ± 2% perf-profile.cpu-cycles.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry.start_secondary
6200 ± 7% +17.9% 7311 ± 4% sched_debug.cfs_rq[14]:/.exec_clock
7.35 ± 3% +13.7% 8.36 ± 9% perf-profile.cpu-cycles.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up
30922 ± 2% +14.9% 35531 ± 1% sched_debug.cpu#15.nr_load_updates
5940 ± 2% +28.2% 7617 ± 22% sched_debug.cfs_rq[18]:/.exec_clock
1.13 ± 18% +39.4% 1.57 ± 12% perf-profile.cpu-cycles.down_write.sys_brk.system_call_fastpath
10.54 ± 3% -12.9% 9.19 ± 3% perf-profile.cpu-cycles.do_brk.sys_brk.system_call_fastpath
40246 ± 1% +13.2% 45575 ± 3% sched_debug.cpu#8.nr_load_updates
30869 ± 2% +19.1% 36767 ± 4% sched_debug.cpu#20.nr_load_updates
2178 ± 4% +11.7% 2433 ± 6% sched_debug.cpu#33.curr->pid
17639 ± 2% +13.5% 20020 ± 6% sched_debug.cfs_rq[11]:/.exec_clock
39954 ± 2% +13.5% 45359 ± 2% sched_debug.cpu#4.nr_load_updates
2483 ± 2% -9.9% 2238 ± 4% time.involuntary_context_switches
31335 ± 1% +13.6% 35597 ± 3% sched_debug.cpu#13.nr_load_updates
30990 ± 3% +13.9% 35313 ± 2% sched_debug.cpu#17.nr_load_updates
246393 ± 3% +13.8% 280499 ± 6% sched_debug.cfs_rq[18]:/.min_vruntime
31272 ± 2% +14.6% 35823 ± 1% sched_debug.cpu#14.nr_load_updates
242514 ± 1% +13.0% 274042 ± 4% sched_debug.cfs_rq[13]:/.min_vruntime
17452 ± 4% +14.8% 20038 ± 4% sched_debug.cfs_rq[9]:/.exec_clock
39962 ± 3% +13.9% 45502 ± 3% sched_debug.cpu#9.nr_load_updates
31046 ± 1% +16.6% 36199 ± 5% sched_debug.cpu#18.nr_load_updates
62.38 ± 1% +14.6% 71.51 ± 1% perf-profile.cpu-cycles.sys_brk.system_call_fastpath
62.72 ± 1% +14.4% 71.76 ± 1% perf-profile.cpu-cycles.system_call_fastpath
39742 ± 2% +11.1% 44168 ± 1% sched_debug.cpu#10.nr_load_updates
30858596 ± 1% +11.6% 34423247 ± 3% cpuidle.C1-IVT.usage
3.52 ± 4% -8.0% 3.24 ± 3% perf-profile.cpu-cycles.unmap_region.do_munmap.sys_brk.system_call_fastpath
243796 ± 1% +10.9% 270426 ± 2% sched_debug.cfs_rq[16]:/.min_vruntime
16.93 ± 2% -13.5% 14.65 ± 6% perf-profile.cpu-cycles.try_to_wake_up.wake_up_process.__rwsem_do_wake.rwsem_wake.call_rwsem_wake
29303 ± 4% +11.6% 32702 ± 4% sched_debug.cpu#12.nr_load_updates
245510 ± 0% +10.3% 270675 ± 2% sched_debug.cfs_rq[19]:/.min_vruntime
244024 ± 1% +10.4% 269379 ± 1% sched_debug.cfs_rq[15]:/.min_vruntime
17.91 ± 2% -12.8% 15.62 ± 5% perf-profile.cpu-cycles.wake_up_process.__rwsem_do_wake.rwsem_wake.call_rwsem_wake.sys_brk
15043 ± 3% -8.3% 13799 ± 4% slabinfo.kmalloc-512.num_objs
246096 ± 0% +11.1% 273409 ± 3% sched_debug.cfs_rq[12]:/.min_vruntime
18.14 ± 2% -12.5% 15.87 ± 5% perf-profile.cpu-cycles.__rwsem_do_wake.rwsem_wake.call_rwsem_wake.sys_brk.system_call_fastpath
17738 ± 2% +11.4% 19752 ± 5% sched_debug.cfs_rq[1]:/.exec_clock
31513 ± 1% +13.4% 35747 ± 3% sched_debug.cpu#16.nr_load_updates
14995 ± 3% -8.2% 13765 ± 4% slabinfo.kmalloc-512.active_objs
39689 ± 2% +12.7% 44717 ± 2% sched_debug.cpu#11.nr_load_updates
31173 ± 2% +10.8% 34530 ± 0% sched_debug.cpu#19.nr_load_updates
2900 ± 2% +8.2% 3137 ± 6% slabinfo.kmalloc-2048.active_objs
50519 ± 2% +9.3% 55204 ± 0% sched_debug.cpu#43.nr_load_updates
754899 ± 3% -7.2% 700567 ± 5% sched_debug.cpu#35.avg_idle
2189 ± 6% -6.5% 2046 ± 4% sched_debug.cpu#47.curr->pid
245137 ± 1% +10.9% 271884 ± 1% sched_debug.cfs_rq[20]:/.min_vruntime
252683 ± 2% +6.8% 269903 ± 2% sched_debug.cfs_rq[22]:/.min_vruntime
250553 ± 4% +7.3% 268896 ± 2% sched_debug.cfs_rq[21]:/.min_vruntime
40942 ± 4% +10.5% 45255 ± 2% sched_debug.cpu#1.nr_load_updates
19657 ± 4% -9.8% 17725 ± 5% vmstat.system.in
27.10 ± 0% -0.7% 26.90 ± 0% turbostat.%Busy
4.10 ± 0% -2.4% 4.00 ± 0% turbostat.RAMWatt
testbox/testcase/testparams: lituya/will-it-scale/performance-brk1
b3fd4f03ca0b9952 1a99367023f6ac664365a37fa5
---------------- --------------------------
239 ± 1% +32.0% 316 ± 3% will-it-scale.time.system_time
80 ± 1% +30.4% 105 ± 3% will-it-scale.time.percent_of_cpu_this_job_got
52295908 ± 1% -5.4% 49462338 ± 0% will-it-scale.time.voluntary_context_switches
728289 ± 0% -1.8% 715194 ± 0% will-it-scale.per_thread_ops
63 ± 48% -36.9% 40 ± 7% sched_debug.cpu#12.load
223957 ± 16% -62.8% 83209 ± 16% cpuidle.C6-HSW.usage
31 ± 16% +116.1% 67 ± 34% sched_debug.cpu#14.load
80 ± 34% -60.7% 31 ± 20% sched_debug.cpu#2.load
73 ± 25% -53.4% 34 ± 12% sched_debug.cfs_rq[2]:/.load
300986 ± 24% -40.3% 179777 ± 42% sched_debug.cfs_rq[4]:/.min_vruntime
346 ± 33% +91.1% 662 ± 25% cpuidle.POLL.usage
1212812 ± 35% -44.7% 670407 ± 25% sched_debug.cpu#2.ttwu_count
144641 ± 35% -62.3% 54518 ± 15% sched_debug.cpu#6.ttwu_local
33 ± 25% +90.2% 63 ± 34% sched_debug.cfs_rq[14]:/.load
1377774 ± 40% +210.8% 4282777 ± 48% sched_debug.cpu#9.sched_count
34 ± 10% +109.4% 72 ± 43% sched_debug.cpu#14.cpu_load[0]
681074 ± 40% +210.0% 2111486 ± 49% sched_debug.cpu#9.sched_goidle
1362573 ± 40% +210.0% 4223660 ± 49% sched_debug.cpu#9.nr_switches
327 ± 7% +81.2% 593 ± 14% sched_debug.cfs_rq[14]:/.tg_load_contrib
588875 ± 12% +78.6% 1051474 ± 12% sched_debug.cpu#6.sched_count
292062 ± 13% +77.6% 518637 ± 12% sched_debug.cpu#6.sched_goidle
585096 ± 13% +77.5% 1038414 ± 12% sched_debug.cpu#6.nr_switches
262640 ± 6% -41.6% 153289 ± 11% sched_debug.cfs_rq[6]:/.min_vruntime
148498 ± 46% +113.4% 316963 ± 13% sched_debug.cpu#1.ttwu_local
1385681 ± 22% +86.1% 2578972 ± 18% sched_debug.cpu#8.ttwu_count
296 ± 9% +80.1% 533 ± 17% sched_debug.cfs_rq[14]:/.blocked_load_avg
24472 ± 25% -40.1% 14663 ± 48% sched_debug.cfs_rq[4]:/.exec_clock
32 ± 7% +79.7% 57 ± 34% sched_debug.cpu#14.cpu_load[1]
1650425 ± 13% -37.7% 1027432 ± 29% sched_debug.cpu#14.ttwu_count
57 ± 14% +36.2% 78 ± 10% sched_debug.cpu#0.load
43412 ± 13% -26.2% 32048 ± 22% sched_debug.cfs_rq[2]:/.exec_clock
33 ± 6% +67.7% 55 ± 21% sched_debug.cpu#13.cpu_load[0]
64 ± 17% -22.5% 50 ± 19% sched_debug.cpu#9.cpu_load[0]
53 ± 14% +34.4% 72 ± 5% sched_debug.cfs_rq[0]:/.load
31 ± 7% +53.5% 48 ± 22% sched_debug.cpu#14.cpu_load[2]
29 ± 10% +47.9% 43 ± 15% sched_debug.cpu#13.cpu_load[1]
32 ± 16% -36.9% 20 ± 24% sched_debug.cpu#4.cpu_load[1]
30 ± 5% +36.9% 41 ± 10% sched_debug.cpu#14.cpu_load[4]
31 ± 5% +40.5% 44 ± 14% sched_debug.cpu#14.cpu_load[3]
520038 ± 13% -19.4% 419303 ± 18% sched_debug.cfs_rq[2]:/.min_vruntime
469998 ± 13% +31.3% 616962 ± 12% sched_debug.cfs_rq[10]:/.min_vruntime
36098 ± 10% -40.6% 21432 ± 4% sched_debug.cpu#6.nr_load_updates
21 ± 26% -33.7% 14 ± 13% sched_debug.cpu#4.cpu_load[4]
1178 ± 12% +43.7% 1694 ± 13% sched_debug.cpu#14.curr->pid
21211 ± 10% -38.2% 13103 ± 12% sched_debug.cfs_rq[6]:/.exec_clock
39866 ± 16% +32.5% 52814 ± 16% sched_debug.cfs_rq[10]:/.exec_clock
24 ± 23% -35.1% 15 ± 15% sched_debug.cpu#4.cpu_load[3]
1.38 ± 12% -15.0% 1.17 ± 4% perf-profile.cpu-cycles.avc_has_perm_noaudit.cred_has_capability.selinux_capable.selinux_vm_enough_memory.security_vm_enough_memory_mm
239 ± 1% +32.0% 316 ± 3% time.system_time
394 ± 7% +26.1% 497 ± 5% sched_debug.cfs_rq[14]:/.tg_runnable_contrib
18088 ± 7% +26.4% 22861 ± 5% sched_debug.cfs_rq[14]:/.avg->runnable_avg_sum
39 ± 17% -33.8% 26 ± 29% sched_debug.cpu#4.cpu_load[0]
1228 ± 4% +32.1% 1622 ± 6% sched_debug.cpu#0.curr->pid
80 ± 1% +30.4% 105 ± 3% time.percent_of_cpu_this_job_got
27 ± 18% -34.9% 17 ± 19% sched_debug.cpu#4.cpu_load[2]
401 ± 9% -14.3% 344 ± 11% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
6.50 ± 1% -20.3% 5.19 ± 7% time.user_time
502752 ± 2% +29.0% 648466 ± 3% sched_debug.cfs_rq[14]:/.min_vruntime
53 ± 12% +23.6% 65 ± 4% sched_debug.cfs_rq[0]:/.runnable_load_avg
353 ± 12% +28.8% 455 ± 7% sched_debug.cfs_rq[12]:/.tg_runnable_contrib
16234 ± 12% +28.8% 20903 ± 7% sched_debug.cfs_rq[12]:/.avg->runnable_avg_sum
43966 ± 4% +26.9% 55773 ± 4% sched_debug.cfs_rq[14]:/.exec_clock
1344 ± 8% +15.4% 1552 ± 4% sched_debug.cpu#11.curr->pid
1080 ± 1% -15.6% 912 ± 4% time.involuntary_context_switches
433 ± 3% +16.3% 504 ± 2% sched_debug.cfs_rq[9]:/.tg_runnable_contrib
19913 ± 3% +16.5% 23191 ± 2% sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
54233 ± 6% +14.0% 61828 ± 6% sched_debug.cpu#14.nr_load_updates
48889 ± 7% +17.7% 57564 ± 2% sched_debug.cfs_rq[9]:/.exec_clock
59096 ± 3% +11.9% 66139 ± 4% sched_debug.cpu#9.nr_load_updates
53 ± 12% +18.8% 63 ± 7% sched_debug.cpu#0.cpu_load[0]
13853 ± 10% +35.6% 18786 ± 10% vmstat.system.in
346546 ± 1% -5.3% 328077 ± 0% vmstat.system.cs
1146 ± 0% +4.2% 1195 ± 0% turbostat.Avg_MHz
34.76 ± 0% +4.2% 36.23 ± 0% turbostat.%Busy
ivb42: Ivytown Ivy Bridge-EP
Memory: 64G
lituya: Grantley Haswell
Memory: 16G
will-it-scale.time.percent_of_cpu_this_job_got
125 ++--------------------------------------------------------------------+
O |
120 ++ O O O O |
115 ++ O O O O O O |
| O O O O O |
110 ++ O O O |
| |
105 ++ * |
| + + |
100 ++ .*..*.. + + .*
95 ++ *...*.. ..*.. .. . ..*..*.. + *...*. |
| .. *. * *.. .*. . .* |
90 ++..* *...*. *. |
*. |
85 ++--------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
Attachment: "job.yaml" (text/plain, 1567 bytes)
Attachment: "reproduce" (text/plain, 3582 bytes)
_______________________________________________
LKP mailing list
LKP@...ux.intel.com