Message-ID: <20160804011739.GF20620@yexl-desktop>
Date: Thu, 4 Aug 2016 09:17:39 +0800
From: kernel test robot <xiaolong.ye@...el.com>
To: Jan Kara <jack@...e.cz>
Cc: 0day robot <fengguang.wu@...el.com>,
LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [lkp] [mm] 9d986b84b7: vm-scalability.throughput 65.5% improvement
FYI, we noticed a 65.5% improvement of vm-scalability.throughput due to commit:
commit 9d986b84b73fc2ef25945c5658034cdc69dd26b2 ("mm: Factor out functionality to finish page faults")
https://github.com/0day-ci/linux Jan-Kara/dax-Clear-dirty-bits-after-flushing-caches/20160725-043348
in testcase: vm-scalability
on test machine: 56-thread Grantley Haswell-EP with 64G memory
with the following parameters:
runtime: 300s
size: 2T
test: shm-xread-seq-mt
cpufreq_governor: performance
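
For context on what the commit does: its title says it factors the code that
finishes a page fault -- the common tail where the faulted-in page is installed
into the page tables and then released -- out of the individual fault handlers
and into a shared helper. The sketch below is a toy userspace model of that
refactoring shape only; every name and type in it is invented for illustration
and none of it is the kernel code (the real change is in the commit linked
above). It builds with plain gcc:

/* Toy model of "factor out functionality to finish page faults".
 * Purely illustrative; see the referenced commit for the real change. */
#include <stdio.h>
#include <stdbool.h>

struct toy_page  { bool locked; };
struct toy_fault { struct toy_page *page; bool write; };

/* The factored-out tail shared by all handlers: "install" the page,
 * then drop the page lock. */
static int toy_finish_fault(struct toy_fault *vmf)
{
        if (!vmf->page)
                return -1;
        printf("install pte for page, write=%d\n", vmf->write);
        vmf->page->locked = false;      /* stand-in for unlock_page() */
        return 0;
}

/* Handlers keep only their type-specific setup and share the tail. */
static int toy_read_fault(struct toy_fault *vmf)
{
        /* read-fault-specific work would go here */
        return toy_finish_fault(vmf);
}

static int toy_write_fault(struct toy_fault *vmf)
{
        vmf->write = true;              /* write-fault-specific work */
        return toy_finish_fault(vmf);
}

int main(void)
{
        struct toy_page p = { .locked = true };
        struct toy_fault f = { .page = &p };

        toy_read_fault(&f);
        p.locked = true;
        toy_write_fault(&f);
        return 0;
}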
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-6/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/2T/lkp-hsw01/shm-xread-seq-mt/vm-scalability
commit:
49f17d9ed8 ("mm: Allow full handling of COW faults in ->fault handlers")
9d986b84b7 ("mm: Factor out functionality to finish page faults")
49f17d9ed8abce08 9d986b84b73fc2ef25945c5658
---------------- --------------------------
       fail:runs  %reproduction    fail:runs
           |             |             |
          :4           25%           1:4     kmsg.DHCP/BOOTP:Reply_not_for_us,op[#]xid[c59e438]
         %stddev     %change         %stddev
             \          |                \
22642454 ± 3% +65.5% 37479813 ± 1% vm-scalability.throughput
189.93 ± 2% -38.5% 116.78 ± 1% vm-scalability.time.elapsed_time
189.93 ± 2% -38.5% 116.78 ± 1% vm-scalability.time.elapsed_time.max
21150 ± 3% -14.5% 18087 ± 3% vm-scalability.time.involuntary_context_switches
1.504e+08 ± 2% -26.9% 1.1e+08 ± 1% vm-scalability.time.minor_page_faults
1790 ± 0% +10.7% 1981 ± 0% vm-scalability.time.percent_of_cpu_this_job_got
2280 ± 4% -53.6% 1058 ± 2% vm-scalability.time.system_time
1120 ± 0% +12.1% 1256 ± 0% vm-scalability.time.user_time
89352240 ± 3% -38.8% 54640475 ± 1% vm-scalability.time.voluntary_context_switches
112456 ± 3% -32.8% 75530 ± 2% interrupts.CAL:Function_call_interrupts
50794 ± 4% -10.2% 45623 ± 0% meminfo.Active(anon)
472184 ± 5% -38.0% 292556 ± 5% softirqs.RCU
1188901 ± 3% -23.1% 913868 ± 1% softirqs.SCHED
1849061 ± 4% -27.5% 1341103 ± 3% softirqs.TIMER
37.50 ± 1% -10.0% 33.75 ± 1% vmstat.procs.b
21.00 ± 0% +26.2% 26.50 ± 1% vmstat.procs.r
851500 ± 1% -5.5% 804680 ± 0% vmstat.system.cs
62676 ± 1% -4.8% 59670 ± 0% vmstat.system.in
12728 ± 4% -10.2% 11425 ± 0% proc-vmstat.nr_active_anon
12904 ± 4% -40.4% 7692 ± 3% proc-vmstat.numa_hint_faults
6711 ± 7% -38.8% 4110 ± 5% proc-vmstat.numa_hint_faults_local
2045 ± 2% -42.9% 1168 ± 9% proc-vmstat.numa_pages_migrated
12909 ± 4% -40.4% 7698 ± 3% proc-vmstat.numa_pte_updates
8750 ± 15% -77.1% 2005 ± 2% proc-vmstat.pgactivate
1.509e+08 ± 2% -26.9% 1.103e+08 ± 1% proc-vmstat.pgfault
2045 ± 2% -42.9% 1168 ± 9% proc-vmstat.pgmigrate_success
34.13 ± 0% +9.1% 37.25 ± 0% turbostat.%Busy
1050 ± 0% +8.7% 1142 ± 0% turbostat.Avg_MHz
52.91 ± 2% -29.7% 37.21 ± 1% turbostat.CPU%c1
0.17 ± 2% +19.4% 0.20 ± 3% turbostat.CPU%c3
12.80 ± 11% +98.0% 25.35 ± 2% turbostat.CPU%c6
8.86 ± 9% +104.4% 18.10 ± 2% turbostat.Pkg%pc2
0.19 ± 5% +59.7% 0.31 ± 3% turbostat.Pkg%pc6
199.25 ± 0% -2.8% 193.67 ± 0% turbostat.PkgWatt
28.30 ± 2% -7.5% 26.19 ± 0% turbostat.RAMWatt
2.758e+09 ± 5% -55.0% 1.24e+09 ± 3% cpuidle.C1-HSW.time
69866954 ± 4% -43.5% 39487819 ± 2% cpuidle.C1-HSW.usage
3.303e+08 ± 7% -75.6% 80565933 ± 4% cpuidle.C1E-HSW.time
4678516 ± 7% -73.5% 1241972 ± 4% cpuidle.C1E-HSW.usage
24009346 ± 3% -39.8% 14453318 ± 2% cpuidle.C3-HSW.time
90997 ± 5% -53.8% 42072 ± 1% cpuidle.C3-HSW.usage
3.945e+09 ± 1% -28.6% 2.815e+09 ± 0% cpuidle.C6-HSW.time
4079394 ± 1% -28.4% 2920409 ± 0% cpuidle.C6-HSW.usage
1.511e+08 ± 1% -52.8% 71317263 ± 3% cpuidle.POLL.time
1550809 ± 4% -45.6% 843264 ± 2% cpuidle.POLL.usage
4.03e+12 ± 0% -14.8% 3.434e+12 ± 1% perf-stat.branch-instructions
0.20 ± 1% -22.6% 0.16 ± 0% perf-stat.branch-miss-rate
8.143e+09 ± 2% -34.0% 5.371e+09 ± 2% perf-stat.branch-misses
9.72 ± 6% -43.6% 5.48 ± 5% perf-stat.cache-miss-rate
5.043e+09 ± 7% -61.1% 1.963e+09 ± 7% perf-stat.cache-misses
5.184e+10 ± 1% -31.0% 3.577e+10 ± 2% perf-stat.cache-references
1.64e+08 ± 3% -41.6% 95763380 ± 2% perf-stat.context-switches
39030834 ± 3% -43.1% 22197485 ± 2% perf-stat.cpu-migrations
1.175e+13 ± 2% -35.0% 7.633e+12 ± 2% perf-stat.cycles
0.06 ± 2% -13.4% 0.05 ± 3% perf-stat.dTLB-load-miss-rate
1.509e+09 ± 6% -20.5% 1.2e+09 ± 5% perf-stat.dTLB-load-misses
0.02 ± 3% -13.8% 0.02 ± 2% perf-stat.dTLB-store-miss-rate
1.731e+08 ± 9% -30.0% 1.212e+08 ± 4% perf-stat.dTLB-store-misses
9.11e+11 ± 5% -18.6% 7.413e+11 ± 2% perf-stat.dTLB-stores
76155181 ± 1% -28.6% 54350019 ± 1% perf-stat.iTLB-load-misses
2.575e+13 ± 0% -16.7% 2.146e+13 ± 1% perf-stat.instructions
338193 ± 2% +16.8% 394942 ± 2% perf-stat.instructions-per-iTLB-miss
2.19 ± 1% +28.3% 2.81 ± 1% perf-stat.ipc
1.509e+08 ± 2% -26.9% 1.103e+08 ± 1% perf-stat.minor-faults
4103 ± 5% -29.6% 2889 ± 5% perf-stat.node-load-miss-rate
3.123e+09 ± 8% -60.1% 1.247e+09 ± 5% perf-stat.node-load-misses
76026096 ± 5% -43.2% 43176581 ± 3% perf-stat.node-loads
148.62 ± 1% -7.7% 137.17 ± 1% perf-stat.node-store-miss-rate
9.095e+08 ± 7% -60.3% 3.611e+08 ± 6% perf-stat.node-store-misses
6.117e+08 ± 7% -57.0% 2.631e+08 ± 5% perf-stat.node-stores
1.509e+08 ± 2% -26.9% 1.103e+08 ± 1% perf-stat.page-faults
6.61 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
4.89 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__do_fault.do_read_fault.isra.70.handle_pte_fault.handle_mm_fault.__do_page_fault
45.87 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault
2.77 ± 7% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
3.31 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__lock_page.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
1.62 ± 4% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__schedule.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
1.94 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__schedule.schedule.schedule_timeout.io_schedule_timeout.bit_wait_io
3.26 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__wait_on_bit_lock.__lock_page.find_lock_entry.shmem_getpage_gfp.shmem_fault
9.36 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__wake_up.__wake_up_bit.unlock_page.do_read_fault.isra.70.handle_pte_fault
9.49 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__wake_up_bit.unlock_page.do_read_fault.isra.70.handle_pte_fault.handle_mm_fault
9.15 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__wake_up_common.__wake_up.__wake_up_bit.unlock_page.do_read_fault.isra.70
26.53 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_lock.do_read_fault.isra.70.handle_pte_fault.handle_mm_fault.__do_page_fault
1.30 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_lock.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.92 ± 7% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_lock.try_to_wake_up.default_wake_function.wake_bit_function.__wake_up_common
1.53 ± 6% -100.0% 0.00 ± -1% perf-profile.cycles-pp.activate_task.ttwu_do_activate.sched_ttwu_pending.cpu_startup_entry.start_secondary
6.18 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.wake_bit_function
2.39 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.bit_wait_io.__wait_on_bit_lock.__lock_page.find_lock_entry.shmem_getpage_gfp
1.07 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.deactivate_task.__schedule.schedule.schedule_timeout.io_schedule_timeout
8.92 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.default_wake_function.wake_bit_function.__wake_up_common.__wake_up.__wake_up_bit
45.91 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.do_page_fault.page_fault
43.05 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.do_read_fault.isra.70.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault
5.86 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
1.46 ± 6% -100.0% 0.00 ± -1% perf-profile.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.sched_ttwu_pending
5.99 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
1.50 ± 7% -100.0% 0.00 ± -1% perf-profile.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.sched_ttwu_pending.cpu_startup_entry
6.06 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
1.34 ± 4% -100.0% 0.00 ± -1% perf-profile.cycles-pp.filemap_map_pages.do_read_fault.isra.70.handle_pte_fault.handle_mm_fault.__do_page_fault
3.67 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.do_read_fault.isra.70
44.80 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
44.49 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
2.38 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.io_schedule_timeout.bit_wait_io.__wait_on_bit_lock.__lock_page.find_lock_entry
26.41 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.do_read_fault.isra.70.handle_pte_fault.handle_mm_fault
0.92 ± 7% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.default_wake_function.wake_bit_function
45.94 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.page_fault
3.20 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
4.96 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.96 ± 4% -100.0% 0.00 ± -1% perf-profile.cycles-pp.radix_tree_next_chunk.filemap_map_pages.do_read_fault.isra.70.handle_pte_fault.handle_mm_fault
5.88 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task
1.74 ± 6% -100.0% 0.00 ± -1% perf-profile.cycles-pp.sched_ttwu_pending.cpu_startup_entry.start_secondary
1.70 ± 4% -100.0% 0.00 ± -1% perf-profile.cycles-pp.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
2.01 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.schedule.schedule_timeout.io_schedule_timeout.bit_wait_io.__wait_on_bit_lock
1.71 ± 4% -100.0% 0.00 ± -1% perf-profile.cycles-pp.schedule_preempt_disabled.cpu_startup_entry.start_secondary
2.03 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.schedule_timeout.io_schedule_timeout.bit_wait_io.__wait_on_bit_lock.__lock_page
4.88 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.shmem_fault.__do_fault.do_read_fault.isra.70.handle_pte_fault.handle_mm_fault
4.62 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.do_read_fault.isra.70.handle_pte_fault
8.79 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.try_to_wake_up.default_wake_function.wake_bit_function.__wake_up_common.__wake_up
1.59 ± 6% -100.0% 0.00 ± -1% perf-profile.cycles-pp.ttwu_do_activate.sched_ttwu_pending.cpu_startup_entry.start_secondary
6.42 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.ttwu_do_activate.try_to_wake_up.default_wake_function.wake_bit_function.__wake_up_common
9.83 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.unlock_page.do_read_fault.isra.70.handle_pte_fault.handle_mm_fault.__do_page_fault
9.03 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.wake_bit_function.__wake_up_common.__wake_up.__wake_up_bit.unlock_page
1.49 ± 0% -100.0% 0.00 ± -1% perf-profile.func.cycles-pp.__kernel_text_address
1.05 ± 4% -100.0% 0.00 ± -1% perf-profile.func.cycles-pp._raw_spin_lock
1.15 ± 1% -100.0% 0.00 ± -1% perf-profile.func.cycles-pp.is_ftrace_trampoline
29.32 ± 2% -100.0% 0.00 ± -1% perf-profile.func.cycles-pp.native_queued_spin_lock_slowpath
3.22 ± 3% -100.0% 0.00 ± -1% perf-profile.func.cycles-pp.poll_idle
1.14 ± 2% -100.0% 0.00 ± -1% perf-profile.func.cycles-pp.print_context_stack
0.97 ± 4% -100.0% 0.00 ± -1% perf-profile.func.cycles-pp.radix_tree_next_chunk
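
The most striking line in the profile above is native_queued_spin_lock_slowpath
dropping from 29.32% of cycles to zero, most of it previously charged to
_raw_spin_lock inside do_read_fault: the patched kernel spends far less time
spinning on the page-table lock in the read-fault path. As a generic,
self-contained illustration of why that matters (plain pthread code with
made-up constants, nothing from the kernel sources), doing the same work with
a shorter critical section sharply cuts the time threads spend waiting on the
lock:

/* Illustrative only: same total work, shorter critical section.
 * Build with: gcc -O2 -pthread lock_demo.c */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define THREADS 8
#define ITERS   200000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared;

static void busy(int n)
{
        for (volatile int i = 0; i < n; i++)
                ;
}

/* Long critical section: unrelated work done while holding the lock. */
static void *worker_long(void *arg)
{
        for (int i = 0; i < ITERS; i++) {
                pthread_mutex_lock(&lock);
                shared++;
                busy(100);              /* work that need not be locked */
                pthread_mutex_unlock(&lock);
        }
        return arg;
}

/* Short critical section: the same work moved outside the lock. */
static void *worker_short(void *arg)
{
        for (int i = 0; i < ITERS; i++) {
                pthread_mutex_lock(&lock);
                shared++;
                pthread_mutex_unlock(&lock);
                busy(100);
        }
        return arg;
}

static void run(void *(*fn)(void *), const char *name)
{
        pthread_t t[THREADS];
        struct timespec a, b;

        clock_gettime(CLOCK_MONOTONIC, &a);
        for (int i = 0; i < THREADS; i++)
                pthread_create(&t[i], NULL, fn, NULL);
        for (int i = 0; i < THREADS; i++)
                pthread_join(t[i], NULL);
        clock_gettime(CLOCK_MONOTONIC, &b);
        printf("%s: %.3fs\n", name,
               (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9);
}

int main(void)
{
        run(worker_long,  "long critical section");
        run(worker_short, "short critical section");
        return 0;
}

On a multicore machine the "short" variant typically finishes the same work
several times faster, which is qualitatively the effect the profile and the
65.5% throughput delta above reflect.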
75632 ± 46% -67.7% 24427 ± 31% sched_debug.cfs_rq:/.MIN_vruntime.avg
2328882 ± 33% -60.2% 927159 ± 9% sched_debug.cfs_rq:/.MIN_vruntime.max
391591 ± 36% -62.7% 146153 ± 15% sched_debug.cfs_rq:/.MIN_vruntime.stddev
29087 ± 0% -63.7% 10549 ± 0% sched_debug.cfs_rq:/.exec_clock.avg
43065 ± 1% -68.2% 13716 ± 3% sched_debug.cfs_rq:/.exec_clock.max
14999 ± 3% -50.6% 7412 ± 4% sched_debug.cfs_rq:/.exec_clock.min
13102 ± 3% -80.7% 2529 ± 4% sched_debug.cfs_rq:/.exec_clock.stddev
23.55 ± 21% +95.2% 45.98 ± 5% sched_debug.cfs_rq:/.load_avg.avg
244.12 ± 11% +102.0% 493.25 ± 4% sched_debug.cfs_rq:/.load_avg.max
60.45 ± 16% +108.8% 126.19 ± 3% sched_debug.cfs_rq:/.load_avg.stddev
75632 ± 46% -67.7% 24427 ± 31% sched_debug.cfs_rq:/.max_vruntime.avg
2328882 ± 33% -60.2% 927159 ± 9% sched_debug.cfs_rq:/.max_vruntime.max
391591 ± 36% -62.7% 146153 ± 15% sched_debug.cfs_rq:/.max_vruntime.stddev
2233689 ± 0% -59.4% 907310 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
3220147 ± 0% -63.1% 1187046 ± 3% sched_debug.cfs_rq:/.min_vruntime.max
1222883 ± 3% -48.7% 627314 ± 4% sched_debug.cfs_rq:/.min_vruntime.min
887785 ± 4% -77.0% 203870 ± 7% sched_debug.cfs_rq:/.min_vruntime.stddev
0.03 ± 0% -33.3% 0.02 ± 0% sched_debug.cfs_rq:/.nr_spread_over.avg
0.75 ± 0% -33.3% 0.50 ± 0% sched_debug.cfs_rq:/.nr_spread_over.max
0.14 ± 0% -33.3% 0.09 ± 0% sched_debug.cfs_rq:/.nr_spread_over.stddev
-985219 ± -2% -71.7% -279060 ±-11% sched_debug.cfs_rq:/.spread0.avg
-1996105 ± -3% -72.0% -559098 ±-11% sched_debug.cfs_rq:/.spread0.min
887841 ± 4% -77.0% 203901 ± 7% sched_debug.cfs_rq:/.spread0.stddev
360.10 ± 11% -16.2% 301.72 ± 4% sched_debug.cfs_rq:/.util_avg.avg
821.81 ± 4% +19.7% 983.50 ± 7% sched_debug.cfs_rq:/.util_avg.max
324766 ± 1% +44.1% 468068 ± 7% sched_debug.cpu.avg_idle.avg
126871 ± 3% -44.2% 70787 ± 1% sched_debug.cpu.clock.avg
126876 ± 3% -44.2% 70791 ± 1% sched_debug.cpu.clock.max
126865 ± 3% -44.2% 70781 ± 1% sched_debug.cpu.clock.min
126871 ± 3% -44.2% 70787 ± 1% sched_debug.cpu.clock_task.avg
126876 ± 3% -44.2% 70791 ± 1% sched_debug.cpu.clock_task.max
126865 ± 3% -44.2% 70781 ± 1% sched_debug.cpu.clock_task.min
0.82 ± 32% +67.8% 1.38 ± 23% sched_debug.cpu.cpu_load[0].avg
2.84 ± 47% +66.9% 4.73 ± 20% sched_debug.cpu.cpu_load[2].avg
34.75 ± 56% +257.6% 124.25 ± 31% sched_debug.cpu.cpu_load[2].max
6.97 ± 62% +158.5% 18.01 ± 29% sched_debug.cpu.cpu_load[2].stddev
2.58 ± 35% +74.7% 4.52 ± 11% sched_debug.cpu.cpu_load[3].avg
29.56 ± 47% +251.8% 104.00 ± 34% sched_debug.cpu.cpu_load[3].max
5.92 ± 46% +174.4% 16.23 ± 26% sched_debug.cpu.cpu_load[3].stddev
2.17 ± 29% +70.9% 3.72 ± 7% sched_debug.cpu.cpu_load[4].avg
22.56 ± 41% +252.9% 79.62 ± 33% sched_debug.cpu.cpu_load[4].max
4.54 ± 38% +170.8% 12.30 ± 26% sched_debug.cpu.cpu_load[4].stddev
3462 ± 7% -36.3% 2204 ± 0% sched_debug.cpu.curr->pid.max
1209 ± 1% -23.3% 928.19 ± 11% sched_debug.cpu.curr->pid.stddev
63097 ± 1% -65.5% 21792 ± 3% sched_debug.cpu.nr_load_updates.avg
84872 ± 2% -65.3% 29465 ± 2% sched_debug.cpu.nr_load_updates.max
39409 ± 2% -62.2% 14880 ± 2% sched_debug.cpu.nr_load_updates.min
18406 ± 5% -75.0% 4595 ± 4% sched_debug.cpu.nr_load_updates.stddev
1388402 ± 0% -69.3% 426233 ± 0% sched_debug.cpu.nr_switches.avg
2085128 ± 1% -72.1% 581408 ± 2% sched_debug.cpu.nr_switches.max
678441 ± 2% -59.4% 275436 ± 5% sched_debug.cpu.nr_switches.min
663497 ± 4% -80.6% 128998 ± 2% sched_debug.cpu.nr_switches.stddev
1709 ± 5% -65.9% 582.00 ± 9% sched_debug.cpu.nr_uninterruptible.max
-2187 ± -2% -66.0% -742.88 ±-13% sched_debug.cpu.nr_uninterruptible.min
1429 ± 2% -69.3% 439.05 ± 5% sched_debug.cpu.nr_uninterruptible.stddev
1388711 ± 0% -69.3% 426119 ± 0% sched_debug.cpu.sched_count.avg
2130949 ± 1% -71.6% 604716 ± 2% sched_debug.cpu.sched_count.max
678062 ± 2% -59.4% 275075 ± 5% sched_debug.cpu.sched_count.min
664120 ± 4% -80.6% 129049 ± 2% sched_debug.cpu.sched_count.stddev
627732 ± 1% -71.2% 180795 ± 1% sched_debug.cpu.sched_goidle.avg
924483 ± 2% -73.6% 244407 ± 2% sched_debug.cpu.sched_goidle.max
321899 ± 1% -63.4% 117707 ± 5% sched_debug.cpu.sched_goidle.min
283575 ± 5% -81.1% 53601 ± 3% sched_debug.cpu.sched_goidle.stddev
759522 ± 0% -67.8% 244467 ± 0% sched_debug.cpu.ttwu_count.avg
1161800 ± 1% -70.9% 337839 ± 2% sched_debug.cpu.ttwu_count.max
351231 ± 3% -56.7% 152210 ± 5% sched_debug.cpu.ttwu_count.min
383100 ± 3% -79.2% 79513 ± 2% sched_debug.cpu.ttwu_count.stddev
126869 ± 3% -44.2% 70783 ± 1% sched_debug.cpu_clk
124049 ± 4% -44.8% 68461 ± 1% sched_debug.ktime
126869 ± 3% -44.2% 70783 ± 1% sched_debug.sched_clk
vm-scalability.time.user_time

    [ASCII trend plot: bisect-bad (O) samples cluster around 1250-1260,
     bisect-good (*) samples around 1100-1150]
vm-scalability.throughput

    [ASCII trend plot: bisect-bad (O) samples cluster around 3.7e+07-3.8e+07,
     bisect-good (*) samples around 2.2e+07-2.6e+07]
[*] bisect-good sample
[O] bisect-bad sample
Thanks,
Xiaolong
Attachments: config-4.7.0-00005-g9d986b8 (text/plain, 151284 bytes),
             job.yaml (text/plain, 3891 bytes),
             reproduce (text/plain, 1195 bytes)