Message-ID: <87si64mv9a.fsf@yhuang-dev.intel.com>
Date: Thu, 24 Sep 2015 10:00:33 +0800
From: kernel test robot <ying.huang@...el.com>
To: Morten Rasmussen <morten.rasmussen@....com>
Cc: Ingo Molnar <mingo@...nel.org>
Subject: [lkp] [sched/fair] 98d8fd8126: -20.8% hackbench.throughput
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
commit 98d8fd8126676f7ba6e133e65b2ca4b17989d32c ("sched/fair: Initialize task load and utilization before placing task on rq")
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/nr_threads/mode/ipc:
lkp-ws02/hackbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/1600%/process/pipe
commit:
231678b768da07d19ab5683a39eeb0c250631d02
98d8fd8126676f7ba6e133e65b2ca4b17989d32c
231678b768da07d1 98d8fd8126676f7ba6e133e65b
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
188818 ± 1% -20.8% 149585 ± 1% hackbench.throughput
81712173 ± 4% +211.8% 2.548e+08 ± 1% hackbench.time.involuntary_context_switches
21611286 ± 0% -19.2% 17453366 ± 1% hackbench.time.minor_page_faults
2226 ± 0% +1.3% 2255 ± 0% hackbench.time.percent_of_cpu_this_job_got
12445 ± 0% +2.1% 12704 ± 0% hackbench.time.system_time
2.494e+08 ± 3% +118.5% 5.448e+08 ± 1% hackbench.time.voluntary_context_switches
1097790 ± 0% +50.6% 1653664 ± 1% softirqs.RCU
554877 ± 3% +137.8% 1319318 ± 1% vmstat.system.cs
89017 ± 4% +187.8% 256235 ± 1% vmstat.system.in
1.312e+08 ± 1% -16.0% 1.102e+08 ± 4% numa-numastat.node0.local_node
1.312e+08 ± 1% -16.0% 1.102e+08 ± 4% numa-numastat.node0.numa_hit
1.302e+08 ± 1% -34.9% 84785305 ± 5% numa-numastat.node1.local_node
1.302e+08 ± 1% -34.9% 84785344 ± 5% numa-numastat.node1.numa_hit
302.00 ± 1% -19.2% 244.00 ± 1% time.file_system_outputs
81712173 ± 4% +211.8% 2.548e+08 ± 1% time.involuntary_context_switches
21611286 ± 0% -19.2% 17453366 ± 1% time.minor_page_faults
2.494e+08 ± 3% +118.5% 5.448e+08 ± 1% time.voluntary_context_switches
92.88 ± 0% +1.3% 94.13 ± 0% turbostat.%Busy
2675 ± 0% +1.8% 2723 ± 0% turbostat.Avg_MHz
4.44 ± 1% -24.9% 3.34 ± 2% turbostat.CPU%c1
0.98 ± 2% -32.2% 0.66 ± 3% turbostat.CPU%c3
2.79e+08 ± 4% -25.2% 2.086e+08 ± 6% cpuidle.C1-NHM.time
1.235e+08 ± 4% -28.6% 88251264 ± 7% cpuidle.C1E-NHM.time
243525 ± 4% -21.9% 190252 ± 8% cpuidle.C1E-NHM.usage
1.819e+08 ± 2% -25.8% 1.35e+08 ± 1% cpuidle.C3-NHM.time
260585 ± 1% -20.4% 207474 ± 2% cpuidle.C3-NHM.usage
266207 ± 1% -39.4% 161453 ± 3% cpuidle.C6-NHM.usage
493467 ± 0% +26.5% 624337 ± 0% meminfo.Active
395397 ± 0% +33.0% 525811 ± 0% meminfo.Active(anon)
372719 ± 1% +34.2% 500207 ± 1% meminfo.AnonPages
4543041 ± 1% +37.5% 6248687 ± 1% meminfo.Committed_AS
185265 ± 1% +16.3% 215373 ± 0% meminfo.KernelStack
302233 ± 1% +37.1% 414289 ± 1% meminfo.PageTables
333827 ± 0% +18.6% 396038 ± 0% meminfo.SUnreclaim
380340 ± 0% +16.6% 443518 ± 0% meminfo.Slab
51154 ±143% -100.0% 5.00 ±100% latency_stats.avg.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 30679 ±100% latency_stats.avg.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
7795 ±100% +1304.6% 109497 ± 93% latency_stats.max.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_get_page2_descriptor.[ses].ses_get_power_status.[ses].ses_enclosure_data_process.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013.do_one_initcall
297190 ±117% -100.0% 23.00 ±100% latency_stats.max.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 97905 ±109% latency_stats.max.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
12901 ±131% -78.9% 2717 ±135% latency_stats.max.wait_on_page_bit.wait_on_page_read.do_read_cache_page.read_cache_page_gfp.btrfs_scan_one_device.[btrfs].btrfs_control_ioctl.[btrfs].do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
392778 ±128% -100.0% 75.50 ±100% latency_stats.sum.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
13678 ± 75% -68.1% 4368 ± 67% latency_stats.sum.flush_work.__cancel_work_timer.cancel_delayed_work_sync.disk_block_events.__blkdev_get.blkdev_get.blkdev_get_by_path.btrfs_scan_one_device.[btrfs].btrfs_control_ioctl.[btrfs].do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
19088 ±101% -100.0% 8.67 ±110% latency_stats.sum.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 139824 ±104% latency_stats.sum.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
17538 ± 95% -71.0% 5094 ± 72% latency_stats.sum.wait_on_page_bit.wait_on_page_read.do_read_cache_page.read_cache_page_gfp.btrfs_scan_one_device.[btrfs].btrfs_control_ioctl.[btrfs].do_vfs_ioctl.SyS_ioctl.entry_SYSCALL_64_fastpath
99221 ± 1% +32.5% 131458 ± 0% proc-vmstat.nr_active_anon
93610 ± 1% +33.5% 124954 ± 0% proc-vmstat.nr_anon_pages
11604 ± 1% +16.4% 13503 ± 0% proc-vmstat.nr_kernel_stack
75770 ± 1% +36.6% 103508 ± 1% proc-vmstat.nr_page_table_pages
83664 ± 0% +18.6% 99265 ± 0% proc-vmstat.nr_slab_unreclaimable
2.615e+08 ± 1% -25.4% 1.95e+08 ± 1% proc-vmstat.numa_hit
2.615e+08 ± 1% -25.4% 1.95e+08 ± 1% proc-vmstat.numa_local
2151 ± 9% +33.7% 2875 ± 7% proc-vmstat.pgactivate
53318467 ± 1% -16.3% 44611139 ± 4% proc-vmstat.pgalloc_dma32
2.124e+08 ± 1% -27.6% 1.538e+08 ± 2% proc-vmstat.pgalloc_normal
21951016 ± 0% -17.9% 18028538 ± 1% proc-vmstat.pgfault
2.656e+08 ± 1% -25.3% 1.983e+08 ± 1% proc-vmstat.pgfree
231019 ± 6% +19.0% 274944 ± 3% numa-meminfo.node0.Active
182158 ± 8% +24.4% 226602 ± 4% numa-meminfo.node0.Active(anon)
4049 ± 0% -100.0% 0.00 ± -1% numa-meminfo.node0.AnonHugePages
171061 ± 5% +28.8% 220397 ± 3% numa-meminfo.node0.AnonPages
253455 ± 4% -5.4% 239791 ± 4% numa-meminfo.node0.FilePages
9402 ± 3% -90.3% 915.75 ± 39% numa-meminfo.node0.Inactive(anon)
10253 ± 0% -50.8% 5041 ± 1% numa-meminfo.node0.Mapped
131300 ± 9% +25.1% 164194 ± 7% numa-meminfo.node0.PageTables
20469 ± 54% -65.5% 7058 ±145% numa-meminfo.node0.Shmem
235929 ± 5% +43.0% 337441 ± 1% numa-meminfo.node1.Active
186748 ± 8% +53.8% 287257 ± 1% numa-meminfo.node1.Active(anon)
175187 ± 5% +52.9% 267826 ± 5% numa-meminfo.node1.AnonPages
1249 ± 39% +676.4% 9697 ± 4% numa-meminfo.node1.Inactive(anon)
105601 ± 9% +39.4% 147194 ± 10% numa-meminfo.node1.KernelStack
5032 ± 1% +103.5% 10238 ± 0% numa-meminfo.node1.Mapped
1028697 ± 5% +40.4% 1444560 ± 5% numa-meminfo.node1.MemUsed
147371 ± 7% +62.4% 239296 ± 7% numa-meminfo.node1.PageTables
185026 ± 7% +43.2% 264909 ± 10% numa-meminfo.node1.SUnreclaim
209508 ± 7% +38.5% 290116 ± 9% numa-meminfo.node1.Slab
45770 ± 7% +24.9% 57169 ± 3% numa-vmstat.node0.nr_active_anon
42981 ± 4% +29.4% 55616 ± 3% numa-vmstat.node0.nr_anon_pages
63378 ± 4% -5.4% 59946 ± 4% numa-vmstat.node0.nr_file_pages
2351 ± 3% -90.3% 228.25 ± 38% numa-vmstat.node0.nr_inactive_anon
2589 ± 1% -51.6% 1253 ± 0% numa-vmstat.node0.nr_mapped
32990 ± 8% +25.6% 41423 ± 7% numa-vmstat.node0.nr_page_table_pages
5131 ± 54% -65.6% 1763 ±145% numa-vmstat.node0.nr_shmem
64745848 ± 2% -13.8% 55814423 ± 2% numa-vmstat.node0.numa_hit
64743896 ± 2% -13.9% 55752339 ± 2% numa-vmstat.node0.numa_local
1951 ± 91% +3081.4% 62084 ± 1% numa-vmstat.node0.numa_other
45977 ± 8% +57.0% 72172 ± 1% numa-vmstat.node1.nr_active_anon
43078 ± 7% +56.1% 67261 ± 3% numa-vmstat.node1.nr_anon_pages
313.50 ± 40% +673.4% 2424 ± 4% numa-vmstat.node1.nr_inactive_anon
6558 ± 11% +39.9% 9175 ± 8% numa-vmstat.node1.nr_kernel_stack
1262 ± 1% +102.2% 2552 ± 0% numa-vmstat.node1.nr_mapped
36358 ± 9% +65.2% 60055 ± 5% numa-vmstat.node1.nr_page_table_pages
45984 ± 9% +43.9% 66189 ± 8% numa-vmstat.node1.nr_slab_unreclaimable
64599981 ± 2% -34.3% 42454481 ± 5% numa-vmstat.node1.numa_hit
64534349 ± 2% -34.2% 42449235 ± 5% numa-vmstat.node1.numa_local
65632 ± 2% -92.0% 5245 ± 23% numa-vmstat.node1.numa_other
148962 ± 0% +32.1% 196766 ± 2% slabinfo.anon_vma.active_objs
3066 ± 0% +32.5% 4062 ± 1% slabinfo.anon_vma.active_slabs
156402 ± 0% +32.5% 207216 ± 1% slabinfo.anon_vma.num_objs
3066 ± 0% +32.5% 4062 ± 1% slabinfo.anon_vma.num_slabs
15321 ± 0% +14.6% 17563 ± 1% slabinfo.files_cache.active_objs
16470 ± 0% +15.1% 18958 ± 1% slabinfo.files_cache.num_objs
8808 ± 0% +17.6% 10359 ± 1% slabinfo.kmalloc-1024.active_objs
9268 ± 0% +16.1% 10758 ± 0% slabinfo.kmalloc-1024.num_objs
22656 ± 1% +9.9% 24899 ± 2% slabinfo.kmalloc-128.num_objs
31867 ± 0% +11.6% 35548 ± 0% slabinfo.kmalloc-192.active_objs
32775 ± 0% +11.4% 36522 ± 0% slabinfo.kmalloc-192.num_objs
15221 ± 0% +23.1% 18731 ± 0% slabinfo.kmalloc-256.active_objs
16380 ± 0% +19.4% 19557 ± 0% slabinfo.kmalloc-256.num_objs
308147 ± 0% +33.0% 409879 ± 2% slabinfo.kmalloc-64.active_objs
6591 ± 1% +17.9% 7770 ± 1% slabinfo.kmalloc-64.active_slabs
421883 ± 1% +17.9% 497347 ± 1% slabinfo.kmalloc-64.num_objs
6591 ± 1% +17.9% 7770 ± 1% slabinfo.kmalloc-64.num_slabs
482.75 ± 11% +39.7% 674.50 ± 7% slabinfo.kmem_cache_node.active_objs
495.50 ± 10% +38.7% 687.25 ± 7% slabinfo.kmem_cache_node.num_objs
9328 ± 0% +29.1% 12045 ± 2% slabinfo.mm_struct.active_objs
612.00 ± 0% +28.6% 787.00 ± 1% slabinfo.mm_struct.active_slabs
10411 ± 0% +28.6% 13390 ± 1% slabinfo.mm_struct.num_objs
612.00 ± 0% +28.6% 787.00 ± 1% slabinfo.mm_struct.num_slabs
12765 ± 1% +15.7% 14765 ± 1% slabinfo.sighand_cache.active_objs
861.75 ± 1% +18.4% 1020 ± 0% slabinfo.sighand_cache.active_slabs
12933 ± 1% +18.4% 15308 ± 0% slabinfo.sighand_cache.num_objs
861.75 ± 1% +18.4% 1020 ± 0% slabinfo.sighand_cache.num_slabs
14455 ± 1% +11.8% 16167 ± 1% slabinfo.signal_cache.active_objs
14698 ± 1% +14.2% 16779 ± 1% slabinfo.signal_cache.num_objs
11628 ± 1% +16.6% 13563 ± 1% slabinfo.task_struct.active_objs
3899 ± 1% +18.5% 4620 ± 1% slabinfo.task_struct.active_slabs
11699 ± 1% +18.5% 13861 ± 1% slabinfo.task_struct.num_objs
3899 ± 1% +18.5% 4620 ± 1% slabinfo.task_struct.num_slabs
224907 ± 0% +34.2% 301780 ± 2% slabinfo.vm_area_struct.active_objs
5290 ± 0% +34.9% 7135 ± 2% slabinfo.vm_area_struct.active_slabs
232815 ± 0% +34.8% 313951 ± 2% slabinfo.vm_area_struct.num_objs
5290 ± 0% +34.9% 7135 ± 2% slabinfo.vm_area_struct.num_slabs
0.30 ± 89% +2528.3% 7.88 ± 13% perf-profile.cpu-cycles.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task
0.02 ± 74% +17735.0% 2.97 ± 19% perf-profile.cpu-cycles.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
0.02 ±-5000% +21150.0% 4.25 ± 44% perf-profile.cpu-cycles.__schedule.schedule.pipe_wait.pipe_read.__vfs_read
0.00 ± -1% +Inf% 0.64 ± 58% perf-profile.cpu-cycles.__schedule.schedule.prepare_exit_to_usermode.syscall_return_slowpath.int_ret_from_sys_call
7.24 ± 19% +141.5% 17.49 ± 14% perf-profile.cpu-cycles.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
1.36 ± 84% +926.2% 13.92 ± 16% perf-profile.cpu-cycles.__wake_up_common.__wake_up_sync_key.pipe_write.__vfs_write.vfs_write
1.46 ±107% +981.3% 15.76 ± 17% perf-profile.cpu-cycles.__wake_up_sync_key.pipe_write.__vfs_write.vfs_write.sys_write
0.03 ±-3333% +3200.0% 0.99 ± 28% perf-profile.cpu-cycles.activate_task.ttwu_do_activate.sched_ttwu_pending.scheduler_ipi.smp_reschedule_interrupt
0.64 ± 86% +1399.6% 9.65 ± 13% perf-profile.cpu-cycles.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
1.28 ± 86% +966.4% 13.62 ± 16% perf-profile.cpu-cycles.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_write.__vfs_write
1.26 ± 86% +968.6% 13.50 ± 16% perf-profile.cpu-cycles.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_write
0.01 ± 0% +3200.0% 0.33 ±140% perf-profile.cpu-cycles.do_wait.sys_wait4.entry_SYSCALL_64_fastpath
0.16 ± 96% +4396.9% 7.20 ± 12% perf-profile.cpu-cycles.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
0.48 ± 86% +1877.1% 9.49 ± 14% perf-profile.cpu-cycles.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate
0.03 ±-3333% +3175.0% 0.98 ± 28% perf-profile.cpu-cycles.enqueue_task.activate_task.ttwu_do_activate.sched_ttwu_pending.scheduler_ipi
0.64 ± 86% +1396.9% 9.63 ± 13% perf-profile.cpu-cycles.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
0.02 ±-5000% +4162.5% 0.85 ± 27% perf-profile.cpu-cycles.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate.sched_ttwu_pending
0.59 ± 86% +1473.7% 9.29 ± 13% perf-profile.cpu-cycles.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up
111.99 ± 2% -23.5% 85.65 ± 7% perf-profile.cpu-cycles.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.77 ± 51% perf-profile.cpu-cycles.int_ret_from_sys_call
3.68 ± 31% +309.3% 15.06 ± 17% perf-profile.cpu-cycles.pipe_read.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.03 ±-3333% +19350.0% 5.83 ± 38% perf-profile.cpu-cycles.pipe_wait.pipe_read.__vfs_read.vfs_read.sys_read
10.52 ± 23% +109.9% 22.09 ± 18% perf-profile.cpu-cycles.pipe_write.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.68 ± 56% perf-profile.cpu-cycles.prepare_exit_to_usermode.syscall_return_slowpath.int_ret_from_sys_call
0.10 ± 96% +6080.0% 6.18 ± 13% perf-profile.cpu-cycles.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.16 ± 97% +4329.6% 7.24 ± 13% perf-profile.cpu-cycles.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task
0.02 ±-5000% +22300.0% 4.48 ± 43% perf-profile.cpu-cycles.schedule.pipe_wait.pipe_read.__vfs_read.vfs_read
0.00 ± -1% +Inf% 0.62 ± 62% perf-profile.cpu-cycles.schedule.prepare_exit_to_usermode.syscall_return_slowpath.int_ret_from_sys_call
41.93 ± 3% -21.8% 32.80 ± 4% perf-profile.cpu-cycles.sys_read.entry_SYSCALL_64_fastpath
0.01 ± 0% +3225.0% 0.33 ±140% perf-profile.cpu-cycles.sys_wait4.entry_SYSCALL_64_fastpath
65.72 ± 3% -25.2% 49.18 ± 9% perf-profile.cpu-cycles.sys_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.72 ± 52% perf-profile.cpu-cycles.syscall_return_slowpath.int_ret_from_sys_call
1.28 ± 86% +961.1% 13.58 ± 16% perf-profile.cpu-cycles.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key
0.04 ±-2500% +2512.5% 1.04 ± 28% perf-profile.cpu-cycles.ttwu_do_activate.constprop.85.sched_ttwu_pending.scheduler_ipi.smp_reschedule_interrupt.reschedule_interrupt
0.70 ± 88% +1343.2% 10.10 ± 13% perf-profile.cpu-cycles.ttwu_do_activate.constprop.85.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
34.35 ± 3% -16.2% 28.77 ± 6% perf-profile.cpu-cycles.vfs_read.sys_read.entry_SYSCALL_64_fastpath
58.47 ± 4% -26.7% 42.85 ± 9% perf-profile.cpu-cycles.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.01 ± 70% +4475.0% 0.30 ±139% perf-profile.cpu-cycles.wait_consider_task.do_wait.sys_wait4.entry_SYSCALL_64_fastpath
121.25 ± 40% -43.9% 68.00 ± 4% sched_debug.cfs_rq[0]:/.load_avg
7372993 ± 2% +59.5% 11756998 ± 6% sched_debug.cfs_rq[0]:/.min_vruntime
2526 ± 4% -36.8% 1596 ± 10% sched_debug.cfs_rq[0]:/.tg_load_avg
96.00 ± 8% -29.2% 68.00 ± 4% sched_debug.cfs_rq[0]:/.tg_load_avg_contrib
91.75 ± 23% -40.1% 55.00 ± 12% sched_debug.cfs_rq[10]:/.load_avg
8761646 ± 6% +202.7% 26523962 ± 11% sched_debug.cfs_rq[10]:/.min_vruntime
8.00 ± 56% +1043.8% 91.50 ± 95% sched_debug.cfs_rq[10]:/.nr_spread_over
1375813 ± 32% +967.5% 14686694 ± 19% sched_debug.cfs_rq[10]:/.spread0
2218 ± 8% -22.1% 1729 ± 9% sched_debug.cfs_rq[10]:/.tg_load_avg
92.25 ± 23% -40.1% 55.25 ± 12% sched_debug.cfs_rq[10]:/.tg_load_avg_contrib
83.00 ± 8% -28.6% 59.25 ± 26% sched_debug.cfs_rq[11]:/.load_avg
8281705 ± 5% +221.0% 26587558 ± 12% sched_debug.cfs_rq[11]:/.min_vruntime
12.75 ± 76% +488.2% 75.00 ± 57% sched_debug.cfs_rq[11]:/.nr_spread_over
895534 ± 36% +1546.8% 14747704 ± 20% sched_debug.cfs_rq[11]:/.spread0
2181 ± 7% -24.0% 1657 ± 7% sched_debug.cfs_rq[11]:/.tg_load_avg
83.00 ± 8% -28.6% 59.25 ± 26% sched_debug.cfs_rq[11]:/.tg_load_avg_contrib
7597123 ± 4% +64.9% 12525605 ± 7% sched_debug.cfs_rq[12]:/.min_vruntime
2144 ± 8% -18.9% 1739 ± 8% sched_debug.cfs_rq[12]:/.tg_load_avg
7974727 ± 1% +76.7% 14092249 ± 8% sched_debug.cfs_rq[13]:/.min_vruntime
587700 ± 33% +282.6% 2248438 ± 24% sched_debug.cfs_rq[13]:/.spread0
2161 ± 8% -18.0% 1771 ± 6% sched_debug.cfs_rq[13]:/.tg_load_avg
9111334 ± 6% +49.2% 13594919 ± 7% sched_debug.cfs_rq[14]:/.min_vruntime
2155 ± 10% -17.9% 1768 ± 7% sched_debug.cfs_rq[14]:/.tg_load_avg
91.00 ± 11% -28.3% 65.25 ± 24% sched_debug.cfs_rq[15]:/.load_avg
7755071 ± 5% +80.3% 13985714 ± 7% sched_debug.cfs_rq[15]:/.min_vruntime
9.75 ± 15% +520.5% 60.50 ±128% sched_debug.cfs_rq[15]:/.nr_spread_over
367395 ± 78% +481.9% 2137747 ± 16% sched_debug.cfs_rq[15]:/.spread0
2175 ± 12% -22.1% 1694 ± 7% sched_debug.cfs_rq[15]:/.tg_load_avg
91.00 ± 11% -28.3% 65.25 ± 24% sched_debug.cfs_rq[15]:/.tg_load_avg_contrib
7790185 ± 3% +81.0% 14103194 ± 6% sched_debug.cfs_rq[16]:/.min_vruntime
402239 ± 43% +460.5% 2254357 ± 12% sched_debug.cfs_rq[16]:/.spread0
2196 ± 9% -22.8% 1694 ± 6% sched_debug.cfs_rq[16]:/.tg_load_avg
8545892 ± 6% +64.3% 14041885 ± 7% sched_debug.cfs_rq[17]:/.min_vruntime
1157569 ± 38% +89.4% 2192052 ± 15% sched_debug.cfs_rq[17]:/.spread0
2168 ± 9% -21.2% 1709 ± 3% sched_debug.cfs_rq[17]:/.tg_load_avg
8083785 ± 3% +194.3% 23786635 ± 12% sched_debug.cfs_rq[18]:/.min_vruntime
7.25 ± 69% +286.2% 28.00 ± 51% sched_debug.cfs_rq[18]:/.nr_spread_over
695207 ± 68% +1616.3% 11932029 ± 23% sched_debug.cfs_rq[18]:/.spread0
2134 ± 10% -18.9% 1731 ± 5% sched_debug.cfs_rq[18]:/.tg_load_avg
123.00 ± 23% -50.8% 60.50 ± 18% sched_debug.cfs_rq[19]:/.load_avg
8843043 ± 4% +212.8% 27657482 ± 14% sched_debug.cfs_rq[19]:/.min_vruntime
11.50 ± 72% +1156.5% 144.50 ± 89% sched_debug.cfs_rq[19]:/.nr_spread_over
1454237 ± 28% +986.6% 15802087 ± 22% sched_debug.cfs_rq[19]:/.spread0
2121 ± 10% -18.7% 1724 ± 7% sched_debug.cfs_rq[19]:/.tg_load_avg
121.00 ± 21% -50.0% 60.50 ± 18% sched_debug.cfs_rq[19]:/.tg_load_avg_contrib
101.50 ± 14% -37.2% 63.75 ± 8% sched_debug.cfs_rq[1]:/.load_avg
8066420 ± 2% +67.1% 13476103 ± 7% sched_debug.cfs_rq[1]:/.min_vruntime
689384 ± 53% +147.3% 1704860 ± 22% sched_debug.cfs_rq[1]:/.spread0
2514 ± 4% -39.0% 1533 ± 6% sched_debug.cfs_rq[1]:/.tg_load_avg
101.75 ± 14% -37.3% 63.75 ± 8% sched_debug.cfs_rq[1]:/.tg_load_avg_contrib
91.00 ± 8% -23.1% 70.00 ± 10% sched_debug.cfs_rq[20]:/.load_avg
9057239 ± 7% +200.9% 27252092 ± 13% sched_debug.cfs_rq[20]:/.min_vruntime
1668188 ± 44% +822.9% 15396001 ± 23% sched_debug.cfs_rq[20]:/.spread0
2153 ± 9% -20.7% 1707 ± 5% sched_debug.cfs_rq[20]:/.tg_load_avg
91.00 ± 8% -24.2% 69.00 ± 11% sched_debug.cfs_rq[20]:/.tg_load_avg_contrib
8505030 ± 8% +224.1% 27566380 ± 13% sched_debug.cfs_rq[21]:/.min_vruntime
15.50 ± 51% +575.8% 104.75 ± 42% sched_debug.cfs_rq[21]:/.nr_spread_over
1115371 ± 70% +1308.4% 15708882 ± 21% sched_debug.cfs_rq[21]:/.spread0
2129 ± 9% -19.2% 1720 ± 5% sched_debug.cfs_rq[21]:/.tg_load_avg
8352879 ± 2% +232.1% 27739662 ± 13% sched_debug.cfs_rq[22]:/.min_vruntime
13.00 ± 45% +1930.8% 264.00 ± 38% sched_debug.cfs_rq[22]:/.nr_spread_over
962764 ± 8% +1549.5% 15880365 ± 22% sched_debug.cfs_rq[22]:/.spread0
2119 ± 8% -14.5% 1811 ± 10% sched_debug.cfs_rq[22]:/.tg_load_avg
8119642 ± 4% +241.9% 27759824 ± 13% sched_debug.cfs_rq[23]:/.min_vruntime
729257 ± 37% +2080.1% 15898645 ± 21% sched_debug.cfs_rq[23]:/.spread0
2087 ± 7% -16.0% 1753 ± 3% sched_debug.cfs_rq[23]:/.tg_load_avg
101.75 ± 19% -31.7% 69.50 ± 10% sched_debug.cfs_rq[2]:/.load_avg
9427522 ± 14% +44.3% 13605129 ± 7% sched_debug.cfs_rq[2]:/.min_vruntime
2441 ± 7% -35.2% 1583 ± 10% sched_debug.cfs_rq[2]:/.tg_load_avg
102.50 ± 19% -32.2% 69.50 ± 10% sched_debug.cfs_rq[2]:/.tg_load_avg_contrib
7664612 ± 6% +76.4% 13520491 ± 4% sched_debug.cfs_rq[3]:/.min_vruntime
283759 ±142% +509.0% 1728055 ± 23% sched_debug.cfs_rq[3]:/.spread0
2355 ± 8% -32.8% 1582 ± 12% sched_debug.cfs_rq[3]:/.tg_load_avg
118.75 ± 20% -33.7% 78.75 ± 12% sched_debug.cfs_rq[4]:/.load_avg
7770292 ± 5% +73.0% 13442540 ± 6% sched_debug.cfs_rq[4]:/.min_vruntime
388453 ±139% +322.2% 1640216 ± 19% sched_debug.cfs_rq[4]:/.spread0
2286 ± 8% -29.9% 1603 ± 10% sched_debug.cfs_rq[4]:/.tg_load_avg
119.00 ± 20% -33.2% 79.50 ± 12% sched_debug.cfs_rq[4]:/.tg_load_avg_contrib
41.00 ± 12% +72.0% 70.50 ± 58% sched_debug.cfs_rq[5]:/.load
8361817 ± 5% +59.9% 13374083 ± 7% sched_debug.cfs_rq[5]:/.min_vruntime
2265 ± 8% -29.0% 1608 ± 10% sched_debug.cfs_rq[5]:/.tg_load_avg
8064101 ± 5% +170.9% 21848536 ± 12% sched_debug.cfs_rq[6]:/.min_vruntime
12.25 ± 48% +81.6% 22.25 ± 28% sched_debug.cfs_rq[6]:/.nr_spread_over
680647 ± 89% +1373.8% 10031232 ± 26% sched_debug.cfs_rq[6]:/.spread0
2298 ± 8% -29.7% 1615 ± 8% sched_debug.cfs_rq[6]:/.tg_load_avg
94.25 ± 16% -38.2% 58.25 ± 19% sched_debug.cfs_rq[7]:/.load_avg
8303387 ± 6% +218.5% 26442227 ± 14% sched_debug.cfs_rq[7]:/.min_vruntime
40.25 ± 9% -25.5% 30.00 ± 17% sched_debug.cfs_rq[7]:/.runnable_load_avg
919200 ± 58% +1490.1% 14616571 ± 24% sched_debug.cfs_rq[7]:/.spread0
2277 ± 7% -28.1% 1638 ± 12% sched_debug.cfs_rq[7]:/.tg_load_avg
94.50 ± 16% -38.4% 58.25 ± 19% sched_debug.cfs_rq[7]:/.tg_load_avg_contrib
93.50 ± 19% -38.2% 57.75 ± 18% sched_debug.cfs_rq[8]:/.load_avg
8657132 ± 6% +206.7% 26552197 ± 12% sched_debug.cfs_rq[8]:/.min_vruntime
10.00 ± 49% +2720.0% 282.00 ± 56% sched_debug.cfs_rq[8]:/.nr_spread_over
1272282 ± 40% +1057.2% 14722281 ± 21% sched_debug.cfs_rq[8]:/.spread0
2256 ± 8% -25.2% 1688 ± 9% sched_debug.cfs_rq[8]:/.tg_load_avg
88.25 ± 18% -33.7% 58.50 ± 18% sched_debug.cfs_rq[8]:/.tg_load_avg_contrib
89.25 ± 10% -43.4% 50.50 ± 18% sched_debug.cfs_rq[9]:/.load_avg
8573840 ± 11% +212.1% 26757495 ± 13% sched_debug.cfs_rq[9]:/.min_vruntime
13.00 ± 70% +909.6% 131.25 ± 46% sched_debug.cfs_rq[9]:/.nr_spread_over
1188401 ± 86% +1155.7% 14923175 ± 23% sched_debug.cfs_rq[9]:/.spread0
2235 ± 7% -27.0% 1630 ± 9% sched_debug.cfs_rq[9]:/.tg_load_avg
89.25 ± 10% -43.4% 50.50 ± 18% sched_debug.cfs_rq[9]:/.tg_load_avg_contrib
13660 ± 26% +25.6% 17164 ± 7% sched_debug.cpu#0.curr->pid
25.75 ± 28% +564.1% 171.00 ± 31% sched_debug.cpu#0.nr_running
6234824 ± 3% +79.9% 11214928 ± 11% sched_debug.cpu#0.nr_switches
92.25 ± 41% +101.4% 185.75 ± 37% sched_debug.cpu#0.nr_uninterruptible
10264120 ± 2% +48.8% 15270454 ± 8% sched_debug.cpu#0.sched_count
49574 ± 5% -12.4% 43430 ± 6% sched_debug.cpu#0.sched_goidle
5147188 ± 4% +79.7% 9249436 ± 12% sched_debug.cpu#0.ttwu_count
2269312 ± 2% +62.5% 3688685 ± 10% sched_debug.cpu#0.ttwu_local
23.25 ± 28% +589.2% 160.25 ± 36% sched_debug.cpu#1.nr_running
6569750 ± 2% +83.6% 12058832 ± 9% sched_debug.cpu#1.nr_switches
6570052 ± 2% +83.6% 12059572 ± 9% sched_debug.cpu#1.sched_count
4992425 ± 1% +109.0% 10435243 ± 3% sched_debug.cpu#1.ttwu_count
2463897 ± 1% +74.8% 4307284 ± 9% sched_debug.cpu#1.ttwu_local
13.00 ± 44% +303.8% 52.50 ± 42% sched_debug.cpu#10.nr_running
6572956 ± 2% +196.2% 19469907 ± 4% sched_debug.cpu#10.nr_switches
6573272 ± 2% +196.2% 19471457 ± 4% sched_debug.cpu#10.sched_count
5113245 ± 2% +125.5% 11531340 ± 2% sched_debug.cpu#10.ttwu_count
2449382 ± 2% +146.9% 6046615 ± 4% sched_debug.cpu#10.ttwu_local
500000 ± 0% +14.1% 570712 ± 8% sched_debug.cpu#11.max_idle_balance_cost
15.00 ± 46% +246.7% 52.00 ± 42% sched_debug.cpu#11.nr_running
6631320 ± 2% +189.1% 19172684 ± 4% sched_debug.cpu#11.nr_switches
6631668 ± 2% +189.1% 19174234 ± 4% sched_debug.cpu#11.sched_count
5054950 ± 2% +120.6% 11152494 ± 4% sched_debug.cpu#11.ttwu_count
2405487 ± 2% +145.7% 5910910 ± 3% sched_debug.cpu#11.ttwu_local
12.00 ± 59% +791.7% 107.00 ± 75% sched_debug.cpu#12.nr_running
6356857 ± 4% +95.9% 12451675 ± 8% sched_debug.cpu#12.nr_switches
134.25 ± 46% +58.7% 213.00 ± 22% sched_debug.cpu#12.nr_uninterruptible
6357220 ± 4% +95.9% 12452542 ± 8% sched_debug.cpu#12.sched_count
46934 ± 6% -16.9% 38993 ± 10% sched_debug.cpu#12.sched_goidle
5089230 ± 6% +99.1% 10134621 ± 4% sched_debug.cpu#12.ttwu_count
2416053 ± 2% +79.9% 4346652 ± 6% sched_debug.cpu#12.ttwu_local
6657066 ± 5% +86.1% 12387203 ± 9% sched_debug.cpu#13.nr_switches
94.50 ± 71% +109.8% 198.25 ± 19% sched_debug.cpu#13.nr_uninterruptible
6657360 ± 5% +86.1% 12387844 ± 9% sched_debug.cpu#13.sched_count
5089824 ± 1% +103.3% 10347591 ± 6% sched_debug.cpu#13.ttwu_count
2613812 ± 2% +77.8% 4646155 ± 10% sched_debug.cpu#13.ttwu_local
14.25 ± 64% +761.4% 122.75 ± 73% sched_debug.cpu#14.nr_running
7217227 ± 7% +73.5% 12520898 ± 7% sched_debug.cpu#14.nr_switches
-109.00 ±-154% -226.6% 138.00 ± 34% sched_debug.cpu#14.nr_uninterruptible
7217548 ± 7% +73.5% 12521622 ± 7% sched_debug.cpu#14.sched_count
4933024 ± 2% +99.8% 9853790 ± 5% sched_debug.cpu#14.ttwu_count
2627711 ± 3% +76.7% 4643465 ± 5% sched_debug.cpu#14.ttwu_local
11.50 ± 88% +995.7% 126.00 ± 75% sched_debug.cpu#15.nr_running
6705165 ± 4% +91.4% 12831218 ± 9% sched_debug.cpu#15.nr_switches
41.50 ± 82% +256.0% 147.75 ± 27% sched_debug.cpu#15.nr_uninterruptible
6705518 ± 4% +91.4% 12831891 ± 9% sched_debug.cpu#15.sched_count
5124902 ± 2% +102.0% 10351785 ± 4% sched_debug.cpu#15.ttwu_count
2537246 ± 2% +84.4% 4679721 ± 9% sched_debug.cpu#15.ttwu_local
59.75 ± 72% +116.7% 129.50 ± 63% sched_debug.cpu#16.load
12.00 ± 91% +991.7% 131.00 ± 75% sched_debug.cpu#16.nr_running
6807914 ± 3% +88.7% 12847644 ± 6% sched_debug.cpu#16.nr_switches
35.75 ±243% +416.8% 184.75 ± 28% sched_debug.cpu#16.nr_uninterruptible
6808195 ± 3% +88.7% 12848273 ± 6% sched_debug.cpu#16.sched_count
4965300 ± 5% +109.5% 10400978 ± 2% sched_debug.cpu#16.ttwu_count
2587259 ± 3% +84.3% 4769137 ± 4% sched_debug.cpu#16.ttwu_local
7343797 ± 4% +71.0% 12556479 ± 9% sched_debug.cpu#17.nr_switches
-21.50 ±-291% -637.2% 115.50 ± 9% sched_debug.cpu#17.nr_uninterruptible
7344102 ± 4% +71.0% 12557075 ± 9% sched_debug.cpu#17.sched_count
48302 ± 10% -21.2% 38075 ± 9% sched_debug.cpu#17.sched_goidle
4860214 ± 2% +105.2% 9973186 ± 3% sched_debug.cpu#17.ttwu_count
2631813 ± 1% +77.3% 4667413 ± 6% sched_debug.cpu#17.ttwu_local
10.50 ± 70% +361.9% 48.50 ± 34% sched_debug.cpu#18.nr_running
6423142 ± 0% +193.8% 18871804 ± 6% sched_debug.cpu#18.nr_switches
6423521 ± 0% +193.8% 18873844 ± 6% sched_debug.cpu#18.sched_count
4996106 ± 2% +103.7% 10174733 ± 5% sched_debug.cpu#18.ttwu_count
2472857 ± 2% +120.8% 5460849 ± 4% sched_debug.cpu#18.ttwu_local
13.00 ± 54% +267.3% 47.75 ± 47% sched_debug.cpu#19.nr_running
6685332 ± 4% +198.9% 19980393 ± 5% sched_debug.cpu#19.nr_switches
-66.25 ±-66% +202.6% -200.50 ± -2% sched_debug.cpu#19.nr_uninterruptible
6685659 ± 4% +198.9% 19981845 ± 5% sched_debug.cpu#19.sched_count
4916266 ± 4% +151.1% 12346570 ± 5% sched_debug.cpu#19.ttwu_count
2554700 ± 4% +163.4% 6729723 ± 4% sched_debug.cpu#19.ttwu_local
13552 ± 10% +39.4% 18891 ± 15% sched_debug.cpu#2.curr->pid
17.25 ± 67% +784.1% 152.50 ± 42% sched_debug.cpu#2.nr_running
7014128 ± 5% +72.9% 12125114 ± 8% sched_debug.cpu#2.nr_switches
7014454 ± 5% +72.9% 12125842 ± 8% sched_debug.cpu#2.sched_count
4929757 ± 3% +102.3% 9971509 ± 2% sched_debug.cpu#2.ttwu_count
2473376 ± 3% +75.9% 4350629 ± 7% sched_debug.cpu#2.ttwu_local
9.50 ± 58% +365.8% 44.25 ± 48% sched_debug.cpu#20.nr_running
7094564 ± 7% +180.5% 19900502 ± 5% sched_debug.cpu#20.nr_switches
-22.50 ±-193% +866.7% -217.50 ±-35% sched_debug.cpu#20.nr_uninterruptible
7094941 ± 7% +180.5% 19901947 ± 5% sched_debug.cpu#20.sched_count
4847005 ± 2% +150.6% 12148790 ± 4% sched_debug.cpu#20.ttwu_count
2596984 ± 4% +162.1% 6806600 ± 5% sched_debug.cpu#20.ttwu_local
8.50 ± 50% +400.0% 42.50 ± 52% sched_debug.cpu#21.nr_running
6734635 ± 6% +197.1% 20005787 ± 4% sched_debug.cpu#21.nr_switches
6734978 ± 6% +197.1% 20007174 ± 4% sched_debug.cpu#21.sched_count
4954934 ± 2% +152.5% 12510106 ± 7% sched_debug.cpu#21.ttwu_count
2548363 ± 3% +169.5% 6867282 ± 4% sched_debug.cpu#21.ttwu_local
10.00 ± 53% +365.0% 46.50 ± 40% sched_debug.cpu#22.nr_running
6793937 ± 1% +192.2% 19850213 ± 4% sched_debug.cpu#22.nr_switches
6794279 ± 1% +192.2% 19851667 ± 4% sched_debug.cpu#22.sched_count
4999277 ± 2% +147.2% 12359089 ± 5% sched_debug.cpu#22.ttwu_count
2575092 ± 1% +159.1% 6671652 ± 5% sched_debug.cpu#22.ttwu_local
9.50 ± 47% +355.3% 43.25 ± 39% sched_debug.cpu#23.nr_running
6760476 ± 3% +194.8% 19928574 ± 4% sched_debug.cpu#23.nr_switches
6760836 ± 3% +194.8% 19929942 ± 4% sched_debug.cpu#23.sched_count
5057550 ± 0% +142.4% 12258087 ± 4% sched_debug.cpu#23.ttwu_count
2590172 ± 1% +159.7% 6726524 ± 4% sched_debug.cpu#23.ttwu_local
17.00 ± 59% +764.7% 147.00 ± 46% sched_debug.cpu#3.nr_running
6553148 ± 3% +89.1% 12389631 ± 9% sched_debug.cpu#3.nr_switches
-2.50 ±-3542% -7430.0% 183.25 ± 22% sched_debug.cpu#3.nr_uninterruptible
6553515 ± 3% +89.1% 12390332 ± 9% sched_debug.cpu#3.sched_count
5061548 ± 3% +105.1% 10380529 ± 5% sched_debug.cpu#3.ttwu_count
2374084 ± 3% +82.0% 4321429 ± 9% sched_debug.cpu#3.ttwu_local
822869 ± 10% -12.4% 720748 ± 11% sched_debug.cpu#4.avg_idle
14.50 ± 66% +886.2% 143.00 ± 52% sched_debug.cpu#4.nr_running
6627944 ± 4% +85.8% 12313003 ± 8% sched_debug.cpu#4.nr_switches
6628260 ± 4% +85.8% 12313670 ± 8% sched_debug.cpu#4.sched_count
5029009 ± 4% +105.9% 10353607 ± 4% sched_debug.cpu#4.ttwu_count
2417262 ± 3% +79.5% 4339802 ± 7% sched_debug.cpu#4.ttwu_local
41.25 ± 13% +72.1% 71.00 ± 59% sched_debug.cpu#5.load
17.50 ± 68% +691.4% 138.50 ± 47% sched_debug.cpu#5.nr_running
6997533 ± 3% +73.8% 12164617 ± 8% sched_debug.cpu#5.nr_switches
78.00 ± 48% +184.9% 222.25 ± 21% sched_debug.cpu#5.nr_uninterruptible
6997845 ± 3% +73.8% 12165273 ± 8% sched_debug.cpu#5.sched_count
4969036 ± 2% +94.9% 9684195 ± 1% sched_debug.cpu#5.ttwu_count
2483310 ± 3% +71.0% 4247364 ± 5% sched_debug.cpu#5.ttwu_local
831260 ± 12% -17.7% 683789 ± 9% sched_debug.cpu#6.avg_idle
16.75 ± 36% +334.3% 72.75 ± 45% sched_debug.cpu#6.nr_running
6163396 ± 2% +169.8% 16626723 ± 4% sched_debug.cpu#6.nr_switches
54.50 ±162% -311.5% -115.25 ±-34% sched_debug.cpu#6.nr_uninterruptible
6163891 ± 2% +169.8% 16629352 ± 4% sched_debug.cpu#6.sched_count
5128410 ± 3% +77.0% 9074888 ± 4% sched_debug.cpu#6.ttwu_count
2309853 ± 3% +87.4% 4328580 ± 2% sched_debug.cpu#6.ttwu_local
40.25 ± 9% -26.1% 29.75 ± 18% sched_debug.cpu#7.cpu_load[0]
40.25 ± 9% -28.6% 28.75 ± 22% sched_debug.cpu#7.cpu_load[1]
40.00 ± 8% -28.8% 28.50 ± 24% sched_debug.cpu#7.cpu_load[2]
39.75 ± 8% -28.9% 28.25 ± 25% sched_debug.cpu#7.cpu_load[3]
39.75 ± 8% -28.3% 28.50 ± 24% sched_debug.cpu#7.cpu_load[4]
19.25 ± 41% +231.2% 63.75 ± 48% sched_debug.cpu#7.nr_running
6566332 ± 5% +186.9% 18836315 ± 5% sched_debug.cpu#7.nr_switches
15.50 ±158% -1111.3% -156.75 ±-42% sched_debug.cpu#7.nr_uninterruptible
6566642 ± 5% +186.9% 18837866 ± 5% sched_debug.cpu#7.sched_count
5014516 ± 5% +127.1% 11386703 ± 3% sched_debug.cpu#7.ttwu_count
2467246 ± 2% +138.0% 5872007 ± 3% sched_debug.cpu#7.ttwu_local
6802328 ± 5% +181.3% 19136656 ± 4% sched_debug.cpu#8.nr_switches
-76.25 ±-113% +200.0% -228.75 ±-38% sched_debug.cpu#8.nr_uninterruptible
6802722 ± 5% +181.3% 19138285 ± 4% sched_debug.cpu#8.sched_count
4959323 ± 2% +128.0% 11305581 ± 3% sched_debug.cpu#8.ttwu_count
2437353 ± 1% +146.6% 6011685 ± 3% sched_debug.cpu#8.ttwu_local
500000 ± 0% +14.7% 573462 ± 10% sched_debug.cpu#9.max_idle_balance_cost
10.75 ± 56% +409.3% 54.75 ± 57% sched_debug.cpu#9.nr_running
6544270 ± 6% +188.6% 18884484 ± 4% sched_debug.cpu#9.nr_switches
55.50 ±103% -369.8% -149.75 ±-45% sched_debug.cpu#9.nr_uninterruptible
6544607 ± 6% +188.6% 18885996 ± 4% sched_debug.cpu#9.sched_count
5030467 ± 4% +122.3% 11181045 ± 5% sched_debug.cpu#9.ttwu_count
2417027 ± 4% +138.4% 5762968 ± 4% sched_debug.cpu#9.ttwu_local
0.40 ±172% -99.8% 0.00 ± 85% sched_debug.rt_rq[16]:/.rt_time
0.00 ± 49% +73814.5% 0.80 ±100% sched_debug.rt_rq[9]:/.rt_time
lkp-ws02: Westmere-EP
Memory: 16G
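
For scale: the sched_debug data above covers cpu#0 through cpu#23, i.e. 24
logical CPUs, so the nr_threads=1600% job parameter works out to roughly

        24 CPUs x 1600% = 384 hackbench tasks

(assuming lkp expands the percentage against the online CPU count; that
interpretation is an assumption, not something stated in this report).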
hackbench.time.involuntary_context_switches
3e+08 ++------------------------O---------------------------------------+
O O O O O |
2.5e+08 ++ O O O O O
| O O O O |
| O O O O |
2e+08 ++ O |
| O |
1.5e+08 ++ |
| |
1e+08 ++ |
| .*...*..*..*...*..* |
*..*...*..*..*...*.. .*.. ..*..*. |
5e+07 ++ *. *. |
| |
0 ++----------------------------------------------------------------+
vmstat.system.in
300000 ++-----------------------------------------------------------------+
| O O O |
O O O O O |
250000 ++ O O O O
| O |
| O O O O O O |
200000 ++ O O |
| |
150000 ++ |
| |
| |
100000 ++ |
| .*... .*..*...*..*..*...* |
*..*...*..*. *.. .*... .*...*. |
50000 ++------------------*------*---------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run job.yaml
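
For a quick approximation without the lkp harness, perf's built-in hackbench
clone can be pointed at the same process/pipe configuration. This is only a
sketch: the --group count is an assumption derived from the task-count
estimate above (each messaging group is 20 senders plus 20 receivers, so 10
groups is roughly 400 tasks), and the attached job.yaml remains the
authoritative recipe:

        # ~400 tasks (10 groups x 40) exchanging messages over pipes,
        # forked as processes (the default; no --thread)
        perf bench sched messaging --pipe --group 10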
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang