lists.openwall.net | Open Source and information security mailing list archives
Message-ID: <87y4fevy9i.fsf@yhuang-dev.intel.com>
Date: Thu, 08 Oct 2015 11:23:53 +0800
From: kernel test robot <ying.huang@...ux.intel.com>
To: Mel Gorman <mgorman@...e.de>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [lkp] [mm] 72b252aed5: 97.3% vm-scalability.throughput

FYI, we noticed the below changes on

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 72b252aed506b8f1a03f7abd29caef4cdf6a043b ("mm: send one IPI per CPU to TLB flush all entries after unmapping pages")

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/runtime/test:
  ivb43/vm-scalability/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/300s/mmap-pread-seq-mt

commit:
  5b74283ab251b9db55cbbe31d19ca72482103290
  72b252aed506b8f1a03f7abd29caef4cdf6a043b

5b74283ab251b9db 72b252aed506b8f1a03f7abd29
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \

  21438292 ± 0%     +97.3%   42301163 ± 3%  vm-scalability.throughput
    335.97 ± 0%      +3.1%     346.51 ± 0%  vm-scalability.time.elapsed_time
    335.97 ± 0%      +3.1%     346.51 ± 0%  vm-scalability.time.elapsed_time.max
    214026 ± 1%     +23.4%     264006 ± 2%  vm-scalability.time.involuntary_context_switches
  74132095 ± 6%    +135.5%  1.746e+08 ± 5%  vm-scalability.time.minor_page_faults
      3689 ± 0%      +3.3%       3810 ± 0%  vm-scalability.time.percent_of_cpu_this_job_got
     10657 ± 1%     -34.3%       7003 ± 6%  vm-scalability.time.system_time
      1738 ± 11%   +256.8%       6203 ± 6%  vm-scalability.time.user_time

    515603 ± 9%    +118.8%    1128038 ± 3%  softirqs.RCU
      3882 ± 0%     +11.8%       4341 ± 1%  uptime.idle
  55117755 ± 1%     -10.8%   49152184 ± 1%  vmstat.memory.cache
   7826855 ± 0%     -97.2%     215402 ± 4%  vmstat.system.in

    242816 ± 1%    +106.7%     501915 ± 4%  slabinfo.radix_tree_node.active_objs
      4337 ± 1%    +106.7%       8967 ± 4%  slabinfo.radix_tree_node.active_slabs
    242891 ± 1%    +106.7%     502019 ± 4%  slabinfo.radix_tree_node.num_objs
      4337 ± 1%    +106.7%       8967 ± 4%  slabinfo.radix_tree_node.num_slabs
    214026 ± 1%     +23.4%     264006 ± 2%  time.involuntary_context_switches
  74132095 ± 6%    +135.5%  1.746e+08 ± 5%  time.minor_page_faults
     10657 ± 1%     -34.3%       7003 ± 6%  time.system_time
      1738 ± 11%   +256.8%       6203 ± 6%  time.user_time

     83.39 ± 0%      -2.1%      81.65 ± 0%  turbostat.%Busy
      2496 ± 0%      -2.1%       2444 ± 0%  turbostat.Avg_MHz
      7.16 ± 2%     -23.8%       5.45 ± 7%  turbostat.CPU%c1
      9.45 ± 0%     +36.4%      12.89 ± 1%  turbostat.CPU%c6
      3.85 ± 5%     +31.8%       5.07 ± 5%  turbostat.Pkg%pc2
    182.61 ± 0%      +0.9%     184.34 ± 0%  turbostat.PkgWatt
      8.57 ± 1%     +76.2%      15.10 ± 0%  turbostat.RAMWatt

 3.954e+08 ± 8%     -84.6%   60826303 ± 40%  cpuidle.C1-IVT.time
   1347482 ±112%   +579.6%    9157219 ± 20%  cpuidle.C1E-IVT.time
      1945 ± 16%  +1232.2%      25914 ± 11%  cpuidle.C1E-IVT.usage
    733075 ± 29%  +1848.5%   14283727 ± 8%  cpuidle.C3-IVT.time
      1375 ± 9%   +1710.0%      24901 ± 17%  cpuidle.C3-IVT.usage
 2.288e+09 ± 1%     +30.2%  2.979e+09 ± 1%  cpuidle.C6-IVT.time
    280924 ± 2%     +30.4%     366249 ± 7%  cpuidle.C6-IVT.usage
   1723234 ±105%    -99.9%       1010 ± 33%  cpuidle.POLL.time
    230.50 ± 26%    -54.3%     105.25 ± 21%  cpuidle.POLL.usage

   6662370 ± 35%   +177.7%   18500891 ± 18%  numa-numastat.node0.local_node
    842668 ± 11%   +655.5%    6366146 ± 32%  numa-numastat.node0.numa_foreign
   6662442 ± 35%   +177.7%   18501610 ± 18%  numa-numastat.node0.numa_hit
   3058406 ± 83%   +195.5%    9037060 ± 31%  numa-numastat.node0.numa_miss
   3058478 ± 83%   +195.5%    9037778 ± 31%  numa-numastat.node0.other_node
   9175084 ± 2%    +135.9%   21640389 ± 12%  numa-numastat.node1.local_node
   3058406 ± 83%   +195.5%    9037060 ± 31%  numa-numastat.node1.numa_foreign
   9175097 ± 2%    +135.9%   21640489 ± 12%  numa-numastat.node1.numa_hit
    842668 ± 11%   +655.5%    6366146 ± 32%  numa-numastat.node1.numa_miss
    842680 ± 11%   +655.5%    6366246 ± 32%  numa-numastat.node1.other_node

     32752 ±126%   -100.0%       0.00 ± -1%  latency_stats.avg.call_rwsem_down_read_failed.task_numa_work.task_work_run.prepare_exit_to_usermode.retint_user
     13481 ±128%    -93.7%     850.00 ± 75%  latency_stats.avg.call_rwsem_down_write_failed.vm_munmap.SyS_munmap.entry_SYSCALL_64_fastpath
   4425916 ± 70%   -100.0%       0.00 ± -1%  latency_stats.avg.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    104546 ±116%    -93.9%       6354 ± 60%  latency_stats.max.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
     47882 ±139%   -100.0%       0.00 ± -1%  latency_stats.max.call_rwsem_down_read_failed.task_numa_work.task_work_run.prepare_exit_to_usermode.retint_user
    108276 ±112%    -93.3%       7261 ± 67%  latency_stats.max.call_rwsem_down_write_failed.vm_munmap.SyS_munmap.entry_SYSCALL_64_fastpath
   1643482 ±172%    -93.8%     101381 ± 96%  latency_stats.max.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
   4425916 ± 70%   -100.0%       0.00 ± -1%  latency_stats.max.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    636256 ± 96%    -68.7%     198992 ± 51%  latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
     58099 ±145%   -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_read_failed.task_numa_work.task_work_run.prepare_exit_to_usermode.retint_user
    108350 ±112%    -93.2%       7357 ± 67%  latency_stats.sum.call_rwsem_down_write_failed.vm_munmap.SyS_munmap.entry_SYSCALL_64_fastpath
   4425916 ± 70%   -100.0%       0.00 ± -1%  latency_stats.sum.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath

  25651111 ± 3%      -8.4%   23507894 ± 2%  meminfo.Active
    118424 ± 11%    +84.2%     218142 ± 2%  meminfo.Active(anon)
  25532686 ± 3%      -8.8%   23289751 ± 2%  meminfo.Active(file)
  53963165 ± 1%      -9.7%   48711030 ± 1%  meminfo.Cached
    576942 ± 3%     +12.1%     646791 ± 1%  meminfo.Committed_AS
  27939486 ± 0%     -11.2%   24818931 ± 1%  meminfo.Inactive
  27894895 ± 0%     -11.2%   24776371 ± 1%  meminfo.Inactive(file)
  52830555 ± 1%     -10.2%   47427044 ± 1%  meminfo.Mapped
  59418736 ± 0%     -10.4%   53234678 ± 1%  meminfo.MemAvailable
   5387627 ± 2%    +112.4%   11444675 ± 5%  meminfo.PageTables
    179300 ± 1%     +82.8%     327678 ± 4%  meminfo.SReclaimable
    140030 ± 10%    +69.7%     237642 ± 2%  meminfo.Shmem
    269507 ± 0%     +55.0%     417843 ± 3%  meminfo.Slab

  12544698 ± 4%      -8.2%   11511691 ± 4%  numa-meminfo.node0.Active
  12484499 ± 4%      -8.2%   11462366 ± 4%  numa-meminfo.node0.Active(file)
     24780 ± 48%    -65.3%       8601 ±148%  numa-meminfo.node0.Inactive(anon)
   3217075 ± 13%    +89.6%    6099612 ± 5%  numa-meminfo.node0.PageTables
     87620 ± 2%     +83.1%     160468 ± 5%  numa-meminfo.node0.SReclaimable
    129848 ± 1%     +59.4%     206939 ± 4%  numa-meminfo.node0.Slab
  13170817 ± 1%      -8.9%   11996391 ± 1%  numa-meminfo.node1.Active
     60214 ± 21%   +181.2%     169310 ± 16%  numa-meminfo.node1.Active(anon)
  13110603 ± 2%      -9.8%   11827080 ± 1%  numa-meminfo.node1.Active(file)
      2400 ± 69%   +104.0%       4896 ± 22%  numa-meminfo.node1.AnonHugePages
     10012 ± 19%    +43.9%      14405 ± 12%  numa-meminfo.node1.AnonPages
  28135804 ± 2%     -11.0%   25039750 ± 2%  numa-meminfo.node1.FilePages
  14776140 ± 5%     -13.0%   12854070 ± 4%  numa-meminfo.node1.Inactive
  14755523 ± 5%     -13.1%   12820185 ± 4%  numa-meminfo.node1.Inactive(file)
  27592346 ± 2%     -11.9%   24320149 ± 2%  numa-meminfo.node1.Mapped
   2199244 ± 22%   +142.9%    5342639 ± 11%  numa-meminfo.node1.PageTables
     92297 ± 1%     +81.3%     167302 ± 4%  numa-meminfo.node1.SReclaimable
     70797 ± 22%   +166.7%     188799 ± 14%  numa-meminfo.node1.Shmem
    140279 ± 0%     +50.4%     210997 ± 4%  numa-meminfo.node1.Slab

      6074 ± 52%    -64.6%       2149 ±148%  numa-vmstat.node0.nr_inactive_anon
    548.50 ± 7%     -30.8%     379.50 ± 10%  numa-vmstat.node0.nr_isolated_file
    802854 ± 12%    +89.9%    1524724 ± 5%  numa-vmstat.node0.nr_page_table_pages
    166.50 ± 19%  +2918.2%       5025 ± 73%  numa-vmstat.node0.nr_pages_scanned
     21892 ± 3%     +83.1%      40094 ± 4%  numa-vmstat.node0.nr_slab_reclaimable
    442937 ± 17%   +681.4%    3461158 ± 38%  numa-vmstat.node0.numa_foreign
   5955428 ± 38%   +111.3%   12585324 ± 22%  numa-vmstat.node0.numa_hit
   5950241 ± 38%   +111.2%   12565373 ± 21%  numa-vmstat.node0.numa_local
     15179 ± 23%   +178.5%      42275 ± 15%  numa-vmstat.node1.nr_active_anon
   3272659 ± 2%      -9.6%    2959171 ± 1%  numa-vmstat.node1.nr_active_file
      2500 ± 19%    +44.0%       3600 ± 12%  numa-vmstat.node1.nr_anon_pages
   7035926 ± 2%     -11.0%    6264839 ± 1%  numa-vmstat.node1.nr_file_pages
   3695920 ± 5%     -13.2%    3207574 ± 4%  numa-vmstat.node1.nr_inactive_file
    597.50 ± 1%     -35.8%     383.50 ± 5%  numa-vmstat.node1.nr_isolated_file
   6899439 ± 2%     -11.8%    6085128 ± 2%  numa-vmstat.node1.nr_mapped
    550459 ± 22%   +142.6%    1335451 ± 11%  numa-vmstat.node1.nr_page_table_pages
     36.75 ± 26%  +4261.9%       1603 ± 25%  numa-vmstat.node1.nr_pages_scanned
     17753 ± 22%   +165.7%      47167 ± 13%  numa-vmstat.node1.nr_shmem
     23081 ± 1%     +81.1%      41801 ± 4%  numa-vmstat.node1.nr_slab_reclaimable
   8326219 ± 3%     +74.1%   14499407 ± 10%  numa-vmstat.node1.numa_hit
   8247279 ± 3%     +75.0%   14435261 ± 10%  numa-vmstat.node1.numa_local
    443012 ± 17%   +681.3%    3461226 ± 38%  numa-vmstat.node1.numa_miss
    521951 ± 14%   +575.4%    3525373 ± 37%  numa-vmstat.node1.numa_other

      1470 ± 3%   +2458.8%      37620 ± 7%  proc-vmstat.allocstall
      3.00 ± 62% +15316.7%     462.50 ± 12%  proc-vmstat.kswapd_low_wmark_hit_quickly
     29914 ± 11%    +82.4%      54575 ± 2%  proc-vmstat.nr_active_anon
   1479166 ± 0%     -10.7%    1320975 ± 1%  proc-vmstat.nr_dirty_background_threshold
   2958332 ± 0%     -10.7%    2641952 ± 1%  proc-vmstat.nr_dirty_threshold
  13512242 ± 1%      -9.9%   12173462 ± 1%  proc-vmstat.nr_file_pages
   6975807 ± 0%     -11.3%    6189755 ± 1%  proc-vmstat.nr_inactive_file
      1148 ± 2%     -33.0%     769.25 ± 9%  proc-vmstat.nr_isolated_file
  13232908 ± 1%     -10.4%   11852354 ± 1%  proc-vmstat.nr_mapped
   1350565 ± 1%    +112.0%    2863445 ± 4%  proc-vmstat.nr_page_table_pages
    204.00 ± 20%  +3142.4%       6614 ± 30%  proc-vmstat.nr_pages_scanned
     35504 ± 11%    +67.6%      59496 ± 2%  proc-vmstat.nr_shmem
     44916 ± 0%     +82.4%      81949 ± 4%  proc-vmstat.nr_slab_reclaimable
   3901074 ± 63%   +294.8%   15403207 ± 9%  proc-vmstat.numa_foreign
     15670 ± 14%   +199.0%      46848 ± 5%  proc-vmstat.numa_hint_faults
     13690 ± 15%   +197.9%      40783 ± 4%  proc-vmstat.numa_hint_faults_local
  15833075 ± 13%   +153.5%   40137005 ± 6%  proc-vmstat.numa_hit
  15832991 ± 13%   +153.5%   40136210 ± 6%  proc-vmstat.numa_local
   3901074 ± 63%   +294.8%   15403207 ± 9%  proc-vmstat.numa_miss
   3901158 ± 63%   +294.9%   15404001 ± 9%  proc-vmstat.numa_other
    224.75 ± 4%    +553.9%       1469 ± 14%  proc-vmstat.numa_pages_migrated
     15157 ± 9%    +209.5%      46917 ± 5%  proc-vmstat.numa_pte_updates
     22.75 ± 31%  +8146.2%       1876 ± 8%  proc-vmstat.pageoutrun
  17163894 ± 0%    +189.4%   49670802 ± 4%  proc-vmstat.pgactivate
    840721 ± 1%    +166.0%    2236172 ± 9%  proc-vmstat.pgalloc_dma32
  18949849 ± 1%    +182.0%   53430217 ± 4%  proc-vmstat.pgalloc_normal
   7537536 ± 0%    +455.1%   41838374 ± 5%  proc-vmstat.pgdeactivate
  76121176 ± 8%    +132.7%  1.771e+08 ± 5%  proc-vmstat.pgfault
  18490935 ± 4%    +191.3%   53861451 ± 4%  proc-vmstat.pgfree
    224.75 ± 4%    +553.9%       1469 ± 14%  proc-vmstat.pgmigrate_success
    301873 ± 10%   +429.4%    1598042 ± 14%  proc-vmstat.pgrefill_dma32
   7259314 ± 0%    +454.7%   40264356 ± 5%  proc-vmstat.pgrefill_normal
    653298 ± 11%   +352.5%    2956417 ± 14%  proc-vmstat.pgscan_direct_dma32
  16069728 ± 0%    +368.5%   75278864 ± 5%  proc-vmstat.pgscan_direct_normal
     48794 ± 8%    +530.3%     307572 ± 15%  proc-vmstat.pgscan_kswapd_dma32
    753055 ± 3%    +874.4%    7337747 ± 9%  proc-vmstat.pgscan_kswapd_normal
    101842 ± 14%  +1209.1%    1333212 ± 14%  proc-vmstat.pgsteal_direct_dma32
   2422523 ± 3%   +1287.9%   33621398 ± 5%  proc-vmstat.pgsteal_direct_normal
      5529 ± 2%   +2503.3%     143942 ± 21%  proc-vmstat.pgsteal_kswapd_dma32
    130403 ± 3%   +2347.9%    3192139 ± 15%  proc-vmstat.pgsteal_kswapd_normal

      2.08 ± 40%    -94.7%       0.11 ±-909%  perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_kmem_pages_node.copy_process._do_fork.sys_clone
      1.38 ± 17%   +106.7%       2.85 ± 27%  perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault
     25.87 ± 24%    -84.0%       4.13 ± 18%  perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.ondemand_readahead
     59.80 ± 11%    +31.8%      78.83 ± 3%  perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc.handle_mm_fault
      1.39 ± 3%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page.shmem_getpage_gfp.shmem_write_begin
      0.00 ± -1%     +Inf%       0.54 ± 95%  perf-profile.cpu-cycles.__bitmap_or.try_to_unmap_one.rmap_walk.try_to_unmap.shrink_page_list
     27.36 ± 23%    -67.1%       8.99 ± 15%  perf-profile.cpu-cycles.__do_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      1.37 ± 17%   +111.3%       2.90 ± 24%  perf-profile.cpu-cycles.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault.handle_mm_fault
     25.96 ± 24%    -76.6%       6.07 ± 11%  perf-profile.cpu-cycles.__do_page_cache_readahead.ondemand_readahead.page_cache_async_readahead.filemap_fault.xfs_filemap_fault
      1.42 ± 3%     -92.4%       0.11 ± 13%  perf-profile.cpu-cycles.__generic_file_write_iter.generic_file_write_iter.__vfs_write.vfs_write.sys_write
      1.43 ± 3%     -90.9%       0.13 ± 14%  perf-profile.cpu-cycles.__libc_start_main
      1.37 ± 17%   +111.1%       2.90 ± 25%  perf-profile.cpu-cycles.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault
     25.91 ± 24%    -83.7%       4.22 ± 18%  perf-profile.cpu-cycles.__page_cache_alloc.__do_page_cache_readahead.ondemand_readahead.page_cache_async_readahead.filemap_fault
      2.28 ± 21%   +195.3%       6.74 ± 10%  perf-profile.cpu-cycles.__page_check_address.try_to_unmap_one.rmap_walk.try_to_unmap.shrink_page_list
     59.80 ± 11%    +31.9%      78.90 ± 3%  perf-profile.cpu-cycles.__pte_alloc.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      1.42 ± 3%     -92.3%       0.11 ± 11%  perf-profile.cpu-cycles.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath.__write_nocancel
      1.12 ± 24%    -92.0%       0.09 ± 17%  perf-profile.cpu-cycles.__write_nocancel._start.main.__libc_start_main
      2.77 ± 48%    -96.0%       0.11 ±-909%  perf-profile.cpu-cycles._do_fork.sys_clone.entry_SYSCALL_64_fastpath
      2.11 ± 25%   +154.4%       5.36 ± 9%  perf-profile.cpu-cycles._raw_spin_lock.__page_check_address.try_to_unmap_one.rmap_walk.try_to_unmap
      1.43 ± 3%     -90.9%       0.13 ± 14%  perf-profile.cpu-cycles._start.main.__libc_start_main
      2.08 ± 40%    -94.7%       0.11 ±-909%  perf-profile.cpu-cycles.alloc_kmem_pages_node.copy_process._do_fork.sys_clone.entry_SYSCALL_64_fastpath
      1.38 ± 17%   +106.9%       2.85 ± 26%  perf-profile.cpu-cycles.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault
     25.87 ± 24%    -84.0%       4.14 ± 18%  perf-profile.cpu-cycles.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.ondemand_readahead.page_cache_async_readahead
     59.80 ± 11%    +31.8%      78.84 ± 3%  perf-profile.cpu-cycles.alloc_pages_current.pte_alloc_one.__pte_alloc.handle_mm_fault.__do_page_fault
      1.39 ± 3%     -99.8%       0.00 ±141%  perf-profile.cpu-cycles.alloc_pages_vma.shmem_alloc_page.shmem_getpage_gfp.shmem_write_begin.generic_perform_write
      5.08 ± 9%     -98.6%       0.07 ± 39%  perf-profile.cpu-cycles.call_function_interrupt._raw_spin_lock.__page_check_address.page_referenced_one.rmap_walk
      2.37 ± 11%   -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.call_function_interrupt.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush.try_to_unmap_one
      4.45 ± 7%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.call_function_interrupt.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others
      2.29 ± 4%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.call_function_interrupt.smp_call_function_many.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush
      2.77 ± 48%    -96.0%       0.11 ±-909%  perf-profile.cpu-cycles.copy_process._do_fork.sys_clone.entry_SYSCALL_64_fastpath
      0.00 ± -1%     +Inf%       0.77 ± 68%  perf-profile.cpu-cycles.cpumask_any_but.try_to_unmap_one.rmap_walk.try_to_unmap.shrink_page_list
     39.32 ± 5%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.default_send_IPI_mask_sequence_phys.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others
      0.04 ± 42%  +4042.9%       1.45 ± 7%  perf-profile.cpu-cycles.do_mpage_readpage.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead.ondemand_readahead
      2.08 ± 40%    -94.7%       0.11 ±-909%  perf-profile.cpu-cycles.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_kmem_pages_node.copy_process
     27.26 ± 23%    -75.0%       6.81 ± 22%  perf-profile.cpu-cycles.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc
     60.13 ± 10%    +30.9%      78.69 ± 3%  perf-profile.cpu-cycles.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one
      1.39 ± 3%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page
      2.98 ± 53%    -98.5%       0.05 ± 99%  perf-profile.cpu-cycles.entry_SYSCALL_64_fastpath
      1.12 ± 24%    -91.7%       0.09 ± 14%  perf-profile.cpu-cycles.entry_SYSCALL_64_fastpath.__write_nocancel._start.main.__libc_start_main
     27.36 ± 23%    -67.2%       8.98 ± 15%  perf-profile.cpu-cycles.filemap_fault.xfs_filemap_fault.__do_fault.handle_mm_fault.__do_page_fault
      0.15 ± 48%  +3077.6%       4.61 ± 32%  perf-profile.cpu-cycles.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      2.72 ± 9%     -98.3%       0.05 ± 58%  perf-profile.cpu-cycles.flush_smp_call_function_queue.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt._raw_spin_lock
      1.05 ± 14%   -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.flush_smp_call_function_queue.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt.native_flush_tlb_others
      3.04 ± 7%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.flush_smp_call_function_queue.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt.physflat_send_IPI_mask
      1.33 ± 4%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.flush_smp_call_function_queue.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt.smp_call_function_many
      4.18 ± 2%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.flush_tlb_func.flush_smp_call_function_queue.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt
     68.63 ± 4%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.flush_tlb_page.ptep_clear_flush.try_to_unmap_one.rmap_walk.try_to_unmap
      1.42 ± 3%     -92.3%       0.11 ± 11%  perf-profile.cpu-cycles.generic_file_write_iter.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
      1.42 ± 3%     -92.6%       0.10 ± 15%  perf-profile.cpu-cycles.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.__vfs_write.vfs_write
      3.19 ± 9%     -98.4%       0.05 ± 52%  perf-profile.cpu-cycles.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt._raw_spin_lock.__page_check_address
      1.23 ± 15%   -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt.native_flush_tlb_others.flush_tlb_page
      3.35 ± 7%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt.physflat_send_IPI_mask.native_send_call_func_ipi
      1.62 ± 4%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt.smp_call_function_many.native_flush_tlb_others
      5.43 ± 2%     -28.1%       3.90 ± 10%  perf-profile.cpu-cycles.kswapd.kthread.ret_from_fork
      5.46 ± 2%     -27.9%       3.94 ± 10%  perf-profile.cpu-cycles.kthread.ret_from_fork
      5.45 ± 2%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.llist_add_batch.smp_call_function_many.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush
      2.63 ± 3%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.llist_reverse_order.flush_smp_call_function_queue.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt
      1.43 ± 3%     -90.9%       0.13 ± 14%  perf-profile.cpu-cycles.main.__libc_start_main
      0.00 ± -1%     +Inf%       1.30 ± 42%  perf-profile.cpu-cycles.mm_find_pmd.page_referenced_one.rmap_walk.page_referenced.shrink_page_list
      0.00 ± -1%     +Inf%       1.61 ± 29%  perf-profile.cpu-cycles.mm_find_pmd.try_to_unmap_one.rmap_walk.try_to_unmap.shrink_page_list
      0.06 ± 30%  +3154.5%       1.79 ± 10%  perf-profile.cpu-cycles.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead.ondemand_readahead.page_cache_async_readahead
     68.39 ± 4%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush.try_to_unmap_one.rmap_walk
      1.56 ± 28%   +137.3%       3.70 ± 12%  perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock.__page_check_address.try_to_unmap_one.rmap_walk
     44.17 ± 5%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush
     25.96 ± 24%    -76.6%       6.07 ± 11%  perf-profile.cpu-cycles.ondemand_readahead.page_cache_async_readahead.filemap_fault.xfs_filemap_fault.__do_fault
     25.96 ± 24%    -76.6%       6.07 ± 11%  perf-profile.cpu-cycles.page_cache_async_readahead.filemap_fault.xfs_filemap_fault.__do_fault.handle_mm_fault
      0.03 ± 23%  +5058.3%       1.55 ± 9%  perf-profile.cpu-cycles.page_remove_rmap.try_to_unmap_one.rmap_walk.try_to_unmap.shrink_page_list
     44.02 ± 5%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others.flush_tlb_page
     59.80 ± 11%    +31.9%      78.86 ± 3%  perf-profile.cpu-cycles.pte_alloc_one.__pte_alloc.handle_mm_fault.__do_page_fault.do_page_fault
     68.80 ± 4%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.ptep_clear_flush.try_to_unmap_one.rmap_walk.try_to_unmap.shrink_page_list
      5.46 ± 2%     -27.3%       3.96 ± 8%  perf-profile.cpu-cycles.ret_from_fork
     71.72 ± 4%     -17.1%      59.46 ± 8%  perf-profile.cpu-cycles.rmap_walk.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec
      1.39 ± 3%     -99.5%       0.01 ± 57%  perf-profile.cpu-cycles.shmem_alloc_page.shmem_getpage_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter
      1.40 ± 3%     -98.4%       0.02 ± 19%  perf-profile.cpu-cycles.shmem_getpage_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
      1.40 ± 3%     -98.2%       0.03 ± 19%  perf-profile.cpu-cycles.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.__vfs_write
      0.00 ± -1%     +Inf%       0.59 ± 48%  perf-profile.cpu-cycles.shrink_active_list.shrink_lruvec.shrink_zone.do_try_to_free_pages.try_to_free_pages
      5.43 ± 2%     -28.6%       3.88 ± 10%  perf-profile.cpu-cycles.shrink_inactive_list.shrink_lruvec.shrink_zone.kswapd.kthread
      5.43 ± 2%     -28.2%       3.90 ± 10%  perf-profile.cpu-cycles.shrink_lruvec.shrink_zone.kswapd.kthread.ret_from_fork
      5.43 ± 2%     -28.7%       3.87 ± 10%  perf-profile.cpu-cycles.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone.kswapd
      2.08 ± 40%    -94.7%       0.11 ±-909%  perf-profile.cpu-cycles.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_kmem_pages_node
      2.92 ± 67%    -96.2%       0.11 ± 89%  perf-profile.cpu-cycles.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma
      5.43 ± 2%     -28.1%       3.90 ± 10%  perf-profile.cpu-cycles.shrink_zone.kswapd.kthread.ret_from_fork
      3.40 ± 9%     -97.8%       0.08 ± 35%  perf-profile.cpu-cycles.smp_call_function_interrupt.call_function_interrupt._raw_spin_lock.__page_check_address.page_referenced_one
      1.47 ± 13%   -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.smp_call_function_interrupt.call_function_interrupt.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush
      3.46 ± 7%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.smp_call_function_interrupt.call_function_interrupt.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many
      1.76 ± 4%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.smp_call_function_interrupt.call_function_interrupt.smp_call_function_many.native_flush_tlb_others.flush_tlb_page
     65.78 ± 4%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.smp_call_function_many.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush.try_to_unmap_one
      2.77 ± 48%    -96.0%       0.11 ±-909%  perf-profile.cpu-cycles.sys_clone.entry_SYSCALL_64_fastpath
      1.12 ± 24%    -91.8%       0.09 ± 14%  perf-profile.cpu-cycles.sys_write.entry_SYSCALL_64_fastpath.__write_nocancel._start.main
      2.08 ± 40%    -94.7%       0.11 ±-909%  perf-profile.cpu-cycles.try_to_free_pages.__alloc_pages_nodemask.alloc_kmem_pages_node.copy_process._do_fork
     27.26 ± 23%    -75.0%       6.81 ± 22%  perf-profile.cpu-cycles.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead
     60.13 ± 10%    +30.9%      78.69 ± 3%  perf-profile.cpu-cycles.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc
      1.39 ± 3%    -100.0%       0.00 ± -1%  perf-profile.cpu-cycles.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page.shmem_getpage_gfp
     71.75 ± 4%     -16.7%      59.77 ± 8%  perf-profile.cpu-cycles.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone
     71.59 ± 4%     -18.4%      58.43 ± 8%  perf-profile.cpu-cycles.try_to_unmap_one.rmap_walk.try_to_unmap.shrink_page_list.shrink_inactive_list
      0.02 ±102%  +5700.0%       1.02 ± 11%  perf-profile.cpu-cycles.unlock_page.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault
      1.12 ± 24%    -92.0%       0.09 ± 13%  perf-profile.cpu-cycles.vfs_write.sys_write.entry_SYSCALL_64_fastpath.__write_nocancel._start
     27.36 ± 23%    -67.1%       8.99 ± 15%  perf-profile.cpu-cycles.xfs_filemap_fault.__do_fault.handle_mm_fault.__do_page_fault.do_page_fault
      0.06 ± 30%  +3159.1%       1.79 ± 10%  perf-profile.cpu-cycles.xfs_vm_readpages.__do_page_cache_readahead.ondemand_readahead.page_cache_async_readahead.filemap_fault

    146.50 ± 35%    -92.7%      10.75 ± 42%  sched_debug.cfs_rq[0]:/.nr_spread_over
      2737 ± 11%    -21.2%       2157 ± 2%  sched_debug.cfs_rq[0]:/.tg_load_avg
     32.25 ± 46%    -49.6%      16.25 ± 6%  sched_debug.cfs_rq[10]:/.load
     38.00 ± 46%    -95.4%       1.75 ± 47%  sched_debug.cfs_rq[10]:/.nr_spread_over
      2733 ± 12%    -22.3%       2123 ± 7%  sched_debug.cfs_rq[10]:/.tg_load_avg
     63.00 ± 60%    -95.2%       3.00 ± 47%  sched_debug.cfs_rq[11]:/.nr_spread_over
      2733 ± 12%    -22.3%       2123 ± 7%  sched_debug.cfs_rq[11]:/.tg_load_avg
     90.00 ± 51%    -90.3%       8.75 ± 71%  sched_debug.cfs_rq[12]:/.nr_spread_over
      2730 ± 11%    -22.3%       2122 ± 7%  sched_debug.cfs_rq[12]:/.tg_load_avg
    586.25 ± 7%     +31.6%     771.25 ± 8%  sched_debug.cfs_rq[12]:/.util_avg
     78.00 ± 14%    -89.7%       8.00 ± 68%  sched_debug.cfs_rq[13]:/.nr_spread_over
      2725 ± 12%    -22.2%       2121 ± 7%  sched_debug.cfs_rq[13]:/.tg_load_avg
    117.75 ± 45%    -94.9%       6.00 ± 71%  sched_debug.cfs_rq[14]:/.nr_spread_over
      2725 ± 12%    -23.8%       2077 ± 4%  sched_debug.cfs_rq[14]:/.tg_load_avg
    138.25 ± 65%    -95.8%       5.75 ± 79%  sched_debug.cfs_rq[15]:/.nr_spread_over
      2726 ± 12%    -24.0%       2070 ± 4%  sched_debug.cfs_rq[15]:/.tg_load_avg
     67.50 ± 26%    -96.7%       2.25 ± 36%  sched_debug.cfs_rq[16]:/.nr_spread_over
      2727 ± 12%    -24.1%       2070 ± 4%  sched_debug.cfs_rq[16]:/.tg_load_avg
    106.00 ± 65%    -97.6%       2.50 ± 72%  sched_debug.cfs_rq[17]:/.nr_spread_over
      2729 ± 12%    -24.1%       2070 ± 4%  sched_debug.cfs_rq[17]:/.tg_load_avg
    587.00 ± 12%    +30.2%     764.00 ± 9%  sched_debug.cfs_rq[17]:/.util_avg
    132.50 ± 42%    -95.5%       6.00 ± 45%  sched_debug.cfs_rq[18]:/.nr_spread_over
      2727 ± 11%    -25.6%       2028 ± 6%  sched_debug.cfs_rq[18]:/.tg_load_avg
     10.50 ± 38%   +154.8%      26.75 ± 46%  sched_debug.cfs_rq[19]:/.load
    126.25 ± 33%    -97.2%       3.50 ± 58%  sched_debug.cfs_rq[19]:/.nr_spread_over
      9.25 ± 31%    +70.3%      15.75 ± 17%  sched_debug.cfs_rq[19]:/.runnable_load_avg
      2726 ± 11%    -25.7%       2027 ± 6%  sched_debug.cfs_rq[19]:/.tg_load_avg
    561.25 ± 22%    +30.9%     734.50 ± 7%  sched_debug.cfs_rq[19]:/.util_avg
     41.25 ± 20%    -90.9%       3.75 ± 89%  sched_debug.cfs_rq[1]:/.nr_spread_over
      2735 ± 11%    -21.2%       2155 ± 2%  sched_debug.cfs_rq[1]:/.tg_load_avg
      9.25 ± 44%   +105.4%      19.00 ± 0%  sched_debug.cfs_rq[20]:/.load
     79.50 ± 37%    -92.5%       6.00 ± 40%  sched_debug.cfs_rq[20]:/.nr_spread_over
      8.75 ± 40%    +80.0%      15.75 ± 12%  sched_debug.cfs_rq[20]:/.runnable_load_avg
      2726 ± 11%    -25.7%       2025 ± 6%  sched_debug.cfs_rq[20]:/.tg_load_avg
    101.75 ± 30%    -92.9%       7.25 ± 51%  sched_debug.cfs_rq[21]:/.nr_spread_over
      2726 ± 11%    -25.7%       2026 ± 6%  sched_debug.cfs_rq[21]:/.tg_load_avg
     80.00 ± 35%    -95.9%       3.25 ± 70%  sched_debug.cfs_rq[22]:/.nr_spread_over
      2726 ± 11%    -25.5%       2029 ± 6%  sched_debug.cfs_rq[22]:/.tg_load_avg
    181.75 ± 89%    -98.2%       3.25 ±107%  sched_debug.cfs_rq[23]:/.nr_spread_over
      2726 ± 11%    -25.7%       2025 ± 7%  sched_debug.cfs_rq[23]:/.tg_load_avg
    624.50 ± 17%    +21.1%     756.50 ± 7%  sched_debug.cfs_rq[23]:/.util_avg
     26.00 ± 26%    -91.3%       2.25 ± 36%  sched_debug.cfs_rq[24]:/.nr_spread_over
      2726 ± 12%    -25.7%       2025 ± 7%  sched_debug.cfs_rq[24]:/.tg_load_avg
     33.75 ± 33%    -94.1%       2.00 ± 0%  sched_debug.cfs_rq[25]:/.nr_spread_over
      2726 ± 12%    -25.7%       2024 ± 7%  sched_debug.cfs_rq[25]:/.tg_load_avg
     25.50 ± 21%    -97.1%       0.75 ±110%  sched_debug.cfs_rq[26]:/.nr_spread_over
      2725 ± 12%    -25.8%       2022 ± 7%  sched_debug.cfs_rq[26]:/.tg_load_avg
     59.75 ±105%    -91.2%       5.25 ± 98%  sched_debug.cfs_rq[27]:/.nr_spread_over
      2726 ± 12%    -25.9%       2021 ± 7%  sched_debug.cfs_rq[27]:/.tg_load_avg
     27.00 ± 30%    -91.7%       2.25 ± 48%  sched_debug.cfs_rq[28]:/.nr_spread_over
      2726 ± 12%    -26.1%       2015 ± 7%  sched_debug.cfs_rq[28]:/.tg_load_avg
     23.50 ± 21%    -89.4%       2.50 ±107%  sched_debug.cfs_rq[29]:/.nr_spread_over
      2725 ± 12%    -25.5%       2031 ± 8%  sched_debug.cfs_rq[29]:/.tg_load_avg
     32.00 ± 15%    -91.4%       2.75 ± 74%  sched_debug.cfs_rq[2]:/.nr_spread_over
      2732 ± 11%    -22.6%       2114 ± 2%  sched_debug.cfs_rq[2]:/.tg_load_avg
     19.75 ± 9%     -26.6%      14.50 ± 14%  sched_debug.cfs_rq[30]:/.load
     57.50 ± 98%    -97.0%       1.75 ± 84%  sched_debug.cfs_rq[30]:/.nr_spread_over
      2726 ± 12%    -25.4%       2033 ± 8%  sched_debug.cfs_rq[30]:/.tg_load_avg
     28.00 ± 19%    -95.5%       1.25 ± 87%  sched_debug.cfs_rq[31]:/.nr_spread_over
      2726 ± 12%    -25.5%       2030 ± 8%  sched_debug.cfs_rq[31]:/.tg_load_avg
     27.00 ± 19%    -81.5%       5.00 ± 67%  sched_debug.cfs_rq[32]:/.nr_spread_over
      2727 ± 12%    -25.6%       2029 ± 8%  sched_debug.cfs_rq[32]:/.tg_load_avg
     57.25 ± 99%    -98.7%       0.75 ±110%  sched_debug.cfs_rq[33]:/.nr_spread_over
      2727 ± 12%    -25.5%       2032 ± 8%  sched_debug.cfs_rq[33]:/.tg_load_avg
     35.50 ± 56%    -95.3%       1.67 ± 28%  sched_debug.cfs_rq[34]:/.nr_spread_over
      2728 ± 12%    -25.5%       2032 ± 8%  sched_debug.cfs_rq[34]:/.tg_load_avg
     40.25 ± 56%    -96.7%       1.33 ± 35%  sched_debug.cfs_rq[35]:/.nr_spread_over
      2728 ± 12%    -25.5%       2032 ± 8%  sched_debug.cfs_rq[35]:/.tg_load_avg
     88.75 ± 26%    -97.5%       2.25 ± 48%  sched_debug.cfs_rq[36]:/.nr_spread_over
      2726 ± 12%    -25.6%       2027 ± 8%  sched_debug.cfs_rq[36]:/.tg_load_avg
     46.25 ± 25%    -94.6%       2.50 ± 66%  sched_debug.cfs_rq[37]:/.nr_spread_over
      2725 ± 12%    -25.6%       2028 ± 8%  sched_debug.cfs_rq[37]:/.tg_load_avg
    128.25 ± 33%    -99.0%       1.25 ±131%  sched_debug.cfs_rq[38]:/.nr_spread_over
      2723 ± 12%    -25.6%       2026 ± 9%  sched_debug.cfs_rq[38]:/.tg_load_avg
    128239 ± 3%      +8.6%     139228 ± 1%  sched_debug.cfs_rq[39]:/.exec_clock
     11.50 ± 17%    +73.9%      20.00 ± 58%  sched_debug.cfs_rq[39]:/.load_avg
    125.25 ± 37%    -98.4%       2.00 ± 79%  sched_debug.cfs_rq[39]:/.nr_spread_over
      9.25 ± 11%    +37.8%      12.75 ± 10%  sched_debug.cfs_rq[39]:/.runnable_load_avg
      2723 ± 12%    -25.6%       2025 ± 9%  sched_debug.cfs_rq[39]:/.tg_load_avg
     11.50 ± 17%    +73.9%      20.00 ± 58%  sched_debug.cfs_rq[39]:/.tg_load_avg_contrib
    565.25 ± 11%    +30.0%     734.75 ± 8%  sched_debug.cfs_rq[39]:/.util_avg
     52.75 ± 68%    -93.8%       3.25 ± 59%  sched_debug.cfs_rq[3]:/.nr_spread_over
      2733 ± 11%    -22.7%       2113 ± 3%  sched_debug.cfs_rq[3]:/.tg_load_avg
     98.25 ± 23%    -98.7%       1.25 ± 34%  sched_debug.cfs_rq[40]:/.nr_spread_over
      2724 ± 12%    -24.1%       2067 ± 11%  sched_debug.cfs_rq[40]:/.tg_load_avg
    113.75 ± 53%    -97.1%       3.25 ± 54%  sched_debug.cfs_rq[41]:/.nr_spread_over
      2718 ± 12%    -23.2%       2088 ± 11%  sched_debug.cfs_rq[41]:/.tg_load_avg
      9.50 ± 24%   +571.1%      63.75 ±113%  sched_debug.cfs_rq[42]:/.load
    154.00 ± 12%    -98.4%       2.50 ± 82%  sched_debug.cfs_rq[42]:/.nr_spread_over
      8.00 ± 15%   +531.2%      50.50 ±118%  sched_debug.cfs_rq[42]:/.runnable_load_avg
      2718 ± 12%    -23.2%       2086 ± 11%  sched_debug.cfs_rq[42]:/.tg_load_avg
    482.00 ± 15%    +31.3%     633.00 ± 9%  sched_debug.cfs_rq[42]:/.util_avg
    141.75 ± 37%    -99.1%       1.25 ± 87%  sched_debug.cfs_rq[43]:/.nr_spread_over
      2717 ± 12%    -24.8%       2043 ± 12%  sched_debug.cfs_rq[43]:/.tg_load_avg
     98.00 ± 39%    -94.1%       5.75 ± 88%  sched_debug.cfs_rq[44]:/.nr_spread_over
      2718 ± 12%    -24.8%       2042 ± 12%  sched_debug.cfs_rq[44]:/.tg_load_avg
    664.00 ± 3%     +17.5%     780.25 ± 8%  sched_debug.cfs_rq[44]:/.util_avg
     13.25 ± 38%   +658.5%     100.50 ± 80%  sched_debug.cfs_rq[45]:/.load
    119.50 ± 15%    -95.3%       5.67 ± 92%  sched_debug.cfs_rq[45]:/.nr_spread_over
      2716 ± 12%    -24.8%       2042 ± 12%  sched_debug.cfs_rq[45]:/.tg_load_avg
    129025 ± 2%      +8.5%     140032 ± 2%  sched_debug.cfs_rq[46]:/.exec_clock
    105.50 ± 71%    -95.7%       4.50 ± 72%  sched_debug.cfs_rq[46]:/.nr_spread_over
      2716 ± 12%    -24.9%       2040 ± 12%  sched_debug.cfs_rq[46]:/.tg_load_avg
    530.25 ± 17%    +30.6%     692.50 ± 3%  sched_debug.cfs_rq[46]:/.util_avg
     13.50 ± 33%    +48.1%      20.00 ± 12%  sched_debug.cfs_rq[47]:/.load
    193.75 ± 93%    -98.6%       2.75 ± 64%  sched_debug.cfs_rq[47]:/.nr_spread_over
     11.25 ± 19%    +53.3%      17.25 ± 7%  sched_debug.cfs_rq[47]:/.runnable_load_avg
      2717 ± 12%    -25.1%       2036 ± 12%  sched_debug.cfs_rq[47]:/.tg_load_avg
     37.00 ± 37%    -88.5%       4.25 ± 67%  sched_debug.cfs_rq[4]:/.nr_spread_over
      2732 ± 11%    -22.8%       2110 ± 2%  sched_debug.cfs_rq[4]:/.tg_load_avg
     27.00 ± 6%     -80.6%       5.25 ± 36%  sched_debug.cfs_rq[5]:/.nr_spread_over
    348.75 ± 93%    -95.6%      15.50 ± 14%  sched_debug.cfs_rq[5]:/.runnable_load_avg
      2733 ± 11%    -23.8%       2081 ± 4%  sched_debug.cfs_rq[5]:/.tg_load_avg
    117.50 ±132%    -94.7%       6.25 ± 79%  sched_debug.cfs_rq[6]:/.nr_spread_over
      2732 ± 11%    -23.8%       2081 ± 4%  sched_debug.cfs_rq[6]:/.tg_load_avg
     21.75 ± 18%    -89.7%       2.25 ± 57%  sched_debug.cfs_rq[7]:/.nr_spread_over
      2731 ± 11%    -23.7%       2083 ± 4%  sched_debug.cfs_rq[7]:/.tg_load_avg
     33.00 ± 43%    -96.2%       1.25 ± 34%  sched_debug.cfs_rq[8]:/.nr_spread_over
      2734 ± 11%    -22.4%       2121 ± 7%  sched_debug.cfs_rq[8]:/.tg_load_avg
     55.25 ± 83%    -92.3%       4.25 ± 97%  sched_debug.cfs_rq[9]:/.nr_spread_over
      2732 ± 12%    -22.4%       2121 ± 7%  sched_debug.cfs_rq[9]:/.tg_load_avg
    884853 ± 8%      -6.8%     824464 ± 8%  sched_debug.cpu#0.avg_idle
     32.25 ± 46%    -49.6%      16.25 ± 6%  sched_debug.cpu#10.load
    944295 ± 5%     -13.7%     814873 ± 14%  sched_debug.cpu#13.avg_idle
    822.00 ± 26%    +57.0%       1290 ± 31%  sched_debug.cpu#13.curr->pid
      2926 ± 45%   +365.2%      13610 ± 67%  sched_debug.cpu#13.ttwu_local
    984189 ± 0%     -13.7%     848936 ± 7%  sched_debug.cpu#16.avg_idle
    979503 ± 3%     -10.7%     874993 ± 5%  sched_debug.cpu#18.avg_idle
    995.75 ± 13%    +35.5%       1348 ± 15%  sched_debug.cpu#18.curr->pid
      9.25 ± 31%    +70.3%      15.75 ± 17%  sched_debug.cpu#19.cpu_load[0]
      9.00 ± 34%    +77.8%      16.00 ± 15%  sched_debug.cpu#19.cpu_load[1]
      9.25 ± 31%    +75.7%      16.25 ± 14%  sched_debug.cpu#19.cpu_load[2]
      9.25 ± 31%    +75.7%      16.25 ± 14%  sched_debug.cpu#19.cpu_load[3]
      9.75 ± 29%    +66.7%      16.25 ± 14%  sched_debug.cpu#19.cpu_load[4]
    883.25 ± 32%    +73.4%       1531 ± 34%  sched_debug.cpu#19.curr->pid
     10.50 ± 38%   +154.8%      26.75 ± 46%  sched_debug.cpu#19.load
      2117 ± 10%   +356.6%       9667 ± 48%  sched_debug.cpu#19.ttwu_local
    700.50 ± 31%   +136.0%       1653 ± 14%  sched_debug.cpu#2.ttwu_local
      8.75 ± 40%    +85.7%      16.25 ± 6%  sched_debug.cpu#20.cpu_load[0]
      8.75 ± 40%    +85.7%      16.25 ± 6%  sched_debug.cpu#20.cpu_load[1]
      8.75 ± 40%    +85.7%      16.25 ± 6%  sched_debug.cpu#20.cpu_load[2]
      9.00 ± 37%    +80.6%      16.25 ± 6%  sched_debug.cpu#20.cpu_load[3]
      9.25 ± 33%    +75.7%      16.25 ± 6%  sched_debug.cpu#20.cpu_load[4]
    770.25 ± 41%    +51.8%       1169 ± 9%  sched_debug.cpu#20.curr->pid
      9.25 ± 44%   +105.4%      19.00 ± 0%  sched_debug.cpu#20.load
     10.75 ± 10%   +430.2%      57.00 ±127%  sched_debug.cpu#21.cpu_load[1]
     10.75 ± 10%   +490.7%      63.50 ±108%  sched_debug.cpu#21.cpu_load[2]
     11.00 ± 11%   +579.5%      74.75 ± 91%  sched_debug.cpu#21.cpu_load[3]
     11.25 ± 13%   +646.7%      84.00 ± 84%  sched_debug.cpu#21.cpu_load[4]
     10.50 ± 14%   +431.0%      55.75 ±131%  sched_debug.cpu#23.cpu_load[0]
     10.50 ± 14%   +433.3%      56.00 ±132%  sched_debug.cpu#23.cpu_load[1]
      2403 ± 15%   +120.0%       5288 ± 56%  sched_debug.cpu#23.ttwu_local
    562.50 ± 35%   +492.5%       3332 ± 72%  sched_debug.cpu#24.ttwu_local
      8080 ± 40%   +118.7%      17673 ± 59%  sched_debug.cpu#25.nr_switches
      9373 ± 35%    +96.7%      18439 ± 56%  sched_debug.cpu#25.sched_count
      2073 ± 78%   +215.0%       6531 ± 77%  sched_debug.cpu#25.sched_goidle
      3178 ± 61%   +114.1%       6803 ± 45%  sched_debug.cpu#25.ttwu_count
    472.75 ± 20%   +326.8%       2017 ± 20%  sched_debug.cpu#25.ttwu_local
    535.25 ± 32%   +215.8%       1690 ± 26%  sched_debug.cpu#26.ttwu_local
    930733 ± 7%     -28.9%     661366 ± 13%  sched_debug.cpu#27.avg_idle
    967080 ± 5%     -20.0%     773442 ± 18%  sched_debug.cpu#29.avg_idle
      2763
± 74% +165.8% 7343 ± 34% sched_debug.cpu#29.ttwu_count 483.00 ± 16% +579.6% 3282 ± 86% sched_debug.cpu#29.ttwu_local 3825 ± 43% +70.0% 6504 ± 16% sched_debug.cpu#3.ttwu_count 847.00 ± 25% +169.5% 2282 ± 2% sched_debug.cpu#3.ttwu_local 917918 ± 8% -14.5% 784711 ± 6% sched_debug.cpu#30.avg_idle 19.75 ± 9% -26.6% 14.50 ± 14% sched_debug.cpu#30.load 1.00 ± 0% -100.0% 0.00 ± 0% sched_debug.cpu#30.nr_running 480.25 ± 22% +241.9% 1641 ± 15% sched_debug.cpu#31.ttwu_local 522.00 ± 13% +245.2% 1802 ± 17% sched_debug.cpu#32.ttwu_local 569.00 ± 23% +223.9% 1842 ± 3% sched_debug.cpu#33.ttwu_local 540.25 ± 20% +172.5% 1472 ± 5% sched_debug.cpu#34.ttwu_local 3218 ± 34% +125.6% 7260 ± 29% sched_debug.cpu#36.ttwu_local 9.50 ± 11% +44.7% 13.75 ± 12% sched_debug.cpu#39.cpu_load[0] 9.50 ± 11% +44.7% 13.75 ± 12% sched_debug.cpu#39.cpu_load[1] 9.50 ± 11% +44.7% 13.75 ± 12% sched_debug.cpu#39.cpu_load[2] 9.50 ± 11% +44.7% 13.75 ± 12% sched_debug.cpu#39.cpu_load[3] 9.50 ± 11% +44.7% 13.75 ± 12% sched_debug.cpu#39.cpu_load[4] 990695 ± 1% -17.0% 822031 ± 8% sched_debug.cpu#4.avg_idle 11.00 ± 24% +79.5% 19.75 ± 32% sched_debug.cpu#40.cpu_load[0] 11.00 ± 24% +77.3% 19.50 ± 31% sched_debug.cpu#40.cpu_load[1] 11.25 ± 22% +73.3% 19.50 ± 31% sched_debug.cpu#40.cpu_load[2] 11.50 ± 19% +69.6% 19.50 ± 31% sched_debug.cpu#40.cpu_load[3] 11.50 ± 19% +69.6% 19.50 ± 31% sched_debug.cpu#40.cpu_load[4] 8.00 ± 15% +525.0% 50.00 ±120% sched_debug.cpu#42.cpu_load[0] 8.25 ± 17% +506.1% 50.00 ±121% sched_debug.cpu#42.cpu_load[1] 8.25 ± 17% +503.0% 49.75 ±122% sched_debug.cpu#42.cpu_load[2] 8.25 ± 17% +500.0% 49.50 ±121% sched_debug.cpu#42.cpu_load[3] 8.25 ± 17% +521.2% 51.25 ±122% sched_debug.cpu#42.cpu_load[4] 792.75 ± 25% +182.2% 2236 ± 30% sched_debug.cpu#42.curr->pid 9.50 ± 24% +571.1% 63.75 ±113% sched_debug.cpu#42.load 3297 ± 11% -27.2% 2400 ± 14% sched_debug.cpu#44.ttwu_local 13.25 ± 38% +658.5% 100.50 ± 80% sched_debug.cpu#45.load 8235 ± 60% -61.4% 3183 ± 36% sched_debug.cpu#45.ttwu_local 11.75 ± 21% 
+44.7% 17.00 ± 9% sched_debug.cpu#47.cpu_load[0] 11.50 ± 19% +45.7% 16.75 ± 10% sched_debug.cpu#47.cpu_load[1] 11.50 ± 19% +45.7% 16.75 ± 10% sched_debug.cpu#47.cpu_load[2] 11.25 ± 19% +46.7% 16.50 ± 12% sched_debug.cpu#47.cpu_load[3] 11.25 ± 19% +46.7% 16.50 ± 12% sched_debug.cpu#47.cpu_load[4] 13.50 ± 33% +48.1% 20.00 ± 12% sched_debug.cpu#47.load 348.75 ± 93% -95.6% 15.50 ± 14% sched_debug.cpu#5.cpu_load[0] 349.25 ± 93% -95.6% 15.25 ± 15% sched_debug.cpu#5.cpu_load[1] 349.25 ± 93% -95.6% 15.25 ± 15% sched_debug.cpu#5.cpu_load[2] 349.00 ± 93% -95.6% 15.25 ± 15% sched_debug.cpu#5.cpu_load[3] 349.00 ± 93% -95.6% 15.25 ± 15% sched_debug.cpu#5.cpu_load[4] 788.00 ± 41% +132.6% 1833 ± 32% sched_debug.cpu#5.curr->pid 990.75 ± 39% +72.2% 1706 ± 15% sched_debug.cpu#6.ttwu_local -7.00 ±-92% -178.6% 5.50 ±155% sched_debug.cpu#7.nr_uninterruptible 2511 ± 55% +76.4% 4430 ± 28% sched_debug.cpu#7.sched_goidle 683.50 ± 19% +147.2% 1689 ± 22% sched_debug.cpu#9.ttwu_local 1.70 ± 28% -47.4% 0.89 ± 57% sched_debug.rt_rq[0]:/.rt_time ========================================================================================= tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/runtime/test: ivb43/vm-scalability/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/300s/mmap-xread-rand-mt commit: 5b74283ab251b9db55cbbe31d19ca72482103290 72b252aed506b8f1a03f7abd29caef4cdf6a043b 5b74283ab251b9db 72b252aed506b8f1a03f7abd29 ---------------- -------------------------- %stddev %change %stddev \ | \ 0.00 ±100% -100.0% 0.00 ± -1% vm-scalability.stddev 126269 ± 2% -34.6% 82602 ± 0% vm-scalability.time.involuntary_context_switches 54531943 ± 0% +14.8% 62613165 ± 0% vm-scalability.time.major_page_faults 61436008 ± 0% +21.6% 74727557 ± 0% vm-scalability.time.minor_page_faults 4056 ± 0% -5.6% 3829 ± 0% vm-scalability.time.percent_of_cpu_this_job_got 12654 ± 0% -5.8% 11920 ± 0% vm-scalability.time.system_time 74.56 ± 0% +19.5% 89.11 ± 0% vm-scalability.time.user_time 36350857 ± 0% 
+24.5% 45240434 ± 0% vm-scalability.time.voluntary_context_switches 4403716 ± 0% +14.8% 5056679 ± 0% softirqs.SCHED 243364 ± 1% -12.7% 212578 ± 0% meminfo.Active(anon) 263911 ± 1% -11.9% 232436 ± 0% meminfo.Shmem 14808 ± 78% +158.3% 38252 ± 7% numa-meminfo.node0.Inactive(anon) 29051 ± 39% -82.4% 5114 ± 52% numa-meminfo.node1.Inactive(anon) 39481 ±120% -100.0% 0.00 ± -1% latency_stats.avg.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath 39481 ±120% -100.0% 0.00 ± -1% latency_stats.max.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath 39481 ±120% -100.0% 0.00 ± -1% latency_stats.sum.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath 6.25 ± 6% +44.0% 9.00 ± 0% vmstat.procs.b 230932 ± 0% +23.0% 284130 ± 0% vmstat.system.cs 1554716 ± 0% -85.8% 220448 ± 1% vmstat.system.in 85.52 ± 0% -3.7% 82.40 ± 0% turbostat.%Busy 2559 ± 0% -3.6% 2466 ± 0% turbostat.Avg_MHz 10.56 ± 1% +29.9% 13.71 ± 1% turbostat.CPU%c1 7.91 ± 1% -4.1% 7.58 ± 0% turbostat.RAMWatt 126269 ± 2% -34.6% 82602 ± 0% time.involuntary_context_switches 54531943 ± 0% +14.8% 62613165 ± 0% time.major_page_faults 61436008 ± 0% +21.6% 74727557 ± 0% time.minor_page_faults 74.56 ± 0% +19.5% 89.11 ± 0% time.user_time 36350857 ± 0% +24.5% 45240434 ± 0% time.voluntary_context_switches 20018030 ± 15% -31.3% 13752133 ± 19% numa-numastat.node0.numa_foreign 22812255 ± 14% -26.2% 16838094 ± 22% numa-numastat.node0.numa_miss 22812919 ± 14% -26.2% 16838306 ± 22% numa-numastat.node0.other_node 22812255 ± 14% -26.2% 16838094 ± 22% numa-numastat.node1.numa_foreign 20018030 ± 15% -31.3% 13752133 ± 19% numa-numastat.node1.numa_miss 20018852 ± 15% -31.3% 13752234 ± 19% numa-numastat.node1.other_node 1.228e+09 ± 0% +37.4% 1.688e+09 ± 1% cpuidle.C1-IVT.time 32340120 ± 0% +57.8% 51035613 ± 0% cpuidle.C1-IVT.usage 2.121e+08 ± 1% -40.8% 1.256e+08 ± 2% cpuidle.C1E-IVT.time 4533185 ± 1% -30.5% 3149430 ± 2% cpuidle.C1E-IVT.usage 40110810 ± 5% -23.7% 30624162 ± 2% cpuidle.C3-IVT.time 809767 ± 4% 
-25.7% 601425 ± 2% cpuidle.C3-IVT.usage 7.123e+08 ± 1% +15.3% 8.215e+08 ± 2% cpuidle.C6-IVT.time 42448 ± 3% +31.4% 55772 ± 1% cpuidle.C6-IVT.usage 708.00 ± 7% +50.5% 1065 ± 1% cpuidle.POLL.usage 634.50 ± 6% +1.1e+08% 6.895e+08 ±173% numa-vmstat.node0.nr_alloc_batch 3666 ± 77% +160.9% 9563 ± 7% numa-vmstat.node0.nr_inactive_anon 201.25 ± 3% -92.0% 16.00 ± 4% numa-vmstat.node0.nr_isolated_file 175.50 ± 9% -38.6% 107.75 ± 12% numa-vmstat.node0.nr_pages_scanned 7984325 ± 15% -32.2% 5409698 ± 19% numa-vmstat.node0.numa_foreign 9373650 ± 16% -31.0% 6467646 ± 18% numa-vmstat.node0.numa_miss 9407096 ± 16% -30.9% 6503985 ± 18% numa-vmstat.node0.numa_other 7259 ± 39% -82.3% 1283 ± 52% numa-vmstat.node1.nr_inactive_anon 210.50 ± 4% -92.8% 15.25 ± 8% numa-vmstat.node1.nr_isolated_file 9373769 ± 16% -31.0% 6467702 ± 18% numa-vmstat.node1.numa_foreign 7984344 ± 15% -32.2% 5409734 ± 19% numa-vmstat.node1.numa_miss 8035100 ± 15% -32.1% 5457314 ± 19% numa-vmstat.node1.numa_other 4550721 ± 2% -9.2% 4133466 ± 3% numa-vmstat.node1.workingset_refault 17565 ± 1% -93.7% 1100 ± 17% proc-vmstat.allocstall 26050728 ± 51% -100.0% 0.00 ± -1% proc-vmstat.compact_free_scanned 11858 ± 50% -100.0% 0.00 ± -1% proc-vmstat.compact_isolated 8749 ± 55% -100.0% 0.00 ± -1% proc-vmstat.compact_migrate_scanned 127.00 ± 27% -94.9% 6.50 ± 13% proc-vmstat.kswapd_high_wmark_hit_quickly 819.75 ± 6% +504.4% 4954 ± 2% proc-vmstat.kswapd_low_wmark_hit_quickly 60853 ± 1% -12.7% 53103 ± 0% proc-vmstat.nr_active_anon 392.00 ± 12% -91.8% 32.00 ± 4% proc-vmstat.nr_isolated_file 65986 ± 1% -11.9% 58109 ± 0% proc-vmstat.nr_shmem 42830286 ± 3% -28.6% 30590228 ± 4% proc-vmstat.numa_foreign 23780 ± 2% +10.3% 26221 ± 1% proc-vmstat.numa_hint_faults 42830286 ± 3% -28.6% 30590228 ± 4% proc-vmstat.numa_miss 42831486 ± 3% -28.6% 30590541 ± 4% proc-vmstat.numa_other 2691 ± 3% -23.5% 2058 ± 4% proc-vmstat.numa_pages_migrated 23782 ± 2% +10.4% 26252 ± 1% proc-vmstat.numa_pte_updates 1837 ± 5% +181.5% 5172 ± 2% 
proc-vmstat.pageoutrun 2908119 ± 0% +23.8% 3599604 ± 0% proc-vmstat.pgactivate 4987674 ± 3% +10.9% 5529373 ± 3% proc-vmstat.pgdeactivate 1.626e+08 ± 0% +19.0% 1.935e+08 ± 0% proc-vmstat.pgfault 54531943 ± 0% +14.8% 62613165 ± 0% proc-vmstat.pgmajfault 7894 ± 34% -73.9% 2058 ± 4% proc-vmstat.pgmigrate_success 4786175 ± 3% +10.8% 5302705 ± 3% proc-vmstat.pgrefill_normal 2040051 ± 2% -91.0% 183121 ± 15% proc-vmstat.pgscan_direct_dma32 45762582 ± 2% -91.1% 4067173 ± 15% proc-vmstat.pgscan_direct_normal 671597 ± 7% +275.0% 2518325 ± 3% proc-vmstat.pgscan_kswapd_dma32 14881691 ± 4% +276.0% 55960850 ± 1% proc-vmstat.pgscan_kswapd_normal 1220132 ± 0% -98.6% 16599 ± 24% proc-vmstat.pgsteal_direct_dma32 27161268 ± 1% -98.2% 501766 ± 21% proc-vmstat.pgsteal_direct_normal 135733 ± 5% +853.3% 1293969 ± 4% proc-vmstat.pgsteal_kswapd_dma32 4269136 ± 2% +592.9% 29581898 ± 1% proc-vmstat.pgsteal_kswapd_normal 1.00 ± 2% +35.0% 1.35 ± 2% perf-profile.cpu-cycles.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task 2.66 ± 4% +42.5% 3.78 ± 1% perf-profile.cpu-cycles.__add_to_page_cache_locked.add_to_page_cache_lru.filemap_fault.xfs_filemap_fault.__do_fault 38.25 ± 3% +25.5% 48.02 ± 0% perf-profile.cpu-cycles.__add_to_page_cache_locked.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead 9.90 ± 22% -91.3% 0.86 ± 8% perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault 13.14 ± 36% -96.9% 0.41 ± 51% perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.filemap_fault.xfs_filemap_fault 69.25 ± 1% -14.5% 59.24 ± 0% perf-profile.cpu-cycles.__do_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault 1.54 ± 4% +28.9% 1.99 ± 2% perf-profile.cpu-cycles.__lock_page_or_retry.filemap_fault.xfs_filemap_fault.__do_fault.handle_mm_fault 9.99 ± 21% -90.7% 0.93 ± 12% 
perf-profile.cpu-cycles.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault 13.16 ± 36% -96.8% 0.42 ± 48% perf-profile.cpu-cycles.__page_cache_alloc.filemap_fault.xfs_filemap_fault.__do_fault.handle_mm_fault 2.24 ± 8% -97.7% 0.05 ± 8% perf-profile.cpu-cycles.__page_check_address.page_referenced_one.rmap_walk.page_referenced.shrink_page_list 0.83 ± 9% +26.0% 1.05 ± 0% perf-profile.cpu-cycles.__radix_tree_lookup.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault 1.46 ± 1% +32.2% 1.93 ± 0% perf-profile.cpu-cycles.__wait_on_bit.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.xfs_filemap_fault 1.69 ± 2% +27.3% 2.15 ± 0% perf-profile.cpu-cycles.__wake_up.__wake_up_bit.unlock_page.handle_mm_fault.__do_page_fault 1.66 ± 0% +28.1% 2.13 ± 0% perf-profile.cpu-cycles.__wake_up_bit.unlock_page.handle_mm_fault.__do_page_fault.do_page_fault 0.87 ± 0% +50.1% 1.30 ± 0% perf-profile.cpu-cycles.__wake_up_common.__wake_up.__wake_up_bit.unlock_page.handle_mm_fault 2.19 ± 9% -99.2% 0.02 ± 24% perf-profile.cpu-cycles._raw_spin_lock.__page_check_address.page_referenced_one.rmap_walk.page_referenced 22.99 ± 4% +30.6% 30.02 ± 0% perf-profile.cpu-cycles._raw_spin_lock.handle_mm_fault.__do_page_fault.do_page_fault.page_fault 2.60 ± 5% +42.3% 3.70 ± 1% perf-profile.cpu-cycles._raw_spin_lock_irq.__add_to_page_cache_locked.add_to_page_cache_lru.filemap_fault.xfs_filemap_fault 37.36 ± 3% +25.5% 46.89 ± 0% perf-profile.cpu-cycles._raw_spin_lock_irq.__add_to_page_cache_locked.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages 2.70 ± 5% +41.7% 3.83 ± 1% perf-profile.cpu-cycles.add_to_page_cache_lru.filemap_fault.xfs_filemap_fault.__do_fault.handle_mm_fault 38.85 ± 3% +24.8% 48.48 ± 0% perf-profile.cpu-cycles.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead.filemap_fault 9.96 ± 21% -91.0% 0.90 ± 9% 
perf-profile.cpu-cycles.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault 13.16 ± 36% -96.9% 0.40 ± 50% perf-profile.cpu-cycles.alloc_pages_current.__page_cache_alloc.filemap_fault.xfs_filemap_fault.__do_fault 0.96 ± 0% +49.0% 1.43 ± 0% perf-profile.cpu-cycles.autoremove_wake_function.wake_bit_function.__wake_up_common.__wake_up.__wake_up_bit 1.06 ± 15% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.call_function_interrupt.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others 1.97 ± 3% +33.4% 2.63 ± 1% perf-profile.cpu-cycles.cpu_startup_entry.start_secondary 2.52 ± 16% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.default_send_IPI_mask_allbutself_phys.physflat_send_IPI_allbutself.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others 8.80 ± 15% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.default_send_IPI_mask_sequence_phys.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others 0.96 ± 0% +46.9% 1.41 ± 1% perf-profile.cpu-cycles.default_wake_function.autoremove_wake_function.wake_bit_function.__wake_up_common.__wake_up 22.40 ± 13% -98.0% 0.45 ± 2% perf-profile.cpu-cycles.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc 0.93 ± 2% +17.5% 1.09 ± 1% perf-profile.cpu-cycles.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair 1.14 ± 2% +57.0% 1.78 ± 1% perf-profile.cpu-cycles.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate 69.10 ± 1% -14.6% 59.01 ± 0% perf-profile.cpu-cycles.filemap_fault.xfs_filemap_fault.__do_fault.handle_mm_fault.__do_page_fault 1.05 ± 4% +37.8% 1.44 ± 0% perf-profile.cpu-cycles.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault.page_fault 1.01 ± 13% -100.0% 0.00 ± -1% 
perf-profile.cpu-cycles.flush_tlb_func.flush_smp_call_function_queue.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt 18.12 ± 13% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.flush_tlb_page.ptep_clear_flush.try_to_unmap_one.rmap_walk.try_to_unmap 1.49 ± 4% +73.9% 2.60 ± 4% perf-profile.cpu-cycles.kswapd.kthread.ret_from_fork 1.50 ± 4% +73.2% 2.60 ± 4% perf-profile.cpu-cycles.kthread.ret_from_fork 1.16 ± 17% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.llist_add_batch.smp_call_function_many.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush 39.67 ± 3% +24.2% 49.29 ± 0% perf-profile.cpu-cycles.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault 18.10 ± 13% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush.try_to_unmap_one.rmap_walk 1.68 ± 10% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock.__page_check_address.page_referenced_one.rmap_walk 22.57 ± 4% +30.8% 29.52 ± 0% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock.handle_mm_fault.__do_page_fault.do_page_fault 2.56 ± 5% +42.3% 3.65 ± 1% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_irq.__add_to_page_cache_locked.add_to_page_cache_lru.filemap_fault 36.95 ± 3% +25.6% 46.42 ± 0% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_irq.__add_to_page_cache_locked.add_to_page_cache_lru.mpage_readpages 0.80 ± 29% +92.5% 1.55 ± 3% perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list 12.80 ± 12% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush 2.48 ± 9% -89.9% 0.25 ± 2% perf-profile.cpu-cycles.page_referenced.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone 2.29 ± 11% -95.1% 0.11 ± 3% 
perf-profile.cpu-cycles.page_referenced_one.rmap_walk.page_referenced.shrink_page_list.shrink_inactive_list 2.85 ± 16% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.physflat_send_IPI_allbutself.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others.flush_tlb_page 9.91 ± 16% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others.flush_tlb_page 0.90 ± 2% +15.0% 1.04 ± 0% perf-profile.cpu-cycles.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity 18.14 ± 13% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.ptep_clear_flush.try_to_unmap_one.rmap_walk.try_to_unmap.shrink_page_list 1.50 ± 4% +73.2% 2.60 ± 4% perf-profile.cpu-cycles.ret_from_fork 2.46 ± 9% -90.5% 0.23 ± 4% perf-profile.cpu-cycles.rmap_walk.page_referenced.shrink_page_list.shrink_inactive_list.shrink_lruvec 18.72 ± 12% -99.5% 0.09 ± 24% perf-profile.cpu-cycles.rmap_walk.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec 0.93 ± 2% +17.7% 1.10 ± 0% perf-profile.cpu-cycles.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task 0.75 ± 4% +30.9% 0.99 ± 1% perf-profile.cpu-cycles.sched_ttwu_pending.cpu_startup_entry.start_secondary 22.43 ± 13% -98.0% 0.45 ± 2% perf-profile.cpu-cycles.shrink_inactive_list.shrink_lruvec.shrink_zone.do_try_to_free_pages.try_to_free_pages 1.49 ± 4% +73.5% 2.59 ± 4% perf-profile.cpu-cycles.shrink_inactive_list.shrink_lruvec.shrink_zone.kswapd.kthread 22.44 ± 13% -98.0% 0.45 ± 2% perf-profile.cpu-cycles.shrink_lruvec.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask 1.49 ± 4% +73.5% 2.59 ± 4% perf-profile.cpu-cycles.shrink_lruvec.shrink_zone.kswapd.kthread.ret_from_fork 22.36 ± 13% -98.1% 0.43 ± 3% perf-profile.cpu-cycles.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone.do_try_to_free_pages 1.45 ± 4% +72.9% 2.50 ± 4% 
perf-profile.cpu-cycles.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone.kswapd 22.40 ± 13% -98.0% 0.45 ± 2% perf-profile.cpu-cycles.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current 1.49 ± 4% +73.9% 2.60 ± 4% perf-profile.cpu-cycles.shrink_zone.kswapd.kthread.ret_from_fork 0.83 ± 15% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.smp_call_function_interrupt.call_function_interrupt.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many 17.35 ± 13% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.smp_call_function_many.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush.try_to_unmap_one 1.99 ± 3% +33.5% 2.66 ± 1% perf-profile.cpu-cycles.start_secondary 9.29 ± 23% -99.0% 0.09 ± 72% perf-profile.cpu-cycles.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead 13.11 ± 36% -97.3% 0.35 ± 17% perf-profile.cpu-cycles.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.filemap_fault 18.73 ± 13% -99.5% 0.10 ± 23% perf-profile.cpu-cycles.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone 18.62 ± 12% -99.7% 0.06 ± 27% perf-profile.cpu-cycles.try_to_unmap_one.rmap_walk.try_to_unmap.shrink_page_list.shrink_inactive_list 0.96 ± 0% +42.7% 1.37 ± 2% perf-profile.cpu-cycles.try_to_wake_up.default_wake_function.autoremove_wake_function.wake_bit_function.__wake_up_common 1.61 ± 1% +40.8% 2.27 ± 2% perf-profile.cpu-cycles.unlock_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault 1.47 ± 2% +31.2% 1.93 ± 1% perf-profile.cpu-cycles.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.xfs_filemap_fault.__do_fault 0.93 ± 0% +49.5% 1.39 ± 0% perf-profile.cpu-cycles.wake_bit_function.__wake_up_common.__wake_up.__wake_up_bit.unlock_page 69.23 ± 1% -14.5% 59.20 ± 0% perf-profile.cpu-cycles.xfs_filemap_fault.__do_fault.handle_mm_fault.__do_page_fault.do_page_fault 39.68 ± 3% +24.3% 49.30 ± 0% 
perf-profile.cpu-cycles.xfs_vm_readpages.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault 1308 ± 11% +34.4% 1758 ± 8% sched_debug.cfs_rq[0]:/.tg_load_avg 1303 ± 9% +34.8% 1757 ± 8% sched_debug.cfs_rq[10]:/.tg_load_avg 1306 ± 8% +34.2% 1753 ± 8% sched_debug.cfs_rq[11]:/.tg_load_avg 1305 ± 9% +34.4% 1754 ± 8% sched_debug.cfs_rq[12]:/.tg_load_avg -1328 ±-2785% +5030.9% -68179 ±-40% sched_debug.cfs_rq[13]:/.spread0 1379 ± 17% +27.2% 1754 ± 8% sched_debug.cfs_rq[13]:/.tg_load_avg 1378 ± 17% +27.2% 1753 ± 8% sched_debug.cfs_rq[14]:/.tg_load_avg 1377 ± 17% +27.3% 1753 ± 8% sched_debug.cfs_rq[15]:/.tg_load_avg 1377 ± 17% +27.3% 1753 ± 8% sched_debug.cfs_rq[16]:/.tg_load_avg 1377 ± 17% +27.3% 1754 ± 8% sched_debug.cfs_rq[17]:/.tg_load_avg 1377 ± 17% +27.1% 1751 ± 8% sched_debug.cfs_rq[18]:/.tg_load_avg 1374 ± 16% +27.5% 1752 ± 8% sched_debug.cfs_rq[19]:/.tg_load_avg 767.25 ± 4% -10.1% 689.50 ± 4% sched_debug.cfs_rq[19]:/.util_avg 1304 ± 10% +34.8% 1758 ± 8% sched_debug.cfs_rq[1]:/.tg_load_avg 763.00 ± 2% -16.2% 639.75 ± 8% sched_debug.cfs_rq[1]:/.util_avg 1377 ± 16% +27.3% 1753 ± 8% sched_debug.cfs_rq[20]:/.tg_load_avg 14.75 ± 2% -22.0% 11.50 ± 4% sched_debug.cfs_rq[21]:/.runnable_load_avg 55270 ±101% -175.9% -41953 ±-122% sched_debug.cfs_rq[21]:/.spread0 1377 ± 16% +27.3% 1753 ± 8% sched_debug.cfs_rq[21]:/.tg_load_avg 14.75 ± 2% +22.0% 18.00 ± 7% sched_debug.cfs_rq[22]:/.load 1379 ± 16% +27.1% 1752 ± 8% sched_debug.cfs_rq[22]:/.tg_load_avg 91.75 ± 80% -78.5% 19.75 ± 47% sched_debug.cfs_rq[23]:/.load_avg 24530 ±117% -293.5% -47474 ±-37% sched_debug.cfs_rq[23]:/.spread0 1434 ± 14% +22.1% 1751 ± 8% sched_debug.cfs_rq[23]:/.tg_load_avg 91.25 ± 79% -78.4% 19.75 ± 47% sched_debug.cfs_rq[23]:/.tg_load_avg_contrib 73797 ± 52% +146.9% 182221 ± 13% sched_debug.cfs_rq[24]:/.spread0 1435 ± 14% +22.2% 1753 ± 8% sched_debug.cfs_rq[24]:/.tg_load_avg 1433 ± 14% +22.4% 1755 ± 8% sched_debug.cfs_rq[25]:/.tg_load_avg 1433 ± 14% +22.3% 1753 ± 8% 
sched_debug.cfs_rq[26]:/.tg_load_avg 1428 ± 14% +22.6% 1752 ± 8% sched_debug.cfs_rq[27]:/.tg_load_avg 14.50 ± 3% -13.8% 12.50 ± 6% sched_debug.cfs_rq[28]:/.runnable_load_avg 1429 ± 14% +22.4% 1750 ± 8% sched_debug.cfs_rq[28]:/.tg_load_avg 21.50 ± 52% -43.0% 12.25 ± 12% sched_debug.cfs_rq[29]:/.load_avg 1430 ± 14% +22.4% 1750 ± 8% sched_debug.cfs_rq[29]:/.tg_load_avg 21.50 ± 52% -43.0% 12.25 ± 12% sched_debug.cfs_rq[29]:/.tg_load_avg_contrib 1302 ± 10% +35.0% 1757 ± 8% sched_debug.cfs_rq[2]:/.tg_load_avg 1429 ± 14% +22.4% 1748 ± 8% sched_debug.cfs_rq[30]:/.tg_load_avg 1427 ± 14% +22.5% 1749 ± 8% sched_debug.cfs_rq[31]:/.tg_load_avg 1425 ± 13% +22.8% 1750 ± 8% sched_debug.cfs_rq[32]:/.tg_load_avg 1428 ± 13% +22.6% 1751 ± 8% sched_debug.cfs_rq[33]:/.tg_load_avg 14.00 ± 0% -16.1% 11.75 ± 7% sched_debug.cfs_rq[34]:/.runnable_load_avg 1430 ± 13% +22.4% 1750 ± 8% sched_debug.cfs_rq[34]:/.tg_load_avg 1429 ± 13% +22.7% 1753 ± 8% sched_debug.cfs_rq[35]:/.tg_load_avg 765.25 ± 2% -15.5% 646.75 ± 8% sched_debug.cfs_rq[35]:/.util_avg 1427 ± 13% +22.5% 1748 ± 8% sched_debug.cfs_rq[36]:/.tg_load_avg 12.75 ± 11% -19.6% 10.25 ± 4% sched_debug.cfs_rq[37]:/.runnable_load_avg 1426 ± 13% +22.2% 1743 ± 8% sched_debug.cfs_rq[37]:/.tg_load_avg 772.25 ± 4% -17.3% 639.00 ± 9% sched_debug.cfs_rq[37]:/.util_avg 1430 ± 13% +23.7% 1769 ± 9% sched_debug.cfs_rq[38]:/.tg_load_avg 1429 ± 13% +22.2% 1746 ± 8% sched_debug.cfs_rq[39]:/.tg_load_avg 1291 ± 9% +36.0% 1756 ± 8% sched_debug.cfs_rq[3]:/.tg_load_avg 1427 ± 13% +22.3% 1746 ± 8% sched_debug.cfs_rq[40]:/.tg_load_avg 1428 ± 13% +22.3% 1746 ± 8% sched_debug.cfs_rq[41]:/.tg_load_avg 1427 ± 13% +22.4% 1746 ± 8% sched_debug.cfs_rq[42]:/.tg_load_avg 1426 ± 13% +22.3% 1744 ± 8% sched_debug.cfs_rq[43]:/.tg_load_avg 16.25 ± 5% -12.3% 14.25 ± 7% sched_debug.cfs_rq[44]:/.load 1426 ± 13% +22.3% 1744 ± 8% sched_debug.cfs_rq[44]:/.tg_load_avg 1425 ± 13% +22.5% 1745 ± 8% sched_debug.cfs_rq[45]:/.tg_load_avg 1423 ± 13% +22.6% 1745 ± 8% 
sched_debug.cfs_rq[46]:/.tg_load_avg 1421 ± 13% +22.8% 1746 ± 8% sched_debug.cfs_rq[47]:/.tg_load_avg 1292 ± 9% +35.7% 1753 ± 8% sched_debug.cfs_rq[4]:/.tg_load_avg 1294 ± 9% +35.5% 1754 ± 8% sched_debug.cfs_rq[5]:/.tg_load_avg 72.50 ± 77% -80.3% 14.25 ± 7% sched_debug.cfs_rq[6]:/.load_avg 1301 ± 9% +34.9% 1756 ± 8% sched_debug.cfs_rq[6]:/.tg_load_avg 72.50 ± 77% -80.3% 14.25 ± 7% sched_debug.cfs_rq[6]:/.tg_load_avg_contrib 1302 ± 9% +34.8% 1756 ± 8% sched_debug.cfs_rq[7]:/.tg_load_avg 1302 ± 9% +34.9% 1757 ± 8% sched_debug.cfs_rq[8]:/.tg_load_avg 43766 ±155% -245.7% -63758 ±-72% sched_debug.cfs_rq[9]:/.spread0 1301 ± 9% +35.1% 1758 ± 8% sched_debug.cfs_rq[9]:/.tg_load_avg 1658 ± 36% -43.5% 936.25 ± 20% sched_debug.cpu#0.curr->pid 896002 ± 1% +15.8% 1037478 ± 0% sched_debug.cpu#0.nr_switches 440864 ± 1% +15.1% 507620 ± 0% sched_debug.cpu#0.sched_goidle 459656 ± 1% +16.6% 535919 ± 0% sched_debug.cpu#0.ttwu_count 8703 ± 5% +47.1% 12805 ± 3% sched_debug.cpu#0.ttwu_local 897720 ± 0% +15.5% 1036771 ± 0% sched_debug.cpu#1.nr_switches -25.50 ±-96% -374.5% 70.00 ± 51% sched_debug.cpu#1.nr_uninterruptible 900003 ± 0% +15.2% 1037064 ± 0% sched_debug.cpu#1.sched_count 443437 ± 1% +14.9% 509388 ± 0% sched_debug.cpu#1.sched_goidle 452854 ± 1% +15.6% 523693 ± 1% sched_debug.cpu#1.ttwu_count 6367 ± 8% +56.0% 9935 ± 5% sched_debug.cpu#1.ttwu_local 897707 ± 1% +16.5% 1046207 ± 0% sched_debug.cpu#10.nr_switches 99.00 ± 11% +103.3% 201.25 ± 11% sched_debug.cpu#10.nr_uninterruptible 898395 ± 1% +16.5% 1046524 ± 0% sched_debug.cpu#10.sched_count 443538 ± 1% +15.9% 514008 ± 1% sched_debug.cpu#10.sched_goidle 450312 ± 2% +16.1% 522673 ± 0% sched_debug.cpu#10.ttwu_count 5717 ± 4% +61.9% 9254 ± 2% sched_debug.cpu#10.ttwu_local 901460 ± 1% +16.8% 1052822 ± 2% sched_debug.cpu#11.nr_switches 121.00 ± 9% +54.1% 186.50 ± 11% sched_debug.cpu#11.nr_uninterruptible 902132 ± 1% +16.7% 1053133 ± 2% sched_debug.cpu#11.sched_count 445339 ± 1% +16.1% 517214 ± 2% sched_debug.cpu#11.sched_goidle 449212 ± 
2% +15.9% 520483 ± 1% sched_debug.cpu#11.ttwu_count 5842 ± 3% +59.7% 9329 ± 0% sched_debug.cpu#11.ttwu_local 12.50 ± 4% -14.0% 10.75 ± 4% sched_debug.cpu#12.cpu_load[1] 12.50 ± 4% -12.0% 11.00 ± 6% sched_debug.cpu#12.cpu_load[2] 13.50 ± 3% -13.0% 11.75 ± 9% sched_debug.cpu#12.cpu_load[3] 1106 ± 10% -22.5% 857.25 ± 13% sched_debug.cpu#12.curr->pid 902682 ± 1% +14.3% 1031501 ± 1% sched_debug.cpu#12.nr_switches 903176 ± 1% +14.3% 1032555 ± 1% sched_debug.cpu#12.sched_count 446012 ± 2% +13.6% 506811 ± 1% sched_debug.cpu#12.sched_goidle 453684 ± 1% +15.9% 525671 ± 1% sched_debug.cpu#12.ttwu_count 5855 ± 5% +57.3% 9211 ± 0% sched_debug.cpu#12.ttwu_local 15.50 ± 14% -29.0% 11.00 ± 20% sched_debug.cpu#13.cpu_load[0] 896176 ± 1% +15.4% 1033945 ± 1% sched_debug.cpu#13.nr_switches 115.25 ± 32% +73.1% 199.50 ± 10% sched_debug.cpu#13.nr_uninterruptible 896699 ± 1% +15.4% 1035234 ± 1% sched_debug.cpu#13.sched_count 441543 ± 1% +15.0% 507978 ± 1% sched_debug.cpu#13.sched_goidle 456371 ± 0% +14.9% 524338 ± 1% sched_debug.cpu#13.ttwu_count 252427 ± 37% -46.0% 136245 ± 36% sched_debug.cpu#14.avg_idle 896834 ± 0% +15.7% 1037194 ± 0% sched_debug.cpu#14.nr_switches 104.00 ± 12% +81.5% 188.75 ± 10% sched_debug.cpu#14.nr_uninterruptible 897346 ± 0% +15.7% 1037947 ± 0% sched_debug.cpu#14.sched_count 443095 ± 0% +14.9% 509200 ± 0% sched_debug.cpu#14.sched_goidle 453114 ± 1% +15.7% 524253 ± 0% sched_debug.cpu#14.ttwu_count 5611 ± 1% +71.5% 9622 ± 5% sched_debug.cpu#14.ttwu_local 899171 ± 0% +15.5% 1038735 ± 1% sched_debug.cpu#15.nr_switches 900076 ± 0% +15.5% 1039339 ± 1% sched_debug.cpu#15.sched_count 444054 ± 0% +14.9% 510318 ± 1% sched_debug.cpu#15.sched_goidle 452549 ± 1% +15.5% 522617 ± 0% sched_debug.cpu#15.ttwu_count 5878 ± 7% +60.0% 9404 ± 1% sched_debug.cpu#15.ttwu_local 893492 ± 1% +15.7% 1033514 ± 1% sched_debug.cpu#16.nr_switches 47.75 ± 44% +100.5% 95.75 ± 26% sched_debug.cpu#16.nr_uninterruptible 894207 ± 1% +15.7% 1034476 ± 1% sched_debug.cpu#16.sched_count 441394 ± 1% +15.0% 
    507698 ±  1%  sched_debug.cpu#16.sched_goidle
    467181 ±  1%  +14.7%   535820 ±  0%  sched_debug.cpu#16.ttwu_count
      5722 ±  2%  +66.8%     9547 ±  2%  sched_debug.cpu#16.ttwu_local
    898201 ±  0%  +14.7%  1030399 ±  0%  sched_debug.cpu#17.nr_switches
    898802 ±  0%  +14.8%  1031781 ±  0%  sched_debug.cpu#17.sched_count
    443861 ±  0%  +14.1%   506294 ±  0%  sched_debug.cpu#17.sched_goidle
    456449 ±  0%  +16.3%   530855 ±  1%  sched_debug.cpu#17.ttwu_count
      5549 ±  3%  +66.2%     9225 ±  1%  sched_debug.cpu#17.ttwu_local
      1278 ±  9%  -24.3%   967.50 ±  6%  sched_debug.cpu#18.curr->pid
    892404 ±  1%  +17.3%  1047124 ±  2%  sched_debug.cpu#18.nr_switches
    892794 ±  1%  +17.6%  1049492 ±  2%  sched_debug.cpu#18.sched_count
    440565 ±  0%  +16.8%   514499 ±  2%  sched_debug.cpu#18.sched_goidle
    454247 ±  0%  +16.4%   528853 ±  1%  sched_debug.cpu#18.ttwu_count
      5963 ±  9%  +56.7%     9342 ±  2%  sched_debug.cpu#18.ttwu_local
    903323 ±  0%  +15.5%  1043627 ±  0%  sched_debug.cpu#19.nr_switches
     63.00 ± 12% +104.8%   129.00 ± 29%  sched_debug.cpu#19.nr_uninterruptible
    904081 ±  0%  +15.6%  1045032 ±  0%  sched_debug.cpu#19.sched_count
    446361 ±  0%  +14.8%   512414 ±  0%  sched_debug.cpu#19.sched_goidle
    453675 ±  1%  +17.4%   532453 ±  2%  sched_debug.cpu#19.ttwu_count
      5665 ±  3%  +72.9%     9798 ±  9%  sched_debug.cpu#19.ttwu_local
    900237 ±  0%  +15.8%  1042314 ±  1%  sched_debug.cpu#2.nr_switches
     88.50 ± 34% +120.9%   195.50 ±  5%  sched_debug.cpu#2.nr_uninterruptible
    902129 ±  0%  +15.6%  1042627 ±  1%  sched_debug.cpu#2.sched_count
    444763 ±  0%  +15.1%   511973 ±  1%  sched_debug.cpu#2.sched_goidle
    450545 ±  1%  +16.3%   523766 ±  0%  sched_debug.cpu#2.ttwu_count
      5823 ±  4%  +60.8%     9363 ±  2%  sched_debug.cpu#2.ttwu_local
      1598 ± 31%  -41.2%   939.75 ± 21%  sched_debug.cpu#20.curr->pid
    891426 ±  1%  +16.2%  1035799 ±  0%  sched_debug.cpu#20.nr_switches
    892585 ±  1%  +16.4%  1039111 ±  0%  sched_debug.cpu#20.sched_count
    440399 ±  1%  +15.5%   508854 ±  0%  sched_debug.cpu#20.sched_goidle
    458152 ±  1%  +17.2%   536906 ±  1%  sched_debug.cpu#20.ttwu_count
      5681 ±  2%  +64.5%     9347 ±  2%  sched_debug.cpu#20.ttwu_local
    891160 ±  1%  +15.9%  1033131 ±  0%  sched_debug.cpu#21.nr_switches
     27.25 ± 32% +200.0%    81.75 ± 27%  sched_debug.cpu#21.nr_uninterruptible
    892387 ±  0%  +16.0%  1034911 ±  0%  sched_debug.cpu#21.sched_count
    440313 ±  1%  +15.3%   507477 ±  0%  sched_debug.cpu#21.sched_goidle
    453863 ±  0%  +16.8%   530152 ±  0%  sched_debug.cpu#21.ttwu_count
      5700 ±  4%  +64.4%     9373 ±  1%  sched_debug.cpu#21.ttwu_local
    905973 ±  0%  +14.9%  1040901 ±  0%  sched_debug.cpu#22.nr_switches
    906362 ±  0%  +15.0%  1042491 ±  0%  sched_debug.cpu#22.sched_count
    447669 ±  0%  +14.2%   511351 ±  0%  sched_debug.cpu#22.sched_goidle
    450413 ±  1%  +17.2%   527775 ±  0%  sched_debug.cpu#22.ttwu_count
      5641 ±  3%  +67.0%     9422 ±  1%  sched_debug.cpu#22.ttwu_local
     56.00 ±121%  -77.7%    12.50 ±  8%  sched_debug.cpu#23.cpu_load[1]
     63.50 ±103%  -80.3%    12.50 ±  4%  sched_debug.cpu#23.cpu_load[2]
     73.25 ± 89%  -82.6%    12.75 ±  8%  sched_debug.cpu#23.cpu_load[3]
    901596 ±  1%  +15.7%  1042772 ±  1%  sched_debug.cpu#23.nr_switches
    902456 ±  0%  +15.7%  1044536 ±  0%  sched_debug.cpu#23.sched_count
    445153 ±  1%  +15.1%   512287 ±  0%  sched_debug.cpu#23.sched_goidle
    452143 ±  1%  +16.0%   524487 ±  0%  sched_debug.cpu#23.ttwu_count
      5995 ± 10%  +59.4%     9553 ±  5%  sched_debug.cpu#23.ttwu_local
      1237 ±  0%  -24.1%   939.50 ± 20%  sched_debug.cpu#24.curr->pid
    881601 ±  1%  +16.0%  1022273 ±  0%  sched_debug.cpu#24.nr_switches
    -61.00 ±-30% +133.6%  -142.50 ±-14%  sched_debug.cpu#24.nr_uninterruptible
    881952 ±  1%  +15.9%  1022551 ±  0%  sched_debug.cpu#24.sched_count
    436308 ±  1%  +15.3%   503274 ±  0%  sched_debug.cpu#24.sched_goidle
    442368 ±  1%  +17.7%   520845 ±  0%  sched_debug.cpu#24.ttwu_count
      5827 ±  1%  +84.6%    10756 ± 15%  sched_debug.cpu#24.ttwu_local
    885038 ±  1%  +15.3%  1020255 ±  0%  sched_debug.cpu#25.nr_switches
    -85.50 ±-18%  +70.5%  -145.75 ±-14%  sched_debug.cpu#25.nr_uninterruptible
    885379 ±  1%  +15.3%  1020534 ±  0%  sched_debug.cpu#25.sched_count
    437939 ±  1%  +14.7%   502334 ±  0%  sched_debug.cpu#25.sched_goidle
    446881 ±  1%  +15.2%   514905 ±  1%  sched_debug.cpu#25.ttwu_count
      6514 ± 19%  +55.9%    10156 ±  5%  sched_debug.cpu#25.ttwu_local
     13.75 ±  6%  -18.2%    11.25 ± 13%  sched_debug.cpu#26.cpu_load[4]
    888320 ±  1%  +15.8%  1028804 ±  1%  sched_debug.cpu#26.nr_switches
    -70.50 ±-24%  +78.7%  -126.00 ±-12%  sched_debug.cpu#26.nr_uninterruptible
    888678 ±  1%  +15.8%  1029087 ±  1%  sched_debug.cpu#26.sched_count
    439681 ±  1%  +15.2%   506581 ±  1%  sched_debug.cpu#26.sched_goidle
    442160 ±  0%  +16.8%   516595 ±  1%  sched_debug.cpu#26.ttwu_count
      5876 ±  4%  +77.1%    10408 ±  8%  sched_debug.cpu#26.ttwu_local
     52.00 ±129%  -79.8%    10.50 ±  4%  sched_debug.cpu#27.cpu_load[2]
     52.25 ±129%  -77.5%    11.75 ±  3%  sched_debug.cpu#27.cpu_load[3]
    894121 ±  0%  +14.5%  1023970 ±  0%  sched_debug.cpu#27.nr_switches
    -45.00 ±-55% +163.9%  -118.75 ±-12%  sched_debug.cpu#27.nr_uninterruptible
    894480 ±  0%  +14.5%  1024254 ±  0%  sched_debug.cpu#27.sched_count
    442570 ±  0%  +13.9%   504155 ±  0%  sched_debug.cpu#27.sched_goidle
    442733 ±  0%  +15.7%   512165 ±  0%  sched_debug.cpu#27.ttwu_count
      5931 ±  4%  +66.5%     9874 ±  1%  sched_debug.cpu#27.ttwu_local
     13.75 ±  3%  -16.4%    11.50 ±  7%  sched_debug.cpu#28.cpu_load[3]
    873756 ±  0%  +15.3%  1007017 ±  1%  sched_debug.cpu#28.nr_switches
    -81.00 ±-27%  +58.6%  -128.50 ±-10%  sched_debug.cpu#28.nr_uninterruptible
    874095 ±  0%  +15.2%  1007316 ±  1%  sched_debug.cpu#28.sched_count
    431990 ±  0%  +14.7%   495668 ±  1%  sched_debug.cpu#28.sched_goidle
    453567 ±  1%  +15.3%   522888 ±  0%  sched_debug.cpu#28.ttwu_count
      6261 ± 10%  +59.4%     9983 ±  2%  sched_debug.cpu#28.ttwu_local
     14.25 ±  3%  -14.0%    12.25 ±  3%  sched_debug.cpu#29.cpu_load[4]
    982.50 ±  0%  +12.8%     1108 ± 10%  sched_debug.cpu#29.curr->pid
     13.25 ±  8%  +17.0%    15.50 ± 10%  sched_debug.cpu#29.load
    877674 ±  1%  +16.0%  1017722 ±  0%  sched_debug.cpu#29.nr_switches
    -68.75 ± -9%  +85.5%  -127.50 ±-20%  sched_debug.cpu#29.nr_uninterruptible
    878022 ±  1%  +15.9%  1018014 ±  0%  sched_debug.cpu#29.sched_count
    434439 ±  1%  +15.3%   501056 ±  0%  sched_debug.cpu#29.sched_goidle
    453413 ±  0%  +15.4%   523377 ±  1%  sched_debug.cpu#29.ttwu_count
      6008 ±  4%  +64.9%     9905 ±  1%  sched_debug.cpu#29.ttwu_local
    899212 ±  0%  +15.6%  1039854 ±  0%  sched_debug.cpu#3.nr_switches
    123.25 ± 10%  +92.1%   236.75 ± 12%  sched_debug.cpu#3.nr_uninterruptible
    900303 ±  0%  +15.5%  1040158 ±  0%  sched_debug.cpu#3.sched_count
    444106 ±  0%  +15.0%   510821 ±  0%  sched_debug.cpu#3.sched_goidle
    451807 ±  0%  +15.5%   521786 ±  0%  sched_debug.cpu#3.ttwu_count
      5918 ±  4%  +55.4%     9198 ±  2%  sched_debug.cpu#3.ttwu_local
    874898 ±  1%  +17.8%  1030320 ±  0%  sched_debug.cpu#30.nr_switches
    -61.00 ±-36% +136.9%  -144.50 ± -8%  sched_debug.cpu#30.nr_uninterruptible
    875239 ±  1%  +17.8%  1030617 ±  0%  sched_debug.cpu#30.sched_count
    432963 ±  1%  +17.2%   507318 ±  0%  sched_debug.cpu#30.sched_goidle
    443692 ±  1%  +17.4%   521054 ±  1%  sched_debug.cpu#30.ttwu_count
      5817 ±  1%  +69.4%     9857 ±  1%  sched_debug.cpu#30.ttwu_local
    882049 ±  1%  +16.3%  1025458 ±  0%  sched_debug.cpu#31.nr_switches
    -60.25 ±-35%  +88.0%  -113.25 ±-10%  sched_debug.cpu#31.nr_uninterruptible
    882405 ±  1%  +16.2%  1025746 ±  0%  sched_debug.cpu#31.sched_count
    435566 ±  0%  +15.9%   504904 ±  0%  sched_debug.cpu#31.sched_goidle
    444268 ±  0%  +16.4%   516991 ±  0%  sched_debug.cpu#31.ttwu_count
      6865 ± 24%  +44.4%     9910 ±  2%  sched_debug.cpu#31.ttwu_local
    885388 ±  0%  +15.8%  1025078 ±  1%  sched_debug.cpu#32.nr_switches
    -83.50 ±-15%  +72.2%  -143.75 ± -5%  sched_debug.cpu#32.nr_uninterruptible
    886476 ±  0%  +15.7%  1025357 ±  1%  sched_debug.cpu#32.sched_count
    436645 ±  0%  +15.6%   504666 ±  1%  sched_debug.cpu#32.sched_goidle
    454348 ±  2%  +16.0%   527058 ±  1%  sched_debug.cpu#32.ttwu_count
    879962 ±  0%  +16.4%  1023891 ±  2%  sched_debug.cpu#33.nr_switches
    -69.50 ±-21% +127.0%  -157.75 ± -5%  sched_debug.cpu#33.nr_uninterruptible
    880314 ±  0%  +16.3%  1024163 ±  2%  sched_debug.cpu#33.sched_count
    435608 ±  0%  +15.8%   504236 ±  2%  sched_debug.cpu#33.sched_goidle
    443392 ±  1%  +15.7%   513128 ±  0%  sched_debug.cpu#33.ttwu_count
      5749 ±  1%  +69.8%     9760 ±  1%  sched_debug.cpu#33.ttwu_local
      9.00 ± 13%  +38.9%    12.50 ±  8%  sched_debug.cpu#34.cpu_load[0]
    887153 ±  2%  +15.8%  1027141 ±  1%  sched_debug.cpu#34.nr_switches
    887500 ±  2%  +15.8%  1027427 ±  1%  sched_debug.cpu#34.sched_count
    438405 ±  2%  +15.4%   505761 ±  1%  sched_debug.cpu#34.sched_goidle
    444134 ±  1%  +16.2%   516095 ±  1%  sched_debug.cpu#34.ttwu_count
      6679 ± 12%  +48.1%     9894 ±  2%  sched_debug.cpu#34.ttwu_local
      1175 ±  8%  -25.5%   875.25 ± 31%  sched_debug.cpu#35.curr->pid
    895554 ±  2%  +15.6%  1035009 ±  1%  sched_debug.cpu#35.nr_switches
    895914 ±  2%  +15.6%  1035279 ±  1%  sched_debug.cpu#35.sched_count
    443216 ±  2%  +15.0%   509826 ±  1%  sched_debug.cpu#35.sched_goidle
    441262 ±  2%  +17.5%   518589 ±  2%  sched_debug.cpu#35.ttwu_count
      6038 ±  5%  +61.2%     9734 ±  0%  sched_debug.cpu#35.ttwu_local
    880949 ±  1%  +14.8%  1011547 ±  1%  sched_debug.cpu#36.nr_switches
    -83.25 ±-30%  +99.1%  -165.75 ±-13%  sched_debug.cpu#36.nr_uninterruptible
    881318 ±  1%  +14.8%  1011819 ±  1%  sched_debug.cpu#36.sched_count
    436103 ±  1%  +14.2%   498157 ±  1%  sched_debug.cpu#36.sched_goidle
    450423 ±  2%  +14.6%   516147 ±  0%  sched_debug.cpu#36.ttwu_count
      5686 ±  3%  +70.0%     9669 ±  0%  sched_debug.cpu#36.ttwu_local
     15.75 ±  9%  -33.3%    10.50 ± 28%  sched_debug.cpu#37.load
    885149 ±  1%  +15.0%  1017559 ±  0%  sched_debug.cpu#37.nr_switches
    -93.25 ±-14%  +70.5%  -159.00 ±-23%  sched_debug.cpu#37.nr_uninterruptible
    885523 ±  1%  +14.9%  1017835 ±  0%  sched_debug.cpu#37.sched_count
    438108 ±  0%  +14.4%   501128 ±  0%  sched_debug.cpu#37.sched_goidle
    440299 ±  1%  +16.9%   514925 ±  1%  sched_debug.cpu#37.ttwu_count
      5783 ±  3%  +69.9%     9828 ±  1%  sched_debug.cpu#37.ttwu_local
     50.75 ±126%  -76.8%    11.75 ±  3%  sched_debug.cpu#38.cpu_load[2]
    887174 ±  1%  +14.7%  1017507 ±  0%  sched_debug.cpu#38.nr_switches
    -88.00 ± -7%  +79.0%  -157.50 ± -5%  sched_debug.cpu#38.nr_uninterruptible
    888421 ±  1%  +14.6%  1017802 ±  0%  sched_debug.cpu#38.sched_count
    439029 ±  1%  +14.1%   501038 ±  0%  sched_debug.cpu#38.sched_goidle
    446197 ±  1%  +15.7%   516103 ±  0%  sched_debug.cpu#38.ttwu_count
    889672 ±  0%  +14.5%  1019050 ±  1%  sched_debug.cpu#39.nr_switches
    -84.75 ±-25%  +80.8%  -153.25 ±-16%  sched_debug.cpu#39.nr_uninterruptible
    890040 ±  0%  +14.5%  1019333 ±  1%  sched_debug.cpu#39.sched_count
    440427 ±  0%  +13.9%   501827 ±  1%  sched_debug.cpu#39.sched_goidle
    442139 ±  0%  +16.8%   516617 ±  1%  sched_debug.cpu#39.ttwu_count
      5767 ±  4%  +68.1%     9695 ±  1%  sched_debug.cpu#39.ttwu_local
    887646 ±  0%  +15.4%  1024594 ±  0%  sched_debug.cpu#4.nr_switches
    888454 ±  0%  +15.4%  1024910 ±  0%  sched_debug.cpu#4.sched_count
    438424 ±  0%  +14.8%   503191 ±  0%  sched_debug.cpu#4.sched_goidle
    458254 ±  1%  +15.8%   530576 ±  1%  sched_debug.cpu#4.ttwu_count
      6693 ± 17%  +41.7%     9485 ±  2%  sched_debug.cpu#4.ttwu_local
    201392 ±  2%  -30.6%   139701 ± 27%  sched_debug.cpu#40.avg_idle
    891601 ±  1%  +13.8%  1014637 ±  0%  sched_debug.cpu#40.nr_switches
   -115.25 ±-23%  +56.2%  -180.00 ±-14%  sched_debug.cpu#40.nr_uninterruptible
    891981 ±  1%  +13.8%  1014944 ±  0%  sched_debug.cpu#40.sched_count
    441305 ±  1%  +13.2%   499498 ±  0%  sched_debug.cpu#40.sched_goidle
    457002 ±  1%  +16.8%   533599 ±  1%  sched_debug.cpu#40.ttwu_count
      5864 ±  2%  +81.2%    10627 ± 10%  sched_debug.cpu#40.ttwu_local
    876743 ±  1%  +15.1%  1008823 ±  0%  sched_debug.cpu#41.nr_switches
   -100.75 ± -8%  +74.2%  -175.50 ±-11%  sched_debug.cpu#41.nr_uninterruptible
    877104 ±  1%  +15.0%  1009097 ±  0%  sched_debug.cpu#41.sched_count
    433849 ±  1%  +14.5%   496667 ±  0%  sched_debug.cpu#41.sched_goidle
    457314 ±  1%  +14.4%   523177 ±  1%  sched_debug.cpu#41.ttwu_count
      5880 ±  3%  +71.0%    10056 ±  4%  sched_debug.cpu#41.ttwu_local
     13.75 ± 11%  -30.9%     9.50 ± 37%  sched_debug.cpu#42.cpu_load[0]
     13.00 ±  9%  -26.9%     9.50 ± 22%  sched_debug.cpu#42.cpu_load[1]
      1170 ±  8%  -26.4%   862.25 ± 24%  sched_debug.cpu#42.curr->pid
     16.00 ± 20%  -39.1%     9.75 ± 28%  sched_debug.cpu#42.load
    876976 ±  0%  +16.5%  1021334 ±  2%  sched_debug.cpu#42.nr_switches
    -77.50 ±-20%  +73.9%  -134.75 ±-19%  sched_debug.cpu#42.nr_uninterruptible
    877329 ±  0%  +16.4%  1021628 ±  2%  sched_debug.cpu#42.sched_count
    433997 ±  0%  +15.9%   502897 ±  2%  sched_debug.cpu#42.sched_goidle
    444646 ±  0%  +18.3%   526119 ±  2%  sched_debug.cpu#42.ttwu_count
      5857 ±  1%  +70.2%     9969 ±  2%  sched_debug.cpu#42.ttwu_local
    883831 ±  0%  +17.9%  1041998 ±  1%  sched_debug.cpu#43.nr_switches
    -80.25 ±-21%  +79.4%  -144.00 ± -3%  sched_debug.cpu#43.nr_uninterruptible
    884189 ±  0%  +17.9%  1042291 ±  1%  sched_debug.cpu#43.sched_count
    437320 ±  0%  +17.4%   513263 ±  1%  sched_debug.cpu#43.sched_goidle
    444475 ±  1%  +17.5%   522229 ±  1%  sched_debug.cpu#43.ttwu_count
      5997 ±  4%  +64.8%     9882 ±  1%  sched_debug.cpu#43.ttwu_local
     89542 ± 53% +108.2%   186442 ± 16%  sched_debug.cpu#44.avg_idle
     33.50 ±104%  -67.9%    10.75 ± 13%  sched_debug.cpu#44.cpu_load[1]
     23.25 ± 74%  -53.8%    10.75 ±  7%  sched_debug.cpu#44.cpu_load[2]
    883691 ±  1%  +15.7%  1022296 ±  1%  sched_debug.cpu#44.nr_switches
    -91.50 ±-14%  +79.0%  -163.75 ± -6%  sched_debug.cpu#44.nr_uninterruptible
    884048 ±  1%  +15.7%  1022595 ±  1%  sched_debug.cpu#44.sched_count
    437328 ±  1%  +15.1%   503385 ±  1%  sched_debug.cpu#44.sched_goidle
    453428 ±  1%  +16.5%   528215 ±  1%  sched_debug.cpu#44.ttwu_count
      6394 ± 17%  +56.3%     9996 ±  1%  sched_debug.cpu#44.ttwu_local
    882165 ±  1%  +15.3%  1017267 ±  0%  sched_debug.cpu#45.nr_switches
   -101.75 ± -7%  +59.5%  -162.25 ±-12%  sched_debug.cpu#45.nr_uninterruptible
    882554 ±  1%  +15.3%  1017568 ±  0%  sched_debug.cpu#45.sched_count
    436614 ±  1%  +14.7%   500995 ±  0%  sched_debug.cpu#45.sched_goidle
    443064 ±  0%  +17.1%   518680 ±  0%  sched_debug.cpu#45.ttwu_count
      5720 ±  2%  +71.2%     9791 ±  1%  sched_debug.cpu#45.ttwu_local
     16.25 ±  7%  -26.2%    12.00 ± 24%  sched_debug.cpu#46.load
    884600 ±  1%  +16.2%  1027855 ±  0%  sched_debug.cpu#46.nr_switches
    -80.00 ± -6%  +54.1%  -123.25 ± -9%  sched_debug.cpu#46.nr_uninterruptible
    884967 ±  1%  +16.2%  1028144 ±  0%  sched_debug.cpu#46.sched_count
    437779 ±  1%  +15.6%   506075 ±  0%  sched_debug.cpu#46.sched_goidle
    446119 ±  0%  +16.5%   519824 ±  0%  sched_debug.cpu#46.ttwu_count
      5862 ±  5%  +82.0%    10671 ± 13%  sched_debug.cpu#46.ttwu_local
    172065 ± 15%  +33.8%   230180 ± 19%  sched_debug.cpu#47.avg_idle
    899261 ±  1%  +13.7%  1022754 ±  0%  sched_debug.cpu#47.nr_switches
    -60.00 ±-18% +126.2%  -135.75 ± -8%  sched_debug.cpu#47.nr_uninterruptible
    899617 ±  1%  +13.7%  1023018 ±  0%  sched_debug.cpu#47.sched_count
    445180 ±  1%  +13.1%   503453 ±  1%  sched_debug.cpu#47.sched_goidle
    441506 ±  1%  +16.0%   512043 ±  1%  sched_debug.cpu#47.ttwu_count
      5824 ±  2%  +71.6%     9997 ±  0%  sched_debug.cpu#47.ttwu_local
    897894 ±  0%  +15.4%  1036494 ±  1%  sched_debug.cpu#5.nr_switches
    115.50 ± 19%  +76.4%   203.75 ± 18%  sched_debug.cpu#5.nr_uninterruptible
    898680 ±  0%  +15.4%  1036820 ±  1%  sched_debug.cpu#5.sched_count
    443249 ±  0%  +14.9%   509114 ±  1%  sched_debug.cpu#5.sched_goidle
    456414 ±  1%  +15.9%   529061 ±  1%  sched_debug.cpu#5.ttwu_count
      6116 ±  7%  +53.1%     9364 ±  2%  sched_debug.cpu#5.ttwu_local
     49.25 ±122%  -77.7%    11.00 ± 27%  sched_debug.cpu#6.cpu_load[1]
     45.25 ±117%  -72.9%    12.25 ±  3%  sched_debug.cpu#6.cpu_load[3]
     43.00 ±102%  -68.0%    13.75 ±  3%  sched_debug.cpu#6.cpu_load[4]
    892798 ±  0%  +16.8%  1042681 ±  1%  sched_debug.cpu#6.nr_switches
    114.25 ± 29%  +51.4%   173.00 ±  9%  sched_debug.cpu#6.nr_uninterruptible
    893464 ±  0%  +16.7%  1043011 ±  1%  sched_debug.cpu#6.sched_count
    441051 ±  0%  +16.1%   512183 ±  1%  sched_debug.cpu#6.sched_goidle
    453193 ±  0%  +17.5%   532424 ±  0%  sched_debug.cpu#6.ttwu_count
      5825 ±  2%  +60.8%     9365 ±  1%  sched_debug.cpu#6.ttwu_local
    895548 ±  0%  +16.5%  1042892 ±  0%  sched_debug.cpu#7.nr_switches
    115.25 ±  8%  +71.4%   197.50 ± 10%  sched_debug.cpu#7.nr_uninterruptible
    896963 ±  0%  +16.3%  1043187 ±  0%  sched_debug.cpu#7.sched_count
    442489 ±  0%  +15.8%   512298 ±  0%  sched_debug.cpu#7.sched_goidle
    447035 ±  1%  +17.6%   525521 ±  1%  sched_debug.cpu#7.ttwu_count
      5671 ±  2%  +65.2%     9366 ±  1%  sched_debug.cpu#7.ttwu_local
      1104 ±  8%  -21.1%   871.25 ± 12%  sched_debug.cpu#8.curr->pid
    899203 ±  1%  +15.7%  1040156 ±  0%  sched_debug.cpu#8.nr_switches
     81.25 ± 20% +138.5%   193.75 ± 17%  sched_debug.cpu#8.nr_uninterruptible
    899904 ±  1%  +15.6%  1040478 ±  0%  sched_debug.cpu#8.sched_count
    444308 ±  1%  +15.0%   510913 ±  0%  sched_debug.cpu#8.sched_goidle
    456891 ±  0%  +16.1%   530356 ±  1%  sched_debug.cpu#8.ttwu_count
      5833 ±  3%  +64.1%     9571 ±  2%  sched_debug.cpu#8.ttwu_local
    892211 ±  0%  +15.7%  1032362 ±  0%  sched_debug.cpu#9.nr_switches
    107.50 ± 30%  +44.7%   155.50 ±  9%  sched_debug.cpu#9.nr_uninterruptible
    893246 ±  0%  +15.6%  1032690 ±  0%  sched_debug.cpu#9.sched_count
    440900 ±  0%  +15.0%   507011 ±  0%  sched_debug.cpu#9.sched_goidle
    450459 ±  1%  +17.0%   527166 ±  1%  sched_debug.cpu#9.ttwu_count
      5840 ±  3%  +58.7%     9270 ±  1%  sched_debug.cpu#9.ttwu_local

=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/runtime/test:
  ivb43/vm-scalability/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/300s/mmap-xread-seq-mt

commit:
  5b74283ab251b9db55cbbe31d19ca72482103290
  72b252aed506b8f1a03f7abd29caef4cdf6a043b

5b74283ab251b9db 72b252aed506b8f1a03f7abd29
---------------- --------------------------
       %stddev     %change         %stddev
           \          |                \
      0.02 ± 35% +2065.6%     0.36 ± 56%  vm-scalability.stddev
  51810077 ±  0%  +39.6%  72335450 ±  1%  vm-scalability.throughput
    216591 ±  1%  -19.4%   174563 ±  5%  vm-scalability.time.involuntary_context_switches
     44829 ±  9% +1032.4%   507664 ± 36%  vm-scalability.time.major_page_faults
  16780385 ±  3% +226.1%  54719850 ±  5%  vm-scalability.time.minor_page_faults
      6330 ±  1%  -80.7%     1222 ±  6%  vm-scalability.time.system_time
      7148 ±  0%  +73.0%    12369 ±  0%  vm-scalability.time.user_time
   1315986 ± 11% +636.3%  9689609 ±  7%  vm-scalability.time.voluntary_context_switches
    859865 ±  2%  +83.1%  1574143 ±  2%  softirqs.RCU
    332209 ±  4% +186.2%   950725 ±  6%  softirqs.SCHED
    569667 ±  0%  +40.1%   798305 ±  2%  slabinfo.radix_tree_node.active_objs
     10185 ±  0%  +40.5%    14309 ±  2%  slabinfo.radix_tree_node.active_slabs
    569841 ±  0%  +40.2%   799094 ±  2%  slabinfo.radix_tree_node.num_objs
     10185 ±  0%  +40.5%    14309 ±  2%  slabinfo.radix_tree_node.num_slabs
    604.00 ±  3%  -19.2%   487.75 ±  5%  vmstat.memory.buff
      0.00 ±  0%    +Inf%     1.00 ±  0%  vmstat.procs.b
     11821 ±  9% +444.4%    64357 ±  7%  vmstat.system.cs
   6374523 ±  0%  -93.0%   443715 ±  1%  vmstat.system.in
     94.59 ±  0%   -1.5%    93.15 ±  0%  turbostat.%Busy
      2830 ±  0%   -1.5%     2788 ±  0%  turbostat.Avg_MHz
      2.77 ±  9%  +38.3%     3.83 ±  8%  turbostat.CPU%c1
      2.64 ±  4%  +14.1%     3.02 ±  3%  turbostat.CPU%c6
     16.91 ±  0%  +19.9%    20.26 ±  3%  turbostat.RAMWatt
  91194824 ± 10% +164.1%  2.408e+08 ±  3%  cpuidle.C1-IVT.time
   1384157 ± 18% +651.7%  10404112 ±  6%  cpuidle.C1-IVT.usage
     27499 ± 53% +160.8%    71705 ± 28%  cpuidle.C1E-IVT.usage
    119744 ±  8%  -31.9%    81495 ± 11%  cpuidle.C6-IVT.usage
    732367 ±138%  -99.1%     6291 ± 29%  cpuidle.POLL.time
     69.50 ± 15% +392.1%   342.00 ±  9%  cpuidle.POLL.usage
    216591 ±  1%  -19.4%   174563 ±  5%  time.involuntary_context_switches
     44829 ±  9% +1032.4%   507664 ± 36%  time.major_page_faults
  16780385 ±  3% +226.1%  54719850 ±  5%  time.minor_page_faults
      6330 ±  1%  -80.7%     1222 ±  6%  time.system_time
      7148 ±  0%  +73.0%    12369 ±  0%  time.user_time
   1315986 ± 11% +636.3%  9689609 ±  7%  time.voluntary_context_switches
     17132 ± 74% -100.0%     0.00 ± -1%  latency_stats.avg.down.console_lock.do_con_write.con_write.n_tty_write.tty_write.redirected_tty_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    350960 ± 34% -100.0%     0.00 ± -1%  latency_stats.avg.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
   1311605 ± 11% +635.4%  9645518 ±  7%  latency_stats.hits.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.xfs_filemap_fault.__do_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
     17132 ± 74% -100.0%     0.00 ± -1%  latency_stats.max.down.console_lock.do_con_write.con_write.n_tty_write.tty_write.redirected_tty_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    403141 ± 19% -100.0%     0.00 ± -1%  latency_stats.max.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
    424643 ± 45%  -98.8%     4994 ±  0%  latency_stats.max.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
     17132 ± 74% -100.0%     0.00 ± -1%  latency_stats.sum.down.console_lock.do_con_write.con_write.n_tty_write.tty_write.redirected_tty_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    422038 ± 17% -100.0%     0.00 ± -1%  latency_stats.sum.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
    984.50 ± 46% +864.2%     9493 ± 13%  latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
    343712 ±  0% +3891.1%  13717794 ±  6%  meminfo.Active
    246083 ±  0%  -50.5%   121767 ±  1%  meminfo.Active(anon)
     97628 ±  1% +13826.3%  13596027 ±  6%  meminfo.Active(file)
    708505 ±  1%  -17.7%   582855 ±  0%  meminfo.Committed_AS
  58438358 ±  0%  -23.5%  44689642 ±  2%  meminfo.Inactive
  58394639 ±  0%  -23.5%  44648143 ±  2%  meminfo.Inactive(file)
    273439 ±  0%  +42.5%   389613 ±  3%  meminfo.PageTables
    363333 ±  0%  +36.3%   495127 ±  2%  meminfo.SReclaimable
    266234 ±  1%  -47.2%   140690 ±  1%  meminfo.Shmem
    453917 ±  0%  +29.2%   586387 ±  2%  meminfo.Slab
  23330228 ±  9%  -74.8%  5875582 ± 42%  numa-numastat.node0.local_node
  12290254 ± 21%  -95.2%   590932 ± 60%  numa-numastat.node0.numa_foreign
  23330480 ±  9%  -74.8%  5875562 ± 42%  numa-numastat.node0.numa_hit
   9753001 ± 22% +270.9%  36177183 ±  7%  numa-numastat.node0.numa_miss
   9753252 ± 22% +270.9%  36177163 ±  7%  numa-numastat.node0.other_node
  20501267 ± 14% +212.0%  63967196 ±  4%  numa-numastat.node1.local_node
   9753001 ± 22% +270.9%  36177183 ±  7%  numa-numastat.node1.numa_foreign
  20501309 ± 14% +212.0%  63967206 ±  4%  numa-numastat.node1.numa_hit
  12290254 ± 21%  -95.2%   590932 ± 60%  numa-numastat.node1.numa_miss
  12290296 ± 21%  -95.2%   590942 ± 60%  numa-numastat.node1.other_node
    124544 ± 52% +3464.6%  4439563 ± 16%  numa-meminfo.node0.Active
     48063 ±  3% +9029.5%  4387938 ± 17%  numa-meminfo.node0.Active(file)
  28709365 ±  2%  -18.4%  23427355 ±  3%  numa-meminfo.node0.Inactive
  28683055 ±  2%  -18.4%  23399305 ±  3%  numa-meminfo.node0.Inactive(file)
   3351147 ± 23%  +26.4%  4234518 ±  3%  numa-meminfo.node0.MemFree
    136578 ±  1%  +12.6%   153788 ±  3%  numa-meminfo.node0.PageTables
    178236 ±  1%  +46.8%   261731 ±  4%  numa-meminfo.node0.SReclaimable
    224080 ±  1%  +37.6%   308313 ±  4%  numa-meminfo.node0.Slab
    219112 ± 30% +4130.9%  9270475 ±  6%  numa-meminfo.node1.Active
     49561 ±  4% +18463.2%  9200230 ±  6%  numa-meminfo.node1.Active(file)
  29707374 ±  2%  -28.5%  21238979 ±  3%  numa-meminfo.node1.Inactive
  29689862 ±  2%  -28.5%  21225606 ±  3%  numa-meminfo.node1.Inactive(file)
   2422652 ± 27%  -30.4%  1687202 ±  7%  numa-meminfo.node1.MemFree
    137095 ±  1%  +71.2%   234741 ±  2%  numa-meminfo.node1.PageTables
    185271 ±  2%  +25.8%   233000 ±  1%  numa-meminfo.node1.SReclaimable
    230011 ±  1%  +20.7%   277673 ±  1%  numa-meminfo.node1.Slab
    154233 ±  1%  +17.6%   181327 ±  0%  numa-meminfo.node1.Unevictable
     12015 ±  3% +9027.4%  1096682 ± 17%  numa-vmstat.node0.nr_active_file
    502.75 ±  3% +304.1%     2031 ±126%  numa-vmstat.node0.nr_alloc_batch
    832040 ± 22%  +28.3%  1067687 ±  2%  numa-vmstat.node0.nr_free_pages
   7176576 ±  2%  -18.6%  5841039 ±  3%  numa-vmstat.node0.nr_inactive_file
    341.00 ±  2%  -95.4%    15.75 ± 20%  numa-vmstat.node0.nr_isolated_file
     34121 ±  1%  +12.8%    38482 ±  3%  numa-vmstat.node0.nr_page_table_pages
     15820 ± 32%  -35.5%    10206 ± 12%  numa-vmstat.node0.nr_pages_scanned
     44517 ±  1%  +47.0%    65420 ±  4%  numa-vmstat.node0.nr_slab_reclaimable
   5960767 ± 39%  -94.7%   316990 ± 66%  numa-vmstat.node0.numa_foreign
  12640038 ± 18%  -67.0%  4167191 ± 39%  numa-vmstat.node0.numa_hit
  12618874 ± 18%  -67.0%  4161630 ± 39%  numa-vmstat.node0.numa_local
   6063244 ± 31% +199.0%  18130167 ±  6%  numa-vmstat.node0.numa_miss
   6084407 ± 31% +198.1%  18135728 ±  6%  numa-vmstat.node0.numa_other
     12390 ±  4% +18425.2%  2295268 ±  6%  numa-vmstat.node1.nr_active_file
    603790 ± 27%  -29.2%   427721 ±  8%  numa-vmstat.node1.nr_free_pages
   7424337 ±  2%  -28.5%  5305290 ±  3%  numa-vmstat.node1.nr_inactive_file
    391.00 ±  5%  -97.2%    11.00 ± 12%  numa-vmstat.node1.nr_isolated_file
     34243 ±  1%  +71.5%    58737 ±  2%  numa-vmstat.node1.nr_page_table_pages
     15106 ± 53% +131.6%    34986 ± 17%  numa-vmstat.node1.nr_pages_scanned
     46262 ±  1%  +25.8%    58180 ±  0%  numa-vmstat.node1.nr_slab_reclaimable
     38646 ±  1%  +17.3%    45333 ±  0%  numa-vmstat.node1.nr_unevictable
   6063586 ± 31% +199.0%  18130814 ±  6%  numa-vmstat.node1.numa_foreign
  12659295 ± 21% +166.9%  33791088 ±  3%  numa-vmstat.node1.numa_hit
  12596302 ± 21% +167.6%  33712655 ±  3%  numa-vmstat.node1.numa_local
   5961206 ± 39%  -94.7%   316997 ± 66%  numa-vmstat.node1.numa_miss
   6024199 ± 38%  -93.4%   395430 ± 53%  numa-vmstat.node1.numa_other
     11950 ±  1%  -56.3%     5220 ± 44%  proc-vmstat.allocstall
    246.00 ±  7% +7649.7%    19064 ±  6%  proc-vmstat.kswapd_low_wmark_hit_quickly
     61590 ±  1%  -50.5%    30457 ±  1%  proc-vmstat.nr_active_anon
     24406 ±  1% +13819.7%  3397283 ±  6%  proc-vmstat.nr_active_file
  14609102 ±  0%  -23.7%  11149953 ±  2%  proc-vmstat.nr_inactive_file
    747.50 ±  4%  -96.4%    27.00 ± 12%  proc-vmstat.nr_isolated_file
     68766 ±  0%  +41.5%    97318 ±  2%  proc-vmstat.nr_page_table_pages
     31785 ± 22%  +84.5%    58657 ± 24%  proc-vmstat.nr_pages_scanned
     66588 ±  1%  -47.2%    35190 ±  1%  proc-vmstat.nr_shmem
     91166 ±  0%  +35.8%   123763 ±  2%  proc-vmstat.nr_slab_reclaimable
  22043255 ±  3%  +66.8%  36768116 ±  6%  proc-vmstat.numa_foreign
     14363 ±  3% +102.3%    29052 ±  0%  proc-vmstat.numa_hint_faults
     12946 ±  5%  +96.2%    25396 ±  1%  proc-vmstat.numa_hint_faults_local
  43827658 ±  2%  +59.3%  69838611 ±  6%  proc-vmstat.numa_hit
  43827364 ±  2%  +59.3%  69838621 ±  6%  proc-vmstat.numa_local
  22043255 ±  3%  +66.8%  36768116 ±  6%  proc-vmstat.numa_miss
  22043548 ±  3%  +66.8%  36768106 ±  6%  proc-vmstat.numa_other
     14283 ±  2% +103.5%    29064 ±  0%  proc-vmstat.numa_pte_updates
    539.00 ± 11% +3665.7%    20297 ±  5%  proc-vmstat.pageoutrun
     61303 ±  1% +62220.2%  38204636 ± 17%  proc-vmstat.pgactivate
   2887589 ±  0%  +36.6%  3943048 ±  5%  proc-vmstat.pgalloc_dma32
  63135436 ±  0%  +62.9%  1.029e+08 ±  4%  proc-vmstat.pgalloc_normal
  19050516 ±  4% +254.0%  67444692 ±  5%  proc-vmstat.pgfault
  64225693 ±  1%  +64.1%  1.054e+08 ±  4%  proc-vmstat.pgfree
     44829 ±  9% +1032.4%   507664 ± 36%  proc-vmstat.pgmajfault
   2187306 ± 20%  +68.9%  3695007 ±  6%  proc-vmstat.pgscan_kswapd_dma32
  43313276 ± 17% +272.5%  1.613e+08 ±  4%  proc-vmstat.pgscan_kswapd_normal
   1845787 ±  1%  -69.8%   558233 ± 37%  proc-vmstat.pgsteal_direct_dma32
  41399936 ±  1%  -89.3%  4410054 ± 34%  proc-vmstat.pgsteal_direct_normal
    272424 ± 10% +653.2%  2051780 ±  5%  proc-vmstat.pgsteal_kswapd_dma32
   5097242 ±  4% +1364.1%  74629699 ±  2%  proc-vmstat.pgsteal_kswapd_normal
     19498 ±  4% +3034.7%   611222 ± 41%  proc-vmstat.slabs_scanned
     91902 ±  0%  +22.3%   112380 ±  4%  proc-vmstat.unevictable_pgs_culled
      0.01 ± 57% +22133.3%     1.67 ± 42%  perf-profile.cpu-cycles.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task
      0.20 ± 16% +2161.7%     4.58 ± 24%  perf-profile.cpu-cycles.__add_to_page_cache_locked.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead
     44.47 ±  8%  -94.9%     2.25 ± 73%  perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault
     35.23 ±  9%  -91.6%     2.97 ± 32%  perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.ondemand_readahead
      2.82 ± 46%  -95.0%     0.14 ±120%  perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc.handle_mm_fault
      0.02 ±  0% +5725.0%     1.16 ± 17%  perf-profile.cpu-cycles.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec
     81.70 ±  1%  -51.0%    40.02 ± 11%  perf-profile.cpu-cycles.__do_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
     44.49 ±  8%  -90.0%     4.44 ± 46%  perf-profile.cpu-cycles.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault.handle_mm_fault
     85.03 ±  0%  -28.0%    61.24 ±  6%  perf-profile.cpu-cycles.__do_page_fault.do_page_fault.page_fault
      0.49 ±  0% +555.8%     3.23 ± 16%  perf-profile.cpu-cycles.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
      0.00 ± -1%    +Inf%     0.77 ± 40%  perf-profile.cpu-cycles.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
      0.00 ± -1%    +Inf%     1.80 ± 45%  perf-profile.cpu-cycles.__lock_page_or_retry.filemap_fault.xfs_filemap_fault.__do_fault.handle_mm_fault
      0.00 ± -1%    +Inf%     0.98 ± 22%  perf-profile.cpu-cycles.__lru_cache_add.lru_cache_add.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages
     44.47 ±  8%  -94.9%     2.26 ± 73%  perf-profile.cpu-cycles.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault
     35.26 ±  9%  -90.9%     3.20 ± 32%  perf-profile.cpu-cycles.__page_cache_alloc.__do_page_cache_readahead.ondemand_readahead.page_cache_async_readahead.filemap_fault
     41.02 ±  1%  -98.0%     0.80 ± 32%  perf-profile.cpu-cycles.__page_check_address.page_referenced_one.rmap_walk.page_referenced.shrink_page_list
     31.96 ±  3%  -99.3%     0.21 ± 29%  perf-profile.cpu-cycles.__page_check_address.try_to_unmap_one.rmap_walk.try_to_unmap.shrink_page_list
      2.82 ± 46%  -91.0%     0.25 ± 55%  perf-profile.cpu-cycles.__pte_alloc.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      0.66 ± 19% +654.4%     4.96 ± 39%  perf-profile.cpu-cycles.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone
      0.00 ± -1%    +Inf%     0.68 ± 40%  perf-profile.cpu-cycles.__schedule.schedule.schedule_timeout.io_schedule_timeout.bit_wait_io
      0.00 ± -1%    +Inf%     1.69 ± 45%  perf-profile.cpu-cycles.__wait_on_bit.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.xfs_filemap_fault
      0.00 ± -1%    +Inf%     1.45 ± 41%  perf-profile.cpu-cycles.__wake_up.__wake_up_bit.unlock_page.do_mpage_readpage.mpage_readpages
      0.00 ± -1%    +Inf%     1.21 ± 45%  perf-profile.cpu-cycles.__wake_up.__wake_up_bit.unlock_page.filemap_map_pages.handle_mm_fault
      0.00 ± -1%    +Inf%     1.35 ± 48%  perf-profile.cpu-cycles.__wake_up.__wake_up_bit.unlock_page.handle_mm_fault.__do_page_fault
      0.00 ± -1%    +Inf%     1.66 ± 34%  perf-profile.cpu-cycles.__wake_up_bit.unlock_page.do_mpage_readpage.mpage_readpages.xfs_vm_readpages
      0.00 ± -1%    +Inf%     1.46 ± 34%  perf-profile.cpu-cycles.__wake_up_bit.unlock_page.filemap_map_pages.handle_mm_fault.__do_page_fault
      0.00 ± -1%    +Inf%     1.38 ± 47%  perf-profile.cpu-cycles.__wake_up_bit.unlock_page.handle_mm_fault.__do_page_fault.do_page_fault
      0.00 ± -1%    +Inf%     1.34 ± 42%  perf-profile.cpu-cycles.__wake_up_common.__wake_up.__wake_up_bit.unlock_page.do_mpage_readpage
      0.00 ± -1%    +Inf%     1.02 ± 49%  perf-profile.cpu-cycles.__wake_up_common.__wake_up.__wake_up_bit.unlock_page.handle_mm_fault
      0.02 ± 35% +6537.5%     1.33 ± 17%  perf-profile.cpu-cycles.__xfs_get_blocks.xfs_get_blocks.do_mpage_readpage.mpage_readpages.xfs_vm_readpages
     40.95 ±  1%  -99.3%     0.27 ± 93%  perf-profile.cpu-cycles._raw_spin_lock.__page_check_address.page_referenced_one.rmap_walk.page_referenced
     31.93 ±  3%  -99.8%     0.06 ± 36%  perf-profile.cpu-cycles._raw_spin_lock.__page_check_address.try_to_unmap_one.rmap_walk.try_to_unmap
      0.00 ± -1%    +Inf%    12.83 ± 51%  perf-profile.cpu-cycles._raw_spin_lock.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      0.02 ± 65% +10777.8%     2.45 ± 33%  perf-profile.cpu-cycles._raw_spin_lock_irq.__add_to_page_cache_locked.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages
      0.49 ±  1% +465.5%     2.79 ± 54%  perf-profile.cpu-cycles._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec
      0.00 ± -1%    +Inf%     0.97 ± 45%  perf-profile.cpu-cycles._raw_spin_lock_irqsave.__wake_up.__wake_up_bit.unlock_page.filemap_map_pages
      0.01 ±  0% +19950.0%     2.00 ± 41%  perf-profile.cpu-cycles.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
      0.00 ± -1%    +Inf%     1.53 ± 44%  perf-profile.cpu-cycles.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead.filemap_fault
      0.36 ± 14% +1172.7%     4.55 ± 19%  perf-profile.cpu-cycles.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead.ondemand_readahead
     44.48 ±  8%  -94.9%     2.26 ± 73%  perf-profile.cpu-cycles.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault
     35.25 ±  9%  -91.1%     3.14 ± 32%  perf-profile.cpu-cycles.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.ondemand_readahead.page_cache_async_readahead
      2.82 ± 46%  -95.1%     0.14 ±123%  perf-profile.cpu-cycles.alloc_pages_current.pte_alloc_one.__pte_alloc.handle_mm_fault.__do_page_fault
      0.02 ±-5000% +23587.5%     4.74 ± 17%  perf-profile.cpu-cycles.apic_timer_interrupt
      0.00 ± -1%    +Inf%     2.54 ± 43%  perf-profile.cpu-cycles.autoremove_wake_function.wake_bit_function.__wake_up_common.__wake_up.__wake_up_bit
      0.00 ± -1%    +Inf%     0.84 ± 39%  perf-profile.cpu-cycles.bit_wait_io.__wait_on_bit.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault
      0.01 ± 66% +8580.0%     1.08 ± 36%  perf-profile.cpu-cycles.call_cpuidle.cpu_startup_entry.start_secondary
      0.00 ± -1%    +Inf%     8.07 ± 18%  perf-profile.cpu-cycles.call_function_interrupt
      4.01 ±  1% -100.0%     0.00 ± -1%  perf-profile.cpu-cycles.call_function_interrupt._raw_spin_lock.__page_check_address.page_referenced_one.rmap_walk
      3.12 ±  1% -100.0%     0.00 ± -1%  perf-profile.cpu-cycles.call_function_interrupt._raw_spin_lock.__page_check_address.try_to_unmap_one.rmap_walk
      0.02 ± 44% +11320.0%     2.85 ± 37%  perf-profile.cpu-cycles.cpu_startup_entry.start_secondary
      0.01 ± 66% +8500.0%     1.08 ± 36%  perf-profile.cpu-cycles.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
      0.00 ±141% +27200.0%     0.91 ± 41%  perf-profile.cpu-cycles.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
      0.00 ± -1%    +Inf%     1.17 ± 37%  perf-profile.cpu-cycles.default_send_IPI_mask_allbutself_phys.physflat_send_IPI_allbutself.native_send_call_func_ipi.smp_call_function_many.try_to_unmap_flush
      6.54 ±  6%  -99.8%     0.01 ±-10000%  perf-profile.cpu-cycles.default_send_IPI_mask_sequence_phys.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others
      0.00 ± -1%    +Inf%     0.87 ± 14%  perf-profile.cpu-cycles.default_send_IPI_mask_sequence_phys.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many.try_to_unmap_flush
      0.00 ± -1%    +Inf%     2.51 ± 43%  perf-profile.cpu-cycles.default_wake_function.autoremove_wake_function.wake_bit_function.__wake_up_common.__wake_up
      1.27 ±  5% +1764.7%    23.63 ± 13%  perf-profile.cpu-cycles.do_mpage_readpage.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead.ondemand_readahead
     85.04 ±  0%  -27.9%    61.29 ±  6%  perf-profile.cpu-cycles.do_page_fault.page_fault
     79.44 ±  1%  -96.1%     3.06 ± 50%  perf-profile.cpu-cycles.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc
      2.82 ± 46%  -95.9%     0.12 ±141%  perf-profile.cpu-cycles.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one
      0.01 ± 35% +8862.5%     1.20 ± 37%  perf-profile.cpu-cycles.down_read.rmap_walk.page_referenced.shrink_page_list.shrink_inactive_list
      0.01 ± 57% +20200.0%     1.52 ± 42%  perf-profile.cpu-cycles.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
      0.01 ±  0% +19350.0%     1.94 ± 40%  perf-profile.cpu-cycles.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate
      0.01 ± 57% +26600.0%     2.00 ± 42%  perf-profile.cpu-cycles.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
      0.01 ± 57% +26133.3%     1.97 ± 40%  perf-profile.cpu-cycles.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up
     81.70 ±  1%  -51.3%    39.80 ± 11%  perf-profile.cpu-cycles.filemap_fault.xfs_filemap_fault.__do_fault.handle_mm_fault.__do_page_fault
      0.07 ± 31% +6575.0%     4.67 ± 11%  perf-profile.cpu-cycles.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      0.00 ± -1%    +Inf%     2.64 ± 22%  perf-profile.cpu-cycles.flush_smp_call_function_queue.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt
      3.38 ±  1% -100.0%     0.00 ± -1%  perf-profile.cpu-cycles.flush_smp_call_function_queue.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt._raw_spin_lock
      3.16 ±  4% -100.0%     0.00 ± -1%  perf-profile.cpu-cycles.flush_tlb_func.flush_smp_call_function_queue.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt
     14.09 ±  6% -100.0%     0.00 ± -1%  perf-profile.cpu-cycles.flush_tlb_page.ptep_clear_flush.try_to_unmap_one.rmap_walk.try_to_unmap
      0.01 ± 35% +8093.7%     1.09 ± 25%  perf-profile.cpu-cycles.free_hot_cold_page.free_hot_cold_page_list.shrink_page_list.shrink_inactive_list.shrink_lruvec
      0.02 ± 33% +7666.7%     1.17 ± 25%  perf-profile.cpu-cycles.free_hot_cold_page_list.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone
      0.00 ± -1%    +Inf%     0.74 ± 31%  perf-profile.cpu-cycles.free_pcppages_bulk.free_hot_cold_page.free_hot_cold_page_list.shrink_page_list.shrink_inactive_list
      0.00 ± -1%    +Inf%     2.69 ± 22%  perf-profile.cpu-cycles.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt
      3.90 ±  2% -100.0%     0.00 ± -1%  perf-profile.cpu-cycles.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt._raw_spin_lock.__page_check_address
      0.15 ±  2% +1391.8%     2.27 ± 28%  perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead
     85.00 ±  0%  -29.4%    59.97 ±  6%  perf-profile.cpu-cycles.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      0.01 ±100% +70250.0%     3.52 ± 18%  perf-profile.cpu-cycles.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
      0.01 ± 70% +12612.5%     0.85 ± 41%  perf-profile.cpu-cycles.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
      0.00 ± -1%    +Inf%     0.83 ± 39%  perf-profile.cpu-cycles.io_schedule_timeout.bit_wait_io.__wait_on_bit.wait_on_page_bit_killable.__lock_page_or_retry
      0.00 ± -1%    +Inf%     1.20 ± 31%  perf-profile.cpu-cycles.isolate_lru_pages.isra.44.shrink_inactive_list.shrink_lruvec.shrink_zone.kswapd
      6.25 ±  1% +222.2%    20.14 ± 15%  perf-profile.cpu-cycles.kswapd.kthread.ret_from_fork
      6.28 ±  1% +223.1%    20.29 ± 15%  perf-profile.cpu-cycles.kthread.ret_from_fork
      3.41 ±  4% -100.0%     0.00 ± -1%  perf-profile.cpu-cycles.llist_add_batch.smp_call_function_many.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush
      0.00 ± -1%    +Inf%     1.35 ± 24%  perf-profile.cpu-cycles.llist_add_batch.smp_call_function_many.try_to_unmap_flush.shrink_page_list.shrink_inactive_list
      2.13 ±  4%  -56.9%     0.92 ± 20%  perf-profile.cpu-cycles.llist_reverse_order.flush_smp_call_function_queue.generic_smp_call_function_single_interrupt.smp_call_function_interrupt.call_function_interrupt
      0.01 ±  0% +35500.0%     3.56 ± 19%  perf-profile.cpu-cycles.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
      0.01 ±  0% +9625.0%     0.97 ± 21%  perf-profile.cpu-cycles.lru_cache_add.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead
      0.00 ± -1%    +Inf%     1.61 ± 76%  perf-profile.cpu-cycles.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault
      1.83 ±  7% +1475.0%    28.78 ± 12%  perf-profile.cpu-cycles.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead.ondemand_readahead.page_cache_async_readahead
     13.97 ±  6% -100.0%     0.00 ± -1%  perf-profile.cpu-cycles.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush.try_to_unmap_one.rmap_walk
     36.63 ±  2%  -99.6%     0.14 ±141%  perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock.__page_check_address.page_referenced_one.rmap_walk
     28.56 ±  3% -100.0%     0.00 ±141%  perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock.__page_check_address.try_to_unmap_one.rmap_walk
      0.00 ± -1%    +Inf%    12.46 ± 51%  perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock.handle_mm_fault.__do_page_fault.do_page_fault
      0.00 ± -1%    +Inf%     2.12 ± 37%  perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_irq.__add_to_page_cache_locked.add_to_page_cache_lru.mpage_readpages
      0.00 ± -1%    +Inf%     2.45 ± 56%  perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list
      0.00 ± -1%    +Inf%     1.31 ± 44%  perf-profile.cpu-cycles.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up.__wake_up_bit.unlock_page
      7.30 ±  6% -100.0%     0.00 ± -1%  perf-profile.cpu-cycles.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush
      0.00 ± -1%    +Inf%     2.13 ± 22%  perf-profile.cpu-cycles.native_send_call_func_ipi.smp_call_function_many.try_to_unmap_flush.shrink_page_list.shrink_inactive_list
     85.05 ±  0%  -27.0%    62.09 ±  6%  perf-profile.cpu-cycles.page_fault
     41.27 ±  1%  -90.2%     4.06 ± 31%  perf-profile.cpu-cycles.page_referenced.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone
     41.05 ±  1%  -96.3%     1.52 ± 32%  perf-profile.cpu-cycles.page_referenced_one.rmap_walk.page_referenced.shrink_page_list.shrink_inactive_list
      0.01 ±100% +16450.0%     0.83 ± 20%  perf-profile.cpu-cycles.pagevec_lru_move_fn.__lru_cache_add.lru_cache_add.add_to_page_cache_lru.mpage_readpages
      0.00 ± -1%    +Inf%     1.19 ± 36%  perf-profile.cpu-cycles.physflat_send_IPI_allbutself.native_send_call_func_ipi.smp_call_function_many.try_to_unmap_flush.shrink_page_list
      7.20 ±  6% -100.0%     0.00 ± -1%  perf-profile.cpu-cycles.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others.flush_tlb_page
      0.00 ± -1%    +Inf%     0.89 ± 14%  perf-profile.cpu-cycles.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many.try_to_unmap_flush.shrink_page_list
      0.01 ±100% +27400.0%     1.38 ± 42%  perf-profile.cpu-cycles.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
      2.82 ± 46%  -91.6%     0.24 ± 59%  perf-profile.cpu-cycles.pte_alloc_one.__pte_alloc.handle_mm_fault.__do_page_fault.do_page_fault
     14.16 ±  6% -100.0%     0.00 ± -1%  perf-profile.cpu-cycles.ptep_clear_flush.try_to_unmap_one.rmap_walk.try_to_unmap.shrink_page_list
      6.28 ±  1% +223.1%    20.29 ± 15%  perf-profile.cpu-cycles.ret_from_fork
     41.25 ±  1%  -90.9%     3.77 ± 32%  perf-profile.cpu-cycles.rmap_walk.page_referenced.shrink_page_list.shrink_inactive_list.shrink_lruvec
     46.73 ±  0%  -94.6%     2.54 ± 23%  perf-profile.cpu-cycles.rmap_walk.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec
      0.01 ±100%
+30500.0% 1.53 ± 42% perf-profile.cpu-cycles.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task 0.00 ± -1% +Inf% 0.70 ± 39% perf-profile.cpu-cycles.schedule.schedule_timeout.io_schedule_timeout.bit_wait_io.__wait_on_bit 0.00 ± -1% +Inf% 0.71 ± 40% perf-profile.cpu-cycles.schedule_timeout.io_schedule_timeout.bit_wait_io.__wait_on_bit.wait_on_page_bit_killable 0.18 ± 16% +1061.4% 2.03 ± 16% perf-profile.cpu-cycles.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues 82.99 ± 0% -96.2% 3.19 ± 54% perf-profile.cpu-cycles.shrink_inactive_list.shrink_lruvec.shrink_zone.do_try_to_free_pages.try_to_free_pages 6.25 ± 1% +217.8% 19.86 ± 14% perf-profile.cpu-cycles.shrink_inactive_list.shrink_lruvec.shrink_zone.kswapd.kthread 83.00 ± 0% -96.2% 3.19 ± 54% perf-profile.cpu-cycles.shrink_lruvec.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask 6.25 ± 1% +218.4% 19.90 ± 14% perf-profile.cpu-cycles.shrink_lruvec.shrink_zone.kswapd.kthread.ret_from_fork 82.90 ± 0% -96.4% 3.02 ± 52% perf-profile.cpu-cycles.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone.do_try_to_free_pages 6.24 ± 1% +185.9% 17.85 ± 14% perf-profile.cpu-cycles.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone.kswapd 82.50 ± 1% -96.1% 3.20 ± 54% perf-profile.cpu-cycles.shrink_zone.do_try_to_free_pages.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current 6.25 ± 1% +221.2% 20.07 ± 15% perf-profile.cpu-cycles.shrink_zone.kswapd.kthread.ret_from_fork 0.01 ±-10000% +44325.0% 4.44 ± 18% perf-profile.cpu-cycles.smp_apic_timer_interrupt.apic_timer_interrupt 0.00 ± -1% +Inf% 3.51 ± 22% perf-profile.cpu-cycles.smp_call_function_interrupt.call_function_interrupt 2.55 ± 2% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.smp_call_function_interrupt.call_function_interrupt._raw_spin_lock.__page_check_address.page_referenced_one 2.02 ± 1% -100.0% 0.00 ± -1% 
perf-profile.cpu-cycles.smp_call_function_interrupt.call_function_interrupt._raw_spin_lock.__page_check_address.try_to_unmap_one 13.73 ± 6% -100.0% 0.00 ± -1% perf-profile.cpu-cycles.smp_call_function_many.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush.try_to_unmap_one 0.00 ± -1% +Inf% 4.84 ± 20% perf-profile.cpu-cycles.smp_call_function_many.try_to_unmap_flush.shrink_page_list.shrink_inactive_list.shrink_lruvec 0.02 ± 44% +11540.0% 2.91 ± 37% perf-profile.cpu-cycles.start_secondary 0.00 ± -1% +Inf% 1.08 ± 22% perf-profile.cpu-cycles.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer 0.46 ± 3% +513.7% 2.81 ± 16% perf-profile.cpu-cycles.tick_sched_handle.isra.17.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt 0.48 ± 1% +504.6% 2.93 ± 17% perf-profile.cpu-cycles.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt 79.44 ± 1% -96.1% 3.06 ± 50% perf-profile.cpu-cycles.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead 2.82 ± 46% -95.9% 0.12 ±141% perf-profile.cpu-cycles.try_to_free_pages.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc 46.76 ± 0% -94.2% 2.69 ± 23% perf-profile.cpu-cycles.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone 0.00 ± -1% +Inf% 4.93 ± 20% perf-profile.cpu-cycles.try_to_unmap_flush.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_zone 46.37 ± 0% -97.0% 1.39 ± 23% perf-profile.cpu-cycles.try_to_unmap_one.rmap_walk.try_to_unmap.shrink_page_list.shrink_inactive_list 0.00 ± -1% +Inf% 2.50 ± 43% perf-profile.cpu-cycles.try_to_wake_up.default_wake_function.autoremove_wake_function.wake_bit_function.__wake_up_common 0.00 ± -1% +Inf% 2.06 ± 44% perf-profile.cpu-cycles.ttwu_do_activate.constprop.84.try_to_wake_up.default_wake_function.autoremove_wake_function.wake_bit_function 0.00 ± -1% +Inf% 1.85 ± 30% 
perf-profile.cpu-cycles.unlock_page.do_mpage_readpage.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead 0.01 ±100% +37950.0% 1.90 ± 25% perf-profile.cpu-cycles.unlock_page.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault 0.00 ± -1% +Inf% 1.42 ± 46% perf-profile.cpu-cycles.unlock_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault 0.15 ± 10% +388.5% 0.74 ± 31% perf-profile.cpu-cycles.up_read.rmap_walk.page_referenced.shrink_page_list.shrink_inactive_list 0.45 ± 4% +516.8% 2.76 ± 16% perf-profile.cpu-cycles.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt 0.00 ± -1% +Inf% 1.74 ± 44% perf-profile.cpu-cycles.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.xfs_filemap_fault.__do_fault 0.00 ± -1% +Inf% 2.55 ± 43% perf-profile.cpu-cycles.wake_bit_function.__wake_up_common.__wake_up.__wake_up_bit.unlock_page 81.70 ± 1% -51.1% 39.99 ± 11% perf-profile.cpu-cycles.xfs_filemap_fault.__do_fault.handle_mm_fault.__do_page_fault.do_page_fault 0.03 ± 34% +5730.0% 1.46 ± 15% perf-profile.cpu-cycles.xfs_get_blocks.do_mpage_readpage.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead 0.00 ± -1% +Inf% 1.61 ± 76% perf-profile.cpu-cycles.xfs_vm_readpages.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault 1.83 ± 6% +1469.6% 28.80 ± 12% perf-profile.cpu-cycles.xfs_vm_readpages.__do_page_cache_readahead.ondemand_readahead.page_cache_async_readahead.filemap_fault 2494 ± 4% -32.2% 1692 ± 3% sched_debug.cfs_rq[0]:/.tg_load_avg 2469 ± 5% -31.6% 1690 ± 4% sched_debug.cfs_rq[10]:/.tg_load_avg 2466 ± 5% -31.4% 1693 ± 4% sched_debug.cfs_rq[11]:/.tg_load_avg 20.75 ± 17% -24.1% 15.75 ± 2% sched_debug.cfs_rq[12]:/.load_avg 2464 ± 5% -31.4% 1690 ± 4% sched_debug.cfs_rq[12]:/.tg_load_avg 20.75 ± 17% -24.1% 15.75 ± 2% sched_debug.cfs_rq[12]:/.tg_load_avg_contrib 100.50 ± 82% -86.1% 14.00 ± 11% sched_debug.cfs_rq[13]:/.load 138.25 ± 47% -87.3% 17.50 ± 4% sched_debug.cfs_rq[13]:/.load_avg 
     98.50 ± 83%     -85.0%      14.75 ± 12%  sched_debug.cfs_rq[13]:/.runnable_load_avg
      2468 ±  5%     -30.9%       1704 ±  5%  sched_debug.cfs_rq[13]:/.tg_load_avg
    139.25 ± 47%     -87.4%      17.50 ±  4%  sched_debug.cfs_rq[13]:/.tg_load_avg_contrib
      2467 ±  5%     -30.9%       1703 ±  4%  sched_debug.cfs_rq[14]:/.tg_load_avg
      2467 ±  5%     -31.5%       1691 ±  5%  sched_debug.cfs_rq[15]:/.tg_load_avg
      2.50 ± 44%    +270.0%       9.25 ± 43%  sched_debug.cfs_rq[16]:/.nr_spread_over
    -19276 ±-1804%  +2126.3%   -429159 ± -5%  sched_debug.cfs_rq[16]:/.spread0
      2467 ±  5%     -31.5%       1691 ±  5%  sched_debug.cfs_rq[16]:/.tg_load_avg
      4.75 ± 60%    +105.3%       9.75 ± 26%  sched_debug.cfs_rq[17]:/.nr_spread_over
      2473 ±  7%     -31.6%       1692 ±  5%  sched_debug.cfs_rq[17]:/.tg_load_avg
    -38059 ±-877%   +1084.4%   -450771 ±-25%  sched_debug.cfs_rq[18]:/.spread0
      2473 ±  7%     -30.8%       1710 ±  4%  sched_debug.cfs_rq[18]:/.tg_load_avg
      2473 ±  7%     -30.6%       1716 ±  4%  sched_debug.cfs_rq[19]:/.tg_load_avg
      2466 ±  5%     -31.4%       1691 ±  4%  sched_debug.cfs_rq[1]:/.tg_load_avg
    -67725 ±-397%    +315.1%   -281110 ±-17%  sched_debug.cfs_rq[20]:/.spread0
      2473 ±  7%     -30.4%       1722 ±  4%  sched_debug.cfs_rq[20]:/.tg_load_avg
    100.00 ± 83%     -69.5%      30.50 ± 88%  sched_debug.cfs_rq[21]:/.runnable_load_avg
      2473 ±  7%     -30.9%       1708 ±  5%  sched_debug.cfs_rq[21]:/.tg_load_avg
      2473 ±  7%     -31.0%       1706 ±  5%  sched_debug.cfs_rq[22]:/.tg_load_avg
    732.50 ±  7%     +12.1%     821.25 ±  1%  sched_debug.cfs_rq[22]:/.util_avg
      2472 ±  7%     -31.1%       1704 ±  5%  sched_debug.cfs_rq[23]:/.tg_load_avg
     18.75 ±  5%     -14.7%      16.00 ±  4%  sched_debug.cfs_rq[24]:/.runnable_load_avg
      2472 ±  7%     -31.0%       1705 ±  5%  sched_debug.cfs_rq[24]:/.tg_load_avg
     23.50 ± 29%     -37.2%      14.75 ± 11%  sched_debug.cfs_rq[25]:/.load_avg
      2470 ±  7%     -31.1%       1702 ±  5%  sched_debug.cfs_rq[25]:/.tg_load_avg
     23.50 ± 29%     -37.2%      14.75 ± 11%  sched_debug.cfs_rq[25]:/.tg_load_avg_contrib
      2471 ±  8%     -31.1%       1702 ±  4%  sched_debug.cfs_rq[26]:/.tg_load_avg
      2471 ±  7%     -31.0%       1705 ±  5%  sched_debug.cfs_rq[27]:/.tg_load_avg
      2470 ±  7%     -31.0%       1705 ±  5%  sched_debug.cfs_rq[28]:/.tg_load_avg
     61.00 ±114%     -73.4%      16.25 ±  6%  sched_debug.cfs_rq[29]:/.load_avg
     59.75 ±118%     -73.6%      15.75 ±  2%  sched_debug.cfs_rq[29]:/.runnable_load_avg
      2471 ±  7%     -31.0%       1704 ±  5%  sched_debug.cfs_rq[29]:/.tg_load_avg
     60.75 ±114%     -73.3%      16.25 ±  6%  sched_debug.cfs_rq[29]:/.tg_load_avg_contrib
      2467 ±  5%     -31.5%       1689 ±  3%  sched_debug.cfs_rq[2]:/.tg_load_avg
      2472 ±  7%     -30.9%       1707 ±  4%  sched_debug.cfs_rq[30]:/.tg_load_avg
      2468 ±  7%     -30.8%       1708 ±  5%  sched_debug.cfs_rq[31]:/.tg_load_avg
      2465 ±  7%     -30.7%       1708 ±  5%  sched_debug.cfs_rq[32]:/.tg_load_avg
      2463 ±  7%     -30.5%       1710 ±  5%  sched_debug.cfs_rq[33]:/.tg_load_avg
      2463 ±  7%     -30.4%       1714 ±  5%  sched_debug.cfs_rq[34]:/.tg_load_avg
      2462 ±  7%     -32.0%       1675 ±  7%  sched_debug.cfs_rq[35]:/.tg_load_avg
      2462 ±  7%     -31.9%       1678 ±  7%  sched_debug.cfs_rq[36]:/.tg_load_avg
     16.00 ± 18%    +403.1%      80.50 ± 72%  sched_debug.cfs_rq[37]:/.load_avg
     -1797 ±-23058% +21806.9%  -393844 ±-35%  sched_debug.cfs_rq[37]:/.spread0
      2461 ±  7%     -31.9%       1676 ±  7%  sched_debug.cfs_rq[37]:/.tg_load_avg
     16.00 ± 18%    +403.1%      80.50 ± 72%  sched_debug.cfs_rq[37]:/.tg_load_avg_contrib
      2461 ±  7%     -32.1%       1670 ±  7%  sched_debug.cfs_rq[38]:/.tg_load_avg
    -11058 ±-3224%  +3760.7%   -426917 ±-24%  sched_debug.cfs_rq[39]:/.spread0
      2467 ±  7%     -32.2%       1672 ±  8%  sched_debug.cfs_rq[39]:/.tg_load_avg
      2467 ±  5%     -31.5%       1690 ±  3%  sched_debug.cfs_rq[3]:/.tg_load_avg
     19026 ±1892%   -2390.3%   -435768 ±-14%  sched_debug.cfs_rq[40]:/.spread0
      2466 ±  7%     -32.1%       1674 ±  8%  sched_debug.cfs_rq[40]:/.tg_load_avg
    103.00 ± 84%     -78.4%      22.25 ± 56%  sched_debug.cfs_rq[41]:/.load_avg
     59345 ±352%     -906.7%   -478762 ±-57%  sched_debug.cfs_rq[41]:/.spread0
      2469 ±  7%     -32.1%       1676 ±  8%  sched_debug.cfs_rq[41]:/.tg_load_avg
    103.00 ± 84%     -78.4%      22.25 ± 56%  sched_debug.cfs_rq[41]:/.tg_load_avg_contrib
      2472 ±  7%     -32.1%       1679 ±  8%  sched_debug.cfs_rq[42]:/.tg_load_avg
    -90463 ±-221%    +284.2%   -347573 ±-17%  sched_debug.cfs_rq[43]:/.spread0
      2470 ±  7%     -31.9%       1682 ±  8%  sched_debug.cfs_rq[43]:/.tg_load_avg
      2470 ±  7%     -31.8%       1684 ±  8%  sched_debug.cfs_rq[44]:/.tg_load_avg
      2471 ±  7%     -31.6%       1690 ±  8%  sched_debug.cfs_rq[45]:/.tg_load_avg
     85594 ±379%     -529.7%   -367773 ±-27%  sched_debug.cfs_rq[46]:/.spread0
      2472 ±  7%     -31.7%       1690 ±  8%  sched_debug.cfs_rq[46]:/.tg_load_avg
      2474 ±  7%     -32.1%       1680 ±  8%  sched_debug.cfs_rq[47]:/.tg_load_avg
      2467 ±  5%     -31.5%       1689 ±  4%  sched_debug.cfs_rq[4]:/.tg_load_avg
      2466 ±  5%     -31.6%       1687 ±  4%  sched_debug.cfs_rq[5]:/.tg_load_avg
      2467 ±  5%     -31.6%       1687 ±  4%  sched_debug.cfs_rq[6]:/.tg_load_avg
      2467 ±  5%     -31.6%       1687 ±  4%  sched_debug.cfs_rq[7]:/.tg_load_avg
      2468 ±  5%     -31.6%       1689 ±  4%  sched_debug.cfs_rq[8]:/.tg_load_avg
      2468 ±  5%     -31.5%       1690 ±  4%  sched_debug.cfs_rq[9]:/.tg_load_avg
    515797 ± 19%     +47.0%     758351 ± 12%  sched_debug.cpu#10.avg_idle
    772344 ± 19%     -46.2%     415620 ± 25%  sched_debug.cpu#12.avg_idle
     90073 ± 25%    +399.5%     449913 ± 13%  sched_debug.cpu#12.nr_switches
     91341 ± 25%    +395.1%     452202 ± 13%  sched_debug.cpu#12.sched_count
     31956 ± 53%    +580.7%     217530 ± 13%  sched_debug.cpu#12.sched_goidle
     45275 ± 26%    +396.9%     224988 ± 12%  sched_debug.cpu#12.ttwu_count
     98.50 ± 83%     -85.0%      14.75 ±  7%  sched_debug.cpu#13.cpu_load[0]
     98.75 ± 83%     -84.8%      15.00 ±  4%  sched_debug.cpu#13.cpu_load[1]
     77821 ± 26%    +408.3%     395597 ± 10%  sched_debug.cpu#13.nr_switches
    -30.50 ±-94%     -166.4%      20.25 ± 92%  sched_debug.cpu#13.nr_uninterruptible
     79198 ± 25%    +403.1%     398466 ± 10%  sched_debug.cpu#13.sched_count
     31182 ± 55%    +519.7%     193231 ± 10%  sched_debug.cpu#13.sched_goidle
     38722 ± 22%    +418.9%     200938 ± 14%  sched_debug.cpu#13.ttwu_count
     91017 ± 16%    +316.8%     379345 ±  8%  sched_debug.cpu#14.nr_switches
     93050 ± 15%    +310.0%     381510 ±  8%  sched_debug.cpu#14.sched_count
     33477 ± 56%    +454.1%     185513 ±  8%  sched_debug.cpu#14.sched_goidle
     44850 ± 16%    +334.0%     194645 ± 11%  sched_debug.cpu#14.ttwu_count
     73629 ± 33%    +345.7%     328161 ± 10%  sched_debug.cpu#15.nr_switches
     -1.25 ±-1266%  -2860.0%      34.50 ± 34%  sched_debug.cpu#15.nr_uninterruptible
     75039 ± 32%    +339.9%     330093 ± 10%  sched_debug.cpu#15.sched_count
     31218 ± 54%    +412.6%     160034 ± 10%  sched_debug.cpu#15.sched_goidle
     35121 ± 32%    +368.9%     164701 ± 10%  sched_debug.cpu#15.ttwu_count
     71334 ± 34%    +405.7%     360746 ± 17%  sched_debug.cpu#16.nr_switches
     74176 ± 31%    +389.6%     363169 ± 17%  sched_debug.cpu#16.sched_count
     24687 ± 54%    +609.7%     175191 ± 17%  sched_debug.cpu#16.sched_goidle
     34412 ± 43%    +458.6%     192220 ± 16%  sched_debug.cpu#16.ttwu_count
    675440 ± 29%     -34.1%     444784 ± 29%  sched_debug.cpu#17.avg_idle
     68943 ± 48%    +395.3%     341448 ±  9%  sched_debug.cpu#17.nr_switches
     70306 ± 46%    +388.7%     343571 ±  9%  sched_debug.cpu#17.sched_count
     30244 ± 53%    +450.6%     166526 ±  9%  sched_debug.cpu#17.sched_goidle
     34874 ± 46%    +404.1%     175792 ±  8%  sched_debug.cpu#17.ttwu_count
     59643 ± 49%    +475.0%     342930 ± 13%  sched_debug.cpu#18.nr_switches
     61041 ± 47%    +467.2%     346247 ± 13%  sched_debug.cpu#18.sched_count
     25907 ± 54%    +546.6%     167516 ± 14%  sched_debug.cpu#18.sched_goidle
     30665 ± 49%    +465.2%     173314 ± 13%  sched_debug.cpu#18.ttwu_count
    715381 ± 11%     -25.0%     536826 ± 27%  sched_debug.cpu#19.avg_idle
     79154 ± 45%    +355.6%     360655 ± 23%  sched_debug.cpu#19.nr_switches
     80360 ± 44%    +351.7%     362968 ± 23%  sched_debug.cpu#19.sched_count
     33053 ± 52%    +428.7%     174766 ± 24%  sched_debug.cpu#19.sched_goidle
     39022 ± 46%    +375.3%     185469 ± 21%  sched_debug.cpu#19.ttwu_count
     56484 ± 47%    +514.8%     347258 ±  8%  sched_debug.cpu#20.nr_switches
     57707 ± 46%    +505.8%     349595 ±  7%  sched_debug.cpu#20.sched_count
     24050 ± 52%    +606.1%     169809 ±  8%  sched_debug.cpu#20.sched_goidle
     27285 ± 48%    +530.2%     171948 ±  9%  sched_debug.cpu#20.ttwu_count
    100.00 ± 83%     -85.0%      15.00 ±  8%  sched_debug.cpu#21.cpu_load[0]
    100.00 ± 83%     -85.0%      15.00 ±  8%  sched_debug.cpu#21.cpu_load[1]
     98.50 ± 82%     -84.5%      15.25 ±  8%  sched_debug.cpu#21.cpu_load[2]
     91.50 ± 82%     -80.6%      17.75 ± 27%  sched_debug.cpu#21.cpu_load[3]
     81.00 ± 86%     -74.1%      21.00 ± 49%  sched_debug.cpu#21.cpu_load[4]
     73798 ± 40%    +399.3%     368467 ±  9%  sched_debug.cpu#21.nr_switches
     -3.00 ±-143%   -1075.0%      29.25 ± 53%  sched_debug.cpu#21.nr_uninterruptible
     75091 ± 39%    +393.7%     370696 ±  8%  sched_debug.cpu#21.sched_count
     28391 ± 50%    +532.0%     179443 ±  9%  sched_debug.cpu#21.sched_goidle
     37067 ± 40%    +398.6%     184818 ±  8%  sched_debug.cpu#21.ttwu_count
     77996 ± 60%    +371.5%     367777 ± 14%  sched_debug.cpu#22.nr_switches
      7.00 ± 94%    +321.4%      29.50 ± 23%  sched_debug.cpu#22.nr_uninterruptible
     79686 ± 58%    +363.5%     369376 ± 14%  sched_debug.cpu#22.sched_count
     30204 ± 54%    +490.3%     178293 ± 14%  sched_debug.cpu#22.sched_goidle
     38043 ± 59%    +376.9%     181416 ± 14%  sched_debug.cpu#22.ttwu_count
     68658 ± 60%    +431.2%     364695 ±  6%  sched_debug.cpu#23.nr_switches
      5.50 ±138%    +481.8%      32.00 ± 65%  sched_debug.cpu#23.nr_uninterruptible
     69820 ± 59%    +425.5%     366933 ±  5%  sched_debug.cpu#23.sched_count
     27598 ± 67%    +534.7%     175164 ±  3%  sched_debug.cpu#23.sched_goidle
     35976 ± 61%    +383.5%     173947 ±  6%  sched_debug.cpu#23.ttwu_count
    539599 ± 31%     +68.4%     908760 ± 10%  sched_debug.cpu#24.avg_idle
     18.75 ±  5%     -14.7%      16.00 ±  4%  sched_debug.cpu#24.cpu_load[0]
     18.75 ±  5%     -14.7%      16.00 ±  4%  sched_debug.cpu#24.cpu_load[1]
      3276 ± 61%     -53.5%       1522 ± 15%  sched_debug.cpu#25.ttwu_local
    581290 ± 12%     +41.2%     820671 ± 15%  sched_debug.cpu#26.avg_idle
    541356 ± 22%     +31.6%     712630 ±  8%  sched_debug.cpu#27.avg_idle
     59.75 ±118%     -73.6%      15.75 ±  2%  sched_debug.cpu#29.cpu_load[0]
     59.75 ±118%     -73.6%      15.75 ±  2%  sched_debug.cpu#29.cpu_load[1]
     59.75 ±118%     -73.6%      15.75 ±  2%  sched_debug.cpu#29.cpu_load[2]
     60.00 ±117%     -73.8%      15.75 ±  2%  sched_debug.cpu#29.cpu_load[3]
     60.00 ±117%     -73.8%      15.75 ±  2%  sched_debug.cpu#29.cpu_load[4]
    523830 ± 40%     +54.7%     810484 ±  7%  sched_debug.cpu#3.avg_idle
    516632 ± 17%     +54.2%     796756 ±  6%  sched_debug.cpu#31.avg_idle
    562160 ±  8%     +31.2%     737363 ± 10%  sched_debug.cpu#32.avg_idle
    502554 ± 14%     +55.7%     782527 ±  7%  sched_debug.cpu#33.avg_idle
    458849 ± 34%     +56.6%     718680 ± 16%  sched_debug.cpu#34.avg_idle
      9.75 ±104%    +171.8%      26.50 ± 19%  sched_debug.cpu#35.nr_uninterruptible
    579050 ± 19%     -29.2%     409775 ± 16%  sched_debug.cpu#36.avg_idle
     70123 ± 36%    +480.4%     406962 ±  9%  sched_debug.cpu#36.nr_switches
     71197 ± 35%    +473.4%     408215 ±  9%  sched_debug.cpu#36.sched_count
     30280 ± 52%    +555.9%     198595 ±  9%  sched_debug.cpu#36.sched_goidle
     35105 ± 34%    +478.5%     203089 ±  9%  sched_debug.cpu#36.ttwu_count
     15.00 ± 24%    +410.0%      76.50 ± 79%  sched_debug.cpu#37.cpu_load[0]
     15.50 ± 21%    +354.8%      70.50 ± 79%  sched_debug.cpu#37.cpu_load[4]
     67749 ± 55%    +485.8%     396904 ±  7%  sched_debug.cpu#37.nr_switches
      8.75 ±123%     -414.3%     -27.50 ±-15%  sched_debug.cpu#37.nr_uninterruptible
     68824 ± 54%    +479.6%     398914 ±  7%  sched_debug.cpu#37.sched_count
     28170 ± 54%    +590.0%     194374 ±  7%  sched_debug.cpu#37.sched_goidle
     34370 ± 56%    +501.0%     206566 ±  7%  sched_debug.cpu#37.ttwu_count
     74792 ± 52%    +406.2%     378573 ± 12%  sched_debug.cpu#38.nr_switches
     75882 ± 51%    +400.5%     379807 ± 12%  sched_debug.cpu#38.sched_count
     33835 ± 55%    +448.9%     185709 ± 12%  sched_debug.cpu#38.sched_goidle
     37917 ± 52%    +392.5%     186731 ± 11%  sched_debug.cpu#38.ttwu_count
     65741 ± 40%    +441.7%     356109 ±  7%  sched_debug.cpu#39.nr_switches
     66817 ± 39%    +434.7%     357271 ±  7%  sched_debug.cpu#39.sched_count
     28725 ± 56%    +504.8%     173731 ±  7%  sched_debug.cpu#39.sched_goidle
     32630 ± 38%    +463.7%     183933 ±  7%  sched_debug.cpu#39.ttwu_count
     79417 ± 40%    +358.0%     363715 ± 10%  sched_debug.cpu#40.nr_switches
     81161 ± 40%    +349.8%     365076 ± 10%  sched_debug.cpu#40.sched_count
     30793 ± 61%    +477.0%     177691 ± 10%  sched_debug.cpu#40.sched_goidle
     44088 ± 34%    +337.8%     193038 ±  9%  sched_debug.cpu#40.ttwu_count
    102.50 ± 78%     -83.9%      16.50 ± 12%  sched_debug.cpu#41.load
     65813 ± 43%    +476.1%     379163 ± 18%  sched_debug.cpu#41.nr_switches
     -6.25 ±-164%    +452.0%     -34.50 ±-65%  sched_debug.cpu#41.nr_uninterruptible
     66877 ± 42%    +468.9%     380468 ± 18%  sched_debug.cpu#41.sched_count
     27169 ± 53%    +581.9%     185252 ± 19%  sched_debug.cpu#41.sched_goidle
     33710 ± 43%    +484.1%     196906 ± 19%  sched_debug.cpu#41.ttwu_count
     69518 ± 34%    +456.3%     386698 ± 21%  sched_debug.cpu#42.nr_switches
     -4.00 ±-110%    +706.2%     -32.25 ±-55%  sched_debug.cpu#42.nr_uninterruptible
     70567 ± 33%    +449.8%     387999 ± 21%  sched_debug.cpu#42.sched_count
     23890 ± 53%    +688.3%     188322 ± 21%  sched_debug.cpu#42.sched_goidle
     35286 ± 36%    +470.2%     201203 ± 21%  sched_debug.cpu#42.ttwu_count
     63913 ± 49%    +527.4%     400989 ± 19%  sched_debug.cpu#43.nr_switches
     64949 ± 48%    +519.3%     402219 ± 19%  sched_debug.cpu#43.sched_count
     28106 ± 58%    +590.5%     194082 ± 20%  sched_debug.cpu#43.sched_goidle
     29520 ± 46%    +575.8%     199487 ± 16%  sched_debug.cpu#43.ttwu_count
      4160 ± 15%     +74.2%       7249 ± 48%  sched_debug.cpu#43.ttwu_local
     66494 ± 55%    +498.3%     397815 ± 17%  sched_debug.cpu#44.nr_switches
     67552 ± 54%    +490.9%     399165 ± 17%  sched_debug.cpu#44.sched_count
     29362 ± 60%    +551.2%     191195 ± 16%  sched_debug.cpu#44.sched_goidle
     34768 ± 54%    +507.8%     211318 ± 16%  sched_debug.cpu#44.ttwu_count
      3960 ± 45%    +118.0%       8632 ± 46%  sched_debug.cpu#44.ttwu_local
    628043 ± 13%     -37.0%     395598 ± 28%  sched_debug.cpu#45.avg_idle
     72176 ± 45%    +449.3%     396434 ±  7%  sched_debug.cpu#45.nr_switches
     -0.75 ±-648%   +2600.0%     -20.25 ±-74%  sched_debug.cpu#45.nr_uninterruptible
     73253 ± 44%    +442.9%     397719 ±  7%  sched_debug.cpu#45.sched_count
     30519 ± 56%    +535.4%     193909 ±  7%  sched_debug.cpu#45.sched_goidle
     36857 ± 44%    +437.7%     198166 ±  7%  sched_debug.cpu#45.ttwu_count
     60443 ± 35%    +593.8%     419351 ± 19%  sched_debug.cpu#46.nr_switches
     -7.00 ±-69%     +189.3%     -20.25 ±-19%  sched_debug.cpu#46.nr_uninterruptible
     61479 ± 34%    +584.3%     420684 ± 19%  sched_debug.cpu#46.sched_count
     25003 ± 54%    +713.4%     203377 ± 20%  sched_debug.cpu#46.sched_goidle
     29356 ± 34%    +610.5%     208576 ± 18%  sched_debug.cpu#46.ttwu_count
     67388 ± 68%    +462.2%     378880 ±  8%  sched_debug.cpu#47.nr_switches
     68470 ± 67%    +455.1%     380107 ±  8%  sched_debug.cpu#47.sched_count
     28477 ± 67%    +544.6%     183561 ±  8%  sched_debug.cpu#47.sched_goidle
     32408 ± 66%    +491.2%     191585 ±  8%  sched_debug.cpu#47.ttwu_count
    454855 ± 20%     +58.1%     719171 ± 14%  sched_debug.cpu#6.avg_idle
    465638 ± 21%     +65.1%     768552 ±  3%  sched_debug.cpu#8.avg_idle
     16.75 ±  4%     +14.9%      19.25 ±  4%  sched_debug.cpu#9.cpu_load[0]

ivb43: Ivytown Ivy Bridge-EP
Memory: 64G

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Thanks,
Ying Huang

View attachment "job.yaml" of type "text/plain" (3358 bytes)

View attachment "reproduce" of type "text/plain" (4201 bytes)
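For readers post-processing reports like this one: each comparison record follows the layout `old ±stddev% change% new ±stddev% metric`. The sketch below is a hypothetical helper for pulling the numeric fields out of one record; it is not part of the lkp-tests tooling, and it only handles numeric `change%` fields (records with `+Inf%` are skipped).

```python
import re

# One LKP comparison record looks like:
#   21438292 ± 0% +97.3% 42301163 ± 3% vm-scalability.throughput
# Columns: baseline value, its stddev%, relative change%, patched value,
# its stddev%, then the dotted metric name.
RECORD = re.compile(
    r"(?P<old>[-+0-9.e]+)\s+±\s*(?P<old_sd>-?\d+)%\s+"
    r"(?P<change>[-+][0-9.]+)%\s+"
    r"(?P<new>[-+0-9.e]+)\s+±\s*(?P<new_sd>-?\d+)%\s+"
    r"(?P<metric>\S+)"
)

def parse_record(text):
    """Return the fields of the first comparison record found in text,
    or None if no numeric record matches."""
    m = RECORD.search(text)
    if not m:
        return None
    d = m.groupdict()
    return {
        "old": float(d["old"]),
        "new": float(d["new"]),
        "change_pct": float(d["change"]),
        "metric": d["metric"],
    }

if __name__ == "__main__":
    sample = "21438292 ± 0% +97.3% 42301163 ± 3% vm-scalability.throughput"
    print(parse_record(sample))
```

Records whose baseline is `0.00 ± -1%` report `+Inf%` and would need special-casing before the change field is treated as a number.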