Message-ID: <CAOQ4uxhkFq3uy5685ksXK+ugTYA61_bKQ42g+xkCy4YdNqX1Lw@mail.gmail.com>
Date: Thu, 5 Jul 2018 09:01:01 +0300
From: Amir Goldstein <amir73il@...il.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Al Viro <viro@...iv.linux.org.uk>, Jan Kara <jack@...e.cz>,
kernel test robot <xiaolong.ye@...el.com>, LKP <lkp@...org>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [lkp-robot] [fs] 5c6de586e8: vm-scalability.throughput +12.4% improvement (from reorganizing struct inode?)
On Mon, Jul 2, 2018 at 9:27 AM, Amir Goldstein <amir73il@...il.com> wrote:
> Linus,
>
> This may be a test fluctuation or as a result of moving
> i_blkbits closer to i_bytes and i_lock.
>
> In any case, ping for:
> https://marc.info/?l=linux-fsdevel&m=152882624707975&w=2
>
Linus,
Per your request, I will re-post the original patch with a link to this
discussion (which has now been made public).
Thanks,
Amir.
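
For anyone reading along without the patch in front of them, the mechanism
is ordinary struct packing. The sketch below is illustrative only -- made-up
field subset, not the actual struct inode diff -- but it shows how moving a
4-byte field next to other sub-pointer-sized members can drop 8 bytes of
alignment padding on x86_64, and as a side effect co-locate the moved field
with the fields it is used together with:

	/*
	 * Illustrative sketch, not the real layout. On x86_64 the
	 * pointer forces 8-byte alignment, so the 4-byte field in
	 * front of it costs 4 bytes of padding, and the short at
	 * the end costs 6 more bytes of tail padding.
	 */
	struct before {
		unsigned int   i_blkbits;	/* 4 bytes + 4 padding */
		void          *i_mapping;	/* 8 bytes */
		unsigned short i_bytes;		/* 2 bytes + 6 tail padding */
	};					/* sizeof == 24 */

	/* Moving i_blkbits behind the pointer packs the small fields. */
	struct after {
		void          *i_mapping;	/* 8 bytes */
		unsigned int   i_blkbits;	/* 4 bytes */
		unsigned short i_bytes;		/* 2 bytes + 2 tail padding */
	};					/* sizeof == 16 */

If the real layout behaves similarly, packing i_blkbits next to i_bytes and
i_lock also lands them on the same cache line, which is one plausible source
of the reported throughput delta, beyond the 8 bytes saved per inode.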
>
> ---------- Forwarded message ----------
> From: kernel test robot <xiaolong.ye@...el.com>
> Date: Mon, Jul 2, 2018 at 8:14 AM
> Subject: [lkp-robot] [fs] 5c6de586e8: vm-scalability.throughput +12.4% improvement
> To: Amir Goldstein <amir73il@...il.com>
> Cc: lkp@...org
>
>
>
> Greeting,
>
> FYI, we noticed a +12.4% improvement of vm-scalability.throughput due to commit:
>
>
> commit: 5c6de586e899a4a80a0ffa26468639f43dee1009 ("[PATCH] fs: shave 8 bytes off of struct inode")
> url: https://github.com/0day-ci/linux/commits/Amir-Goldstein/fs-shave-8-bytes-off-of-struct-inode/20180612-192311
>
>
> in testcase: vm-scalability
> on test machine: 56 threads Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory
> with the following parameters:
>
> runtime: 300s
> test: small-allocs
> cpufreq_governor: performance
>
> test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
> test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
>
>
>
> Details are as below:
> -------------------------------------------------------------------------------------------------->
>
>
> To reproduce:
>
> git clone https://github.com/intel/lkp-tests.git
> cd lkp-tests
> bin/lkp install job.yaml # job file is attached in this email
> bin/lkp run job.yaml
>
> =========================================================================================
> compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
> gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2018-04-03.cgz/300s/lkp-hsw-ep5/small-allocs/vm-scalability
>
> commit:
> 8efcf34a26 (" ARM: SoC: late updates")
> 5c6de586e8 ("fs: shave 8 bytes off of struct inode")
>
> 8efcf34a263965e4 5c6de586e899a4a80a0ffa2646
> ---------------- --------------------------
> %stddev %change %stddev
> \ | \
> 19335952 +12.4% 21729332 vm-scalability.throughput
> 693688 +11.9% 775935 vm-scalability.median
> 0.56 ± 55% -43.6% 0.32 ± 62% vm-scalability.stddev
> 288.16 -7.0% 267.96 vm-scalability.time.elapsed_time
> 288.16 -7.0% 267.96 vm-scalability.time.elapsed_time.max
> 48921 ± 6% -3.3% 47314 ± 5% vm-scalability.time.involuntary_context_switches
> 3777 -4.4% 3610 vm-scalability.time.maximum_resident_set_size
> 1.074e+09 +0.0% 1.074e+09 vm-scalability.time.minor_page_faults
> 4096 +0.0% 4096 vm-scalability.time.page_size
> 2672 +0.6% 2689 vm-scalability.time.percent_of_cpu_this_job_got
> 5457 -9.4% 4942 vm-scalability.time.system_time
> 2244 +0.9% 2263 vm-scalability.time.user_time
> 5529533 ± 6% -65.2% 1923014 ± 6% vm-scalability.time.voluntary_context_switches
> 4.832e+09 +0.0% 4.832e+09 vm-scalability.workload
> 93827 ± 3% -7.0% 87299 ± 3% interrupts.CAL:Function_call_interrupts
> 26.50 -2.0% 25.98 ± 3% boot-time.boot
> 16.69 -3.2% 16.16 ± 5% boot-time.dhcp
> 674.18 -1.1% 666.43 ± 2% boot-time.idle
> 17.61 -3.5% 17.00 ± 5% boot-time.kernel_boot
> 15034 ± 62% -30.0% 10528 ± 79% softirqs.NET_RX
> 453251 ± 9% -2.9% 440306 ± 14% softirqs.RCU
> 46795 -14.9% 39806 ± 2% softirqs.SCHED
> 3565160 ± 8% +6.1% 3784023 softirqs.TIMER
> 4.87 ± 4% -0.6 4.25 ± 3% mpstat.cpu.idle%
> 0.00 ± 13% -0.0 0.00 ± 14% mpstat.cpu.iowait%
> 0.00 ± 37% +0.0 0.00 ± 37% mpstat.cpu.soft%
> 67.35 -1.8 65.60 mpstat.cpu.sys%
> 27.78 +2.4 30.14 mpstat.cpu.usr%
> 1038 -0.5% 1033 vmstat.memory.buff
> 1117006 -0.3% 1113807 vmstat.memory.cache
> 2.463e+08 -0.2% 2.457e+08 vmstat.memory.free
> 26.00 +1.9% 26.50 vmstat.procs.r
> 42239 ± 6% -55.9% 18619 ± 5% vmstat.system.cs
> 31359 -0.7% 31132 vmstat.system.in
> 0.00 -100.0% 0.00 numa-numastat.node0.interleave_hit
> 2713006 -1.0% 2685108 numa-numastat.node0.local_node
> 2716657 -1.0% 2689682 numa-numastat.node0.numa_hit
> 3651 ± 36% +25.3% 4576 ± 34% numa-numastat.node0.other_node
> 0.00 -100.0% 0.00 numa-numastat.node1.interleave_hit
> 2713025 -0.5% 2699801 numa-numastat.node1.local_node
> 2714924 -0.5% 2700769 numa-numastat.node1.numa_hit
> 1900 ± 68% -49.1% 968.00 ±162% numa-numastat.node1.other_node
> 21859882 ± 6% -56.1% 9599175 ± 2% cpuidle.C1.time
> 5231991 ± 6% -65.3% 1814200 ± 8% cpuidle.C1.usage
> 620147 ± 9% -22.4% 481528 ± 10% cpuidle.C1E.time
> 7829 ± 6% -34.5% 5126 ± 17% cpuidle.C1E.usage
> 5343219 ± 5% -58.8% 2202020 ± 4% cpuidle.C3.time
> 22942 ± 5% -54.9% 10349 ± 4% cpuidle.C3.usage
> 3.345e+08 ± 3% -15.1% 2.839e+08 ± 3% cpuidle.C6.time
> 355754 ± 3% -16.3% 297683 ± 3% cpuidle.C6.usage
> 248800 ± 6% -74.5% 63413 ± 5% cpuidle.POLL.time
> 90897 ± 6% -76.4% 21409 ± 7% cpuidle.POLL.usage
> 2631 +0.5% 2644 turbostat.Avg_MHz
> 95.35 +0.5 95.88 turbostat.Busy%
> 2759 -0.1% 2757 turbostat.Bzy_MHz
> 5227940 ± 6% -65.4% 1809983 ± 8% turbostat.C1
> 0.27 ± 5% -0.1 0.13 ± 3% turbostat.C1%
> 7646 ± 5% -36.0% 4894 ± 17% turbostat.C1E
> 0.01 +0.0 0.01 turbostat.C1E%
> 22705 ± 5% -56.0% 9995 ± 3% turbostat.C3
> 0.07 ± 7% -0.0 0.03 turbostat.C3%
> 354625 ± 3% -16.3% 296732 ± 3% turbostat.C6
> 4.11 ± 3% -0.4 3.75 ± 2% turbostat.C6%
> 1.68 ± 3% -15.6% 1.42 ± 2% turbostat.CPU%c1
> 0.04 -62.5% 0.01 ± 33% turbostat.CPU%c3
> 2.93 ± 3% -8.1% 2.69 ± 2% turbostat.CPU%c6
> 64.50 -1.6% 63.50 ± 2% turbostat.CoreTmp
> 9095020 -7.6% 8400668 turbostat.IRQ
> 11.78 ± 5% -2.3 9.45 ± 6% turbostat.PKG_%
> 0.10 ± 27% -4.9% 0.10 ± 22% turbostat.Pkg%pc2
> 0.00 ±173% -100.0% 0.00 turbostat.Pkg%pc6
> 68.50 ± 2% +0.0% 68.50 turbostat.PkgTmp
> 230.97 +0.1% 231.10 turbostat.PkgWatt
> 22.45 -0.8% 22.27 turbostat.RAMWatt
> 10304 -8.6% 9415 ± 2% turbostat.SMI
> 2300 +0.0% 2300 turbostat.TSC_MHz
> 269189 -0.6% 267505 meminfo.Active
> 269118 -0.6% 267429 meminfo.Active(anon)
> 167369 -1.7% 164478 meminfo.AnonHugePages
> 245606 +0.2% 246183 meminfo.AnonPages
> 1042 -0.2% 1039 meminfo.Buffers
> 1066883 -0.2% 1064908 meminfo.Cached
> 203421 -0.0% 203421 meminfo.CmaFree
> 204800 +0.0% 204800 meminfo.CmaTotal
> 1.32e+08 -0.0% 1.32e+08 meminfo.CommitLimit
> 485388 ± 15% -5.3% 459546 ± 9% meminfo.Committed_AS
> 2.65e+08 +0.0% 2.65e+08 meminfo.DirectMap1G
> 5240434 ± 8% +0.3% 5257281 ± 16% meminfo.DirectMap2M
> 169088 ± 6% -10.0% 152241 ± 5% meminfo.DirectMap4k
> 2048 +0.0% 2048 meminfo.Hugepagesize
> 150295 +0.1% 150389 meminfo.Inactive
> 149164 +0.1% 149264 meminfo.Inactive(anon)
> 1130 -0.6% 1124 meminfo.Inactive(file)
> 7375 -0.4% 7345 meminfo.KernelStack
> 28041 -0.5% 27895 meminfo.Mapped
> 2.451e+08 -0.2% 2.445e+08 meminfo.MemAvailable
> 2.462e+08 -0.2% 2.457e+08 meminfo.MemFree
> 2.64e+08 -0.0% 2.64e+08 meminfo.MemTotal
> 1179 ± 57% -35.2% 764.50 ±100% meminfo.Mlocked
> 4507108 +3.4% 4660163 meminfo.PageTables
> 50580 -1.9% 49633 meminfo.SReclaimable
> 11620141 +3.3% 12007592 meminfo.SUnreclaim
> 172885 -1.4% 170543 meminfo.Shmem
> 11670722 +3.3% 12057225 meminfo.Slab
> 894024 +0.0% 894327 meminfo.Unevictable
> 3.436e+10 +0.0% 3.436e+10 meminfo.VmallocTotal
> 6.873e+12 -2.1% 6.727e+12 perf-stat.branch-instructions
> 0.08 ± 2% -0.0 0.08 perf-stat.branch-miss-rate%
> 5.269e+09 ± 2% -3.7% 5.072e+09 perf-stat.branch-misses
> 37.34 +1.1 38.44 perf-stat.cache-miss-rate%
> 8.136e+09 ± 2% -7.0% 7.568e+09 ± 3% perf-stat.cache-misses
> 2.179e+10 ± 2% -9.6% 1.969e+10 ± 3% perf-stat.cache-references
> 12287807 ± 6% -59.1% 5020001 ± 5% perf-stat.context-switches
> 0.87 -3.5% 0.84 perf-stat.cpi
> 2.116e+13 -6.5% 1.978e+13 perf-stat.cpu-cycles
> 24160 ± 5% -13.8% 20819 ± 3% perf-stat.cpu-migrations
> 0.16 ± 6% +0.0 0.16 ± 4% perf-stat.dTLB-load-miss-rate%
> 1.057e+10 ± 6% +0.4% 1.061e+10 ± 5% perf-stat.dTLB-load-misses
> 6.792e+12 -3.5% 6.557e+12 perf-stat.dTLB-loads
> 0.00 ± 9% +0.0 0.00 ± 20% perf-stat.dTLB-store-miss-rate%
> 22950816 ± 9% +6.2% 24373827 ± 20% perf-stat.dTLB-store-misses
> 9.067e+11 -0.7% 9.005e+11 perf-stat.dTLB-stores
> 95.10 +2.7 97.81 perf-stat.iTLB-load-miss-rate%
> 2.437e+09 +6.8% 2.604e+09 ± 4% perf-stat.iTLB-load-misses
> 1.257e+08 ± 8% -53.7% 58211601 ± 6% perf-stat.iTLB-loads
> 2.44e+13 -3.1% 2.364e+13 perf-stat.instructions
> 10011 -9.1% 9100 ± 4% perf-stat.instructions-per-iTLB-miss
> 1.15 +3.6% 1.20 perf-stat.ipc
> 1.074e+09 -0.0% 1.074e+09 perf-stat.minor-faults
> 64.40 ± 4% -4.2 60.22 ± 5% perf-stat.node-load-miss-rate%
> 4.039e+09 ± 2% -10.9% 3.599e+09 ± 3% perf-stat.node-load-misses
> 2.241e+09 ± 10% +6.7% 2.39e+09 ± 12% perf-stat.node-loads
> 49.24 -1.7 47.53 perf-stat.node-store-miss-rate%
> 9.005e+08 -18.3% 7.357e+08 ± 2% perf-stat.node-store-misses
> 9.282e+08 -12.5% 8.123e+08 ± 2% perf-stat.node-stores
> 1.074e+09 -0.0% 1.074e+09 perf-stat.page-faults
> 5049 -3.1% 4893 perf-stat.path-length
> 67282 -0.6% 66860 proc-vmstat.nr_active_anon
> 61404 +0.2% 61545 proc-vmstat.nr_anon_pages
> 6117769 -0.2% 6104302 proc-vmstat.nr_dirty_background_threshold
> 12250497 -0.2% 12223531 proc-vmstat.nr_dirty_threshold
> 266952 -0.2% 266464 proc-vmstat.nr_file_pages
> 50855 -0.0% 50855 proc-vmstat.nr_free_cma
> 61552505 -0.2% 61417644 proc-vmstat.nr_free_pages
> 37259 +0.1% 37286 proc-vmstat.nr_inactive_anon
> 282.25 -0.6% 280.50 proc-vmstat.nr_inactive_file
> 7375 -0.4% 7344 proc-vmstat.nr_kernel_stack
> 7124 -0.5% 7085 proc-vmstat.nr_mapped
> 295.00 ± 57% -35.3% 190.75 ±100% proc-vmstat.nr_mlock
> 1125531 +3.4% 1163908 proc-vmstat.nr_page_table_pages
> 43193 -1.3% 42613 proc-vmstat.nr_shmem
> 12644 -1.9% 12407 proc-vmstat.nr_slab_reclaimable
> 2901812 +3.3% 2998814 proc-vmstat.nr_slab_unreclaimable
> 223506 +0.0% 223581 proc-vmstat.nr_unevictable
> 67282 -0.6% 66860 proc-vmstat.nr_zone_active_anon
> 37259 +0.1% 37286 proc-vmstat.nr_zone_inactive_anon
> 282.25 -0.6% 280.50 proc-vmstat.nr_zone_inactive_file
> 223506 +0.0% 223581 proc-vmstat.nr_zone_unevictable
> 2685 ±104% +108.2% 5591 ± 84% proc-vmstat.numa_hint_faults
> 1757 ±140% +77.9% 3125 ± 87% proc-vmstat.numa_hint_faults_local
> 5456982 -0.7% 5417832 proc-vmstat.numa_hit
> 5451431 -0.7% 5412285 proc-vmstat.numa_local
> 5551 -0.1% 5547 proc-vmstat.numa_other
> 977.50 ± 40% +193.7% 2871 ±101% proc-vmstat.numa_pages_migrated
> 10519 ±113% +140.4% 25286 ± 97% proc-vmstat.numa_pte_updates
> 10726 ± 8% -3.8% 10315 ± 7% proc-vmstat.pgactivate
> 8191987 -0.5% 8150192 proc-vmstat.pgalloc_normal
> 1.074e+09 -0.0% 1.074e+09 proc-vmstat.pgfault
> 8143430 -2.7% 7926613 proc-vmstat.pgfree
> 977.50 ± 40% +193.7% 2871 ±101% proc-vmstat.pgmigrate_success
> 2155 -0.4% 2147 proc-vmstat.pgpgin
> 2049 -0.0% 2048 proc-vmstat.pgpgout
> 67375 -0.2% 67232 slabinfo.Acpi-Namespace.active_objs
> 67375 -0.2% 67232 slabinfo.Acpi-Namespace.num_objs
> 604.00 ± 19% -0.1% 603.50 ± 9% slabinfo.Acpi-ParseExt.active_objs
> 604.00 ± 19% -0.1% 603.50 ± 9% slabinfo.Acpi-ParseExt.num_objs
> 7972 ± 4% -0.3% 7949 ± 2% slabinfo.anon_vma.active_objs
> 7972 ± 4% -0.3% 7949 ± 2% slabinfo.anon_vma.num_objs
> 1697 ± 12% -10.0% 1528 ± 7% slabinfo.avtab_node.active_objs
> 1697 ± 12% -10.0% 1528 ± 7% slabinfo.avtab_node.num_objs
> 58071 -0.1% 58034 slabinfo.dentry.active_objs
> 1351 ± 25% -31.2% 930.50 ± 30% slabinfo.dmaengine-unmap-16.active_objs
> 1351 ± 25% -31.2% 930.50 ± 30% slabinfo.dmaengine-unmap-16.num_objs
> 1354 ± 6% -7.1% 1258 ± 9% slabinfo.eventpoll_pwq.active_objs
> 1354 ± 6% -7.1% 1258 ± 9% slabinfo.eventpoll_pwq.num_objs
> 8880 ± 5% +0.0% 8883 ± 6% slabinfo.filp.num_objs
> 2715 ± 7% -9.0% 2470 ± 5% slabinfo.kmalloc-1024.active_objs
> 2831 ± 7% -10.2% 2543 ± 3% slabinfo.kmalloc-1024.num_objs
> 12480 -2.7% 12140 ± 2% slabinfo.kmalloc-16.active_objs
> 12480 -2.7% 12140 ± 2% slabinfo.kmalloc-16.num_objs
> 39520 ± 3% +1.0% 39922 slabinfo.kmalloc-32.active_objs
> 37573 +0.1% 37616 slabinfo.kmalloc-64.active_objs
> 37590 +0.2% 37673 slabinfo.kmalloc-64.num_objs
> 17182 +2.1% 17550 ± 2% slabinfo.kmalloc-8.active_objs
> 17663 +2.2% 18045 ± 2% slabinfo.kmalloc-8.num_objs
> 4428 ± 7% -2.9% 4298 ± 4% slabinfo.kmalloc-96.active_objs
> 956.50 ± 10% +4.5% 999.50 ± 16% slabinfo.nsproxy.active_objs
> 956.50 ± 10% +4.5% 999.50 ± 16% slabinfo.nsproxy.num_objs
> 19094 ± 3% -3.3% 18463 ± 5% slabinfo.pid.active_objs
> 19094 ± 3% -3.0% 18523 ± 5% slabinfo.pid.num_objs
> 2088 ± 14% -21.1% 1648 ± 8% slabinfo.skbuff_head_cache.active_objs
> 2136 ± 16% -19.5% 1720 ± 7% slabinfo.skbuff_head_cache.num_objs
> 872.00 ± 16% -9.3% 791.00 ± 15% slabinfo.task_group.active_objs
> 872.00 ± 16% -9.3% 791.00 ± 15% slabinfo.task_group.num_objs
> 57677936 +3.4% 59655533 slabinfo.vm_area_struct.active_objs
> 1442050 +3.4% 1491482 slabinfo.vm_area_struct.active_slabs
> 57682039 +3.4% 59659325 slabinfo.vm_area_struct.num_objs
> 1442050 +3.4% 1491482 slabinfo.vm_area_struct.num_slabs
> 132527 ± 15% +1.9% 135006 ± 2% numa-meminfo.node0.Active
> 132475 ± 15% +1.9% 134985 ± 2% numa-meminfo.node0.Active(anon)
> 86736 ± 32% +6.5% 92413 ± 15% numa-meminfo.node0.AnonHugePages
> 124613 ± 19% +3.4% 128788 ± 4% numa-meminfo.node0.AnonPages
> 556225 ± 11% -5.1% 527584 ± 12% numa-meminfo.node0.FilePages
> 100973 ± 57% -27.7% 73011 ± 89% numa-meminfo.node0.Inactive
> 100128 ± 57% -27.4% 72686 ± 90% numa-meminfo.node0.Inactive(anon)
> 843.75 ± 57% -61.5% 324.75 ±115% numa-meminfo.node0.Inactive(file)
> 4058 ± 3% +3.5% 4200 ± 4% numa-meminfo.node0.KernelStack
> 12846 ± 26% +1.6% 13053 ± 26% numa-meminfo.node0.Mapped
> 1.228e+08 -0.1% 1.226e+08 numa-meminfo.node0.MemFree
> 1.32e+08 +0.0% 1.32e+08 numa-meminfo.node0.MemTotal
> 9191573 +1.7% 9347737 numa-meminfo.node0.MemUsed
> 2321138 +2.2% 2372650 numa-meminfo.node0.PageTables
> 24816 ± 13% -5.8% 23377 ± 16% numa-meminfo.node0.SReclaimable
> 5985628 +2.2% 6116406 numa-meminfo.node0.SUnreclaim
> 108046 ± 57% -26.9% 78955 ± 80% numa-meminfo.node0.Shmem
> 6010444 +2.2% 6139784 numa-meminfo.node0.Slab
> 447344 +0.2% 448404 numa-meminfo.node0.Unevictable
> 136675 ± 13% -3.0% 132521 ± 2% numa-meminfo.node1.Active
> 136655 ± 13% -3.1% 132467 ± 2% numa-meminfo.node1.Active(anon)
> 80581 ± 35% -10.5% 72102 ± 18% numa-meminfo.node1.AnonHugePages
> 120993 ± 20% -3.0% 117407 ± 4% numa-meminfo.node1.AnonPages
> 511901 ± 12% +5.2% 538396 ± 11% numa-meminfo.node1.FilePages
> 49526 ±116% +56.3% 77402 ± 84% numa-meminfo.node1.Inactive
> 49238 ±115% +55.6% 76603 ± 85% numa-meminfo.node1.Inactive(anon)
> 287.75 ±168% +177.6% 798.75 ± 47% numa-meminfo.node1.Inactive(file)
> 3316 ± 3% -5.2% 3143 ± 5% numa-meminfo.node1.KernelStack
> 15225 ± 23% -2.0% 14918 ± 22% numa-meminfo.node1.Mapped
> 1.234e+08 -0.3% 1.23e+08 numa-meminfo.node1.MemFree
> 1.321e+08 -0.0% 1.321e+08 numa-meminfo.node1.MemTotal
> 8664187 +4.4% 9041729 numa-meminfo.node1.MemUsed
> 2185511 +4.6% 2286356 numa-meminfo.node1.PageTables
> 25762 ± 12% +1.9% 26255 ± 15% numa-meminfo.node1.SReclaimable
> 5634774 +4.5% 5887602 numa-meminfo.node1.SUnreclaim
> 65037 ± 92% +40.9% 91621 ± 68% numa-meminfo.node1.Shmem
> 5660536 +4.5% 5913858 numa-meminfo.node1.Slab
> 446680 -0.2% 445922 numa-meminfo.node1.Unevictable
> 15553 ± 18% -14.1% 13366 ± 11% numa-vmstat.node0
> 33116 ± 15% +1.9% 33742 ± 2% numa-vmstat.node0.nr_active_anon
> 31157 ± 19% +3.3% 32196 ± 4% numa-vmstat.node0.nr_anon_pages
> 139001 ± 11% -5.1% 131864 ± 12% numa-vmstat.node0.nr_file_pages
> 30692447 -0.1% 30654328 numa-vmstat.node0.nr_free_pages
> 24983 ± 57% -27.4% 18142 ± 90% numa-vmstat.node0.nr_inactive_anon
> 210.25 ± 57% -61.6% 80.75 ±116% numa-vmstat.node0.nr_inactive_file
> 4058 ± 3% +3.5% 4199 ± 4% numa-vmstat.node0.nr_kernel_stack
> 3304 ± 26% +1.2% 3344 ± 25% numa-vmstat.node0.nr_mapped
> 139.00 ± 60% -20.3% 110.75 ±100% numa-vmstat.node0.nr_mlock
> 579931 +2.1% 592262 numa-vmstat.node0.nr_page_table_pages
> 26956 ± 57% -26.9% 19707 ± 80% numa-vmstat.node0.nr_shmem
> 6203 ± 13% -5.8% 5844 ± 16% numa-vmstat.node0.nr_slab_reclaimable
> 1495541 +2.2% 1527781 numa-vmstat.node0.nr_slab_unreclaimable
> 111835 +0.2% 112100 numa-vmstat.node0.nr_unevictable
> 33116 ± 15% +1.9% 33742 ± 2% numa-vmstat.node0.nr_zone_active_anon
> 24983 ± 57% -27.4% 18142 ± 90% numa-vmstat.node0.nr_zone_inactive_anon
> 210.25 ± 57% -61.6% 80.75 ±116% numa-vmstat.node0.nr_zone_inactive_file
> 111835 +0.2% 112100 numa-vmstat.node0.nr_zone_unevictable
> 1840693 ± 2% +1.6% 1869501 numa-vmstat.node0.numa_hit
> 144048 +0.2% 144385 numa-vmstat.node0.numa_interleave
> 1836656 ± 2% +1.5% 1864579 numa-vmstat.node0.numa_local
> 4036 ± 33% +21.9% 4921 ± 31% numa-vmstat.node0.numa_other
> 11577 ± 24% +17.8% 13635 ± 11% numa-vmstat.node1
> 34171 ± 13% -3.1% 33126 ± 2% numa-vmstat.node1.nr_active_anon
> 30247 ± 20% -3.0% 29352 ± 4% numa-vmstat.node1.nr_anon_pages
> 127979 ± 12% +5.2% 134601 ± 11% numa-vmstat.node1.nr_file_pages
> 50855 -0.0% 50855 numa-vmstat.node1.nr_free_cma
> 30858027 -0.3% 30763794 numa-vmstat.node1.nr_free_pages
> 12305 ±116% +55.6% 19145 ± 85% numa-vmstat.node1.nr_inactive_anon
> 71.75 ±168% +177.7% 199.25 ± 47% numa-vmstat.node1.nr_inactive_file
> 3315 ± 3% -5.2% 3144 ± 5% numa-vmstat.node1.nr_kernel_stack
> 3823 ± 23% -1.8% 3754 ± 22% numa-vmstat.node1.nr_mapped
> 155.00 ± 60% -48.2% 80.25 ±100% numa-vmstat.node1.nr_mlock
> 545973 +4.6% 571069 numa-vmstat.node1.nr_page_table_pages
> 16263 ± 92% +40.9% 22907 ± 68% numa-vmstat.node1.nr_shmem
> 6440 ± 12% +1.9% 6563 ± 15% numa-vmstat.node1.nr_slab_reclaimable
> 1407904 +4.5% 1471030 numa-vmstat.node1.nr_slab_unreclaimable
> 111669 -0.2% 111480 numa-vmstat.node1.nr_unevictable
> 34171 ± 13% -3.1% 33126 ± 2% numa-vmstat.node1.nr_zone_active_anon
> 12305 ±116% +55.6% 19145 ± 85% numa-vmstat.node1.nr_zone_inactive_anon
> 71.75 ±168% +177.7% 199.25 ± 47% numa-vmstat.node1.nr_zone_inactive_file
> 111669 -0.2% 111480 numa-vmstat.node1.nr_zone_unevictable
> 1846889 ± 2% +1.4% 1872108 numa-vmstat.node1.numa_hit
> 144151 -0.2% 143830 numa-vmstat.node1.numa_interleave
> 1699975 ± 2% +1.6% 1726462 numa-vmstat.node1.numa_local
> 146913 -0.9% 145645 numa-vmstat.node1.numa_other
> 0.00 +1.2e+12% 12083 ±100% sched_debug.cfs_rq:/.MIN_vruntime.avg
> 0.00 +3.4e+13% 338347 ±100% sched_debug.cfs_rq:/.MIN_vruntime.max
> 0.00 +0.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.min
> 0.00 +1.5e+28% 62789 ±100% sched_debug.cfs_rq:/.MIN_vruntime.stddev
> 118226 +0.4% 118681 sched_debug.cfs_rq:/.exec_clock.avg
> 119425 +0.3% 119724 sched_debug.cfs_rq:/.exec_clock.max
> 117183 +0.1% 117244 sched_debug.cfs_rq:/.exec_clock.min
> 395.73 ± 14% +11.0% 439.18 ± 23% sched_debug.cfs_rq:/.exec_clock.stddev
> 32398 +14.1% 36980 ± 9% sched_debug.cfs_rq:/.load.avg
> 73141 ± 5% +128.0% 166780 ± 57% sched_debug.cfs_rq:/.load.max
> 17867 ± 19% +30.4% 23301 ± 13% sched_debug.cfs_rq:/.load.min
> 11142 ± 3% +146.4% 27458 ± 62% sched_debug.cfs_rq:/.load.stddev
> 59.52 ± 2% -2.3% 58.13 ± 6% sched_debug.cfs_rq:/.load_avg.avg
> 305.15 ± 10% -8.8% 278.35 ± 3% sched_debug.cfs_rq:/.load_avg.max
> 27.20 ± 6% +8.6% 29.55 sched_debug.cfs_rq:/.load_avg.min
> 71.12 ± 9% -8.1% 65.38 ± 5% sched_debug.cfs_rq:/.load_avg.stddev
> 0.00 +1.2e+12% 12083 ±100% sched_debug.cfs_rq:/.max_vruntime.avg
> 0.00 +3.4e+13% 338347 ±100% sched_debug.cfs_rq:/.max_vruntime.max
> 0.00 +0.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.min
> 0.00 +1.5e+28% 62789 ±100% sched_debug.cfs_rq:/.max_vruntime.stddev
> 3364874 +0.8% 3391671 sched_debug.cfs_rq:/.min_vruntime.avg
> 3399316 +0.7% 3423051 sched_debug.cfs_rq:/.min_vruntime.max
> 3303899 +0.9% 3335083 sched_debug.cfs_rq:/.min_vruntime.min
> 20061 ± 14% -2.5% 19552 ± 16% sched_debug.cfs_rq:/.min_vruntime.stddev
> 0.87 +3.1% 0.89 ± 2% sched_debug.cfs_rq:/.nr_running.avg
> 1.00 +5.0% 1.05 ± 8% sched_debug.cfs_rq:/.nr_running.max
> 0.50 ± 19% +20.0% 0.60 sched_debug.cfs_rq:/.nr_running.min
> 0.16 ± 14% -6.0% 0.15 ± 11% sched_debug.cfs_rq:/.nr_running.stddev
> 4.14 ± 6% -6.5% 3.87 ± 10% sched_debug.cfs_rq:/.nr_spread_over.avg
> 15.10 ± 7% +29.8% 19.60 ± 12% sched_debug.cfs_rq:/.nr_spread_over.max
> 1.50 ± 11% -16.7% 1.25 ± 20% sched_debug.cfs_rq:/.nr_spread_over.min
> 2.82 ± 8% +23.0% 3.47 ± 13% sched_debug.cfs_rq:/.nr_spread_over.stddev
> 7.31 -6.2% 6.86 ± 69% sched_debug.cfs_rq:/.removed.load_avg.avg
> 204.80 -28.2% 147.10 ± 57% sched_debug.cfs_rq:/.removed.load_avg.max
> 38.01 -20.3% 30.29 ± 60% sched_debug.cfs_rq:/.removed.load_avg.stddev
> 337.44 -6.0% 317.11 ± 69% sched_debug.cfs_rq:/.removed.runnable_sum.avg
> 9448 -27.8% 6819 ± 57% sched_debug.cfs_rq:/.removed.runnable_sum.max
> 1753 -20.2% 1399 ± 60% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
> 2.16 ± 56% -25.7% 1.60 ± 57% sched_debug.cfs_rq:/.removed.util_avg.avg
> 60.40 ± 56% -37.3% 37.90 ± 61% sched_debug.cfs_rq:/.removed.util_avg.max
> 11.21 ± 56% -33.8% 7.42 ± 58% sched_debug.cfs_rq:/.removed.util_avg.stddev
> 30.32 ± 2% +0.6% 30.52 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.avg
> 80.90 ± 14% -19.2% 65.35 ± 34% sched_debug.cfs_rq:/.runnable_load_avg.max
> 14.95 ± 24% +47.2% 22.00 ± 12% sched_debug.cfs_rq:/.runnable_load_avg.min
> 12.10 ± 19% -27.7% 8.74 ± 42% sched_debug.cfs_rq:/.runnable_load_avg.stddev
> 30853 +13.2% 34914 ± 10% sched_debug.cfs_rq:/.runnable_weight.avg
> 61794 +152.3% 155911 ± 64% sched_debug.cfs_rq:/.runnable_weight.max
> 17867 ± 19% +30.4% 23300 ± 13% sched_debug.cfs_rq:/.runnable_weight.min
> 8534 ± 7% +193.7% 25066 ± 72% sched_debug.cfs_rq:/.runnable_weight.stddev
> 45914 ± 45% -50.7% 22619 ± 62% sched_debug.cfs_rq:/.spread0.avg
> 80385 ± 26% -32.8% 53990 ± 34% sched_debug.cfs_rq:/.spread0.max
> -15013 +126.3% -33973 sched_debug.cfs_rq:/.spread0.min
> 20053 ± 14% -2.5% 19541 ± 16% sched_debug.cfs_rq:/.spread0.stddev
> 964.19 -0.2% 962.12 sched_debug.cfs_rq:/.util_avg.avg
> 1499 ± 14% -13.2% 1301 ± 4% sched_debug.cfs_rq:/.util_avg.max
> 510.75 ± 12% +35.6% 692.65 ± 13% sched_debug.cfs_rq:/.util_avg.min
> 177.84 ± 21% -34.4% 116.72 ± 23% sched_debug.cfs_rq:/.util_avg.stddev
> 768.04 +5.1% 807.40 sched_debug.cfs_rq:/.util_est_enqueued.avg
> 1192 ± 14% -22.5% 924.20 sched_debug.cfs_rq:/.util_est_enqueued.max
> 170.90 ± 99% +89.6% 324.05 ± 21% sched_debug.cfs_rq:/.util_est_enqueued.min
> 201.22 ± 17% -36.7% 127.44 ± 15% sched_debug.cfs_rq:/.util_est_enqueued.stddev
> 111567 ± 4% +7.6% 120067 ± 7% sched_debug.cpu.avg_idle.avg
> 549432 ± 16% -16.6% 458264 ± 5% sched_debug.cpu.avg_idle.max
> 4419 ± 79% +87.7% 8293 ± 31% sched_debug.cpu.avg_idle.min
> 123967 ± 13% -8.9% 112928 ± 15% sched_debug.cpu.avg_idle.stddev
> 147256 -0.4% 146724 sched_debug.cpu.clock.avg
> 147258 -0.4% 146728 sched_debug.cpu.clock.max
> 147252 -0.4% 146720 sched_debug.cpu.clock.min
> 1.70 ± 11% +39.5% 2.37 ± 29% sched_debug.cpu.clock.stddev
> 147256 -0.4% 146724 sched_debug.cpu.clock_task.avg
> 147258 -0.4% 146728 sched_debug.cpu.clock_task.max
> 147252 -0.4% 146720 sched_debug.cpu.clock_task.min
> 1.70 ± 11% +39.4% 2.37 ± 29% sched_debug.cpu.clock_task.stddev
> 30.85 +0.4% 30.97 ± 3% sched_debug.cpu.cpu_load[0].avg
> 84.15 ± 13% -13.8% 72.55 ± 21% sched_debug.cpu.cpu_load[0].max
> 17.35 ± 24% +26.5% 21.95 ± 24% sched_debug.cpu.cpu_load[0].min
> 12.75 ± 18% -19.9% 10.22 ± 22% sched_debug.cpu.cpu_load[0].stddev
> 30.88 +1.1% 31.22 ± 2% sched_debug.cpu.cpu_load[1].avg
> 77.35 ± 19% -9.8% 69.75 ± 18% sched_debug.cpu.cpu_load[1].max
> 18.60 ± 19% +31.2% 24.40 ± 10% sched_debug.cpu.cpu_load[1].min
> 11.10 ± 24% -20.5% 8.82 ± 25% sched_debug.cpu.cpu_load[1].stddev
> 31.13 +2.3% 31.84 ± 2% sched_debug.cpu.cpu_load[2].avg
> 71.40 ± 26% +0.4% 71.70 ± 20% sched_debug.cpu.cpu_load[2].max
> 19.45 ± 21% +32.9% 25.85 ± 5% sched_debug.cpu.cpu_load[2].min
> 9.61 ± 36% -8.3% 8.81 ± 27% sched_debug.cpu.cpu_load[2].stddev
> 31.79 +2.9% 32.71 ± 3% sched_debug.cpu.cpu_load[3].avg
> 76.75 ± 19% +8.1% 82.95 ± 37% sched_debug.cpu.cpu_load[3].max
> 20.25 ± 14% +33.3% 27.00 ± 3% sched_debug.cpu.cpu_load[3].min
> 9.88 ± 28% +6.7% 10.54 ± 52% sched_debug.cpu.cpu_load[3].stddev
> 32.95 +2.3% 33.71 ± 3% sched_debug.cpu.cpu_load[4].avg
> 107.65 ± 9% +3.9% 111.90 ± 33% sched_debug.cpu.cpu_load[4].max
> 20.35 ± 8% +30.5% 26.55 sched_debug.cpu.cpu_load[4].min
> 15.33 ± 12% +1.2% 15.51 ± 46% sched_debug.cpu.cpu_load[4].stddev
> 1244 +2.0% 1269 sched_debug.cpu.curr->pid.avg
> 4194 -0.7% 4165 sched_debug.cpu.curr->pid.max
> 655.90 ± 22% +16.7% 765.75 ± 3% sched_debug.cpu.curr->pid.min
> 656.54 -0.8% 651.46 sched_debug.cpu.curr->pid.stddev
> 32481 +13.8% 36973 ± 9% sched_debug.cpu.load.avg
> 73132 ± 5% +130.1% 168294 ± 56% sched_debug.cpu.load.max
> 17867 ± 19% +20.3% 21497 sched_debug.cpu.load.min
> 11194 ± 3% +148.6% 27826 ± 61% sched_debug.cpu.load.stddev
> 500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.avg
> 500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.max
> 500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.min
> 4294 -0.0% 4294 sched_debug.cpu.next_balance.avg
> 4294 -0.0% 4294 sched_debug.cpu.next_balance.max
> 4294 -0.0% 4294 sched_debug.cpu.next_balance.min
> 0.00 ± 4% -3.6% 0.00 ± 5% sched_debug.cpu.next_balance.stddev
> 126191 -0.0% 126167 sched_debug.cpu.nr_load_updates.avg
> 133303 -0.6% 132467 sched_debug.cpu.nr_load_updates.max
> 124404 +0.4% 124852 sched_debug.cpu.nr_load_updates.min
> 1826 ± 13% -8.5% 1672 ± 5% sched_debug.cpu.nr_load_updates.stddev
> 0.90 +2.2% 0.92 ± 2% sched_debug.cpu.nr_running.avg
> 1.80 ± 7% -5.6% 1.70 ± 5% sched_debug.cpu.nr_running.max
> 0.50 ± 19% +20.0% 0.60 sched_debug.cpu.nr_running.min
> 0.29 ± 7% -13.6% 0.25 ± 4% sched_debug.cpu.nr_running.stddev
> 204123 ± 8% -56.8% 88239 ± 5% sched_debug.cpu.nr_switches.avg
> 457439 ± 15% -60.4% 181234 ± 8% sched_debug.cpu.nr_switches.max
> 116531 ± 16% -62.8% 43365 ± 13% sched_debug.cpu.nr_switches.min
> 72910 ± 22% -50.5% 36095 ± 13% sched_debug.cpu.nr_switches.stddev
> 0.03 ± 19% -52.9% 0.01 ± 35% sched_debug.cpu.nr_uninterruptible.avg
> 16.50 ± 14% -11.5% 14.60 ± 16% sched_debug.cpu.nr_uninterruptible.max
> -16.90 -26.3% -12.45 sched_debug.cpu.nr_uninterruptible.min
> 7.97 ± 12% -19.4% 6.42 ± 15% sched_debug.cpu.nr_uninterruptible.stddev
> 210655 ± 8% -56.7% 91289 ± 5% sched_debug.cpu.sched_count.avg
> 467518 ± 15% -60.4% 185362 ± 9% sched_debug.cpu.sched_count.max
> 120764 ± 16% -62.5% 45233 ± 14% sched_debug.cpu.sched_count.min
> 74259 ± 23% -51.0% 36403 ± 14% sched_debug.cpu.sched_count.stddev
> 89621 ± 8% -63.5% 32750 ± 7% sched_debug.cpu.sched_goidle.avg
> 189668 ± 8% -67.5% 61630 ± 18% sched_debug.cpu.sched_goidle.max
> 54342 ± 16% -68.0% 17412 ± 15% sched_debug.cpu.sched_goidle.min
> 28685 ± 15% -58.7% 11834 ± 16% sched_debug.cpu.sched_goidle.stddev
> 109820 ± 8% -56.0% 48303 ± 5% sched_debug.cpu.ttwu_count.avg
> 144975 ± 13% -41.1% 85424 ± 7% sched_debug.cpu.ttwu_count.max
> 96409 ± 8% -61.1% 37542 ± 6% sched_debug.cpu.ttwu_count.min
> 12310 ± 20% -2.4% 12009 ± 14% sched_debug.cpu.ttwu_count.stddev
> 9749 ± 10% -5.0% 9257 ± 6% sched_debug.cpu.ttwu_local.avg
> 45094 ± 30% +4.8% 47270 ± 13% sched_debug.cpu.ttwu_local.max
> 1447 ± 6% +1.2% 1465 ± 7% sched_debug.cpu.ttwu_local.min
> 12231 ± 24% -6.1% 11487 ± 13% sched_debug.cpu.ttwu_local.stddev
> 147253 -0.4% 146720 sched_debug.cpu_clk
> 996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.avg
> 996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.max
> 996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.min
> 4.295e+09 -0.0% 4.295e+09 sched_debug.jiffies
> 147253 -0.4% 146720 sched_debug.ktime
> 950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.avg
> 950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.max
> 950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.min
> 0.00 ±146% -69.8% 0.00 ±100% sched_debug.rt_rq:/.rt_time.avg
> 0.04 ±146% -69.8% 0.01 ±100% sched_debug.rt_rq:/.rt_time.max
> 0.01 ±146% -69.8% 0.00 ±100% sched_debug.rt_rq:/.rt_time.stddev
> 147626 -0.3% 147114 sched_debug.sched_clk
> 1.00 +0.0% 1.00 sched_debug.sched_clock_stable()
> 4118331 +0.0% 4118331 sched_debug.sysctl_sched.sysctl_sched_features
> 24.00 +0.0% 24.00 sched_debug.sysctl_sched.sysctl_sched_latency
> 3.00 +0.0% 3.00 sched_debug.sysctl_sched.sysctl_sched_min_granularity
> 1.00 +0.0% 1.00 sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
> 4.00 +0.0% 4.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
> 68.63 -2.2 66.43 perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
> 73.63 -1.9 71.69 perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.vma_link.mmap_region.do_mmap
> 73.63 -1.9 71.69 perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link.mmap_region
> 73.99 -1.9 72.05 perf-profile.calltrace.cycles-pp.down_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
> 78.36 -1.7 76.67 perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
> 79.68 -1.5 78.19 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
> 81.34 -1.2 80.10 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 83.36 -1.2 82.15 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 81.59 -1.2 80.39 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 83.38 -1.2 82.18 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
> 82.00 -1.2 80.83 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 2.44 ± 5% -0.7 1.79 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
> 3.17 ± 4% -0.6 2.62 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
> 1.15 ± 2% -0.2 0.97 ± 2% perf-profile.calltrace.cycles-pp.up_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
> 1.27 ± 18% -0.0 1.24 ± 16% perf-profile.calltrace.cycles-pp.task_numa_work.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 1.27 ± 18% -0.0 1.24 ± 16% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 1.27 ± 18% -0.0 1.24 ± 16% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 0.55 ± 4% +0.0 0.58 ± 3% perf-profile.calltrace.cycles-pp.__rb_insert_augmented.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
> 0.94 ± 14% +0.1 1.02 ± 12% perf-profile.calltrace.cycles-pp.task_numa_work.task_work_run.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
> 0.94 ± 14% +0.1 1.02 ± 12% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
> 0.94 ± 14% +0.1 1.02 ± 12% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
> 6.67 +0.1 6.76 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
> 0.95 ± 14% +0.1 1.04 ± 11% perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
> 6.70 +0.1 6.79 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
> 6.75 +0.1 6.86 perf-profile.calltrace.cycles-pp.page_fault
> 0.61 ± 6% +0.1 0.75 ± 3% perf-profile.calltrace.cycles-pp.___perf_sw_event.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
> 1.09 ± 2% +0.1 1.23 ± 3% perf-profile.calltrace.cycles-pp.vmacache_find.find_vma.__do_page_fault.do_page_fault.page_fault
> 0.74 ± 4% +0.2 0.91 ± 3% perf-profile.calltrace.cycles-pp.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
> 1.35 +0.2 1.56 perf-profile.calltrace.cycles-pp.unmapped_area_topdown.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff
> 1.39 +0.2 1.61 perf-profile.calltrace.cycles-pp.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
> 1.56 +0.2 1.78 ± 2% perf-profile.calltrace.cycles-pp.find_vma.__do_page_fault.do_page_fault.page_fault
> 1.50 +0.2 1.72 perf-profile.calltrace.cycles-pp.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
> 2.71 +0.3 3.05 perf-profile.calltrace.cycles-pp.native_irq_return_iret
> 2.15 +0.4 2.50 ± 3% perf-profile.calltrace.cycles-pp.vma_interval_tree_insert.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
> 2.70 ± 5% +0.4 3.07 ± 3% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
> 3.83 +0.4 4.26 perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
> 0.00 +0.5 0.52 perf-profile.calltrace.cycles-pp.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
> 68.65 -2.2 66.45 perf-profile.children.cycles-pp.osq_lock
> 73.63 -1.9 71.69 perf-profile.children.cycles-pp.call_rwsem_down_write_failed
> 73.63 -1.9 71.69 perf-profile.children.cycles-pp.rwsem_down_write_failed
> 73.99 -1.9 72.05 perf-profile.children.cycles-pp.down_write
> 78.36 -1.7 76.68 perf-profile.children.cycles-pp.vma_link
> 79.70 -1.5 78.21 perf-profile.children.cycles-pp.mmap_region
> 81.35 -1.2 80.12 perf-profile.children.cycles-pp.do_mmap
> 81.61 -1.2 80.41 perf-profile.children.cycles-pp.vm_mmap_pgoff
> 83.48 -1.2 82.28 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
> 83.46 -1.2 82.26 perf-profile.children.cycles-pp.do_syscall_64
> 82.01 -1.2 80.85 perf-profile.children.cycles-pp.ksys_mmap_pgoff
> 2.51 ± 5% -0.6 1.87 perf-profile.children.cycles-pp.__handle_mm_fault
> 3.24 ± 4% -0.5 2.70 perf-profile.children.cycles-pp.handle_mm_fault
> 1.23 ± 2% -0.2 1.06 ± 2% perf-profile.children.cycles-pp.up_write
> 0.20 ± 12% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.do_idle
> 0.20 ± 12% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.secondary_startup_64
> 0.20 ± 12% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.cpu_startup_entry
> 0.18 ± 13% -0.1 0.09 ± 13% perf-profile.children.cycles-pp.start_secondary
> 0.26 ± 11% -0.1 0.18 ± 8% perf-profile.children.cycles-pp.rwsem_wake
> 0.26 ± 11% -0.1 0.18 ± 6% perf-profile.children.cycles-pp.call_rwsem_wake
> 0.07 ± 17% -0.1 0.00 perf-profile.children.cycles-pp.intel_idle
> 0.08 ± 14% -0.1 0.01 ±173% perf-profile.children.cycles-pp.cpuidle_enter_state
> 0.06 ± 13% -0.1 0.00 perf-profile.children.cycles-pp.schedule
> 0.06 ± 6% -0.1 0.00 perf-profile.children.cycles-pp.save_stack_trace_tsk
> 0.18 ± 9% -0.1 0.12 perf-profile.children.cycles-pp.wake_up_q
> 0.06 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.sched_ttwu_pending
> 0.06 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.__save_stack_trace
> 0.18 ± 8% -0.1 0.13 perf-profile.children.cycles-pp.try_to_wake_up
> 0.05 -0.1 0.00 perf-profile.children.cycles-pp.unwind_next_frame
> 0.11 ± 6% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.enqueue_task_fair
> 0.12 ± 9% -0.0 0.07 ± 10% perf-profile.children.cycles-pp.ttwu_do_activate
> 0.08 ± 5% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.__account_scheduler_latency
> 0.11 ± 10% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.enqueue_entity
> 0.10 ± 17% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.__schedule
> 0.36 ± 5% -0.0 0.32 ± 7% perf-profile.children.cycles-pp.osq_unlock
> 0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.serial8250_console_write
> 0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.uart_console_write
> 0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.wait_for_xmitr
> 0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.serial8250_console_putchar
> 0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.__softirqentry_text_start
> 0.08 ± 11% -0.0 0.05 ± 9% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> 0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.console_unlock
> 0.05 ± 9% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.update_load_avg
> 0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.irq_work_run_list
> 0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
> 0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.irq_exit
> 0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.process_one_work
> 0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.ktime_get
> 0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.__vma_link_file
> 0.37 ± 3% -0.0 0.36 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
> 0.24 ± 6% -0.0 0.23 ± 6% perf-profile.children.cycles-pp.__hrtimer_run_queues
> 0.37 ± 4% -0.0 0.36 ± 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
> 0.06 ± 14% -0.0 0.05 ± 9% perf-profile.children.cycles-pp.file_has_perm
> 0.31 ± 5% -0.0 0.31 ± 7% perf-profile.children.cycles-pp.hrtimer_interrupt
> 0.16 ± 10% -0.0 0.15 ± 7% perf-profile.children.cycles-pp.update_process_times
> 0.04 ± 57% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.write
> 0.06 ± 7% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.native_iret
> 0.62 ± 5% +0.0 0.62 ± 2% perf-profile.children.cycles-pp.__rb_insert_augmented
> 0.16 ± 10% +0.0 0.16 ± 9% perf-profile.children.cycles-pp.tick_sched_handle
> 0.01 ±173% +0.0 0.01 ±173% perf-profile.children.cycles-pp.ksys_write
> 0.01 ±173% +0.0 0.01 ±173% perf-profile.children.cycles-pp.worker_thread
> 0.18 ± 9% +0.0 0.18 ± 8% perf-profile.children.cycles-pp.tick_sched_timer
> 0.05 ± 8% +0.0 0.06 ± 9% perf-profile.children.cycles-pp._cond_resched
> 0.11 ± 4% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.scheduler_tick
> 0.08 ± 10% +0.0 0.09 ± 13% perf-profile.children.cycles-pp.mem_cgroup_from_task
> 0.08 ± 5% +0.0 0.09 ± 7% perf-profile.children.cycles-pp.task_tick_fair
> 0.06 +0.0 0.07 ± 7% perf-profile.children.cycles-pp.security_mmap_addr
> 0.07 ± 7% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.pmd_devmap_trans_unstable
> 0.09 ± 5% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.vma_gap_callbacks_rotate
> 0.01 ±173% +0.0 0.03 ±100% perf-profile.children.cycles-pp.ret_from_fork
> 0.01 ±173% +0.0 0.03 ±100% perf-profile.children.cycles-pp.kthread
> 0.06 ± 13% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.fput
> 0.07 ± 7% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.__slab_alloc
> 0.05 +0.0 0.06 ± 6% perf-profile.children.cycles-pp.prepend_path
> 0.05 +0.0 0.06 ± 6% perf-profile.children.cycles-pp.new_slab
> 0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
> 0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.vfs_write
> 0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.down_write_killable
> 0.06 ± 6% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.___slab_alloc
> 0.07 +0.0 0.08 ± 5% perf-profile.children.cycles-pp.perf_exclude_event
> 0.04 ± 58% +0.0 0.06 ± 9% perf-profile.children.cycles-pp.get_page_from_freelist
> 0.04 ± 58% +0.0 0.06 ± 7% perf-profile.children.cycles-pp.__alloc_pages_nodemask
> 0.03 ±100% +0.0 0.04 ± 58% perf-profile.children.cycles-pp.__pte_alloc
> 0.21 ± 3% +0.0 0.23 ± 6% perf-profile.children.cycles-pp.__entry_trampoline_start
> 0.21 ± 4% +0.0 0.23 ± 2% perf-profile.children.cycles-pp.vma_interval_tree_augment_rotate
> 0.11 ± 7% +0.0 0.14 ± 8% perf-profile.children.cycles-pp.selinux_mmap_file
> 0.04 ± 57% +0.0 0.06 perf-profile.children.cycles-pp.kfree
> 0.04 ± 57% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.avc_has_perm
> 0.17 ± 4% +0.0 0.19 ± 4% perf-profile.children.cycles-pp.d_path
> 0.00 +0.0 0.03 ±100% perf-profile.children.cycles-pp.pte_alloc_one
> 0.01 ±173% +0.0 0.04 ± 57% perf-profile.children.cycles-pp.perf_swevent_event
> 0.12 ± 10% +0.0 0.15 ± 7% perf-profile.children.cycles-pp.security_mmap_file
> 0.18 ± 3% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.up_read
> 0.14 ± 3% +0.0 0.17 ± 3% perf-profile.children.cycles-pp.__might_sleep
> 0.32 ± 3% +0.0 0.35 ± 4% perf-profile.children.cycles-pp.__fget
> 0.12 ± 3% +0.0 0.15 ± 2% perf-profile.children.cycles-pp.kmem_cache_alloc_trace
> 0.16 ± 4% +0.0 0.19 ± 5% perf-profile.children.cycles-pp.kmem_cache_alloc
> 0.21 ± 3% +0.0 0.25 ± 4% perf-profile.children.cycles-pp.down_read_trylock
> 0.25 ± 5% +0.0 0.29 ± 4% perf-profile.children.cycles-pp.__vma_link_rb
> 0.17 ± 6% +0.0 0.21 ± 7% perf-profile.children.cycles-pp.vma_compute_subtree_gap
> 2.22 ± 15% +0.0 2.26 ± 14% perf-profile.children.cycles-pp.exit_to_usermode_loop
> 0.21 ± 7% +0.0 0.26 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
> 0.00 +0.0 0.04 ± 58% perf-profile.children.cycles-pp.perf_iterate_sb
> 2.22 ± 15% +0.0 2.27 ± 14% perf-profile.children.cycles-pp.task_numa_work
> 0.16 ± 5% +0.0 0.20 ± 4% perf-profile.children.cycles-pp.___might_sleep
> 0.24 ± 16% +0.0 0.29 ± 13% perf-profile.children.cycles-pp.vma_policy_mof
> 2.21 ± 15% +0.0 2.26 ± 14% perf-profile.children.cycles-pp.task_work_run
> 0.37 ± 3% +0.0 0.42 ± 5% perf-profile.children.cycles-pp.sync_regs
> 0.08 ± 23% +0.0 0.13 ± 14% perf-profile.children.cycles-pp.get_task_policy
> 0.39 ± 2% +0.1 0.45 perf-profile.children.cycles-pp.syscall_return_via_sysret
> 0.46 +0.1 0.53 ± 2% perf-profile.children.cycles-pp.perf_event_mmap
> 0.97 ± 14% +0.1 1.05 ± 11% perf-profile.children.cycles-pp.prepare_exit_to_usermode
> 6.75 +0.1 6.85 perf-profile.children.cycles-pp.do_page_fault
> 6.76 +0.1 6.86 perf-profile.children.cycles-pp.page_fault
> 6.77 +0.1 6.87 perf-profile.children.cycles-pp.__do_page_fault
> 1.10 ± 2% +0.1 1.25 ± 3% perf-profile.children.cycles-pp.vmacache_find
> 0.63 ± 6% +0.2 0.78 ± 3% perf-profile.children.cycles-pp.___perf_sw_event
> 0.75 ± 4% +0.2 0.93 ± 3% perf-profile.children.cycles-pp.__perf_sw_event
> 1.35 +0.2 1.56 perf-profile.children.cycles-pp.unmapped_area_topdown
> 1.42 +0.2 1.63 perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
> 1.58 +0.2 1.81 ± 2% perf-profile.children.cycles-pp.find_vma
> 1.50 +0.2 1.73 perf-profile.children.cycles-pp.get_unmapped_area
> 2.72 +0.3 3.06 perf-profile.children.cycles-pp.native_irq_return_iret
> 2.15 +0.4 2.50 ± 3% perf-profile.children.cycles-pp.vma_interval_tree_insert
> 2.70 ± 5% +0.4 3.07 ± 3% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
> 3.83 +0.4 4.26 perf-profile.children.cycles-pp.rwsem_spin_on_owner
> 68.47 -2.2 66.29 perf-profile.self.cycles-pp.osq_lock
> 2.16 ± 7% -0.7 1.45 perf-profile.self.cycles-pp.__handle_mm_fault
> 0.74 ± 3% -0.1 0.64 ± 2% perf-profile.self.cycles-pp.rwsem_down_write_failed
> 0.97 -0.1 0.89 perf-profile.self.cycles-pp.up_write
> 0.07 ± 17% -0.1 0.00 perf-profile.self.cycles-pp.intel_idle
> 0.04 ± 58% -0.0 0.00 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
> 0.36 ± 5% -0.0 0.32 ± 7% perf-profile.self.cycles-pp.osq_unlock
> 2.00 ± 14% -0.0 1.99 ± 14% perf-profile.self.cycles-pp.task_numa_work
> 0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.__vma_link_file
> 0.62 ± 5% -0.0 0.61 ± 2% perf-profile.self.cycles-pp.__rb_insert_augmented
> 0.06 ± 7% -0.0 0.06 ± 9% perf-profile.self.cycles-pp.native_iret
> 0.01 ±173% +0.0 0.01 ±173% perf-profile.self.cycles-pp.ksys_mmap_pgoff
> 0.01 ±173% +0.0 0.01 ±173% perf-profile.self.cycles-pp._cond_resched
> 0.06 ± 9% +0.0 0.06 ± 14% perf-profile.self.cycles-pp.arch_get_unmapped_area_topdown
> 0.34 ± 5% +0.0 0.35 ± 3% perf-profile.self.cycles-pp.down_write
> 0.08 ± 5% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.perf_event_mmap
> 0.11 ± 7% +0.0 0.11 ± 3% perf-profile.self.cycles-pp.__vma_link_rb
> 0.08 ± 10% +0.0 0.09 ± 13% perf-profile.self.cycles-pp.mem_cgroup_from_task
> 0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.do_page_fault
> 0.06 ± 6% +0.0 0.07 ± 10% perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
> 0.07 ± 7% +0.0 0.07 ± 5% perf-profile.self.cycles-pp.vma_gap_callbacks_rotate
> 0.12 ± 8% +0.0 0.13 ± 6% perf-profile.self.cycles-pp.d_path
> 0.06 +0.0 0.07 ± 10% perf-profile.self.cycles-pp.kmem_cache_alloc
> 0.00 +0.0 0.01 ±173% perf-profile.self.cycles-pp.kmem_cache_alloc_trace
> 0.00 +0.0 0.01 ±173% perf-profile.self.cycles-pp.prepend_path
> 0.06 ± 11% +0.0 0.07 ± 5% perf-profile.self.cycles-pp.fput
> 0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.perf_exclude_event
> 0.21 ± 4% +0.0 0.22 perf-profile.self.cycles-pp.vma_interval_tree_augment_rotate
> 0.21 ± 3% +0.0 0.23 ± 6% perf-profile.self.cycles-pp.__entry_trampoline_start
> 0.04 ± 57% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.kfree
> 0.04 ± 57% +0.0 0.06 ± 11% perf-profile.self.cycles-pp.avc_has_perm
> 0.16 ± 13% +0.0 0.19 ± 13% perf-profile.self.cycles-pp.vma_policy_mof
> 0.01 ±173% +0.0 0.04 ± 57% perf-profile.self.cycles-pp.perf_swevent_event
> 0.14 ± 3% +0.0 0.16 ± 2% perf-profile.self.cycles-pp.__might_sleep
> 0.32 ± 3% +0.0 0.34 ± 5% perf-profile.self.cycles-pp.__fget
> 0.18 ± 3% +0.0 0.21 ± 3% perf-profile.self.cycles-pp.up_read
> 0.15 ± 7% +0.0 0.17 ± 2% perf-profile.self.cycles-pp.__perf_sw_event
> 0.14 ± 10% +0.0 0.18 ± 4% perf-profile.self.cycles-pp.do_mmap
> 0.21 ± 3% +0.0 0.25 ± 4% perf-profile.self.cycles-pp.down_read_trylock
> 0.17 ± 6% +0.0 0.21 ± 7% perf-profile.self.cycles-pp.vma_compute_subtree_gap
> 0.21 ± 7% +0.0 0.25 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
> 0.00 +0.0 0.04 ± 58% perf-profile.self.cycles-pp.perf_iterate_sb
> 0.16 ± 5% +0.0 0.20 ± 4% perf-profile.self.cycles-pp.___might_sleep
> 0.08 ± 23% +0.0 0.12 ± 14% perf-profile.self.cycles-pp.get_task_policy
> 0.37 ± 3% +0.0 0.42 ± 5% perf-profile.self.cycles-pp.sync_regs
> 0.00 +0.1 0.05 perf-profile.self.cycles-pp.do_syscall_64
> 0.39 ± 2% +0.1 0.45 perf-profile.self.cycles-pp.syscall_return_via_sysret
> 0.48 +0.1 0.55 perf-profile.self.cycles-pp.find_vma
> 0.69 ± 2% +0.1 0.77 ± 2% perf-profile.self.cycles-pp.mmap_region
> 0.70 ± 3% +0.1 0.81 ± 2% perf-profile.self.cycles-pp.handle_mm_fault
> 0.54 ± 7% +0.1 0.69 ± 3% perf-profile.self.cycles-pp.___perf_sw_event
> 1.10 ± 2% +0.1 1.25 ± 3% perf-profile.self.cycles-pp.vmacache_find
> 0.73 +0.1 0.88 ± 5% perf-profile.self.cycles-pp.__do_page_fault
> 1.35 +0.2 1.56 perf-profile.self.cycles-pp.unmapped_area_topdown
> 1.75 +0.3 2.03 perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
> 2.72 +0.3 3.06 perf-profile.self.cycles-pp.native_irq_return_iret
> 2.15 +0.3 2.49 ± 3% perf-profile.self.cycles-pp.vma_interval_tree_insert
> 3.82 +0.4 4.25 perf-profile.self.cycles-pp.rwsem_spin_on_owner
>
>
>
> vm-scalability.throughput
>
> 2.25e+07 +-+--------------------------------------------------------------+
> | O O |
> 2.2e+07 O-+O O OO O O OO O O O O O |
> 2.15e+07 +-+ O O O |
> | O O |
> 2.1e+07 +-+ |
> | |
> 2.05e+07 +-+ |
> | |
> 2e+07 +-+ +. +. .++.+ ++ +. + + +.+ |
> 1.95e+07 +-+ .+ : + + + + + .+ +.++.+ + .+ + : +.+ .|
> | ++ +: ++ + + +.+ + +.+ + + |
> 1.9e+07 +-+ + + + :+ : |
> | + + |
> 1.85e+07 +-+--------------------------------------------------------------+
>
>
> vm-scalability.median
>
> 800000 +-O----------------------------------------------------------------+
> O O O O O O O O O |
> 780000 +-+O O O O O O O |
> | O O O O |
> 760000 +-+ |
> | |
> 740000 +-+ |
> | |
> 720000 +-+ .+ .+ |
> | + +.+.++.+ +. + + .+. .+ .+.+ +.+ ++. .+ |
> 700000 +-+ + : : +. : ++ + + :+ +. + + : .+ + +.|
> | ++ :: + + + :.+ :+ + : |
> 680000 +-+ + + + + |
> | |
> 660000 +-+----------------------------------------------------------------+
>
>
> vm-scalability.time.voluntary_context_switches
>
> 8e+06 +-+-----------------------------------------------------------------+
> | +. +. |
> 7e+06 +-+ : + + +. + +. + + |
> | : :+ + : +. + + : :: .+ |
> 6e+06 +-++.+.+. + + + + : : : .+.++.+ +.+.++. .++.+. |
> |+ : + + +.+ +.+ + .|
> 5e+06 +-+ + |
> | |
> 4e+06 +-+ |
> | |
> 3e+06 +-O O O O OO O |
> O O O O O O |
> 2e+06 +-+ O O OO O OO O |
> | |
> 1e+06 +-+-----------------------------------------------------------------+
>
>
> [*] bisect-good sample
> [O] bisect-bad sample
>
>
>
> Disclaimer:
> Results have been estimated based on internal Intel analysis and are provided
> for informational purposes only. Any difference in system hardware or software
> design or configuration may affect actual performance.
>
>
> Thanks,
> Xiaolong