Message-ID: <20201104071613.GC15746@xsang-OptiPlex-9020>
Date: Wed, 4 Nov 2020 15:16:13 +0800
From: kernel test robot <oliver.sang@...el.com>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...nel.org>,
Hugh Dickins <hughd@...gle.com>,
Minchan Kim <minchan@...nel.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Matthew Wilcox <willy@...radead.org>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
lkp@...el.com, ying.huang@...el.com, feng.tang@...el.com,
zhengjun.xing@...el.com
Subject: [mm/vmscan] ccc5dc6734: vm-scalability.throughput 53.9% improvement
Greetings,
FYI, we noticed a 53.9% improvement of vm-scalability.throughput due to commit:
commit: ccc5dc67340c109e624e07e02790e9fbdec900d6 ("mm/vmscan: make active/inactive ratio as 1:1 for anon lru")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
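For context, before this commit the anon LRU was sized with the same square-root heuristic that the file LRU still uses: the active list may grow to roughly sqrt(10 * memory-in-GB) times the inactive list. The patch pins that ratio to 1:1 for anon pages. A rough Python sketch of the heuristic (function and parameter names are hypothetical, modeled on inactive_is_low() in mm/vmscan.c):

```python
import math

def inactive_ratio(total_pages, is_anon, page_shift=12):
    """Approximate the kernel's active:inactive sizing heuristic.

    The kernel deactivates pages when inactive * ratio < active,
    so `ratio` is the maximum allowed active:inactive proportion.
    """
    # Memory size in GB: total_pages << page_shift bytes, >> 30 for GB.
    gb = total_pages >> (30 - page_shift)
    if is_anon:
        # After ccc5dc6734: anon active:inactive is capped at 1:1.
        return 1
    # File LRU (and anon before this commit): sqrt(10 * GB), min 1.
    return math.isqrt(10 * gb) if gb else 1
```

On a 256G machine like the test box the old heuristic permits an anon active list about 50x the inactive list, which is roughly consistent with the large Active(anon)/Inactive(anon) shift in the meminfo numbers below.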
in testcase: vm-scalability
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory
with the following parameters:
runtime: 300
thp_enabled: never
thp_defrag: always
nr_task: 8
nr_pmem: 1
test: swap-w-rand-mt
bp_memmap: 96G!18G
cpufreq_governor: performance
ucode: 0x5002f01
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
bp_memmap/compiler/cpufreq_governor/kconfig/nr_pmem/nr_task/rootfs/runtime/tbox_group/test/testcase/thp_defrag/thp_enabled/ucode:
96G!18G/gcc-9/performance/x86_64-rhel-8.3/1/8/debian-10.4-x86_64-20200603.cgz/300/lkp-csl-2sp6/swap-w-rand-mt/vm-scalability/always/never/0x5002f01
commit:
8ca39e6874 ("mm/hugetlb: add mempolicy check in the reservation routine")
ccc5dc6734 ("mm/vmscan: make active/inactive ratio as 1:1 for anon lru")
8ca39e6874f812a3 ccc5dc67340c109e624e07e0279
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 -7% 2:4 perf-profile.children.cycles-pp.error_entry
0:4 -2% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
12454 -4.4% 11911 vm-scalability.median
18.89 ± 25% +562.0 580.87 ± 21% vm-scalability.stddev%
99173 +53.9% 152631 ± 17% vm-scalability.throughput
1394871 ± 2% -57.8% 588214 ± 4% vm-scalability.time.involuntary_context_switches
1.1e+08 -19.4% 88627621 ± 6% vm-scalability.time.major_page_faults
623.50 +2.5% 639.00 vm-scalability.time.percent_of_cpu_this_job_got
1865 -10.2% 1674 ± 5% vm-scalability.time.system_time
163.10 +149.4% 406.84 ± 20% vm-scalability.time.user_time
131644 ± 7% -77.6% 29502 ± 19% vm-scalability.time.voluntary_context_switches
30559593 +52.5% 46611337 ± 17% vm-scalability.workload
9.84 -9.0% 8.95 ± 3% iostat.cpu.system
0.56 +139.0% 1.34 ± 19% iostat.cpu.user
1.72 -0.2 1.56 mpstat.cpu.all.irq%
0.56 +0.8 1.34 ± 19% mpstat.cpu.all.usr%
3436 ± 7% -17.5% 2834 ± 6% slabinfo.fsnotify_mark_connector.active_objs
3436 ± 7% -17.5% 2834 ± 6% slabinfo.fsnotify_mark_connector.num_objs
1341896 -19.7% 1076894 ± 6% vmstat.swap.si
1449976 -18.3% 1185233 ± 6% vmstat.swap.so
13309 ± 3% -52.2% 6357 ± 4% vmstat.system.cs
1626769 -8.3% 1491165 ± 3% vmstat.system.in
1.442e+08 -41.9% 83705019 meminfo.Active
1.442e+08 -41.9% 83704685 meminfo.Active(anon)
4860414 +1226.9% 64490789 meminfo.Inactive
4860327 +1226.9% 64490699 meminfo.Inactive(anon)
7954 +12.1% 8918 ± 7% meminfo.Shmem
29150473 ± 12% -25.3% 21770631 ± 5% numa-numastat.node0.numa_foreign
88599411 ± 4% -14.2% 76027051 ± 7% numa-numastat.node1.local_node
88623554 ± 4% -14.2% 76054367 ± 7% numa-numastat.node1.numa_hit
29150473 ± 12% -25.3% 21770631 ± 5% numa-numastat.node1.numa_miss
29174621 ± 12% -25.3% 21797950 ± 5% numa-numastat.node1.other_node
5.77e+08 ±171% -99.6% 2203670 ± 14% cpuidle.C1.time
7436838 ±169% -99.4% 46871 ± 10% cpuidle.C1.usage
1.788e+10 ± 45% +55.6% 2.782e+10 cpuidle.C1E.time
36345242 ± 32% +56.0% 56688635 cpuidle.C1E.usage
9.509e+09 ± 90% -97.1% 2.802e+08 ± 98% cpuidle.C6.time
12999588 ± 90% -97.0% 389403 ± 90% cpuidle.C6.usage
276814 ± 72% -51.7% 133787 ± 2% cpuidle.POLL.time
64237 ± 21% -22.0% 50128 cpuidle.POLL.usage
26330586 -43.6% 14857692 numa-meminfo.node0.Active
26330447 -43.6% 14857498 numa-meminfo.node0.Active(anon)
1588021 +712.4% 12901597 numa-meminfo.node0.Inactive
1587949 +712.5% 12901569 numa-meminfo.node0.Inactive(anon)
51904 ± 3% +13.6% 58975 ± 3% numa-meminfo.node0.KReclaimable
51904 ± 3% +13.6% 58975 ± 3% numa-meminfo.node0.SReclaimable
1.179e+08 -41.7% 68768978 numa-meminfo.node1.Active
1.179e+08 -41.7% 68768837 numa-meminfo.node1.Active(anon)
3275017 +1473.6% 51536440 numa-meminfo.node1.Inactive
3275001 +1473.6% 51536378 numa-meminfo.node1.Inactive(anon)
54169 ± 4% -8.5% 49546 ± 3% numa-meminfo.node1.KReclaimable
13031 ± 2% +10.1% 14344 ± 7% numa-meminfo.node1.Mapped
54169 ± 4% -8.5% 49546 ± 3% numa-meminfo.node1.SReclaimable
92533 ± 13% -17.7% 76135 ± 15% numa-meminfo.node1.SUnreclaim
146703 ± 8% -14.3% 125681 ± 10% numa-meminfo.node1.Slab
6582337 -43.5% 3716448 numa-vmstat.node0.nr_active_anon
396933 +712.8% 3226178 numa-vmstat.node0.nr_inactive_anon
12980 ± 3% +13.6% 14748 ± 3% numa-vmstat.node0.nr_slab_reclaimable
6582207 -43.5% 3716223 numa-vmstat.node0.nr_zone_active_anon
397086 +712.5% 3226343 numa-vmstat.node0.nr_zone_inactive_anon
22939310 ± 11% -23.2% 17612170 ± 7% numa-vmstat.node0.numa_foreign
7.25 ± 59% -69.0% 2.25 ±148% numa-vmstat.node0.workingset_nodes
29477664 -41.7% 17199535 numa-vmstat.node1.nr_active_anon
818480 +1474.6% 12887554 numa-vmstat.node1.nr_inactive_anon
3288 +10.1% 3619 ± 7% numa-vmstat.node1.nr_mapped
13529 ± 4% -8.6% 12368 ± 3% numa-vmstat.node1.nr_slab_reclaimable
23132 ± 13% -17.7% 19033 ± 15% numa-vmstat.node1.nr_slab_unreclaimable
40747607 -22.3% 31660035 ± 8% numa-vmstat.node1.nr_vmscan_write
40747687 -22.3% 31660035 ± 8% numa-vmstat.node1.nr_written
29477662 -41.7% 17199534 numa-vmstat.node1.nr_zone_active_anon
818473 +1474.6% 12887548 numa-vmstat.node1.nr_zone_inactive_anon
22939681 ± 11% -23.2% 17612377 ± 7% numa-vmstat.node1.numa_miss
23083964 ± 11% -23.1% 17760503 ± 7% numa-vmstat.node1.numa_other
1572 ± 13% +82.0% 2862 ± 17% sched_debug.cfs_rq:/.exec_clock.min
84944 ± 9% +30.3% 110668 ± 10% sched_debug.cfs_rq:/.min_vruntime.avg
296431 ± 11% +28.1% 379678 ± 9% sched_debug.cfs_rq:/.min_vruntime.max
24132 ± 7% +53.6% 37058 ± 16% sched_debug.cfs_rq:/.min_vruntime.min
54931 ± 7% +23.3% 67729 ± 10% sched_debug.cfs_rq:/.min_vruntime.stddev
321.43 ± 2% +12.7% 362.24 ± 3% sched_debug.cfs_rq:/.runnable_avg.stddev
-102663 +94.3% -199520 sched_debug.cfs_rq:/.spread0.avg
-163484 +67.1% -273121 sched_debug.cfs_rq:/.spread0.min
54932 ± 7% +23.3% 67730 ± 10% sched_debug.cfs_rq:/.spread0.stddev
312.70 ± 2% +13.7% 355.45 ± 3% sched_debug.cfs_rq:/.util_avg.stddev
271.20 ± 3% +12.4% 304.75 ± 5% sched_debug.cfs_rq:/.util_est_enqueued.stddev
9762 ± 21% +504.6% 59027 ± 63% sched_debug.cpu.avg_idle.min
155355 ± 8% +9.7% 170390 ± 7% sched_debug.cpu.clock_task.avg
19564 ± 7% -38.0% 12127 ± 10% sched_debug.cpu.nr_switches.avg
70332 ± 5% -47.4% 36973 ± 15% sched_debug.cpu.nr_switches.max
12334 ± 6% -51.3% 6009 ± 10% sched_debug.cpu.nr_switches.stddev
49.82 ± 16% -36.0% 31.89 ± 15% sched_debug.cpu.nr_uninterruptible.max
17898 ± 8% -41.5% 10474 ± 12% sched_debug.cpu.sched_count.avg
66032 ± 7% -50.5% 32692 ± 15% sched_debug.cpu.sched_count.max
11873 ± 6% -54.2% 5441 ± 9% sched_debug.cpu.sched_count.stddev
1771 ± 7% -17.5% 1461 ± 10% sched_debug.cpu.sched_goidle.avg
8954 ± 8% -42.4% 5161 ± 12% sched_debug.cpu.ttwu_count.avg
33426 ± 16% -46.7% 17829 ± 16% sched_debug.cpu.ttwu_count.max
6026 ± 12% -50.7% 2972 ± 12% sched_debug.cpu.ttwu_count.stddev
4030 ± 10% -18.9% 3266 ± 9% sched_debug.cpu.ttwu_local.avg
14833 ± 11% -24.5% 11196 ± 11% sched_debug.cpu.ttwu_local.max
2711 ± 9% -33.7% 1797 ± 6% sched_debug.cpu.ttwu_local.stddev
859740 ± 3% -71.4% 245531 ± 2% proc-vmstat.allocstall_movable
6757 ± 28% -67.8% 2175 ± 12% proc-vmstat.allocstall_normal
1624 ± 30% -68.6% 510.75 ± 9% proc-vmstat.compact_fail
1628 ± 30% -68.5% 512.25 ± 10% proc-vmstat.compact_stall
36064684 -42.0% 20909042 proc-vmstat.nr_active_anon
279685 +8.2% 302731 ± 3% proc-vmstat.nr_dirty_background_threshold
560054 +8.2% 606205 ± 3% proc-vmstat.nr_dirty_threshold
2891898 +9.0% 3152225 ± 3% proc-vmstat.nr_free_pages
1215673 +1225.2% 16109913 proc-vmstat.nr_inactive_anon
6558 +5.0% 6886 ± 3% proc-vmstat.nr_mapped
1989 +12.2% 2232 ± 7% proc-vmstat.nr_shmem
59747040 -14.2% 51246547 ± 4% proc-vmstat.nr_vmscan_write
1.189e+08 -17.9% 97555379 ± 6% proc-vmstat.nr_written
36064534 -42.0% 20908793 proc-vmstat.nr_zone_active_anon
1215839 +1225.0% 16110093 proc-vmstat.nr_zone_inactive_anon
49601967 ± 8% -18.8% 40257022 ± 3% proc-vmstat.numa_foreign
49601967 ± 8% -18.8% 40257022 ± 3% proc-vmstat.numa_miss
49634612 ± 8% -18.8% 40289876 ± 3% proc-vmstat.numa_other
88657427 ± 5% +10.4% 97906332 ± 4% proc-vmstat.numa_pte_updates
1.198e+08 +78.3% 2.136e+08 proc-vmstat.pgactivate
1.869e+08 -9.6% 1.69e+08 ± 5% proc-vmstat.pgalloc_normal
1.3e+08 +86.5% 2.424e+08 proc-vmstat.pgdeactivate
2.195e+08 ± 2% -8.8% 2.003e+08 ± 3% proc-vmstat.pgfault
1.903e+08 -9.7% 1.719e+08 ± 5% proc-vmstat.pgfree
1.1e+08 -19.4% 88636625 ± 6% proc-vmstat.pgmajfault
1.3e+08 +86.5% 2.424e+08 proc-vmstat.pgrefill
2.317e+08 +31.8% 3.053e+08 proc-vmstat.pgscan_anon
1.295e+08 +41.3% 1.83e+08 ± 4% proc-vmstat.pgscan_direct
1.022e+08 ± 3% +19.7% 1.223e+08 ± 3% proc-vmstat.pgscan_kswapd
1.189e+08 -17.9% 97557027 ± 6% proc-vmstat.pgsteal_anon
74703022 -20.1% 59666129 ± 8% proc-vmstat.pgsteal_direct
44192949 ± 2% -14.3% 37891233 ± 2% proc-vmstat.pgsteal_kswapd
1.1e+08 -19.4% 88638566 ± 6% proc-vmstat.pswpin
1.189e+08 -17.9% 97556657 ± 6% proc-vmstat.pswpout
3319 ± 3% -49.9% 1663 ± 9% proc-vmstat.swap_ra
2616 ± 4% -48.2% 1355 ± 9% proc-vmstat.swap_ra_hit
292680 ± 48% -60.8% 114708 ± 17% softirqs.CPU12.RCU
167221 ± 11% -31.4% 114642 ± 12% softirqs.CPU13.RCU
182643 ± 19% -26.8% 133672 ± 14% softirqs.CPU14.RCU
169551 ± 23% -40.2% 101408 ± 20% softirqs.CPU15.RCU
198588 ± 16% -39.1% 120910 ± 18% softirqs.CPU18.RCU
165853 ± 12% -23.0% 127716 ± 22% softirqs.CPU19.RCU
257552 ± 43% -55.1% 115743 ± 18% softirqs.CPU20.RCU
218199 ± 24% -49.1% 111118 ± 15% softirqs.CPU23.RCU
794680 ± 6% -49.4% 401782 ± 3% softirqs.CPU24.RCU
726497 -62.0% 275804 ± 25% softirqs.CPU25.RCU
27794 ± 2% +16.9% 32484 ± 6% softirqs.CPU25.SCHED
579940 ± 11% -60.0% 231869 ± 24% softirqs.CPU26.RCU
475739 ± 23% -52.0% 228245 ± 19% softirqs.CPU27.RCU
455498 ± 24% -51.8% 219617 ± 19% softirqs.CPU28.RCU
463703 ± 40% -49.6% 233572 ± 12% softirqs.CPU29.RCU
365433 ± 24% -41.9% 212144 ± 5% softirqs.CPU30.RCU
287141 ± 4% -36.4% 182687 ± 15% softirqs.CPU31.RCU
422261 ± 13% -54.0% 194440 ± 13% softirqs.CPU33.RCU
388169 ± 30% -46.6% 207172 ± 26% softirqs.CPU35.RCU
363714 ± 37% -47.1% 192513 ± 17% softirqs.CPU36.RCU
286676 ± 20% -33.1% 191850 ± 22% softirqs.CPU37.RCU
261391 ± 11% -30.7% 181140 ± 10% softirqs.CPU40.RCU
171037 ± 16% -42.9% 97733 ± 3% softirqs.CPU48.RCU
194725 ± 55% -42.5% 112027 ± 6% softirqs.CPU55.RCU
145683 ± 34% -36.5% 92543 ± 14% softirqs.CPU56.RCU
131285 ± 11% -27.1% 95672 ± 18% softirqs.CPU62.RCU
144922 ± 9% -32.8% 97371 ± 12% softirqs.CPU67.RCU
171386 ± 30% -41.9% 99652 ± 13% softirqs.CPU69.RCU
220153 ± 32% -45.1% 120793 ± 8% softirqs.CPU7.RCU
37856 ± 4% -9.4% 34312 ± 6% softirqs.CPU73.SCHED
241210 ± 18% -27.1% 175722 ± 14% softirqs.CPU75.RCU
283836 ± 13% -36.5% 180327 ± 17% softirqs.CPU76.RCU
176565 ± 18% -28.5% 126207 ± 10% softirqs.CPU87.RCU
222392 ± 11% -37.4% 139231 ± 32% softirqs.CPU95.RCU
5571 ± 89% +149.9% 13923 ± 26% softirqs.NET_RX
21886361 -30.6% 15178412 ± 3% softirqs.RCU
16.81 ± 2% -9.5% 15.20 perf-stat.i.MPKI
2.364e+09 -4.5% 2.259e+09 ± 2% perf-stat.i.branch-instructions
1.10 ± 4% -0.2 0.94 ± 3% perf-stat.i.branch-miss-rate%
25206857 -17.8% 20727045 ± 5% perf-stat.i.branch-misses
49.78 ± 2% +9.8 59.58 ± 2% perf-stat.i.cache-miss-rate%
1.026e+08 +4.2% 1.069e+08 ± 2% perf-stat.i.cache-misses
2.07e+08 -13.3% 1.795e+08 ± 2% perf-stat.i.cache-references
13385 ± 3% -52.4% 6371 ± 4% perf-stat.i.context-switches
2.36 +3.7% 2.45 perf-stat.i.cpi
2.891e+10 -1.3% 2.854e+10 perf-stat.i.cpu-cycles
296.39 -3.5% 285.88 perf-stat.i.cycles-between-cache-misses
3.21e+09 -3.8% 3.088e+09 ± 2% perf-stat.i.dTLB-loads
0.28 +0.4 0.66 ± 27% perf-stat.i.dTLB-store-miss-rate%
5263103 ± 2% +129.1% 12055961 ± 27% perf-stat.i.dTLB-store-misses
1.844e+09 -6.6% 1.724e+09 ± 2% perf-stat.i.dTLB-stores
83.93 -2.3 81.64 perf-stat.i.iTLB-load-miss-rate%
13107620 -15.4% 11088884 ± 5% perf-stat.i.iTLB-load-misses
1.22e+10 -5.6% 1.151e+10 ± 2% perf-stat.i.instructions
0.43 -3.3% 0.41 perf-stat.i.ipc
338474 -19.8% 271504 ± 6% perf-stat.i.major-faults
0.30 -1.3% 0.30 perf-stat.i.metric.GHz
79.99 -4.6% 76.27 ± 2% perf-stat.i.metric.M/sec
61.55 ± 4% -10.1 51.46 ± 8% perf-stat.i.node-store-miss-rate%
7488013 ± 5% +86.5% 13964332 ± 18% perf-stat.i.node-stores
674510 ± 2% -9.0% 613952 ± 3% perf-stat.i.page-faults
16.98 -8.1% 15.60 perf-stat.overall.MPKI
1.07 -0.1 0.92 ± 4% perf-stat.overall.branch-miss-rate%
49.55 +10.0 59.57 ± 3% perf-stat.overall.cache-miss-rate%
2.37 +4.6% 2.48 perf-stat.overall.cpi
281.87 -5.2% 267.12 ± 2% perf-stat.overall.cycles-between-cache-misses
0.28 +0.4 0.70 ± 28% perf-stat.overall.dTLB-store-miss-rate%
86.85 -2.3 84.58 perf-stat.overall.iTLB-load-miss-rate%
930.71 +11.7% 1039 ± 4% perf-stat.overall.instructions-per-iTLB-miss
0.42 -4.4% 0.40 perf-stat.overall.ipc
61.56 ± 4% -13.2 48.37 ± 11% perf-stat.overall.node-store-miss-rate%
129743 -35.9% 83182 ± 18% perf-stat.overall.path-length
2.358e+09 -4.4% 2.254e+09 ± 2% perf-stat.ps.branch-instructions
25152668 -17.8% 20686928 ± 5% perf-stat.ps.branch-misses
1.023e+08 +4.3% 1.067e+08 ± 2% perf-stat.ps.cache-misses
2.065e+08 -13.2% 1.792e+08 ± 2% perf-stat.ps.cache-references
13346 ± 3% -52.4% 6356 ± 4% perf-stat.ps.context-switches
2.883e+10 -1.2% 2.847e+10 perf-stat.ps.cpu-cycles
3.201e+09 -3.7% 3.081e+09 ± 2% perf-stat.ps.dTLB-loads
5250537 ± 2% +129.1% 12028662 ± 27% perf-stat.ps.dTLB-store-misses
1.84e+09 -6.5% 1.72e+09 ± 2% perf-stat.ps.dTLB-stores
13070196 -15.3% 11066200 ± 5% perf-stat.ps.iTLB-load-misses
1.216e+10 -5.6% 1.148e+10 ± 2% perf-stat.ps.instructions
337516 -19.7% 271081 ± 6% perf-stat.ps.major-faults
7470262 ± 5% +86.6% 13937961 ± 18% perf-stat.ps.node-stores
673362 ± 2% -9.1% 612423 ± 3% perf-stat.ps.page-faults
3.965e+12 -5.3% 3.755e+12 ± 2% perf-stat.total.instructions
1279 ±100% -97.6% 31.00 ±173% interrupts.91:PCI-MSI.31981624-edge.i40e-eth0-TxRx-55
4.703e+08 -9.1% 4.276e+08 ± 3% interrupts.CAL:Function_call_interrupts
6802 ± 24% -74.2% 1756 ± 38% interrupts.CPU0.RES:Rescheduling_interrupts
3434 ± 11% -73.0% 928.75 ± 33% interrupts.CPU1.RES:Rescheduling_interrupts
2534685 ± 41% +39.0% 3523993 ± 35% interrupts.CPU11.CAL:Function_call_interrupts
4786 ± 57% -80.4% 937.50 ± 7% interrupts.CPU11.RES:Rescheduling_interrupts
4421 ± 35% -87.4% 556.25 ± 52% interrupts.CPU12.RES:Rescheduling_interrupts
759.50 ± 52% -71.3% 218.00 ± 82% interrupts.CPU13.NMI:Non-maskable_interrupts
759.50 ± 52% -71.3% 218.00 ± 82% interrupts.CPU13.PMI:Performance_monitoring_interrupts
2975 ± 22% -78.2% 647.50 ± 44% interrupts.CPU13.RES:Rescheduling_interrupts
2035 ± 97% -95.4% 92.75 ± 27% interrupts.CPU14.NMI:Non-maskable_interrupts
2035 ± 97% -95.4% 92.75 ± 27% interrupts.CPU14.PMI:Performance_monitoring_interrupts
3264 ± 35% -80.4% 639.25 ± 75% interrupts.CPU14.RES:Rescheduling_interrupts
3009187 ± 23% -45.7% 1633635 ± 49% interrupts.CPU15.CAL:Function_call_interrupts
2855 ± 37% -79.6% 583.25 ± 56% interrupts.CPU15.RES:Rescheduling_interrupts
6211065 ± 23% -50.7% 3060002 ± 49% interrupts.CPU15.TLB:TLB_shootdowns
1120 ± 49% -77.9% 247.25 ± 98% interrupts.CPU16.NMI:Non-maskable_interrupts
1120 ± 49% -77.9% 247.25 ± 98% interrupts.CPU16.PMI:Performance_monitoring_interrupts
2169 ± 35% -70.8% 633.75 ± 73% interrupts.CPU16.RES:Rescheduling_interrupts
3701 ± 51% -78.4% 800.25 ± 69% interrupts.CPU17.RES:Rescheduling_interrupts
3463 ± 25% -81.9% 627.25 ± 58% interrupts.CPU18.RES:Rescheduling_interrupts
2947951 ± 20% -44.4% 1639511 ± 18% interrupts.CPU19.CAL:Function_call_interrupts
2472 ± 25% -72.4% 682.00 ± 61% interrupts.CPU19.RES:Rescheduling_interrupts
6044560 ± 21% -49.3% 3066273 ± 18% interrupts.CPU19.TLB:TLB_shootdowns
4351 ± 46% -77.9% 961.75 ± 50% interrupts.CPU2.RES:Rescheduling_interrupts
5228 ± 66% -89.1% 570.50 ± 57% interrupts.CPU20.RES:Rescheduling_interrupts
2830 ± 20% -79.0% 594.75 ± 70% interrupts.CPU21.RES:Rescheduling_interrupts
2446 ± 42% -76.5% 575.25 ± 60% interrupts.CPU22.RES:Rescheduling_interrupts
3991 ± 36% -88.1% 476.50 ± 35% interrupts.CPU23.RES:Rescheduling_interrupts
26515116 ± 10% -27.1% 19342534 ± 18% interrupts.CPU24.CAL:Function_call_interrupts
37955 ± 13% -87.5% 4733 ± 13% interrupts.CPU24.RES:Rescheduling_interrupts
54737110 ± 11% -34.6% 35812005 ± 20% interrupts.CPU24.TLB:TLB_shootdowns
25510682 ± 4% -55.0% 11479772 ± 10% interrupts.CPU25.CAL:Function_call_interrupts
1689 ± 64% -78.9% 355.75 ± 71% interrupts.CPU25.NMI:Non-maskable_interrupts
1689 ± 64% -78.9% 355.75 ± 71% interrupts.CPU25.PMI:Performance_monitoring_interrupts
35118 ± 4% -91.6% 2967 ± 34% interrupts.CPU25.RES:Rescheduling_interrupts
51931209 ± 4% -59.7% 20942623 ± 12% interrupts.CPU25.TLB:TLB_shootdowns
17878063 ± 18% -43.7% 10070813 ± 33% interrupts.CPU26.CAL:Function_call_interrupts
29152 ± 17% -91.9% 2351 ± 37% interrupts.CPU26.RES:Rescheduling_interrupts
36462934 ± 19% -50.2% 18157645 ± 31% interrupts.CPU26.TLB:TLB_shootdowns
322.75 ± 53% +389.8% 1580 ± 46% interrupts.CPU27.NMI:Non-maskable_interrupts
322.75 ± 53% +389.8% 1580 ± 46% interrupts.CPU27.PMI:Performance_monitoring_interrupts
22670 ± 28% -89.7% 2342 ± 10% interrupts.CPU27.RES:Rescheduling_interrupts
29167807 ± 21% -50.1% 14548003 ± 43% interrupts.CPU27.TLB:TLB_shootdowns
12425859 ± 30% -50.7% 6124270 ± 43% interrupts.CPU28.CAL:Function_call_interrupts
19789 ± 26% -89.7% 2031 ± 11% interrupts.CPU28.RES:Rescheduling_interrupts
24985757 ± 30% -54.5% 11379975 ± 46% interrupts.CPU28.TLB:TLB_shootdowns
20102 ± 45% -89.5% 2118 ± 17% interrupts.CPU29.RES:Rescheduling_interrupts
3238 ± 40% -73.2% 869.25 ± 21% interrupts.CPU3.RES:Rescheduling_interrupts
13553 ± 25% -85.5% 1963 ± 16% interrupts.CPU30.RES:Rescheduling_interrupts
8202656 ± 13% -31.5% 5619979 ± 33% interrupts.CPU31.CAL:Function_call_interrupts
10441 ± 15% -85.4% 1523 ± 18% interrupts.CPU31.RES:Rescheduling_interrupts
16404018 ± 13% -37.1% 10318480 ± 34% interrupts.CPU31.TLB:TLB_shootdowns
11288 ± 52% -83.5% 1864 ± 12% interrupts.CPU32.RES:Rescheduling_interrupts
11373533 ± 11% -52.0% 5463388 ± 17% interrupts.CPU33.CAL:Function_call_interrupts
14534 ± 20% -87.7% 1789 ± 26% interrupts.CPU33.RES:Rescheduling_interrupts
22698579 ± 11% -56.8% 9800102 ± 17% interrupts.CPU33.TLB:TLB_shootdowns
10533 ± 46% -83.4% 1743 ± 45% interrupts.CPU34.RES:Rescheduling_interrupts
14600 ± 43% -85.8% 2078 ± 53% interrupts.CPU35.RES:Rescheduling_interrupts
17967950 ± 33% -36.9% 11332563 ± 34% interrupts.CPU35.TLB:TLB_shootdowns
12890 ± 42% -86.5% 1734 ± 19% interrupts.CPU36.RES:Rescheduling_interrupts
10343 ± 44% -84.9% 1563 ± 27% interrupts.CPU37.RES:Rescheduling_interrupts
9135 ± 44% -82.9% 1559 ± 18% interrupts.CPU38.RES:Rescheduling_interrupts
1394 ± 74% -82.0% 251.00 ± 70% interrupts.CPU39.NMI:Non-maskable_interrupts
1394 ± 74% -82.0% 251.00 ± 70% interrupts.CPU39.PMI:Performance_monitoring_interrupts
9665 ± 57% -87.4% 1222 ± 29% interrupts.CPU39.RES:Rescheduling_interrupts
5365150 ± 87% -65.7% 1842514 ± 31% interrupts.CPU4.CAL:Function_call_interrupts
4647 ± 75% -78.5% 998.75 ± 50% interrupts.CPU4.RES:Rescheduling_interrupts
11236367 ± 88% -69.3% 3446909 ± 32% interrupts.CPU4.TLB:TLB_shootdowns
8139 ± 21% -80.2% 1612 ± 20% interrupts.CPU40.RES:Rescheduling_interrupts
9404 ± 52% -81.2% 1766 ± 50% interrupts.CPU41.RES:Rescheduling_interrupts
7566 ± 47% -71.9% 2123 ± 46% interrupts.CPU42.RES:Rescheduling_interrupts
9885 ± 44% -79.4% 2035 ± 26% interrupts.CPU43.RES:Rescheduling_interrupts
6659 ± 39% -72.6% 1821 ± 33% interrupts.CPU44.RES:Rescheduling_interrupts
7066 ± 57% -78.5% 1519 ± 30% interrupts.CPU46.RES:Rescheduling_interrupts
2902 ± 29% -85.4% 425.00 ± 31% interrupts.CPU48.RES:Rescheduling_interrupts
2224 ± 60% -66.4% 748.00 ± 24% interrupts.CPU5.RES:Rescheduling_interrupts
2765 ± 44% -84.7% 422.25 ± 43% interrupts.CPU50.RES:Rescheduling_interrupts
409.50 ± 72% +454.7% 2271 ± 84% interrupts.CPU51.NMI:Non-maskable_interrupts
409.50 ± 72% +454.7% 2271 ± 84% interrupts.CPU51.PMI:Performance_monitoring_interrupts
3303 ± 33% -84.3% 517.75 ± 55% interrupts.CPU51.RES:Rescheduling_interrupts
2060 ± 29% -70.2% 613.25 ± 66% interrupts.CPU52.RES:Rescheduling_interrupts
5442901 ± 44% -50.2% 2711157 ± 41% interrupts.CPU52.TLB:TLB_shootdowns
660.25 ± 81% +187.5% 1898 ± 47% interrupts.CPU53.NMI:Non-maskable_interrupts
660.25 ± 81% +187.5% 1898 ± 47% interrupts.CPU53.PMI:Performance_monitoring_interrupts
1649 ± 48% -62.8% 613.50 ± 36% interrupts.CPU53.RES:Rescheduling_interrupts
1279 ±100% -97.6% 30.75 ±173% interrupts.CPU55.91:PCI-MSI.31981624-edge.i40e-eth0-TxRx-55
1508 ± 24% -73.3% 402.75 ± 46% interrupts.CPU55.RES:Rescheduling_interrupts
887.00 ± 69% -81.1% 167.50 ± 52% interrupts.CPU56.NMI:Non-maskable_interrupts
887.00 ± 69% -81.1% 167.50 ± 52% interrupts.CPU56.PMI:Performance_monitoring_interrupts
2222 ± 82% -85.6% 319.25 ± 40% interrupts.CPU56.RES:Rescheduling_interrupts
3004 ± 33% -79.4% 617.50 ± 51% interrupts.CPU57.RES:Rescheduling_interrupts
919724 ± 35% +189.6% 2663269 ± 61% interrupts.CPU59.CAL:Function_call_interrupts
1755075 ± 35% +173.0% 4791668 ± 63% interrupts.CPU59.TLB:TLB_shootdowns
4004 ± 51% -86.3% 548.25 ± 44% interrupts.CPU6.RES:Rescheduling_interrupts
1365 ± 12% -65.5% 471.25 ± 29% interrupts.CPU60.RES:Rescheduling_interrupts
1039 ± 76% -71.5% 296.00 ± 92% interrupts.CPU61.NMI:Non-maskable_interrupts
1039 ± 76% -71.5% 296.00 ± 92% interrupts.CPU61.PMI:Performance_monitoring_interrupts
1520 ± 23% -60.0% 608.25 ± 77% interrupts.CPU61.RES:Rescheduling_interrupts
1718 ± 65% -92.5% 129.50 ± 30% interrupts.CPU62.NMI:Non-maskable_interrupts
1718 ± 65% -92.5% 129.50 ± 30% interrupts.CPU62.PMI:Performance_monitoring_interrupts
1867 ± 44% -86.8% 245.75 ± 40% interrupts.CPU62.RES:Rescheduling_interrupts
1584 ± 11% -63.9% 572.50 ±102% interrupts.CPU64.RES:Rescheduling_interrupts
2065 ± 47% -79.1% 431.75 ± 36% interrupts.CPU65.RES:Rescheduling_interrupts
2210 ± 43% -75.3% 546.75 ± 51% interrupts.CPU66.RES:Rescheduling_interrupts
1964 ± 23% -81.0% 372.50 ± 47% interrupts.CPU67.RES:Rescheduling_interrupts
2528 ± 50% -87.2% 324.50 ± 39% interrupts.CPU69.RES:Rescheduling_interrupts
3910 ± 53% -82.8% 674.00 ± 36% interrupts.CPU7.RES:Rescheduling_interrupts
2207 ± 24% -74.8% 555.50 ± 63% interrupts.CPU70.RES:Rescheduling_interrupts
2035850 ± 19% +49.8% 3049921 ± 22% interrupts.CPU71.CAL:Function_call_interrupts
1595 ± 15% -68.3% 506.00 ± 23% interrupts.CPU71.RES:Rescheduling_interrupts
3828002 ± 18% +38.4% 5296277 ± 17% interrupts.CPU71.TLB:TLB_shootdowns
4123148 ± 27% +105.1% 8456105 ± 9% interrupts.CPU72.CAL:Function_call_interrupts
4751 ± 40% -52.2% 2269 ± 24% interrupts.CPU72.RES:Rescheduling_interrupts
7974544 ± 27% +87.9% 14986445 ± 8% interrupts.CPU72.TLB:TLB_shootdowns
5125 ± 64% -70.4% 1517 ± 14% interrupts.CPU73.RES:Rescheduling_interrupts
7759 ± 18% -75.7% 1886 ± 34% interrupts.CPU75.RES:Rescheduling_interrupts
7061412 ± 29% -48.0% 3672731 ± 42% interrupts.CPU76.CAL:Function_call_interrupts
10274 ± 24% -84.7% 1572 ± 33% interrupts.CPU76.RES:Rescheduling_interrupts
13660982 ± 29% -51.5% 6624464 ± 44% interrupts.CPU76.TLB:TLB_shootdowns
5500 ± 41% -71.7% 1558 ± 27% interrupts.CPU77.RES:Rescheduling_interrupts
4925 ± 83% -72.5% 1355 ± 28% interrupts.CPU79.RES:Rescheduling_interrupts
859.50 ± 55% -87.0% 112.00 ± 51% interrupts.CPU8.NMI:Non-maskable_interrupts
859.50 ± 55% -87.0% 112.00 ± 51% interrupts.CPU8.PMI:Performance_monitoring_interrupts
3101 ± 48% -76.1% 741.50 ± 43% interrupts.CPU8.RES:Rescheduling_interrupts
2251799 ± 65% +93.5% 4356723 ± 23% interrupts.CPU80.CAL:Function_call_interrupts
4283063 ± 67% +80.9% 7746230 ± 24% interrupts.CPU80.TLB:TLB_shootdowns
4997 ± 28% -73.9% 1304 ± 36% interrupts.CPU81.RES:Rescheduling_interrupts
6096230 ± 24% +64.4% 10022556 ± 19% interrupts.CPU82.CAL:Function_call_interrupts
6483 ± 30% -69.3% 1989 ± 53% interrupts.CPU82.RES:Rescheduling_interrupts
3177452 ± 33% +79.4% 5700410 ± 13% interrupts.CPU83.CAL:Function_call_interrupts
5925420 ± 33% +69.9% 10065548 ± 14% interrupts.CPU83.TLB:TLB_shootdowns
4484 ± 68% -77.8% 995.75 ± 41% interrupts.CPU85.RES:Rescheduling_interrupts
881.00 ± 37% -60.9% 344.75 ± 92% interrupts.CPU87.NMI:Non-maskable_interrupts
881.00 ± 37% -60.9% 344.75 ± 92% interrupts.CPU87.PMI:Performance_monitoring_interrupts
4258 ± 45% -80.3% 838.75 ± 12% interrupts.CPU87.RES:Rescheduling_interrupts
3215 ± 43% -82.7% 556.25 ± 14% interrupts.CPU9.RES:Rescheduling_interrupts
581.50 ± 54% -57.2% 249.00 ± 98% interrupts.CPU91.NMI:Non-maskable_interrupts
581.50 ± 54% -57.2% 249.00 ± 98% interrupts.CPU91.PMI:Performance_monitoring_interrupts
1797 ± 36% -50.7% 886.75 ± 24% interrupts.CPU93.RES:Rescheduling_interrupts
5200 ± 34% -82.1% 929.75 ± 62% interrupts.CPU95.RES:Rescheduling_interrupts
588629 ± 5% -81.0% 112120 ± 15% interrupts.RES:Rescheduling_interrupts
9.318e+08 -17.4% 7.7e+08 ± 6% interrupts.TLB:TLB_shootdowns
20.58 ± 3% -4.0 16.60 ± 8% perf-profile.calltrace.cycles-pp.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
14.56 ± 5% -2.5 12.04 ± 8% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
14.52 ± 5% -2.5 12.00 ± 8% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_swap_page.__handle_mm_fault.handle_mm_fault
14.36 ± 5% -2.5 11.88 ± 8% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.do_swap_page.__handle_mm_fault
14.03 ± 5% -2.4 11.67 ± 7% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_vma.do_swap_page
1.39 ± 5% -1.1 0.30 ±100% perf-profile.calltrace.cycles-pp.shrink_slab.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
1.84 ± 13% -0.8 1.04 ± 28% perf-profile.calltrace.cycles-pp.page_referenced_one.rmap_walk_anon.page_referenced.shrink_page_list.shrink_inactive_list
1.64 ± 18% -0.8 0.87 ± 35% perf-profile.calltrace.cycles-pp.end_page_writeback.pmem_rw_page.bdev_write_page.__swap_writepage.pageout
1.67 ± 13% -0.5 1.14 ± 27% perf-profile.calltrace.cycles-pp.mem_cgroup_swapout.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_lruvec
3.31 -0.5 2.84 ± 11% perf-profile.calltrace.cycles-pp.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec.shrink_node
3.19 -0.5 2.73 ± 10% perf-profile.calltrace.cycles-pp.rmap_walk_anon.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_lruvec
2.76 ± 2% -0.4 2.33 ± 11% perf-profile.calltrace.cycles-pp.try_to_unmap_one.rmap_walk_anon.try_to_unmap.shrink_page_list.shrink_inactive_list
1.55 ± 2% -0.3 1.21 ± 10% perf-profile.calltrace.cycles-pp.swap_readpage.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
1.46 ± 2% -0.3 1.14 ± 10% perf-profile.calltrace.cycles-pp.bdev_read_page.swap_readpage.do_swap_page.__handle_mm_fault.handle_mm_fault
1.44 ± 2% -0.3 1.13 ± 10% perf-profile.calltrace.cycles-pp.pmem_rw_page.bdev_read_page.swap_readpage.do_swap_page.__handle_mm_fault
1.36 ± 3% -0.3 1.07 ± 10% perf-profile.calltrace.cycles-pp.pmem_do_read.pmem_rw_page.bdev_read_page.swap_readpage.do_swap_page
1.29 ± 3% -0.3 1.01 ± 10% perf-profile.calltrace.cycles-pp.__memcpy_mcsafe.pmem_do_read.pmem_rw_page.bdev_read_page.swap_readpage
1.37 ± 3% -0.3 1.10 ± 10% perf-profile.calltrace.cycles-pp.mem_cgroup_charge.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.95 -0.2 0.75 ± 13% perf-profile.calltrace.cycles-pp.mem_cgroup_charge.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.80 ± 5% -0.2 0.62 ± 10% perf-profile.calltrace.cycles-pp.mem_cgroup_uncharge_swap.mem_cgroup_charge.do_swap_page.__handle_mm_fault.handle_mm_fault
0.76 ± 5% -0.1 0.62 ± 7% perf-profile.calltrace.cycles-pp.__swap_entry_free.do_swap_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.00 +0.6 0.62 ± 19% perf-profile.calltrace.cycles-pp.page_referenced.shrink_active_list.shrink_lruvec.shrink_node.balance_pgdat
1.62 ± 4% +0.7 2.29 ± 11% perf-profile.calltrace.cycles-pp.page_referenced.shrink_active_list.shrink_lruvec.shrink_node.do_try_to_free_pages
2.72 ± 4% +0.8 3.55 ± 12% perf-profile.calltrace.cycles-pp.shrink_active_list.shrink_lruvec.shrink_node.do_try_to_free_pages.try_to_free_pages
0.64 ± 3% +0.9 1.50 ± 33% perf-profile.calltrace.cycles-pp.page_vma_mapped_walk.page_referenced_one.rmap_walk_anon.page_referenced.shrink_active_list
0.00 +0.9 0.88 ± 20% perf-profile.calltrace.cycles-pp.shrink_active_list.shrink_lruvec.shrink_node.balance_pgdat.kswapd
1.53 ± 4% +1.0 2.49 ± 16% perf-profile.calltrace.cycles-pp.rmap_walk_anon.page_referenced.shrink_active_list.shrink_lruvec.shrink_node
0.89 ± 26% +1.1 2.04 ± 17% perf-profile.calltrace.cycles-pp.page_referenced_one.rmap_walk_anon.page_referenced.shrink_active_list.shrink_lruvec
30.61 +7.8 38.43 ± 18% perf-profile.calltrace.cycles-pp.secondary_startup_64
30.34 +7.9 38.24 ± 17% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
30.35 +7.9 38.25 ± 17% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
30.35 +7.9 38.25 ± 17% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
42.42 -5.2 37.17 ± 11% perf-profile.children.cycles-pp.shrink_inactive_list
20.65 ± 3% -4.0 16.64 ± 8% perf-profile.children.cycles-pp.do_swap_page
6.84 ± 2% -1.0 5.86 ± 12% perf-profile.children.cycles-pp.pmem_rw_page
1.46 ± 4% -0.6 0.85 ± 11% perf-profile.children.cycles-pp.shrink_slab
1.63 ± 6% -0.6 1.07 ± 12% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
3.39 -0.5 2.86 ± 11% perf-profile.children.cycles-pp.try_to_unmap
2.90 ± 2% -0.5 2.40 ± 10% perf-profile.children.cycles-pp.try_to_unmap_one
1.12 ± 6% -0.5 0.64 ± 12% perf-profile.children.cycles-pp.do_shrink_slab
2.32 ± 2% -0.5 1.85 ± 12% perf-profile.children.cycles-pp._raw_spin_lock
2.37 -0.5 1.90 ± 11% perf-profile.children.cycles-pp.mem_cgroup_charge
1.32 ± 6% -0.4 0.93 ± 13% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.86 ± 4% -0.4 0.48 ± 10% perf-profile.children.cycles-pp.worker_thread
0.82 ± 4% -0.4 0.46 ± 13% perf-profile.children.cycles-pp.count_shadow_nodes
1.55 ± 2% -0.3 1.21 ± 10% perf-profile.children.cycles-pp.swap_readpage
0.79 ± 3% -0.3 0.45 ± 11% perf-profile.children.cycles-pp.process_one_work
1.69 -0.3 1.35 ± 12% perf-profile.children.cycles-pp.get_page_from_freelist
1.46 ± 2% -0.3 1.14 ± 10% perf-profile.children.cycles-pp.bdev_read_page
0.74 ± 4% -0.3 0.42 ± 10% perf-profile.children.cycles-pp.drain_local_pages_wq
0.74 ± 4% -0.3 0.42 ± 10% perf-profile.children.cycles-pp.drain_pages
0.73 ± 4% -0.3 0.42 ± 10% perf-profile.children.cycles-pp.drain_pages_zone
1.97 ± 3% -0.3 1.65 ± 11% perf-profile.children.cycles-pp.end_page_writeback
1.36 ± 3% -0.3 1.07 ± 10% perf-profile.children.cycles-pp.pmem_do_read
1.35 ± 3% -0.3 1.06 ± 10% perf-profile.children.cycles-pp.__memcpy_mcsafe
0.72 ± 4% -0.3 0.44 ± 11% perf-profile.children.cycles-pp.free_pcppages_bulk
0.88 ± 2% -0.2 0.64 ± 11% perf-profile.children.cycles-pp.cpumask_next
1.37 ± 4% -0.2 1.13 ± 11% perf-profile.children.cycles-pp.page_counter_uncharge
1.35 ± 4% -0.2 1.12 ± 12% perf-profile.children.cycles-pp.page_counter_cancel
0.93 ± 5% -0.2 0.73 ± 11% perf-profile.children.cycles-pp.mem_cgroup_uncharge_swap
0.76 ± 2% -0.2 0.57 ± 12% perf-profile.children.cycles-pp.lru_cache_add
0.74 -0.2 0.55 ± 13% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.73 ± 6% -0.2 0.55 ± 19% perf-profile.children.cycles-pp._find_next_bit
0.63 ± 4% -0.2 0.46 ± 15% perf-profile.children.cycles-pp.rmqueue
0.74 ± 3% -0.1 0.60 ± 12% perf-profile.children.cycles-pp.__test_set_page_writeback
0.76 ± 5% -0.1 0.62 ± 7% perf-profile.children.cycles-pp.__swap_entry_free
0.55 ± 6% -0.1 0.43 ± 18% perf-profile.children.cycles-pp.workingset_eviction
0.67 ± 4% -0.1 0.55 ± 14% perf-profile.children.cycles-pp.prep_new_page
0.34 ± 4% -0.1 0.21 ± 17% perf-profile.children.cycles-pp.rmqueue_bulk
0.84 ± 3% -0.1 0.72 ± 12% perf-profile.children.cycles-pp.xas_store
0.61 ± 4% -0.1 0.50 ± 14% perf-profile.children.cycles-pp.clear_page_erms
0.14 ± 15% -0.1 0.03 ±100% perf-profile.children.cycles-pp.drain_all_pages
0.67 ± 3% -0.1 0.56 ± 14% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.45 ± 4% -0.1 0.34 ± 10% perf-profile.children.cycles-pp.try_charge
0.56 -0.1 0.46 ± 11% perf-profile.children.cycles-pp.__list_del_entry_valid
0.21 ± 3% -0.1 0.11 ± 11% perf-profile.children.cycles-pp.zone_reclaimable_pages
0.54 ± 2% -0.1 0.45 ± 10% perf-profile.children.cycles-pp.mem_cgroup_id_get_online
0.20 ± 3% -0.1 0.11 ± 11% perf-profile.children.cycles-pp.throttle_direct_reclaim
0.20 ± 3% -0.1 0.11 ± 11% perf-profile.children.cycles-pp.allow_direct_reclaim
0.59 ± 3% -0.1 0.51 ± 10% perf-profile.children.cycles-pp.call_rcu
0.61 ± 5% -0.1 0.53 ± 9% perf-profile.children.cycles-pp.swap_cgroup_record
0.20 ± 8% -0.1 0.12 ± 13% perf-profile.children.cycles-pp.__sched_text_start
0.34 -0.1 0.27 ± 16% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.32 ± 4% -0.1 0.26 ± 13% perf-profile.children.cycles-pp.test_clear_page_writeback
0.16 ± 11% -0.1 0.10 ± 14% perf-profile.children.cycles-pp.mem_cgroup_iter
0.22 -0.1 0.16 ± 13% perf-profile.children.cycles-pp.lru_note_cost
0.14 ± 20% -0.1 0.08 ± 16% perf-profile.children.cycles-pp.super_cache_count
0.09 ± 13% -0.1 0.03 ±100% perf-profile.children.cycles-pp.ttwu_do_activate
0.41 ± 3% -0.1 0.35 ± 8% perf-profile.children.cycles-pp.lookup_swap_cgroup_id
0.12 ± 7% -0.1 0.06 ± 20% perf-profile.children.cycles-pp.mem_cgroup_page_lruvec
0.08 ± 14% -0.1 0.03 ±100% perf-profile.children.cycles-pp.enqueue_entity
0.32 ± 3% -0.1 0.27 ± 8% perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
0.31 ± 9% -0.1 0.26 ± 8% perf-profile.children.cycles-pp.cgroup_throttle_swaprate
0.21 ± 13% -0.1 0.16 ± 11% perf-profile.children.cycles-pp.lookup_swap_cache
0.12 ± 12% -0.0 0.07 ± 11% perf-profile.children.cycles-pp.mem_cgroup_calculate_protection
0.09 ± 12% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.try_to_wake_up
0.08 ± 17% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.enqueue_task_fair
0.12 ± 13% -0.0 0.08 ± 15% perf-profile.children.cycles-pp.schedule
0.15 ± 6% -0.0 0.11 ± 19% perf-profile.children.cycles-pp.lru_add_drain
0.20 ± 5% -0.0 0.16 ± 11% perf-profile.children.cycles-pp.mem_cgroup_id_put_many
0.12 ± 13% -0.0 0.08 ± 12% perf-profile.children.cycles-pp.page_add_new_anon_rmap
0.22 ± 7% -0.0 0.19 ± 5% perf-profile.children.cycles-pp.__frontswap_store
0.11 ± 13% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.find_next_bit
0.12 ± 11% -0.0 0.09 ± 9% perf-profile.children.cycles-pp.get_swap_device
0.10 ± 4% -0.0 0.07 ± 11% perf-profile.children.cycles-pp.do_page_add_anon_rmap
0.08 ± 12% -0.0 0.05 ± 60% perf-profile.children.cycles-pp.swap_range_free
0.11 ± 14% -0.0 0.08 ± 14% perf-profile.children.cycles-pp.__swap_count
0.11 ± 7% -0.0 0.09 ± 16% perf-profile.children.cycles-pp.xas_load
0.08 ± 12% -0.0 0.06 ± 14% perf-profile.children.cycles-pp.pick_next_task_fair
3.49 ± 2% +0.5 4.02 ± 10% perf-profile.children.cycles-pp.page_referenced_one
3.20 +1.4 4.56 ± 10% perf-profile.children.cycles-pp.shrink_active_list
30.61 +7.8 38.43 ± 18% perf-profile.children.cycles-pp.secondary_startup_64
30.61 +7.8 38.43 ± 18% perf-profile.children.cycles-pp.cpu_startup_entry
30.61 +7.8 38.43 ± 18% perf-profile.children.cycles-pp.do_idle
30.35 +7.9 38.25 ± 17% perf-profile.children.cycles-pp.start_secondary
8.47 ± 3% -1.1 7.38 ± 11% perf-profile.self.cycles-pp.smp_call_function_many_cond
1.63 ± 5% -0.6 1.06 ± 12% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
1.17 ± 3% -0.2 0.94 ± 9% perf-profile.self.cycles-pp.__memcpy_mcsafe
1.52 -0.2 1.31 ± 11% perf-profile.self.cycles-pp._raw_spin_lock
1.29 ± 4% -0.2 1.08 ± 7% perf-profile.self.cycles-pp.try_to_unmap_one
1.31 ± 3% -0.2 1.11 ± 11% perf-profile.self.cycles-pp.end_page_writeback
0.39 ± 2% -0.2 0.23 ± 16% perf-profile.self.cycles-pp.count_shadow_nodes
0.59 ± 6% -0.2 0.44 ± 19% perf-profile.self.cycles-pp._find_next_bit
0.50 ± 4% -0.1 0.39 ± 14% perf-profile.self.cycles-pp.__test_set_page_writeback
0.54 -0.1 0.45 ± 10% perf-profile.self.cycles-pp.mem_cgroup_id_get_online
0.54 ± 2% -0.1 0.45 ± 10% perf-profile.self.cycles-pp.__list_del_entry_valid
0.42 -0.1 0.33 ± 13% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.29 ± 9% -0.1 0.21 ± 18% perf-profile.self.cycles-pp.workingset_eviction
0.28 ± 5% -0.1 0.21 ± 10% perf-profile.self.cycles-pp.try_charge
0.22 ± 9% -0.1 0.15 ± 14% perf-profile.self.cycles-pp.cpumask_next
0.19 ± 11% -0.1 0.12 ± 12% perf-profile.self.cycles-pp.free_pcppages_bulk
0.15 ± 2% -0.1 0.09 ± 26% perf-profile.self.cycles-pp.rmqueue_bulk
0.22 -0.1 0.16 ± 13% perf-profile.self.cycles-pp.lru_note_cost
0.11 ± 7% -0.1 0.05 ± 60% perf-profile.self.cycles-pp.mem_cgroup_page_lruvec
0.24 ± 7% -0.1 0.18 ± 15% perf-profile.self.cycles-pp.workingset_age_nonresident
0.25 -0.1 0.19 ± 11% perf-profile.self.cycles-pp.mem_cgroup_charge
0.12 ± 3% -0.1 0.07 ± 12% perf-profile.self.cycles-pp.zone_reclaimable_pages
0.14 ± 7% -0.1 0.08 ± 13% perf-profile.self.cycles-pp.shrink_lruvec
0.29 ± 4% -0.0 0.24 ± 11% perf-profile.self.cycles-pp.page_swap_info
0.40 ± 10% -0.0 0.35 ± 12% perf-profile.self.cycles-pp.page_lock_anon_vma_read
0.10 ± 7% -0.0 0.06 ± 59% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.11 ± 13% -0.0 0.07 ± 19% perf-profile.self.cycles-pp.rmqueue
0.10 ± 16% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.mem_cgroup_calculate_protection
0.17 ± 4% -0.0 0.14 ± 13% perf-profile.self.cycles-pp.mem_cgroup_id_put_many
0.10 ± 12% -0.0 0.07 ± 20% perf-profile.self.cycles-pp.mem_cgroup_iter
0.12 ± 19% -0.0 0.09 ± 7% perf-profile.self.cycles-pp.handle_mm_fault
0.08 ± 10% -0.0 0.05 ± 9% perf-profile.self.cycles-pp.find_next_bit
0.13 ± 9% -0.0 0.11 ± 7% perf-profile.self.cycles-pp.lru_cache_add
0.12 ± 7% -0.0 0.09 ± 4% perf-profile.self.cycles-pp.__frontswap_store
0.08 ± 12% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.lookup_swap_cache
0.06 ± 14% +0.0 0.08 ± 8% perf-profile.self.cycles-pp.ptep_clear_flush_young
0.12 ± 12% +0.0 0.16 ± 8% perf-profile.self.cycles-pp.shrink_active_list
0.23 ± 6% +0.0 0.28 ± 9% perf-profile.self.cycles-pp.__isolate_lru_page
vm-scalability.throughput
200000 +------------------------------------------------------------------+
| O |
180000 |-+ |
| O O O |
| O O O O O |
160000 |-+ O O O |
| O O O O O O |
140000 |-+ O O O O O O |
| O O O |
120000 |-+ O |
| O O |
| |
100000 |.+.+.+.+.+..+.+.+.+.+.+.+.+.+.+..+.+.+.+.+.+.+.+.+.+.+..+.+.+.+ |
| |
80000 +------------------------------------------------------------------+
vm-scalability.time.percent_of_cpu_this_job_got
650 +---------------------------------------------------------------------+
| O O O O O O O O |
645 |-O O O O O O O O O |
| O O O |
640 |-+ O O O O O O O O O |
| |
635 |-+ |
| |
630 |-+ O |
| .+. |
625 |-+ + + +. .+..+ |
| .+.. .+. +. + + .+. .+. .+. .+. .. +.+ |
620 |.+ + +.+.+.. + +.+ +..+ + +..+ + + |
| + |
615 +---------------------------------------------------------------------+
vm-scalability.time.voluntary_context_switches
200000 +------------------------------------------------------------------+
180000 |-+ +. |
| + + + : +. |
160000 |-+ + : : : + : : +.+ |
140000 |-+ + +. + : : : .+ : +.. + + + + .+ |
| + + + + +.+ + : + + + + + + + .+.+ |
120000 |.+ + +.+ + +.+ + + |
100000 |-+ |
80000 |-+ |
| |
60000 |-+ |
40000 |-+ O |
| O O |
20000 |-+ O O |
0 +------------------------------------------------------------------+
vm-scalability.time.involuntary_context_switches
1.8e+06 +-----------------------------------------------------------------+
| +. +. +. |
1.6e+06 |++ +.+.+ .+. + +.+.+. + +.+.+. .+. |
| + .+. .+.+.+.+ + + +. +.+. .+. .+ |
1.4e+06 |-+ + +. + + |
1.2e+06 |-+ |
| |
1e+06 |-+ |
| |
800000 |-+ |
600000 |-+ O O |
| O |
400000 |-+ |
| O O |
200000 +-----------------------------------------------------------------+
vm-scalability.stddev_
800 +---------------------------------------------------------------------+
| |
700 |-+ O |
600 |-+ O O O |
| O O O O O O O O O |
500 |-O O O O O O O |
| O O O O O |
400 |-+ O O O O |
| O |
300 |-+ O |
200 |-+ |
| |
100 |-+ |
| .+..+. .+. .+.. |
0 +---------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Oliver Sang
View attachment "config-5.8.0-12306-gccc5dc67340c10" of type "text/plain" (170117 bytes)
View attachment "job-script" of type "text/plain" (8250 bytes)
View attachment "job.yaml" of type "text/plain" (6084 bytes)
View attachment "reproduce" of type "text/plain" (968 bytes)