Message-ID: <20181225003532.GN23332@shao2-debian>
Date: Tue, 25 Dec 2018 08:35:32 +0800
From: kernel test robot <rong.a.chen@...el.com>
To: Josef Bacik <josef@...icpanda.com>
Cc: 0day robot <lkp@...el.com>, Johannes Weiner <hannes@...xchg.org>,
LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [LKP] [filemap] 48dc11646a: vm-scalability.throughput -9.5% regression
Greetings,
FYI, we noticed a -9.5% regression of vm-scalability.throughput due to commit:
commit: 48dc11646ad404e441179e9d48cc2139b1de6a24 ("filemap: drop the mmap_sem for all blocking operations")
https://github.com/0day-ci/linux UPDATE-20181214-093104/Josef-Bacik/drop-the-mmap_sem-when-doing-IO-in-the-fault-path/20181214-073658
in testcase: vm-scalability
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with the following parameters:
runtime: 300s
test: lru-file-mmap-read-rand
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2018-04-03.cgz/300s/lkp-bdw-ep2/lru-file-mmap-read-rand/vm-scalability
commit:
f3a1d5298a ("filemap: pass vm_fault to the mmap ra helpers")
48dc11646a ("filemap: drop the mmap_sem for all blocking operations")
f3a1d5298a1ddaed 48dc11646ad404e441179e9d48
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
0.12 ± 3% -27.1% 0.09 ± 2% vm-scalability.free_time
20552 -9.7% 18556 vm-scalability.median
0.07 ± 10% +1054.1% 0.78 ± 48% vm-scalability.stddev
1808804 -9.5% 1637219 vm-scalability.throughput
381.59 -5.8% 359.57 vm-scalability.time.elapsed_time
381.59 -5.8% 359.57 vm-scalability.time.elapsed_time.max
591506 -5.9% 556579 vm-scalability.time.involuntary_context_switches
9.345e+08 -95.2% 44848645 vm-scalability.time.major_page_faults
2146973 ± 3% -18.5% 1748942 ± 5% vm-scalability.time.maximum_resident_set_size
62893 +677.0% 488688 vm-scalability.time.minor_page_faults
6812 +15.9% 7898 vm-scalability.time.percent_of_cpu_this_job_got
25703 +10.4% 28379 vm-scalability.time.system_time
291.80 ± 3% -92.8% 21.13 ± 2% vm-scalability.time.user_time
5285 ± 2% -71.4% 1512 vm-scalability.time.voluntary_context_switches
5.444e+08 -1.3% 5.372e+08 vm-scalability.workload
72342842 -2.1% 70857245 interrupts.CAL:Function_call_interrupts
419.88 +3.5% 434.76 pmeter.Average_Active_Power
4308 -12.6% 3765 pmeter.performance_per_watt
853130 ± 17% +218.7% 2719020 ± 3% softirqs.RCU
1107823 ± 3% -44.6% 613497 ± 13% softirqs.SCHED
4.988e+09 ± 32% -86.2% 6.876e+08 ± 96% cpuidle.C6.time
6762575 -87.7% 832245 ± 81% cpuidle.C6.usage
2153739 ±167% -99.0% 20749 ± 14% cpuidle.POLL.time
202425 ±152% -95.5% 9011 ± 12% cpuidle.POLL.usage
21.41 ± 2% -11.3 10.06 ± 18% mpstat.cpu.idle%
0.03 ± 74% +0.1 0.10 ± 2% mpstat.cpu.soft%
77.65 +12.1 89.72 ± 2% mpstat.cpu.sys%
0.92 ± 3% -0.8 0.11 ± 6% mpstat.cpu.usr%
7.00 +39.3% 9.75 ± 8% vmstat.memory.buff
16321034 ± 2% -45.7% 8865070 ± 27% vmstat.memory.free
70.50 +13.5% 80.00 ± 2% vmstat.procs.r
3654 -4.0% 3506 vmstat.system.cs
341735 ± 7% +8.2% 369854 vmstat.system.in
2217 +13.8% 2523 ± 2% turbostat.Avg_MHz
79.57 +10.9 90.44 turbostat.Busy%
6752761 -87.8% 825468 ± 82% turbostat.C6
14.82 ± 33% -12.7 2.15 ± 98% turbostat.C6%
7.75 ± 33% -43.4% 4.39 ± 25% turbostat.CPU%c1
9.60 ± 53% -88.7% 1.09 ±118% turbostat.CPU%c6
6.89 ± 13% -60.1% 2.75 ± 17% turbostat.Pkg%pc2
214.33 +5.0% 225.12 turbostat.PkgWatt
4.303e+08 +54.2% 6.634e+08 numa-numastat.node0.local_node
39651017 ± 3% +19.5% 47390228 numa-numastat.node0.numa_foreign
4.303e+08 +54.2% 6.634e+08 numa-numastat.node0.numa_hit
37375189 ± 3% +23.2% 46053218 ± 2% numa-numastat.node0.numa_miss
37386763 ± 3% +23.2% 46064734 ± 2% numa-numastat.node0.other_node
4.307e+08 +55.8% 6.71e+08 numa-numastat.node1.local_node
37375189 ± 3% +23.2% 46053218 ± 2% numa-numastat.node1.numa_foreign
4.307e+08 +55.8% 6.71e+08 numa-numastat.node1.numa_hit
39651017 ± 3% +19.5% 47390228 numa-numastat.node1.numa_miss
39656846 ± 3% +19.5% 47396091 numa-numastat.node1.other_node
8107 ± 7% +12.9% 9150 ± 6% slabinfo.kmalloc-512.active_objs
8182 ± 6% +13.3% 9272 ± 4% slabinfo.kmalloc-512.num_objs
1566 ± 18% -24.0% 1189 ± 23% slabinfo.mnt_cache.active_objs
1566 ± 18% -24.0% 1189 ± 23% slabinfo.mnt_cache.num_objs
11448 ± 7% -28.2% 8217 ± 4% slabinfo.proc_inode_cache.active_objs
11698 ± 7% -26.5% 8601 ± 4% slabinfo.proc_inode_cache.num_objs
14921320 -65.7% 5122948 ± 2% slabinfo.radix_tree_node.active_objs
269062 -63.8% 97383 ± 2% slabinfo.radix_tree_node.active_slabs
15051525 -63.8% 5443816 ± 2% slabinfo.radix_tree_node.num_objs
269062 -63.8% 97383 ± 2% slabinfo.radix_tree_node.num_slabs
1922093 -73.6% 507337 meminfo.Active
1582608 -89.4% 168086 meminfo.Active(file)
98694491 +12.9% 1.115e+08 ± 2% meminfo.Cached
29455 -42.4% 16956 ± 22% meminfo.CmaFree
96120058 +14.8% 1.103e+08 ± 2% meminfo.Inactive
96038460 +14.8% 1.102e+08 ± 2% meminfo.Inactive(file)
8657374 -63.5% 3162742 ± 2% meminfo.KReclaimable
84680398 -18.8% 68794429 ± 2% meminfo.Mapped
17180298 ± 2% -47.7% 8980048 ± 29% meminfo.MemFree
6435029 +12.9% 7267370 ± 2% meminfo.PageTables
8657374 -63.5% 3162742 ± 2% meminfo.SReclaimable
8793999 -62.5% 3300102 ± 2% meminfo.Slab
277.90 ± 15% -29.0% 197.45 sched_debug.cfs_rq:/.exec_clock.stddev
13.57 ± 3% +43.0% 19.40 ± 2% sched_debug.cfs_rq:/.nr_spread_over.avg
41.68 ± 10% +38.9% 57.88 ± 10% sched_debug.cfs_rq:/.nr_spread_over.stddev
764.67 -49.1% 389.53 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.avg
1466 ± 7% -21.5% 1151 ± 6% sched_debug.cfs_rq:/.util_est_enqueued.max
305.29 ± 26% -99.7% 0.83 sched_debug.cfs_rq:/.util_est_enqueued.min
162.68 ± 11% +56.9% 255.29 sched_debug.cfs_rq:/.util_est_enqueued.stddev
8523 -16.8% 7092 sched_debug.cpu.nr_switches.avg
15335 ± 8% +44.9% 22215 ± 25% sched_debug.cpu.nr_switches.max
6135 -23.3% 4707 sched_debug.cpu.nr_switches.min
1953 ± 7% +44.9% 2830 ± 16% sched_debug.cpu.nr_switches.stddev
8940 -13.8% 7704 sched_debug.cpu.sched_count.avg
5946 -24.3% 4501 sched_debug.cpu.sched_count.min
111.38 ± 6% -20.7% 88.30 ± 7% sched_debug.cpu.sched_goidle.avg
18.71 ± 57% -81.7% 3.42 ±108% sched_debug.cpu.sched_goidle.min
3767 -17.8% 3095 sched_debug.cpu.ttwu_count.avg
6628 ± 6% +54.3% 10228 ± 26% sched_debug.cpu.ttwu_count.max
2517 -12.8% 2194 sched_debug.cpu.ttwu_count.min
850.58 ± 8% +41.8% 1206 ± 16% sched_debug.cpu.ttwu_count.stddev
5134 ± 4% +91.0% 9805 ± 28% sched_debug.cpu.ttwu_local.max
647.52 ± 7% +75.8% 1138 ± 21% sched_debug.cpu.ttwu_local.stddev
955375 ± 2% -75.6% 233145 ± 16% numa-meminfo.node0.Active
798638 -89.5% 83955 ± 2% numa-meminfo.node0.Active(file)
49091311 +12.9% 55435668 ± 2% numa-meminfo.node0.FilePages
47810959 +14.8% 54880967 ± 2% numa-meminfo.node0.Inactive
47783818 +14.8% 54852687 ± 2% numa-meminfo.node0.Inactive(file)
4268630 -62.7% 1592915 ± 2% numa-meminfo.node0.KReclaimable
8053 ± 8% +11.8% 9002 ± 4% numa-meminfo.node0.KernelStack
41991015 -18.7% 34155091 ± 2% numa-meminfo.node0.Mapped
8890951 ± 5% -48.7% 4557426 ± 31% numa-meminfo.node0.MemFree
351.75 ± 65% +164.7% 931.25 ± 11% numa-meminfo.node0.Mlocked
3141687 +17.0% 3676960 ± 2% numa-meminfo.node0.PageTables
4268630 -62.7% 1592915 ± 2% numa-meminfo.node0.SReclaimable
4336412 -61.6% 1666879 ± 2% numa-meminfo.node0.Slab
959515 ± 2% -71.4% 274128 ± 13% numa-meminfo.node1.Active
776442 ± 2% -89.2% 83982 ± 2% numa-meminfo.node1.Active(file)
21082 ± 5% -52.4% 10031 ± 65% numa-meminfo.node1.AnonHugePages
49322233 +13.5% 55964018 ± 2% numa-meminfo.node1.FilePages
48034663 +15.3% 55361230 ± 2% numa-meminfo.node1.Inactive
47980224 +15.3% 55308414 ± 2% numa-meminfo.node1.Inactive(file)
4364164 ± 2% -64.0% 1570171 numa-meminfo.node1.KReclaimable
42131890 -18.1% 34522229 ± 2% numa-meminfo.node1.Mapped
8639732 ± 2% -47.8% 4511111 ± 28% numa-meminfo.node1.MemFree
3250849 +10.0% 3576222 ± 2% numa-meminfo.node1.PageTables
4364164 ± 2% -64.0% 1570171 numa-meminfo.node1.SReclaimable
4433013 -63.2% 1633564 numa-meminfo.node1.Slab
5.046e+12 +3.9% 5.243e+12 perf-stat.branch-instructions
0.59 -0.3 0.29 perf-stat.branch-miss-rate%
2.997e+10 -48.5% 1.543e+10 perf-stat.branch-misses
27.67 -15.0 12.63 perf-stat.cache-miss-rate%
3.825e+10 -52.8% 1.805e+10 perf-stat.cache-misses
1.382e+11 +3.4% 1.429e+11 perf-stat.cache-references
1395940 -8.8% 1272990 perf-stat.context-switches
3.34 +5.9% 3.53 perf-stat.cpi
7.431e+13 +8.4% 8.059e+13 perf-stat.cpu-cycles
25159 -5.5% 23769 ± 2% perf-stat.cpu-migrations
0.29 -0.2 0.12 perf-stat.dTLB-load-miss-rate%
1.635e+10 -56.0% 7.197e+09 perf-stat.dTLB-load-misses
5.634e+12 +2.4% 5.771e+12 perf-stat.dTLB-loads
0.03 ± 2% -0.0 0.02 ± 4% perf-stat.dTLB-store-miss-rate%
6.14e+08 ± 3% -48.8% 3.146e+08 ± 5% perf-stat.dTLB-store-misses
1.762e+12 -18.7% 1.432e+12 perf-stat.dTLB-stores
94.46 -8.3 86.12 perf-stat.iTLB-load-miss-rate%
2.586e+09 ± 3% -74.2% 6.682e+08 ± 2% perf-stat.iTLB-load-misses
1.517e+08 ± 5% -29.1% 1.076e+08 ± 7% perf-stat.iTLB-loads
2.226e+13 +2.4% 2.28e+13 perf-stat.instructions
8616 ± 3% +296.3% 34147 ± 2% perf-stat.instructions-per-iTLB-miss
0.30 -5.6% 0.28 perf-stat.ipc
9.345e+08 -95.2% 44848645 perf-stat.major-faults
965378 +42.3% 1373997 perf-stat.minor-faults
51.17 -3.1 48.05 perf-stat.node-load-miss-rate%
9.876e+09 -69.9% 2.972e+09 perf-stat.node-load-misses
9.423e+09 -65.9% 3.213e+09 perf-stat.node-loads
44.36 -7.9 36.42 perf-stat.node-store-miss-rate%
3.391e+09 -33.1% 2.267e+09 perf-stat.node-store-misses
4.255e+09 -7.0% 3.959e+09 perf-stat.node-stores
9.355e+08 -95.1% 46222658 perf-stat.page-faults
40896 +3.8% 42441 perf-stat.path-length
199127 -89.5% 20942 ± 2% numa-vmstat.node0.nr_active_file
12240314 +13.0% 13826734 ± 2% numa-vmstat.node0.nr_file_pages
2261502 ± 4% -48.0% 1175164 ± 31% numa-vmstat.node0.nr_free_pages
11913896 +14.8% 13681031 ± 2% numa-vmstat.node0.nr_inactive_file
8052 ± 8% +11.7% 8995 ± 4% numa-vmstat.node0.nr_kernel_stack
10450116 -18.5% 8512102 ± 2% numa-vmstat.node0.nr_mapped
88.50 ± 65% +163.3% 233.00 ± 11% numa-vmstat.node0.nr_mlock
781949 +17.2% 916184 ± 2% numa-vmstat.node0.nr_page_table_pages
1064405 -62.6% 397596 ± 2% numa-vmstat.node0.nr_slab_reclaimable
199104 -89.5% 20943 ± 2% numa-vmstat.node0.nr_zone_active_file
11913779 +14.8% 13680979 ± 2% numa-vmstat.node0.nr_zone_inactive_file
2.832e+08 +33.1% 3.769e+08 ± 2% numa-vmstat.node0.numa_hit
2.832e+08 +33.1% 3.769e+08 ± 2% numa-vmstat.node0.numa_local
22302329 ± 4% +12.1% 25008317 ± 2% numa-vmstat.node0.numa_miss
22314719 ± 4% +12.1% 25020155 ± 2% numa-vmstat.node0.numa_other
127045 -91.3% 11097 numa-vmstat.node0.workingset_activate
2202 ±173% +5.2e+05% 11464801 ± 2% numa-vmstat.node0.workingset_nodereclaim
1337989 +21.7% 1627764 ± 2% numa-vmstat.node0.workingset_nodes
79616893 -53.3% 37191188 ± 2% numa-vmstat.node0.workingset_refault
1649 ± 8% -99.8% 3.00 ± 97% numa-vmstat.node0.workingset_restore
193685 ± 2% -89.2% 20949 ± 2% numa-vmstat.node1.nr_active_file
12299059 +13.4% 13953274 ± 2% numa-vmstat.node1.nr_file_pages
7573 -42.8% 4332 ± 24% numa-vmstat.node1.nr_free_cma
2197262 -46.8% 1168841 ± 28% numa-vmstat.node1.nr_free_pages
11963989 +15.3% 13789386 ± 2% numa-vmstat.node1.nr_inactive_file
10487860 -18.0% 8598884 ± 2% numa-vmstat.node1.nr_mapped
809178 +10.1% 891301 ± 2% numa-vmstat.node1.nr_page_table_pages
1088566 ± 2% -64.0% 391927 ± 2% numa-vmstat.node1.nr_slab_reclaimable
193685 ± 2% -89.2% 20949 ± 2% numa-vmstat.node1.nr_zone_active_file
11963951 +15.3% 13789323 ± 2% numa-vmstat.node1.nr_zone_inactive_file
22306203 ± 4% +12.1% 25012099 ± 2% numa-vmstat.node1.numa_foreign
2.836e+08 +34.5% 3.814e+08 numa-vmstat.node1.numa_hit
2.835e+08 +34.5% 3.813e+08 numa-vmstat.node1.numa_local
123831 -91.4% 10707 ± 6% numa-vmstat.node1.workingset_activate
56126 ±110% +18982.1% 10710099 ± 2% numa-vmstat.node1.workingset_nodereclaim
1357699 ± 2% +20.7% 1638071 ± 2% numa-vmstat.node1.workingset_nodes
79679425 -50.3% 39577975 numa-vmstat.node1.workingset_refault
1449 ± 6% -99.9% 1.75 ± 84% numa-vmstat.node1.workingset_restore
254570 +6.5% 271109 proc-vmstat.allocstall_movable
1304 ± 7% +547.6% 8449 ± 2% proc-vmstat.allocstall_normal
384.00 ±110% +1144.3% 4778 ± 91% proc-vmstat.compact_daemon_migrate_scanned
8947 ± 49% +83.0% 16376 ± 21% proc-vmstat.compact_migrate_scanned
280.00 ± 7% -71.9% 78.75 ± 25% proc-vmstat.kswapd_high_wmark_hit_quickly
3467 +74.4% 6046 proc-vmstat.kswapd_low_wmark_hit_quickly
394755 -89.4% 41972 proc-vmstat.nr_active_file
2852371 +4.0% 2965744 proc-vmstat.nr_dirty_background_threshold
5711720 +4.0% 5938741 proc-vmstat.nr_dirty_threshold
24631585 +13.0% 27836594 ± 2% proc-vmstat.nr_file_pages
7457 -42.8% 4263 ± 24% proc-vmstat.nr_free_cma
4345960 ± 2% -47.5% 2279894 ± 29% proc-vmstat.nr_free_pages
23968354 +14.8% 27527081 ± 2% proc-vmstat.nr_inactive_file
760.50 +3.5% 787.00 ± 2% proc-vmstat.nr_isolated_file
15751 +2.4% 16135 proc-vmstat.nr_kernel_stack
21096689 -18.6% 17171082 ± 2% proc-vmstat.nr_mapped
1603148 +13.1% 1813706 ± 2% proc-vmstat.nr_page_table_pages
41524 -1.7% 40805 proc-vmstat.nr_shmem
2161162 -63.4% 790160 ± 2% proc-vmstat.nr_slab_reclaimable
137.50 -33.6% 91.25 proc-vmstat.nr_vmscan_immediate_reclaim
394720 -89.4% 41973 proc-vmstat.nr_zone_active_file
23968300 +14.8% 27527095 ± 2% proc-vmstat.nr_zone_inactive_file
77026206 ± 3% +21.3% 93443446 proc-vmstat.numa_foreign
2803 ± 33% +128.3% 6400 ± 52% proc-vmstat.numa_hint_faults
8.61e+08 +55.0% 1.334e+09 proc-vmstat.numa_hit
8.61e+08 +55.0% 1.334e+09 proc-vmstat.numa_local
77026206 ± 3% +21.3% 93443446 proc-vmstat.numa_miss
77043607 ± 3% +21.3% 93460831 proc-vmstat.numa_other
3782 +63.1% 6169 proc-vmstat.pageoutrun
11826522 -96.3% 440588 proc-vmstat.pgactivate
12851963 +35.8% 17451710 proc-vmstat.pgalloc_dma32
9.274e+08 +52.2% 1.411e+09 proc-vmstat.pgalloc_normal
11685199 -96.6% 399017 proc-vmstat.pgdeactivate
9.355e+08 -90.3% 91120084 proc-vmstat.pgfault
9.402e+08 +51.9% 1.428e+09 proc-vmstat.pgfree
9.345e+08 -95.2% 44848645 proc-vmstat.pgmajfault
11685199 -96.6% 399017 proc-vmstat.pgrefill
1.779e+09 +12.8% 2.006e+09 proc-vmstat.pgscan_direct
61892678 +13.3% 70103257 proc-vmstat.pgscan_kswapd
8.87e+08 +51.7% 1.346e+09 proc-vmstat.pgsteal_direct
18790387 +141.1% 45309429 proc-vmstat.pgsteal_kswapd
666677 ± 63% +9170.8% 61806264 proc-vmstat.slabs_scanned
249712 -91.3% 21714 ± 3% proc-vmstat.workingset_activate
58284 ±106% +37789.9% 22083863 ± 2% proc-vmstat.workingset_nodereclaim
2707516 +20.9% 3272243 ± 2% proc-vmstat.workingset_nodes
1.581e+08 -51.6% 76466386 ± 2% proc-vmstat.workingset_refault
3084 ± 3% -99.8% 5.00 ± 61% proc-vmstat.workingset_restore
84.93 -84.9 0.00 perf-profile.calltrace.cycles-pp.pagecache_get_page.filemap_fault.__xfs_filemap_fault.__do_fault.__handle_mm_fault
67.30 -67.3 0.00 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.pagecache_get_page.filemap_fault.__xfs_filemap_fault.__do_fault
66.21 -66.2 0.00 perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.pagecache_get_page.filemap_fault.__xfs_filemap_fault
65.03 -65.0 0.00 perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.pagecache_get_page.filemap_fault
65.03 -65.0 0.00 perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.pagecache_get_page
16.98 ± 3% -17.0 0.00 perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.pagecache_get_page.filemap_fault.__xfs_filemap_fault.__do_fault
15.27 ± 3% -15.3 0.00 perf-profile.calltrace.cycles-pp.__lru_cache_add.add_to_page_cache_lru.pagecache_get_page.filemap_fault.__xfs_filemap_fault
15.23 ± 3% -15.2 0.00 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.pagecache_get_page.filemap_fault
14.53 ± 4% -14.5 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.pagecache_get_page
19.12 ± 2% -6.5 12.63 perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
65.00 -4.5 60.53 perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
64.49 -4.1 60.43 perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages
6.74 ± 3% -3.0 3.73 ± 3% perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
6.66 ± 2% -1.7 4.96 perf-profile.calltrace.cycles-pp.try_to_unmap_flush.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
6.66 ± 2% -1.7 4.96 perf-profile.calltrace.cycles-pp.arch_tlbbatch_flush.try_to_unmap_flush.shrink_page_list.shrink_inactive_list.shrink_node_memcg
6.64 ± 2% -1.7 4.95 perf-profile.calltrace.cycles-pp.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush.shrink_page_list.shrink_inactive_list
6.61 ± 2% -1.7 4.92 perf-profile.calltrace.cycles-pp.on_each_cpu_mask.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush.shrink_page_list
2.85 -1.7 1.16 perf-profile.calltrace.cycles-pp.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg
2.95 -1.7 1.27 perf-profile.calltrace.cycles-pp.page_referenced.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
6.36 ± 2% -1.6 4.71 perf-profile.calltrace.cycles-pp.smp_call_function_many.on_each_cpu_mask.on_each_cpu_cond_mask.arch_tlbbatch_flush.try_to_unmap_flush
2.67 -1.6 1.05 perf-profile.calltrace.cycles-pp.rmap_walk_file.page_referenced.shrink_page_list.shrink_inactive_list.shrink_node_memcg
2.30 -1.5 0.79 perf-profile.calltrace.cycles-pp.xas_store.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list
2.13 -1.5 0.66 perf-profile.calltrace.cycles-pp.page_referenced_one.rmap_walk_file.page_referenced.shrink_page_list.shrink_inactive_list
3.37 ± 6% -1.4 1.95 ± 4% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list
3.42 ± 6% -1.4 2.05 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg
1.90 -1.3 0.60 ± 2% perf-profile.calltrace.cycles-pp.xas_create.xas_store.__delete_from_page_cache.__remove_mapping.shrink_page_list
1.19 -0.4 0.82 perf-profile.calltrace.cycles-pp.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
1.08 -0.3 0.75 perf-profile.calltrace.cycles-pp.rmap_walk_file.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_node_memcg
0.77 -0.2 0.53 perf-profile.calltrace.cycles-pp.try_to_unmap_one.rmap_walk_file.try_to_unmap.shrink_page_list.shrink_inactive_list
0.72 +0.0 0.76 perf-profile.calltrace.cycles-pp.__isolate_lru_page.isolate_lru_pages.shrink_inactive_list.shrink_node_memcg.shrink_node
0.92 +0.1 0.98 perf-profile.calltrace.cycles-pp.isolate_lru_pages.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
0.62 +0.1 0.75 perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.shrink_page_list.shrink_inactive_list.shrink_node_memcg
0.77 +0.2 0.94 perf-profile.calltrace.cycles-pp.free_unref_page_list.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
0.00 +0.5 0.53 perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.iomap_readpages_actor
0.00 +0.8 0.85 ± 2% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead.filemap_fault
0.00 +1.2 1.17 ± 2% perf-profile.calltrace.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.iomap_readpages_actor.iomap_apply.iomap_readpages
0.00 +1.3 1.29 ± 3% perf-profile.calltrace.cycles-pp.iomap_set_range_uptodate.iomap_readpage_actor.iomap_readpages_actor.iomap_apply.iomap_readpages
0.00 +1.3 1.29 ± 2% perf-profile.calltrace.cycles-pp.wakeup_kswapd.wake_all_kswapds.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead
0.00 +1.3 1.30 ± 2% perf-profile.calltrace.cycles-pp.wake_all_kswapds.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead.filemap_fault
96.66 +1.3 98.00 perf-profile.calltrace.cycles-pp.page_fault
96.62 +1.4 98.00 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
0.00 +1.5 1.49 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.__do_page_cache_readahead
0.00 +1.5 1.50 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.__do_page_cache_readahead.filemap_fault
96.45 +1.5 97.99 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
95.90 +2.0 97.94 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
95.73 +2.2 97.92 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 +2.3 2.32 ± 3% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.__do_page_cache_readahead.filemap_fault.__xfs_filemap_fault
43.77 +2.4 46.14 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
44.55 +2.4 46.95 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node
0.00 +3.4 3.37 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shadow_lru_isolate.__list_lru_walk_one.list_lru_walk_one_irq
0.00 +3.4 3.39 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shadow_lru_isolate.__list_lru_walk_one.list_lru_walk_one_irq.do_shrink_slab
0.00 +3.6 3.59 ± 3% perf-profile.calltrace.cycles-pp.shadow_lru_isolate.__list_lru_walk_one.list_lru_walk_one_irq.do_shrink_slab.shrink_slab
0.00 +3.6 3.62 ± 3% perf-profile.calltrace.cycles-pp.__list_lru_walk_one.list_lru_walk_one_irq.do_shrink_slab.shrink_slab.shrink_node
0.00 +3.6 3.65 ± 3% perf-profile.calltrace.cycles-pp.list_lru_walk_one_irq.do_shrink_slab.shrink_slab.shrink_node.do_try_to_free_pages
0.00 +3.7 3.66 ± 3% perf-profile.calltrace.cycles-pp.do_shrink_slab.shrink_slab.shrink_node.do_try_to_free_pages.try_to_free_pages
0.00 +3.7 3.66 ± 3% perf-profile.calltrace.cycles-pp.shrink_slab.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
0.00 +4.0 4.00 perf-profile.calltrace.cycles-pp.memset_erms.iomap_readpage_actor.iomap_readpages_actor.iomap_apply.iomap_readpages
0.00 +5.5 5.50 perf-profile.calltrace.cycles-pp.iomap_readpage_actor.iomap_readpages_actor.iomap_apply.iomap_readpages.read_pages
14.51 ± 4% +5.7 20.21 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru
90.60 +6.8 97.44 perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
90.50 +6.9 97.44 perf-profile.calltrace.cycles-pp.__xfs_filemap_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
90.31 +7.1 97.42 perf-profile.calltrace.cycles-pp.filemap_fault.__xfs_filemap_fault.__do_fault.__handle_mm_fault.handle_mm_fault
0.00 +20.2 20.23 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.iomap_readpages_actor
0.00 +21.0 21.04 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.iomap_readpages_actor.iomap_apply
0.00 +21.1 21.09 perf-profile.calltrace.cycles-pp.__lru_cache_add.add_to_page_cache_lru.iomap_readpages_actor.iomap_apply.iomap_readpages
0.00 +22.4 22.40 perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.iomap_readpages_actor.iomap_apply.iomap_readpages.read_pages
0.00 +28.1 28.05 perf-profile.calltrace.cycles-pp.iomap_readpages_actor.iomap_apply.iomap_readpages.read_pages.__do_page_cache_readahead
0.00 +28.1 28.10 perf-profile.calltrace.cycles-pp.iomap_apply.iomap_readpages.read_pages.__do_page_cache_readahead.filemap_fault
0.00 +28.1 28.11 perf-profile.calltrace.cycles-pp.iomap_readpages.read_pages.__do_page_cache_readahead.filemap_fault.__xfs_filemap_fault
0.00 +28.1 28.12 perf-profile.calltrace.cycles-pp.read_pages.__do_page_cache_readahead.filemap_fault.__xfs_filemap_fault.__do_fault
0.00 +64.2 64.20 perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead
0.00 +64.2 64.20 perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead.filemap_fault
0.00 +66.4 66.37 perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.__do_page_cache_readahead.filemap_fault.__xfs_filemap_fault
0.00 +68.9 68.86 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.__do_page_cache_readahead.filemap_fault.__xfs_filemap_fault.__do_fault
0.00 +97.3 97.32 perf-profile.calltrace.cycles-pp.__do_page_cache_readahead.filemap_fault.__xfs_filemap_fault.__do_fault.__handle_mm_fault
84.95 -84.9 0.00 perf-profile.children.cycles-pp.pagecache_get_page
19.73 ± 2% -6.7 13.07 perf-profile.children.cycles-pp.shrink_page_list
66.59 -4.6 61.97 perf-profile.children.cycles-pp.shrink_node_memcg
66.12 -4.2 61.92 perf-profile.children.cycles-pp.shrink_inactive_list
4.10 -3.7 0.42 perf-profile.children.cycles-pp.filemap_map_pages
6.90 ± 3% -3.0 3.86 ± 2% perf-profile.children.cycles-pp.__remove_mapping
2.82 -2.7 0.12 ± 3% perf-profile.children.cycles-pp.xas_find
2.95 -2.5 0.46 perf-profile.children.cycles-pp.xas_load
4.00 -2.1 1.90 perf-profile.children.cycles-pp.rmap_walk_file
3.21 -1.8 1.38 perf-profile.children.cycles-pp.xas_store
3.15 -1.8 1.34 perf-profile.children.cycles-pp.page_referenced
6.85 ± 2% -1.8 5.10 perf-profile.children.cycles-pp.try_to_unmap_flush
6.85 ± 2% -1.8 5.10 perf-profile.children.cycles-pp.arch_tlbbatch_flush
6.84 ± 2% -1.7 5.09 perf-profile.children.cycles-pp.on_each_cpu_cond_mask
6.80 ± 2% -1.7 5.07 perf-profile.children.cycles-pp.on_each_cpu_mask
2.94 -1.7 1.23 perf-profile.children.cycles-pp.__delete_from_page_cache
6.54 ± 2% -1.7 4.87 perf-profile.children.cycles-pp.smp_call_function_many
2.27 -1.6 0.69 perf-profile.children.cycles-pp.page_referenced_one
2.15 -1.6 0.58 perf-profile.children.cycles-pp.page_vma_mapped_walk
1.98 -1.2 0.79 ± 4% perf-profile.children.cycles-pp.xas_create
1.09 -0.8 0.28 ± 2% perf-profile.children.cycles-pp.workingset_update_node
0.88 ± 2% -0.7 0.15 ± 3% perf-profile.children.cycles-pp.alloc_set_pte
0.62 ± 3% -0.5 0.09 ± 5% perf-profile.children.cycles-pp.list_lru_del
0.53 -0.5 0.07 ± 6% perf-profile.children.cycles-pp.native_irq_return_iret
1.22 -0.4 0.84 perf-profile.children.cycles-pp.try_to_unmap
0.55 ± 3% -0.3 0.27 ± 3% perf-profile.children.cycles-pp.down_read
0.32 ± 4% -0.3 0.06 ± 11% perf-profile.children.cycles-pp.list_lru_add
0.81 -0.3 0.55 perf-profile.children.cycles-pp.try_to_unmap_one
0.39 -0.2 0.15 ± 5% perf-profile.children.cycles-pp.xas_clear_mark
1.43 ± 2% -0.2 1.20 ± 2% perf-profile.children.cycles-pp.__add_to_page_cache_locked
0.43 -0.2 0.21 ± 3% perf-profile.children.cycles-pp.xas_init_marks
0.28 -0.2 0.10 ± 4% perf-profile.children.cycles-pp.up_read
0.24 -0.2 0.07 ± 5% perf-profile.children.cycles-pp.workingset_refault
0.21 ± 20% -0.2 0.05 ± 67% perf-profile.children.cycles-pp.drain_local_pages_wq
0.21 ± 20% -0.2 0.05 ± 67% perf-profile.children.cycles-pp.drain_pages
0.21 ± 20% -0.2 0.05 ± 67% perf-profile.children.cycles-pp.drain_pages_zone
0.22 ± 18% -0.1 0.07 ± 31% perf-profile.children.cycles-pp.worker_thread
0.22 ± 19% -0.1 0.07 ± 31% perf-profile.children.cycles-pp.process_one_work
0.35 ± 2% -0.1 0.22 perf-profile.children.cycles-pp.___might_sleep
0.17 ± 4% -0.1 0.07 perf-profile.children.cycles-pp.__might_sleep
0.29 -0.1 0.23 ± 3% perf-profile.children.cycles-pp.putback_inactive_pages
0.15 ± 3% -0.1 0.08 perf-profile.children.cycles-pp.unlock_page
0.15 ± 3% -0.1 0.09 ± 4% perf-profile.children.cycles-pp.page_add_file_rmap
0.17 ± 2% -0.1 0.11 ± 3% perf-profile.children.cycles-pp._cond_resched
0.08 -0.1 0.03 ±100% perf-profile.children.cycles-pp.rcu_all_qs
0.26 ± 3% -0.1 0.20 ± 4% perf-profile.children.cycles-pp.call_function_interrupt
0.14 ± 3% -0.1 0.09 perf-profile.children.cycles-pp.page_remove_rmap
0.19 ± 3% -0.0 0.14 ± 5% perf-profile.children.cycles-pp.smp_call_function_interrupt
0.16 ± 2% -0.0 0.11 ± 7% perf-profile.children.cycles-pp.flush_smp_call_function_queue
0.27 ± 7% -0.0 0.23 ± 4% perf-profile.children.cycles-pp.smp_call_function_single
0.11 ± 6% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.xas_start
0.10 -0.0 0.07 ± 7% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.15 ± 4% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.__mod_node_page_state
0.07 ± 7% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.check_pte
0.08 ± 5% -0.0 0.06 perf-profile.children.cycles-pp.flush_tlb_func_common
0.07 -0.0 0.06 perf-profile.children.cycles-pp.mem_cgroup_update_lru_size
0.05 +0.0 0.06 perf-profile.children.cycles-pp.free_unref_page_prepare
0.07 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.release_pages
0.24 ± 2% +0.0 0.26 perf-profile.children.cycles-pp.workingset_eviction
0.05 ± 8% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.free_unref_page_commit
0.07 ± 6% +0.0 0.09 ± 7% perf-profile.children.cycles-pp.unaccount_page_cache_page
0.06 ± 6% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.mem_cgroup_uncharge_list
0.05 +0.0 0.07 ± 5% perf-profile.children.cycles-pp.__inc_node_page_state
0.15 ± 2% +0.0 0.18 perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.75 +0.0 0.79 perf-profile.children.cycles-pp.__isolate_lru_page
0.08 +0.0 0.12 ± 7% perf-profile.children.cycles-pp.lru_add_drain_cpu
0.08 ± 5% +0.0 0.12 ± 8% perf-profile.children.cycles-pp.lru_add_drain
0.07 ± 5% +0.0 0.11 perf-profile.children.cycles-pp.mem_cgroup_try_charge
0.50 ± 4% +0.0 0.54 perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.00 +0.1 0.05 perf-profile.children.cycles-pp.iomap_adjust_read_range
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.ksys_read
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.vfs_read
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.__vfs_read
0.97 +0.1 1.02 perf-profile.children.cycles-pp.isolate_lru_pages
0.67 +0.1 0.75 perf-profile.children.cycles-pp.__list_del_entry_valid
0.00 +0.1 0.09 perf-profile.children.cycles-pp.xas_free_nodes
0.00 +0.1 0.11 ± 21% perf-profile.children.cycles-pp.__slab_alloc
0.00 +0.1 0.11 ± 21% perf-profile.children.cycles-pp.___slab_alloc
0.00 +0.1 0.11 ± 23% perf-profile.children.cycles-pp.kmem_cache_alloc
0.00 +0.1 0.12 ± 21% perf-profile.children.cycles-pp.xas_alloc
0.00 +0.1 0.13 ± 22% perf-profile.children.cycles-pp.kmem_cache_free
0.00 +0.2 0.18 ± 20% perf-profile.children.cycles-pp.smpboot_thread_fn
0.00 +0.2 0.18 ± 20% perf-profile.children.cycles-pp.run_ksoftirqd
0.81 +0.2 1.00 perf-profile.children.cycles-pp.free_unref_page_list
0.00 +0.2 0.19 ± 3% perf-profile.children.cycles-pp.xa_load
0.00 +0.2 0.19 ± 20% perf-profile.children.cycles-pp.rcu_process_callbacks
0.00 +0.2 0.21 ± 16% perf-profile.children.cycles-pp.__softirqentry_text_start
0.94 +0.3 1.29 ± 3% perf-profile.children.cycles-pp.iomap_set_range_uptodate
0.67 ± 5% +0.6 1.29 ± 2% perf-profile.children.cycles-pp.wakeup_kswapd
0.68 ± 5% +0.6 1.31 ± 2% perf-profile.children.cycles-pp.wake_all_kswapds
3.00 +1.0 4.01 perf-profile.children.cycles-pp.memset_erms
96.74 +1.3 98.07 perf-profile.children.cycles-pp.page_fault
96.70 +1.4 98.06 perf-profile.children.cycles-pp.do_page_fault
4.13 +1.4 5.52 perf-profile.children.cycles-pp.iomap_readpage_actor
96.54 +1.5 98.05 perf-profile.children.cycles-pp.__do_page_fault
67.50 +1.6 69.08 perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.47 +1.7 3.19 ± 3% perf-profile.children.cycles-pp.get_page_from_freelist
95.99 +2.0 98.01 perf-profile.children.cycles-pp.handle_mm_fault
95.82 +2.2 97.99 perf-profile.children.cycles-pp.__handle_mm_fault
0.00 +3.7 3.65 ± 3% perf-profile.children.cycles-pp.shadow_lru_isolate
0.00 +3.7 3.67 ± 3% perf-profile.children.cycles-pp.__list_lru_walk_one
0.00 +3.7 3.71 ± 3% perf-profile.children.cycles-pp.list_lru_walk_one_irq
0.00 +3.7 3.71 ± 3% perf-profile.children.cycles-pp.do_shrink_slab
0.00 +3.7 3.72 ± 3% perf-profile.children.cycles-pp.shrink_slab
18.12 ± 2% +4.4 22.51 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
16.98 ± 3% +5.4 22.42 perf-profile.children.cycles-pp.add_to_page_cache_lru
45.19 +5.5 50.72 perf-profile.children.cycles-pp._raw_spin_lock_irq
15.29 ± 3% +5.8 21.12 perf-profile.children.cycles-pp.__lru_cache_add
15.32 ± 3% +5.9 21.18 perf-profile.children.cycles-pp.pagevec_lru_move_fn
90.60 +6.8 97.44 perf-profile.children.cycles-pp.__do_fault
90.50 +6.9 97.44 perf-profile.children.cycles-pp.__xfs_filemap_fault
90.32 +7.1 97.43 perf-profile.children.cycles-pp.filemap_fault
64.32 +10.7 75.04 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
4.80 +23.3 28.11 perf-profile.children.cycles-pp.iomap_apply
0.00 +28.1 28.08 perf-profile.children.cycles-pp.iomap_readpages_actor
0.00 +28.1 28.12 perf-profile.children.cycles-pp.iomap_readpages
0.00 +28.1 28.13 perf-profile.children.cycles-pp.read_pages
0.00 +97.4 97.38 perf-profile.children.cycles-pp.__do_page_cache_readahead
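(The `children.cycles-pp` rows above credit a function for all samples anywhere in its call subtree, while the `self.cycles-pp` rows below count only cycles spent in the function's own body. A minimal sketch of that distinction, using hypothetical folded stacks rather than the robot's actual perf data:)

```python
# Hypothetical folded stacks ("caller;...;leaf" -> sample count), not real profile data.
stacks = {
    "page_fault;filemap_fault;xas_load": 30,
    "page_fault;filemap_fault": 10,
    "page_fault": 60,
}
total = sum(stacks.values())

self_pct, children_pct = {}, {}
for stack, n in stacks.items():
    frames = stack.split(";")
    # "self": only the leaf frame owns these cycles.
    self_pct[frames[-1]] = self_pct.get(frames[-1], 0) + n
    # "children": every function on the stack gets credit once per sample.
    for f in set(frames):
        children_pct[f] = children_pct.get(f, 0) + n

pct = lambda d: {k: 100.0 * v / total for k, v in d.items()}
print(pct(children_pct)["page_fault"])  # 100.0: every sample passes through page_fault
print(pct(self_pct)["page_fault"])      # 60.0: only its own cycles
```

This is why `page_fault` can sit near 97% in the children view while barely appearing in the self view: almost everything in this workload happens beneath it.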
2.83 -2.5 0.39 perf-profile.self.cycles-pp.xas_load
6.38 ± 2% -1.6 4.73 perf-profile.self.cycles-pp.smp_call_function_many
1.80 -1.4 0.42 perf-profile.self.cycles-pp.page_vma_mapped_walk
1.98 -1.3 0.66 perf-profile.self.cycles-pp.xas_create
1.05 -0.9 0.12 ± 7% perf-profile.self.cycles-pp.filemap_map_pages
1.03 -0.9 0.16 ± 2% perf-profile.self.cycles-pp._raw_spin_lock
0.52 ± 2% -0.5 0.07 ± 6% perf-profile.self.cycles-pp.native_irq_return_iret
0.39 -0.2 0.14 ± 5% perf-profile.self.cycles-pp.xas_clear_mark
0.27 -0.2 0.10 ± 4% perf-profile.self.cycles-pp.up_read
0.28 ± 3% -0.1 0.15 ± 5% perf-profile.self.cycles-pp.down_read
0.33 ± 2% -0.1 0.21 ± 2% perf-profile.self.cycles-pp.___might_sleep
0.37 -0.1 0.25 perf-profile.self.cycles-pp.try_to_unmap_one
0.15 ± 2% -0.1 0.05 ± 8% perf-profile.self.cycles-pp.workingset_refault
0.15 ± 5% -0.1 0.06 perf-profile.self.cycles-pp.__might_sleep
0.27 ± 3% -0.1 0.19 ± 2% perf-profile.self.cycles-pp.page_referenced_one
0.23 ± 2% -0.1 0.16 ± 2% perf-profile.self.cycles-pp.rmap_walk_file
0.14 ± 3% -0.1 0.08 perf-profile.self.cycles-pp.unlock_page
0.27 ± 5% -0.0 0.23 ± 5% perf-profile.self.cycles-pp.smp_call_function_single
0.11 ± 4% -0.0 0.07 ± 6% perf-profile.self.cycles-pp.page_add_file_rmap
0.15 ± 5% -0.0 0.11 ± 3% perf-profile.self.cycles-pp.__mod_node_page_state
0.15 -0.0 0.11 ± 4% perf-profile.self.cycles-pp.putback_inactive_pages
0.10 -0.0 0.07 ± 6% perf-profile.self.cycles-pp.page_remove_rmap
0.10 ± 7% -0.0 0.07 ± 6% perf-profile.self.cycles-pp.xas_start
0.09 ± 4% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.06 -0.0 0.04 ± 57% perf-profile.self.cycles-pp.PageHuge
0.15 ± 3% -0.0 0.12 ± 4% perf-profile.self.cycles-pp.workingset_update_node
0.08 ± 5% -0.0 0.06 ± 7% perf-profile.self.cycles-pp._cond_resched
0.14 -0.0 0.13 ± 3% perf-profile.self.cycles-pp.page_evictable
0.07 -0.0 0.06 perf-profile.self.cycles-pp.mem_cgroup_update_lru_size
0.05 +0.0 0.07 ± 7% perf-profile.self.cycles-pp.release_pages
0.05 +0.0 0.07 ± 6% perf-profile.self.cycles-pp.__inc_node_page_state
0.07 ± 7% +0.0 0.08 ± 5% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.16 +0.0 0.18 ± 5% perf-profile.self.cycles-pp.iomap_readpage_actor
0.07 +0.0 0.09 ± 4% perf-profile.self.cycles-pp.__alloc_pages_nodemask
0.04 ± 57% +0.0 0.06 ± 6% perf-profile.self.cycles-pp.mem_cgroup_commit_charge
0.08 +0.0 0.11 ± 3% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.12 ± 7% +0.0 0.15 perf-profile.self.cycles-pp.__remove_mapping
0.75 +0.0 0.79 perf-profile.self.cycles-pp.__isolate_lru_page
0.05 ± 8% +0.0 0.10 ± 5% perf-profile.self.cycles-pp.__delete_from_page_cache
0.01 ±173% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.xas_init_marks
0.01 ±173% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.free_unref_page_prepare
0.12 ± 4% +0.0 0.17 perf-profile.self.cycles-pp.__add_to_page_cache_locked
0.01 ±173% +0.0 0.06 perf-profile.self.cycles-pp.pagevec_lru_move_fn
0.00 +0.1 0.05 perf-profile.self.cycles-pp.__do_page_cache_readahead
0.00 +0.1 0.05 perf-profile.self.cycles-pp.mem_cgroup_try_charge
0.00 +0.1 0.05 perf-profile.self.cycles-pp.alloc_pages_current
0.00 +0.1 0.05 perf-profile.self.cycles-pp.__mod_zone_page_state
0.00 +0.1 0.05 ± 9% perf-profile.self.cycles-pp.free_unref_page_commit
0.08 +0.1 0.14 ± 6% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.42 +0.1 0.49 perf-profile.self.cycles-pp.shrink_page_list
0.66 +0.1 0.73 perf-profile.self.cycles-pp.__list_del_entry_valid
0.12 ± 3% +0.1 0.20 ± 2% perf-profile.self.cycles-pp.xas_store
0.00 +0.1 0.08 ± 6% perf-profile.self.cycles-pp.iomap_readpages_actor
0.00 +0.1 0.08 ± 5% perf-profile.self.cycles-pp.xas_free_nodes
0.30 +0.1 0.44 perf-profile.self.cycles-pp.free_pcppages_bulk
0.93 +0.3 1.27 ± 3% perf-profile.self.cycles-pp.iomap_set_range_uptodate
0.67 ± 3% +0.4 1.03 perf-profile.self.cycles-pp.get_page_from_freelist
0.67 ± 5% +0.6 1.29 ± 2% perf-profile.self.cycles-pp.wakeup_kswapd
2.98 +1.0 3.99 perf-profile.self.cycles-pp.memset_erms
64.31 +10.7 75.04 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
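(The %change column in these tables is simply the absolute delta in cycle percentage between the parent commit f3a1d5298a and 48dc11646a, printed to one decimal place. A small sketch of that arithmetic, seeded with two rows copied from the self-profile table above; the robot's actual tooling may format things differently:)

```python
# Mean self-cycle percentages for two symbols, taken from the table above.
base = {
    "native_queued_spin_lock_slowpath": 64.31,  # parent commit f3a1d5298a
    "memset_erms": 2.98,
}
patched = {
    "native_queued_spin_lock_slowpath": 75.04,  # commit 48dc11646a
    "memset_erms": 3.99,
}

for sym in base:
    delta = patched[sym] - base[sym]
    # Matches the report's layout: baseline, signed delta, patched value, metric name.
    print(f"{base[sym]:>8.2f} {delta:+.1f} {patched[sym]:>8.2f} "
          f"perf-profile.self.cycles-pp.{sym}")
```

Reading it this way, the headline shift is clear: the patched kernel spends ~10.7 additional percentage points of all cycles spinning in `native_queued_spin_lock_slowpath`, which lines up with the -9.5% throughput regression.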
vm-scalability.time.user_time
350 +-+-------------------------------------------------------------------+
| +. |
300 +-+..+. .+..+. .+.. .+. .. +.+..+.+. .+.+.+..+.+.+..+.+.+..+. .+.|
| : + + + + +. +.+. |
250 +-+ |
| : |
200 +-+ |
|: |
150 +-+ |
|: |
100 +-+ |
|: |
50 +-+ |
O O O O O O O O O O O O O O O O O O O O O |
0 +-+-------------------------------------------------------------------+
vm-scalability.time.system_time
30000 +-+-----------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O |
25000 +-+..+.+.+.+..+.+.+.+..+.+.+.+..+.+.+..+.+.+.+..+.+.+.+..+.+.+.+..+.|
| : |
| : |
20000 +-+ |
|: |
15000 +-+ |
|: |
10000 +-+ |
|: |
|: |
5000 +-+ |
| |
0 +-+-----------------------------------------------------------------+
vm-scalability.time.percent_of_cpu_this_job_got
8000 O-O--O-O-O--O-O-O-O--O-O-O--O-O-O--O-O-O-O--O-O----------------------+
| |
7000 +-+..+.+.+..+.+.+.+..+.+.+..+.+.+..+.+.+.+..+.+.+..+.+.+..+.+.+.+..+.|
6000 +-+ |
| : |
5000 +-+ |
|: |
4000 +-+ |
|: |
3000 +-+ |
2000 +-+ |
|: |
1000 +-+ |
| |
0 +-+------------------------------------------------------------------+
vm-scalability.time.elapsed_time
400 +-+-------------------------------------------------------------------+
| +..O.+.O..O.O.+..O.+.+..O.O.O..O.O.O..O.O.O..O.+.+..+.+.+..+.+.+..+.|
350 O-O O O O O |
300 +-+ |
| : |
250 +-+ |
|: |
200 +-+ |
|: |
150 +-+ |
100 +-+ |
| |
50 +-+ |
| |
0 +-+-------------------------------------------------------------------+
vm-scalability.time.elapsed_time.max
400 +-+-------------------------------------------------------------------+
| +..O.+.O..O.O.+..O.+.+..O.O.O..O.O.O..O.O.O..O.+.+..+.+.+..+.+.+..+.|
350 O-O O O O O |
300 +-+ |
| : |
250 +-+ |
|: |
200 +-+ |
|: |
150 +-+ |
100 +-+ |
| |
50 +-+ |
| |
0 +-+-------------------------------------------------------------------+
vm-scalability.time.major_page_faults
1e+09 +-+-----------------------------------------------------------------+
9e+08 +-+..+.+.+.+..+.+.+.+..+.+.+.+..+.+.+..+.+.+.+..+.+.+.+..+.+.+.+..+.|
| : |
8e+08 +-+ |
7e+08 +-+ |
|: |
6e+08 +-+ |
5e+08 +-+ |
4e+08 +-+ |
|: |
3e+08 +-+ |
2e+08 +-+ |
| |
1e+08 O-O O O O O O O O O O O O O O O O O O O O |
0 +-+-----------------------------------------------------------------+
vm-scalability.time.minor_page_faults
500000 O-O-O--O-O-O-O--O-O-O-O--O-O-O-O--O-O-O-O-O--O---------------------+
450000 +-+ |
| |
400000 +-+ |
350000 +-+ |
| |
300000 +-+ |
250000 +-+ |
200000 +-+ |
| |
150000 +-+ |
100000 +-+ |
| +.+..+.+.+.+..+.+.+.+..+.+.+.+..+.+.+.+.+..+.+.+.+..+.+.+.+..+.+.|
50000 +-+ |
0 +-+----------------------------------------------------------------+
vm-scalability.time.voluntary_context_switches
6000 +-+------------------------------------------------------------------+
| .+.+.+..+.+ + .+.+.+.. .+..+.+.+.+..+.+.+..+.+.+..+.+. .|
5000 +-+. +. +.+ +.+..+ |
| : |
| : |
4000 +-+ |
|: |
3000 +-+ |
|: |
2000 +-+ |
O:O O O O O O O O O O O O O O O O O O O O |
|: |
1000 +-+ |
| |
0 +-+------------------------------------------------------------------+
vm-scalability.throughput
2e+06 +-+---------------------------------------------------------------+
1.8e+06 +-+.+..+.+.+.+.+..+.+.+.+.+..+.+.+.+.+..+.+.+.+.+..+.+.+.+.+..+.+.|
O O O O O O O O O O O |
1.6e+06 +-+ O O O O O O O O O O |
1.4e+06 +-+ |
|: |
1.2e+06 +-+ |
1e+06 +-+ |
800000 +-+ |
|: |
600000 +-+ |
400000 +-+ |
| |
200000 +-+ |
0 +-+---------------------------------------------------------------+
vm-scalability.stddev
1.6 +-+-------------------------------------------------------------------+
| |
1.4 +-+ O O |
1.2 +-+ |
| |
1 +-+ |
| |
0.8 +-+ O |
| O O O O O O |
0.6 O-O O O O O O O O |
0.4 +-+ O O O |
| |
0.2 +-+ |
|.+..+.+.+..+.+.+..+.+.+..+.+.+..+.+.+..+.+.+..+.+.+..+.+.+..+.+.+..+.|
0 +-+-------------------------------------------------------------------+
vm-scalability.free_time
0.14 +-+------------------------------------------------------------------+
| .+. .+. .+.. .+.. .+. .+.+. .+. .+.|
0.12 +-+. +.+..+.+. .+..+ +..+.+ +.+.+ + +. +..+ +.+. |
| : + |
0.1 +-+ |
O O O O O O O O O O O O O O O O O O O O O |
0.08 +-+ |
|: |
0.06 +-+ |
|: |
0.04 +-+ |
|: |
0.02 +-+ |
| |
0 +-+------------------------------------------------------------------+
vm-scalability.median
25000 +-+-----------------------------------------------------------------+
| |
| +.. .+. .+.. .+. .+..+.+.+.+.. .+. .+.+.+.+..+.+.+.+..+.+.+.+..+.|
20000 +-+ + + + O + O + +. |
O O O O O O O O O O O O O O O O O O O |
| : |
15000 +-+ |
|: |
10000 +-+ |
|: |
|: |
5000 +-+ |
| |
| |
0 +-+-----------------------------------------------------------------+
vm-scalability.workload
6e+08 +-+-----------------------------------------------------------------+
O O..O.O.O.O..O.O.O.O..O.O.O.O..O.O.O..O.O.O.O..+.+.+.+..+.+.+.+..+.|
5e+08 +-+ |
| : |
| : |
4e+08 +-+ |
|: |
3e+08 +-+ |
|: |
2e+08 +-+ |
|: |
| |
1e+08 +-+ |
| |
0 +-+-----------------------------------------------------------------+
pmeter.performance_per_watt
6000 +-+------------------------------------------------------------------+
| :: |
5000 +-+ : : |
| : : |
| +..+.+.+..+.+.+.+..+ +..+.+.+..+.+.+.+..+.+.+..+.+.+..+.+.+.+..+.|
4000 O-O O O O O O O O O O O O O O O O O O O O |
| : |
3000 +-+ |
|: |
2000 +-+ |
|: |
|: |
1000 +-+ |
| |
0 +-+------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
View attachment "config-4.20.0-rc6-00085-g48dc116" of type "text/plain" (168504 bytes)
View attachment "job-script" of type "text/plain" (7546 bytes)
View attachment "job.yaml" of type "text/plain" (5084 bytes)
View attachment "reproduce" of type "text/plain" (20962 bytes)