Date: Mon, 9 Sep 2019 10:15:44 +0800
From: kernel test robot <rong.a.chen@...el.com>
To: Shakeel Butt <shakeelb@...gle.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Roman Gushchin <guro@...com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.com>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Chris Down <chris@...isdown.name>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [mm, memcg] 1e577f970f: will-it-scale.per_process_ops -7.2% regression
Greetings,
FYI, we noticed a -7.2% regression of will-it-scale.per_process_ops due to commit:
commit: 1e577f970f66a53d429cbee37b36177c9712f488 ("mm, memcg: introduce memory.events.local")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: will-it-scale
on test machine: 96 threads Intel(R) Xeon(R) CPU @ 2.30GHz with 128G memory
with following parameters:
nr_task: 50%
mode: process
test: page_fault2
ucode: 0x400001c
cpufreq_governor: performance
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see whether the testcase scales. It builds both a process-based and a thread-based variant of each test in order to expose any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
In addition, the commit has a significant impact on the following tests:
+------------------+------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median -13.0% regression |
| test machine | 160 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=1T |
| | test=lru-shm |
| | ucode=0xb000036 |
+------------------+------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@...el.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2-clear/process/50%/clear-ota-25590-x86_64-2018-10-18.cgz/lkp-csl-2sp4/page_fault2/will-it-scale/0x400001c
commit:
ec16545096 ("memcg, fsnotify: no oom-kill for remote memcg charging")
1e577f970f ("mm, memcg: introduce memory.events.local")
ec165450968b2629 1e577f970f66a53d429cbee37b3
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
4:4 -2% 4:4 perf-profile.calltrace.cycles-pp.error_entry
4:4 -1% 4:4 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
5:4 -1% 5:4 perf-profile.children.cycles-pp.error_entry
0:4 -0% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
208264 -7.2% 193178 will-it-scale.per_process_ops
9996705 -7.2% 9272609 will-it-scale.workload
1.88 ± 2% +0.2 2.07 mpstat.cpu.all.usr%
75.16 +1.9% 76.59 boot-time.boot
6771 +2.0% 6908 boot-time.idle
298.63 -1.5% 294.02 turbostat.PkgWatt
131.45 -3.8% 126.39 turbostat.RAMWatt
11773 ± 2% +5.8% 12459 ± 5% slabinfo.kmalloc-96.active_objs
11899 ± 2% +5.6% 12570 ± 5% slabinfo.kmalloc-96.num_objs
480.00 ± 16% -23.3% 368.00 ± 13% slabinfo.kmalloc-rcl-128.active_objs
480.00 ± 16% -23.3% 368.00 ± 13% slabinfo.kmalloc-rcl-128.num_objs
3.016e+09 -7.3% 2.797e+09 proc-vmstat.numa_hit
3.016e+09 -7.3% 2.797e+09 proc-vmstat.numa_local
17450 ± 5% -14.5% 14912 ± 6% proc-vmstat.pgactivate
3.017e+09 -7.2% 2.798e+09 proc-vmstat.pgalloc_normal
3.009e+09 -7.2% 2.791e+09 proc-vmstat.pgfault
3.016e+09 -7.2% 2.798e+09 proc-vmstat.pgfree
20008 ± 26% +37.0% 27407 ± 8% sched_debug.cfs_rq:/.exec_clock.min
189.33 +720.9% 1554 ±128% sched_debug.cfs_rq:/.load_avg.max
962257 ± 27% +37.1% 1318963 ± 10% sched_debug.cfs_rq:/.min_vruntime.min
4492285 ± 9% -24.4% 3396980 ± 14% sched_debug.cfs_rq:/.spread0.max
287345 ± 14% +45.1% 417049 ± 33% sched_debug.cpu.avg_idle.max
40048 ± 11% +40.1% 56118 ± 34% sched_debug.cpu.avg_idle.stddev
137993 ± 18% +54.7% 213511 ± 30% sched_debug.cpu.max_idle_balance_cost.max
16624 ± 20% +60.4% 26669 ± 39% sched_debug.cpu.max_idle_balance_cost.stddev
18665 ± 18% +26.8% 23665 ± 18% softirqs.CPU15.SCHED
24899 ± 11% -18.4% 20325 ± 7% softirqs.CPU21.SCHED
24857 ± 10% -20.5% 19766 ± 11% softirqs.CPU30.SCHED
14209 ± 38% +66.4% 23644 ± 13% softirqs.CPU34.SCHED
21668 ± 2% -18.5% 17663 ± 17% softirqs.CPU41.SCHED
15193 ± 16% +27.6% 19383 ± 7% softirqs.CPU69.SCHED
14814 ± 18% +37.7% 20393 ± 10% softirqs.CPU78.SCHED
25299 ± 7% -17.2% 20955 ± 12% softirqs.CPU8.SCHED
725.75 ± 5% -9.5% 657.00 interrupts.4:IO-APIC.4-edge.ttyS0
17.25 ± 42% +1407.2% 260.00 ±149% interrupts.CPU18.RES:Rescheduling_interrupts
6134 ± 38% -39.9% 3686 ± 37% interrupts.CPU25.NMI:Non-maskable_interrupts
6134 ± 38% -39.9% 3686 ± 37% interrupts.CPU25.PMI:Performance_monitoring_interrupts
52.75 ± 75% +367.8% 246.75 ± 68% interrupts.CPU27.RES:Rescheduling_interrupts
393.75 ± 68% -83.9% 63.25 ±121% interrupts.CPU3.RES:Rescheduling_interrupts
7887 ± 10% -54.3% 3605 ± 20% interrupts.CPU32.NMI:Non-maskable_interrupts
7887 ± 10% -54.3% 3605 ± 20% interrupts.CPU32.PMI:Performance_monitoring_interrupts
8647 -33.9% 5717 ± 42% interrupts.CPU34.NMI:Non-maskable_interrupts
8647 -33.9% 5717 ± 42% interrupts.CPU34.PMI:Performance_monitoring_interrupts
7125 ± 22% -49.7% 3581 ± 39% interrupts.CPU4.NMI:Non-maskable_interrupts
7125 ± 22% -49.7% 3581 ± 39% interrupts.CPU4.PMI:Performance_monitoring_interrupts
171.25 ±156% +366.0% 798.00 ±120% interrupts.CPU44.RES:Rescheduling_interrupts
7084 ± 16% -31.8% 4828 ± 28% interrupts.CPU53.NMI:Non-maskable_interrupts
7084 ± 16% -31.8% 4828 ± 28% interrupts.CPU53.PMI:Performance_monitoring_interrupts
6189 ± 41% -46.5% 3310 ± 48% interrupts.CPU54.NMI:Non-maskable_interrupts
6189 ± 41% -46.5% 3310 ± 48% interrupts.CPU54.PMI:Performance_monitoring_interrupts
5873 ± 27% -52.5% 2790 ± 62% interrupts.CPU62.NMI:Non-maskable_interrupts
5873 ± 27% -52.5% 2790 ± 62% interrupts.CPU62.PMI:Performance_monitoring_interrupts
13.25 ± 22% +1422.6% 201.75 ±136% interrupts.CPU62.RES:Rescheduling_interrupts
10.75 ± 22% +465.1% 60.75 ± 94% interrupts.CPU64.RES:Rescheduling_interrupts
345.50 ±120% -96.1% 13.50 ± 28% interrupts.CPU8.RES:Rescheduling_interrupts
32.00 ± 31% +262.5% 116.00 ±113% interrupts.CPU84.RES:Rescheduling_interrupts
19.75 ± 48% +941.8% 205.75 ±136% interrupts.CPU90.RES:Rescheduling_interrupts
119.50 ± 15% -24.1% 90.75 ± 12% interrupts.IWI:IRQ_work_interrupts
588624 ± 7% -17.4% 485913 ± 9% interrupts.NMI:Non-maskable_interrupts
588624 ± 7% -17.4% 485913 ± 9% interrupts.PMI:Performance_monitoring_interrupts
20329 ± 3% -11.0% 18098 ± 4% interrupts.RES:Rescheduling_interrupts
22.73 +2.2% 23.24 perf-stat.i.MPKI
9.332e+09 -5.7% 8.8e+09 perf-stat.i.branch-instructions
26885789 -5.7% 25357143 perf-stat.i.branch-misses
79.06 -1.7 77.37 perf-stat.i.cache-miss-rate%
8.564e+08 -6.0% 8.053e+08 perf-stat.i.cache-misses
1.083e+09 -3.9% 1.041e+09 perf-stat.i.cache-references
3.13 +6.4% 3.33 perf-stat.i.cpi
174.10 +6.3% 185.10 perf-stat.i.cycles-between-cache-misses
1.28e+10 -5.7% 1.206e+10 perf-stat.i.dTLB-loads
1.10 +0.1 1.16 perf-stat.i.dTLB-store-miss-rate%
6.836e+09 -6.9% 6.367e+09 perf-stat.i.dTLB-stores
22964025 -5.1% 21793736 perf-stat.i.iTLB-load-misses
4.766e+10 -6.0% 4.479e+10 perf-stat.i.instructions
0.32 -6.0% 0.30 perf-stat.i.ipc
10000057 -7.2% 9275971 perf-stat.i.minor-faults
1.42 +0.5 1.94 ± 4% perf-stat.i.node-load-miss-rate%
3780251 +25.3% 4737976 ± 4% perf-stat.i.node-load-misses
2.628e+08 -8.8% 2.395e+08 perf-stat.i.node-loads
7.34 +0.5 7.87 perf-stat.i.node-store-miss-rate%
59339717 -8.4% 54339533 perf-stat.i.node-stores
10000075 -7.2% 9275983 perf-stat.i.page-faults
22.73 +2.2% 23.23 perf-stat.overall.MPKI
79.05 -1.7 77.37 perf-stat.overall.cache-miss-rate%
3.13 +6.4% 3.33 perf-stat.overall.cpi
174.08 +6.3% 185.08 perf-stat.overall.cycles-between-cache-misses
1.10 +0.1 1.15 perf-stat.overall.dTLB-store-miss-rate%
2075 -1.0% 2055 perf-stat.overall.instructions-per-iTLB-miss
0.32 -6.0% 0.30 perf-stat.overall.ipc
1.42 +0.5 1.94 ± 4% perf-stat.overall.node-load-miss-rate%
7.34 +0.5 7.87 perf-stat.overall.node-store-miss-rate%
1419837 +1.2% 1437149 perf-stat.overall.path-length
9.3e+09 -5.7% 8.77e+09 perf-stat.ps.branch-instructions
26796456 -5.7% 25273598 perf-stat.ps.branch-misses
8.535e+08 -6.0% 8.025e+08 perf-stat.ps.cache-misses
1.08e+09 -3.9% 1.037e+09 perf-stat.ps.cache-references
1.275e+10 -5.7% 1.202e+10 perf-stat.ps.dTLB-loads
6.813e+09 -6.9% 6.345e+09 perf-stat.ps.dTLB-stores
22886580 -5.1% 21720014 perf-stat.ps.iTLB-load-misses
4.749e+10 -6.0% 4.464e+10 perf-stat.ps.instructions
9966243 -7.2% 9244557 perf-stat.ps.minor-faults
3767498 +25.3% 4722078 ± 4% perf-stat.ps.node-load-misses
2.619e+08 -8.8% 2.387e+08 perf-stat.ps.node-loads
59139171 -8.4% 54155638 perf-stat.ps.node-stores
9966269 -7.2% 9244575 perf-stat.ps.page-faults
1.419e+13 -6.1% 1.333e+13 perf-stat.total.instructions
8.87 ± 14% -3.1 5.78 ± 12% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault
9.47 ± 13% -3.1 6.39 ± 12% perf-profile.calltrace.cycles-pp.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
9.16 ± 13% -3.1 6.09 ± 12% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault
6.63 ± 15% -3.0 3.63 ± 14% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault
6.62 ± 15% -3.0 3.62 ± 14% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
2.90 ± 8% -0.6 2.31 ± 6% perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.tlb_flush_mmu.unmap_page_range.unmap_vmas
2.67 ± 8% -0.6 2.10 ± 6% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu.unmap_page_range
2.14 ± 8% -0.6 1.57 ± 7% perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
2.03 ± 8% -0.6 1.47 ± 7% perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
1.88 ± 8% -0.6 1.33 ± 6% perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
1.77 ± 8% -0.6 1.21 ± 7% perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
1.42 ± 8% -0.5 0.87 ± 8% perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
1.65 ± 9% -0.5 1.12 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu
1.64 ± 8% -0.5 1.10 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages
0.92 ± 13% +0.3 1.26 ± 10% perf-profile.calltrace.cycles-pp.get_mem_cgroup_from_mm.mem_cgroup_try_charge.mem_cgroup_try_charge_delay.__handle_mm_fault.handle_mm_fault
0.77 ± 8% +0.6 1.39 ± 11% perf-profile.calltrace.cycles-pp.mem_cgroup_commit_charge.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
0.60 ± 7% +0.6 1.22 ± 11% perf-profile.calltrace.cycles-pp.mem_cgroup_charge_statistics.mem_cgroup_commit_charge.alloc_set_pte.finish_fault.__handle_mm_fault
0.00 +1.0 1.00 ± 12% perf-profile.calltrace.cycles-pp.__mod_memcg_state.mem_cgroup_charge_statistics.mem_cgroup_commit_charge.alloc_set_pte.finish_fault
0.00 +1.1 1.08 ± 10% perf-profile.calltrace.cycles-pp.__count_memcg_events.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
12.20 ± 6% +3.1 15.35 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte
12.27 ± 6% +3.2 15.42 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault
14.03 ± 6% +3.3 17.35 ± 7% perf-profile.calltrace.cycles-pp.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
13.89 ± 6% +3.3 17.22 ± 7% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault
11.46 ± 12% -3.7 7.74 ± 10% perf-profile.children.cycles-pp._raw_spin_lock
8.96 ± 14% -3.1 5.86 ± 12% perf-profile.children.cycles-pp.get_page_from_freelist
9.27 ± 13% -3.1 6.18 ± 12% perf-profile.children.cycles-pp.__alloc_pages_nodemask
9.49 ± 13% -3.1 6.41 ± 11% perf-profile.children.cycles-pp.alloc_pages_vma
3.36 ± 8% -0.7 2.67 ± 7% perf-profile.children.cycles-pp.free_unref_page_list
3.08 ± 8% -0.7 2.40 ± 6% perf-profile.children.cycles-pp.free_pcppages_bulk
2.15 ± 8% -0.6 1.58 ± 7% perf-profile.children.cycles-pp.__do_fault
2.03 ± 8% -0.6 1.47 ± 7% perf-profile.children.cycles-pp.shmem_fault
1.77 ± 8% -0.6 1.22 ± 7% perf-profile.children.cycles-pp.find_lock_entry
1.89 ± 8% -0.6 1.34 ± 6% perf-profile.children.cycles-pp.shmem_getpage_gfp
1.42 ± 8% -0.5 0.88 ± 8% perf-profile.children.cycles-pp.find_get_entry
0.80 ± 9% +0.3 1.10 ± 10% perf-profile.children.cycles-pp.__mod_lruvec_state
0.93 ± 13% +0.3 1.27 ± 10% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.61 ± 7% +0.6 1.23 ± 11% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.78 ± 7% +0.6 1.39 ± 11% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.61 ± 9% +0.7 1.29 ± 9% perf-profile.children.cycles-pp.__count_memcg_events
0.90 ± 8% +0.9 1.80 ± 11% perf-profile.children.cycles-pp.__mod_memcg_state
14.03 ± 6% +3.3 17.36 ± 7% perf-profile.children.cycles-pp.__lru_cache_add
13.92 ± 6% +3.3 17.25 ± 7% perf-profile.children.cycles-pp.pagevec_lru_move_fn
1.02 ± 8% -0.5 0.48 ± 9% perf-profile.self.cycles-pp.find_get_entry
0.35 ± 14% +0.1 0.46 ± 9% perf-profile.self.cycles-pp.mem_cgroup_try_charge
0.93 ± 13% +0.3 1.26 ± 10% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.59 ± 8% +0.7 1.28 ± 9% perf-profile.self.cycles-pp.__count_memcg_events
0.90 ± 9% +0.9 1.79 ± 11% perf-profile.self.cycles-pp.__mod_memcg_state
will-it-scale.per_process_ops
250000 +-+----------------------------------------------------------------+
| |
| .+.+. .+.+.+.+..+.+.+.+..+.+.+.+.+..+.+.+.+..+.+.+.+..+ |
200000 O-O.O. O O O.O. O O O O O O O O O O O O O O O O O O O O O O O O
| : |
| : |
150000 +-+ |
|: |
100000 +-+ |
|: |
|: |
50000 +-+ |
| |
| |
0 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-bdw-ex2: 160 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/300s/1T/lkp-bdw-ex2/lru-shm/vm-scalability/0xb000036
commit:
ec16545096 ("memcg, fsnotify: no oom-kill for remote memcg charging")
1e577f970f ("mm, memcg: introduce memory.events.local")
ec165450968b2629 1e577f970f66a53d429cbee37b3
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
4:4 25% 5:4 perf-profile.calltrace.cycles-pp.error_entry
4:4 26% 5:4 perf-profile.children.cycles-pp.error_entry
3:4 21% 4:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
0.08 ± 2% -7.6% 0.07 vm-scalability.free_time
453831 -13.0% 394631 vm-scalability.median
87319393 -13.1% 75860153 vm-scalability.throughput
2126 ± 2% +12.0% 2381 vm-scalability.time.percent_of_cpu_this_job_got
4441 ± 2% +17.9% 5233 vm-scalability.time.system_time
54.78 ± 2% -4.4% 52.36 boot-time.boot
9222 -6.6% 8614 ± 2% boot-time.idle
11735511 +24.2% 14571272 ± 2% meminfo.Mapped
26619 +17.5% 31283 meminfo.PageTables
0.03 ± 43% -0.0 0.01 ±100% mpstat.cpu.all.iowait%
6.97 +1.8 8.75 mpstat.cpu.all.sys%
89.00 -2.2% 87.00 vmstat.cpu.id
19.25 ± 2% +22.1% 23.50 ± 2% vmstat.procs.r
4873613 ±100% -76.4% 1151031 ±154% cpuidle.C1.usage
6.044e+08 ±160% +219.4% 1.931e+09 ± 56% cpuidle.C1E.time
5543505 ±143% -94.1% 326343 ± 45% cpuidle.POLL.time
905525 ±135% -88.3% 106104 ± 5% cpuidle.POLL.usage
331.00 +18.1% 391.00 turbostat.Avg_MHz
14.01 +2.2 16.21 turbostat.Busy%
4872319 ±100% -76.4% 1149325 ±155% turbostat.C1
0.95 ±161% +2.2 3.17 ± 56% turbostat.C1E%
51.00 ± 3% +10.8% 56.50 ± 3% turbostat.PkgTmp
308.72 ± 4% +7.3% 331.33 turbostat.PkgWatt
2768727 +20.6% 3338396 ± 2% numa-meminfo.node0.Mapped
88169 ± 9% +15.3% 101648 ± 6% numa-meminfo.node1.KReclaimable
2742386 ± 2% +27.0% 3481981 numa-meminfo.node1.Mapped
5416 ± 4% +23.7% 6699 numa-meminfo.node1.PageTables
88169 ± 9% +15.3% 101648 ± 6% numa-meminfo.node1.SReclaimable
141351 ± 5% +10.0% 155426 ± 7% numa-meminfo.node1.Slab
2840934 ± 3% +24.1% 3524572 ± 2% numa-meminfo.node2.Mapped
998.50 ± 35% -59.2% 407.25 ±102% numa-meminfo.node3.Inactive(file)
3022322 ± 2% +24.2% 3754006 ± 3% numa-meminfo.node3.Mapped
701407 ± 3% +18.4% 830770 ± 2% numa-vmstat.node0.nr_mapped
694990 ± 2% +20.9% 840082 ± 3% numa-vmstat.node1.nr_mapped
1384 ± 4% +16.8% 1617 ± 3% numa-vmstat.node1.nr_page_table_pages
22041 ± 9% +15.4% 25433 ± 6% numa-vmstat.node1.nr_slab_reclaimable
717728 +22.7% 880970 ± 3% numa-vmstat.node2.nr_mapped
1561 ± 19% +28.7% 2009 ± 14% numa-vmstat.node2.nr_page_table_pages
249.25 ± 35% -59.3% 101.50 ±102% numa-vmstat.node3.nr_inactive_file
759778 ± 4% +22.1% 927375 ± 5% numa-vmstat.node3.nr_mapped
249.25 ± 35% -59.3% 101.50 ±102% numa-vmstat.node3.nr_zone_inactive_file
17748 +12.1% 19898 sched_debug.cfs_rq:/.exec_clock.avg
15366 ± 3% +13.8% 17486 sched_debug.cfs_rq:/.exec_clock.min
3136948 +14.6% 3595078 sched_debug.cfs_rq:/.min_vruntime.avg
3312023 +15.0% 3807455 sched_debug.cfs_rq:/.min_vruntime.max
2777689 ± 5% +12.9% 3135353 ± 4% sched_debug.cfs_rq:/.min_vruntime.min
265759 ± 3% -17.0% 220576 ± 5% sched_debug.cpu.clock.avg
265767 ± 3% -17.0% 220585 ± 5% sched_debug.cpu.clock.max
265751 ± 3% -17.0% 220568 ± 5% sched_debug.cpu.clock.min
265759 ± 3% -17.0% 220576 ± 5% sched_debug.cpu.clock_task.avg
265767 ± 3% -17.0% 220585 ± 5% sched_debug.cpu.clock_task.max
265751 ± 3% -17.0% 220568 ± 5% sched_debug.cpu.clock_task.min
29771 ± 12% +33.9% 39850 ± 26% sched_debug.cpu.ttwu_count.max
265751 ± 3% -17.0% 220568 ± 5% sched_debug.cpu_clk
260980 ± 3% -17.3% 215794 ± 5% sched_debug.ktime
269253 ± 3% -16.8% 224063 ± 4% sched_debug.sched_clk
59.50 ± 2% +75.2% 104.25 ± 71% proc-vmstat.nr_active_file
98.75 -9.1% 89.75 ± 2% proc-vmstat.nr_anon_transparent_hugepages
9468053 -2.4% 9236647 proc-vmstat.nr_dirty_background_threshold
18959420 -2.4% 18496065 proc-vmstat.nr_dirty_threshold
28182185 ± 2% +8.2% 30490955 proc-vmstat.nr_file_pages
95231947 -2.4% 92914596 proc-vmstat.nr_free_pages
27914900 ± 2% +8.3% 30222180 proc-vmstat.nr_inactive_anon
402.75 +6.0% 426.75 proc-vmstat.nr_inactive_file
26432 +1.3% 26786 proc-vmstat.nr_kernel_stack
2909616 +24.8% 3630796 proc-vmstat.nr_mapped
6604 +18.0% 7795 proc-vmstat.nr_page_table_pages
27924019 ± 2% +8.3% 30232857 proc-vmstat.nr_shmem
89458 +5.5% 94371 proc-vmstat.nr_slab_reclaimable
59.50 ± 2% +75.2% 104.25 ± 71% proc-vmstat.nr_zone_active_file
27914900 ± 2% +8.3% 30222179 proc-vmstat.nr_zone_inactive_anon
402.75 +6.0% 426.75 proc-vmstat.nr_zone_inactive_file
2289 ± 77% +684.3% 17957 ±112% proc-vmstat.numa_pages_migrated
2289 ± 77% +684.3% 17957 ±112% proc-vmstat.pgmigrate_success
1.207e+10 ± 2% +8.5% 1.31e+10 perf-stat.i.branch-instructions
29052163 +10.7% 32158881 perf-stat.i.cache-misses
3.44 ± 8% -21.9% 2.69 ± 6% perf-stat.i.cpi
6.185e+10 ± 2% +17.1% 7.246e+10 perf-stat.i.cpu-cycles
47.42 ± 2% +8.7% 51.55 ± 2% perf-stat.i.cpu-migrations
8412 ± 10% -67.8% 2711 ± 20% perf-stat.i.cycles-between-cache-misses
1.161e+10 +8.0% 1.255e+10 perf-stat.i.dTLB-loads
4.362e+10 ± 2% +8.9% 4.753e+10 perf-stat.i.instructions
1805696 +5.6% 1906421 perf-stat.i.minor-faults
1805087 +5.6% 1905647 perf-stat.i.page-faults
1.41 ± 2% +7.5% 1.52 perf-stat.overall.cpi
2130 ± 3% +6.0% 2257 ± 2% perf-stat.overall.cycles-between-cache-misses
0.71 ± 2% -7.0% 0.66 perf-stat.overall.ipc
61.18 +1.4 62.61 perf-stat.overall.node-load-miss-rate%
5377 +3.0% 5540 perf-stat.overall.path-length
1.231e+10 +9.7% 1.351e+10 perf-stat.ps.branch-instructions
29515966 +11.7% 32965638 perf-stat.ps.cache-misses
6.286e+10 ± 2% +18.3% 7.439e+10 perf-stat.ps.cpu-cycles
47.77 ± 2% +9.3% 52.19 ± 2% perf-stat.ps.cpu-migrations
1.183e+10 +9.2% 1.292e+10 perf-stat.ps.dTLB-loads
4.446e+10 +10.1% 4.895e+10 perf-stat.ps.instructions
1843881 +6.8% 1970028 perf-stat.ps.minor-faults
6078583 +3.6% 6295092 perf-stat.ps.node-stores
1843880 +6.8% 1970028 perf-stat.ps.page-faults
1.498e+13 +3.0% 1.543e+13 perf-stat.total.instructions
39017 ± 2% -9.0% 35489 ± 2% softirqs.CPU101.SCHED
38287 ± 2% -9.5% 34633 ± 4% softirqs.CPU11.SCHED
38567 ± 2% -8.5% 35301 ± 4% softirqs.CPU112.SCHED
38837 -9.7% 35054 ± 3% softirqs.CPU113.SCHED
37853 ± 2% -7.0% 35188 ± 4% softirqs.CPU119.SCHED
38469 ± 2% -9.4% 34857 ± 3% softirqs.CPU12.SCHED
38615 ± 2% -11.9% 34020 ± 5% softirqs.CPU13.SCHED
38757 ± 3% -9.4% 35129 ± 4% softirqs.CPU158.SCHED
38239 ± 2% -8.2% 35109 ± 3% softirqs.CPU16.SCHED
38539 ± 2% -8.9% 35118 ± 4% softirqs.CPU161.SCHED
38661 ± 3% -9.2% 35110 ± 4% softirqs.CPU163.SCHED
38424 ± 2% -7.9% 35396 ± 3% softirqs.CPU178.SCHED
39368 ± 2% -11.0% 35034 ± 3% softirqs.CPU19.SCHED
37789 ± 3% -9.3% 34285 ± 7% softirqs.CPU191.SCHED
38213 -7.9% 35191 ± 3% softirqs.CPU23.SCHED
38537 ± 2% -9.1% 35016 softirqs.CPU3.SCHED
38657 ± 3% -9.1% 35158 ± 4% softirqs.CPU62.SCHED
38418 ± 2% -8.7% 35081 ± 4% softirqs.CPU67.SCHED
38519 ± 2% -8.4% 35274 ± 3% softirqs.CPU69.SCHED
38754 -9.4% 35095 ± 4% softirqs.CPU71.SCHED
38386 ± 2% -8.9% 34983 ± 4% softirqs.CPU73.SCHED
37979 ± 3% -9.5% 34371 softirqs.CPU9.SCHED
38393 ± 2% -9.2% 34854 ± 2% softirqs.CPU90.SCHED
39251 ± 3% -8.8% 35779 ± 3% softirqs.CPU96.SCHED
14549 ± 3% -14.4% 12454 ± 2% softirqs.NET_RX
173.50 ± 6% -10.8% 154.75 interrupts.100:PCI-MSI.1572919-edge.eth0-TxRx-55
363.25 ± 59% -57.4% 154.75 interrupts.101:PCI-MSI.1572920-edge.eth0-TxRx-56
251.75 ± 33% -34.3% 165.50 ± 3% interrupts.46:PCI-MSI.1572865-edge.eth0-TxRx-1
245.50 ± 43% -35.6% 158.00 ± 2% interrupts.56:PCI-MSI.1572875-edge.eth0-TxRx-11
720167 -5.0% 684354 interrupts.CAL:Function_call_interrupts
251.75 ± 33% -34.3% 165.50 ± 3% interrupts.CPU1.46:PCI-MSI.1572865-edge.eth0-TxRx-1
137.50 ± 73% +11475.8% 15916 ±165% interrupts.CPU1.RES:Rescheduling_interrupts
34.25 ± 85% +4651.8% 1627 ±156% interrupts.CPU100.RES:Rescheduling_interrupts
2221 ± 44% +35.6% 3011 ± 6% interrupts.CPU103.NMI:Non-maskable_interrupts
2221 ± 44% +35.6% 3011 ± 6% interrupts.CPU103.PMI:Performance_monitoring_interrupts
2166 ± 43% +37.9% 2988 ± 4% interrupts.CPU107.NMI:Non-maskable_interrupts
2166 ± 43% +37.9% 2988 ± 4% interrupts.CPU107.PMI:Performance_monitoring_interrupts
2209 ± 43% +34.2% 2963 ± 4% interrupts.CPU108.NMI:Non-maskable_interrupts
2209 ± 43% +34.2% 2963 ± 4% interrupts.CPU108.PMI:Performance_monitoring_interrupts
245.50 ± 43% -35.6% 158.00 ± 2% interrupts.CPU11.56:PCI-MSI.1572875-edge.eth0-TxRx-11
46.25 ± 76% +7385.9% 3462 ±159% interrupts.CPU110.RES:Rescheduling_interrupts
359.75 ± 62% -72.8% 98.00 ±136% interrupts.CPU125.RES:Rescheduling_interrupts
391.25 ± 70% -93.2% 26.50 ± 99% interrupts.CPU130.RES:Rescheduling_interrupts
21.00 ± 39% +9888.1% 2097 ±165% interrupts.CPU133.RES:Rescheduling_interrupts
23.25 ± 62% +1324.7% 331.25 ±130% interrupts.CPU144.RES:Rescheduling_interrupts
1522 ± 56% +73.2% 2637 ± 25% interrupts.CPU150.NMI:Non-maskable_interrupts
1522 ± 56% +73.2% 2637 ± 25% interrupts.CPU150.PMI:Performance_monitoring_interrupts
1904 ± 53% +60.3% 3052 ± 2% interrupts.CPU154.NMI:Non-maskable_interrupts
1904 ± 53% +60.3% 3052 ± 2% interrupts.CPU154.PMI:Performance_monitoring_interrupts
32.25 ± 91% +12542.6% 4077 ±167% interrupts.CPU157.RES:Rescheduling_interrupts
15.25 ± 42% +386.9% 74.25 ±113% interrupts.CPU165.RES:Rescheduling_interrupts
243.00 ±128% -88.5% 28.00 ± 94% interrupts.CPU172.RES:Rescheduling_interrupts
1908 ± 51% +64.1% 3130 ± 3% interrupts.CPU173.NMI:Non-maskable_interrupts
1908 ± 51% +64.1% 3130 ± 3% interrupts.CPU173.PMI:Performance_monitoring_interrupts
2285 ± 43% +37.7% 3146 ± 4% interrupts.CPU180.NMI:Non-maskable_interrupts
2285 ± 43% +37.7% 3146 ± 4% interrupts.CPU180.PMI:Performance_monitoring_interrupts
1971 ± 52% +57.8% 3111 ± 5% interrupts.CPU181.NMI:Non-maskable_interrupts
1971 ± 52% +57.8% 3111 ± 5% interrupts.CPU181.PMI:Performance_monitoring_interrupts
2269 ± 48% +38.6% 3146 ± 3% interrupts.CPU184.NMI:Non-maskable_interrupts
2269 ± 48% +38.6% 3146 ± 3% interrupts.CPU184.PMI:Performance_monitoring_interrupts
2218 ± 50% +41.1% 3129 ± 4% interrupts.CPU187.NMI:Non-maskable_interrupts
2218 ± 50% +41.1% 3129 ± 4% interrupts.CPU187.PMI:Performance_monitoring_interrupts
2220 ± 41% +38.3% 3072 ± 6% interrupts.CPU20.NMI:Non-maskable_interrupts
2220 ± 41% +38.3% 3072 ± 6% interrupts.CPU20.PMI:Performance_monitoring_interrupts
2255 ± 43% +33.7% 3015 ± 4% interrupts.CPU23.NMI:Non-maskable_interrupts
2255 ± 43% +33.7% 3015 ± 4% interrupts.CPU23.PMI:Performance_monitoring_interrupts
358.00 ± 68% -81.3% 67.00 ± 76% interrupts.CPU24.RES:Rescheduling_interrupts
263.00 ± 87% -90.9% 24.00 ± 60% interrupts.CPU28.RES:Rescheduling_interrupts
1865 ± 48% +59.0% 2965 ± 3% interrupts.CPU3.NMI:Non-maskable_interrupts
1865 ± 48% +59.0% 2965 ± 3% interrupts.CPU3.PMI:Performance_monitoring_interrupts
273.00 ±102% -90.8% 25.25 ± 61% interrupts.CPU30.RES:Rescheduling_interrupts
25.75 ± 50% +292.2% 101.00 ± 81% interrupts.CPU4.RES:Rescheduling_interrupts
282.25 ± 80% -89.3% 30.25 ± 66% interrupts.CPU44.RES:Rescheduling_interrupts
20.00 ± 32% +623.8% 144.75 ± 73% interrupts.CPU52.RES:Rescheduling_interrupts
2255 ± 34% +31.8% 2972 ± 4% interrupts.CPU53.NMI:Non-maskable_interrupts
2255 ± 34% +31.8% 2972 ± 4% interrupts.CPU53.PMI:Performance_monitoring_interrupts
173.50 ± 6% -10.8% 154.75 interrupts.CPU55.100:PCI-MSI.1572919-edge.eth0-TxRx-55
363.25 ± 59% -57.4% 154.75 interrupts.CPU56.101:PCI-MSI.1572920-edge.eth0-TxRx-56
16.25 ± 6% +380.0% 78.00 ± 59% interrupts.CPU58.RES:Rescheduling_interrupts
2329 ± 38% +45.8% 3396 ± 18% interrupts.CPU60.NMI:Non-maskable_interrupts
2329 ± 38% +45.8% 3396 ± 18% interrupts.CPU60.PMI:Performance_monitoring_interrupts
29.00 ±107% +228.4% 95.25 ± 79% interrupts.CPU69.RES:Rescheduling_interrupts
2332 ± 41% +34.5% 3137 ± 4% interrupts.CPU72.NMI:Non-maskable_interrupts
2332 ± 41% +34.5% 3137 ± 4% interrupts.CPU72.PMI:Performance_monitoring_interrupts
2281 ± 44% +37.4% 3133 ± 3% interrupts.CPU76.NMI:Non-maskable_interrupts
2281 ± 44% +37.4% 3133 ± 3% interrupts.CPU76.PMI:Performance_monitoring_interrupts
2283 ± 44% +36.3% 3111 ± 3% interrupts.CPU77.NMI:Non-maskable_interrupts
2283 ± 44% +36.3% 3111 ± 3% interrupts.CPU77.PMI:Performance_monitoring_interrupts
1941 ± 51% +61.2% 3130 ± 4% interrupts.CPU78.NMI:Non-maskable_interrupts
1941 ± 51% +61.2% 3130 ± 4% interrupts.CPU78.PMI:Performance_monitoring_interrupts
2294 ± 44% +36.3% 3127 ± 4% interrupts.CPU80.NMI:Non-maskable_interrupts
2294 ± 44% +36.3% 3127 ± 4% interrupts.CPU80.PMI:Performance_monitoring_interrupts
40.28 ± 75% -22.4 17.86 ± 3% perf-profile.calltrace.cycles-pp.secondary_startup_64
40.03 ± 75% -22.3 17.76 ± 3% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
40.03 ± 75% -22.3 17.76 ± 3% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
40.00 ± 75% -22.3 17.74 ± 3% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
36.97 ± 75% -20.6 16.33 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
36.71 ± 75% -20.6 16.15 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
30.82 ± 73% -17.2 13.62 ± 6% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
5.50 ± 89% -3.2 2.33 ± 5% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
4.83 ± 83% -2.7 2.14 ± 6% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
5.46 ± 14% -2.6 2.85 ± 21% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
5.46 ± 14% -2.6 2.85 ± 21% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.29 ± 71% -1.2 1.09 ± 5% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.32 ± 73% -0.8 0.55 ± 6% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.44 ± 57% +0.4 0.89 perf-profile.calltrace.cycles-pp.__mod_memcg_state.mem_cgroup_charge_statistics.mem_cgroup_commit_charge.shmem_getpage_gfp.shmem_fault
0.98 ± 58% +0.5 1.49 ± 3% perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
0.62 ± 57% +0.5 1.14 perf-profile.calltrace.cycles-pp.mem_cgroup_charge_statistics.mem_cgroup_commit_charge.shmem_getpage_gfp.shmem_fault.__do_fault
0.00 +0.5 0.54 ± 5% perf-profile.calltrace.cycles-pp.lock_page_memcg.page_add_file_rmap.alloc_set_pte.finish_fault.__handle_mm_fault
0.79 ± 57% +0.6 1.35 perf-profile.calltrace.cycles-pp.mem_cgroup_commit_charge.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
0.00 +0.6 0.60 ± 2% perf-profile.calltrace.cycles-pp.__mod_memcg_state.__mod_lruvec_state.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add
1.41 ± 57% +0.6 2.04 ± 3% perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.13 ±173% +0.6 0.76 perf-profile.calltrace.cycles-pp.__mod_lruvec_state.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp
1.46 ± 57% +0.6 2.11 ± 3% perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.35 ± 57% +0.7 2.00 perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault
0.00 +0.8 0.77 perf-profile.calltrace.cycles-pp.__count_memcg_events.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
3.19 ± 57% +2.0 5.21 perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
4.04 ± 57% +2.2 6.27 perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
10.40 ± 57% +13.3 23.69 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp
10.44 ± 57% +13.3 23.74 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault
12.17 ± 57% +14.2 26.36 ± 3% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault.__do_fault
12.29 ± 57% +14.2 26.50 ± 3% perf-profile.calltrace.cycles-pp.__lru_cache_add.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
35.44 ± 57% +18.0 53.48 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
35.73 ± 57% +18.1 53.84 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
35.81 ± 57% +18.1 53.93 perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
44.51 ± 57% +20.4 64.90 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
45.23 ± 57% +20.9 66.12 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
46.62 ± 57% +21.2 67.81 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
47.04 ± 57% +21.3 68.30 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
47.22 ± 57% +21.3 68.56 perf-profile.calltrace.cycles-pp.page_fault
40.29 ± 75% -22.4 17.87 ± 3% perf-profile.children.cycles-pp.do_idle
40.28 ± 75% -22.4 17.86 ± 3% perf-profile.children.cycles-pp.secondary_startup_64
40.28 ± 75% -22.4 17.86 ± 3% perf-profile.children.cycles-pp.cpu_startup_entry
40.03 ± 75% -22.3 17.76 ± 3% perf-profile.children.cycles-pp.start_secondary
37.19 ± 75% -20.8 16.42 ± 4% perf-profile.children.cycles-pp.cpuidle_enter_state
37.20 ± 75% -20.8 16.43 ± 4% perf-profile.children.cycles-pp.cpuidle_enter
31.02 ± 73% -17.3 13.71 ± 6% perf-profile.children.cycles-pp.intel_idle
5.91 ± 70% -2.9 3.02 ± 3% perf-profile.children.cycles-pp.apic_timer_interrupt
5.43 ± 69% -2.7 2.73 ± 3% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
5.46 ± 14% -2.6 2.85 ± 21% perf-profile.children.cycles-pp.do_syscall_64
5.46 ± 14% -2.6 2.86 ± 21% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
2.32 ± 71% -1.2 1.11 ± 5% perf-profile.children.cycles-pp.menu_select
1.63 ± 58% -1.1 0.55 ± 11% perf-profile.children.cycles-pp.irq_exit
1.27 ± 55% -0.9 0.40 ± 14% perf-profile.children.cycles-pp.__softirqentry_text_start
1.34 ± 73% -0.8 0.56 ± 6% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
1.09 ± 70% -0.6 0.44 ± 11% perf-profile.children.cycles-pp.tick_nohz_next_event
0.71 ± 92% -0.4 0.26 ± 26% perf-profile.children.cycles-pp.truncate_inode_page
0.59 ± 40% -0.4 0.17 ± 28% perf-profile.children.cycles-pp.rebalance_domains
0.80 ± 52% -0.4 0.39 ± 12% perf-profile.children.cycles-pp.ktime_get
0.59 ± 91% -0.4 0.22 ± 28% perf-profile.children.cycles-pp.delete_from_page_cache
0.39 ± 43% -0.3 0.11 ± 20% perf-profile.children.cycles-pp.load_balance
0.42 ± 91% -0.3 0.15 ± 29% perf-profile.children.cycles-pp.__delete_from_page_cache
0.26 ± 42% -0.2 0.08 ± 20% perf-profile.children.cycles-pp.find_busiest_group
0.23 ± 42% -0.2 0.07 ± 23% perf-profile.children.cycles-pp.update_sd_lb_stats
0.23 ± 52% -0.2 0.07 ± 10% perf-profile.children.cycles-pp.run_rebalance_domains
0.22 ± 52% -0.2 0.07 ± 12% perf-profile.children.cycles-pp.update_blocked_averages
0.24 ± 62% -0.1 0.11 ± 13% perf-profile.children.cycles-pp.start_kernel
0.19 ± 56% -0.1 0.08 ± 6% perf-profile.children.cycles-pp.run_timer_softirq
0.17 ± 57% -0.1 0.08 ± 16% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.50 ± 14% -0.1 0.42 ± 8% perf-profile.children.cycles-pp.xas_store
0.14 ± 5% -0.1 0.07 ± 12% perf-profile.children.cycles-pp.ksys_read
0.14 ± 5% -0.1 0.07 ± 10% perf-profile.children.cycles-pp.vfs_read
0.10 ± 11% -0.1 0.03 ±100% perf-profile.children.cycles-pp.smp_call_function_single
0.11 ± 4% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.perf_read
0.06 ± 58% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.mem_cgroup_page_lruvec
0.30 ± 58% +0.3 0.64 ± 6% perf-profile.children.cycles-pp.lock_page_memcg
0.46 ± 57% +0.4 0.91 perf-profile.children.cycles-pp.__count_memcg_events
0.99 ± 57% +0.5 1.50 ± 4% perf-profile.children.cycles-pp.page_add_file_rmap
0.63 ± 57% +0.5 1.15 perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.79 ± 57% +0.6 1.36 perf-profile.children.cycles-pp.mem_cgroup_commit_charge
1.47 ± 57% +0.6 2.12 ± 3% perf-profile.children.cycles-pp.finish_fault
1.18 ± 54% +0.7 1.83 ± 3% perf-profile.children.cycles-pp.__mod_memcg_state
1.37 ± 57% +0.7 2.03 perf-profile.children.cycles-pp.__pagevec_lru_add_fn
1.94 ± 57% +0.8 2.70 ± 2% perf-profile.children.cycles-pp.alloc_set_pte
1.65 ± 39% +1.1 2.79 ± 2% perf-profile.children.cycles-pp.native_irq_return_iret
3.19 ± 57% +2.0 5.22 perf-profile.children.cycles-pp.prepare_exit_to_usermode
4.04 ± 57% +2.2 6.27 perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
12.66 ± 57% +13.1 25.75 ± 2% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
10.69 ± 54% +13.2 23.88 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
12.19 ± 57% +14.2 26.39 ± 3% perf-profile.children.cycles-pp.pagevec_lru_move_fn
12.30 ± 57% +14.2 26.52 ± 3% perf-profile.children.cycles-pp.__lru_cache_add
35.47 ± 57% +18.0 53.51 perf-profile.children.cycles-pp.shmem_getpage_gfp
35.74 ± 57% +18.1 53.84 perf-profile.children.cycles-pp.shmem_fault
35.81 ± 57% +18.1 53.93 perf-profile.children.cycles-pp.__do_fault
44.54 ± 57% +20.4 64.94 perf-profile.children.cycles-pp.__handle_mm_fault
45.27 ± 57% +20.9 66.17 perf-profile.children.cycles-pp.handle_mm_fault
46.65 ± 57% +21.2 67.84 perf-profile.children.cycles-pp.__do_page_fault
47.05 ± 57% +21.3 68.31 perf-profile.children.cycles-pp.do_page_fault
47.27 ± 57% +21.4 68.62 perf-profile.children.cycles-pp.page_fault
30.95 ± 73% -17.3 13.69 ± 6% perf-profile.self.cycles-pp.intel_idle
0.58 ± 39% -0.3 0.26 ± 25% perf-profile.self.cycles-pp.ktime_get
0.41 ± 93% -0.3 0.14 ± 29% perf-profile.self.cycles-pp.free_pcppages_bulk
0.19 ± 54% -0.1 0.08 ± 57% perf-profile.self.cycles-pp.tick_nohz_next_event
0.24 ± 54% -0.1 0.14 ± 10% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.15 ± 61% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.run_timer_softirq
0.34 ± 4% -0.1 0.27 ± 10% perf-profile.self.cycles-pp.release_pages
0.05 ± 58% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.mem_cgroup_page_lruvec
0.20 ± 57% +0.1 0.29 perf-profile.self.cycles-pp.page_fault
0.30 ± 58% +0.3 0.63 ± 6% perf-profile.self.cycles-pp.lock_page_memcg
0.46 ± 57% +0.4 0.91 perf-profile.self.cycles-pp.__count_memcg_events
1.16 ± 54% +0.7 1.82 ± 3% perf-profile.self.cycles-pp.__mod_memcg_state
1.65 ± 39% +1.1 2.79 ± 2% perf-profile.self.cycles-pp.native_irq_return_iret
3.13 ± 57% +2.0 5.13 perf-profile.self.cycles-pp.prepare_exit_to_usermode
12.65 ± 57% +13.1 25.75 ± 2% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
View attachment "config-5.2.0-05698-g1e577f970f66a" of type "text/plain" (178273 bytes)
View attachment "job-script" of type "text/plain" (7432 bytes)
View attachment "job.yaml" of type "text/plain" (5032 bytes)
View attachment "reproduce" of type "text/plain" (315 bytes)