Message-ID: <20220218063536.GA4377@xsang-OptiPlex-9020>
Date: Fri, 18 Feb 2022 14:35:36 +0800
From: kernel test robot <oliver.sang@...el.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: 0day robot <lkp@...el.com>, Vlastimil Babka <vbabka@...e.cz>,
LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
ying.huang@...el.com, feng.tang@...el.com,
zhengjun.xing@...ux.intel.com, fengwei.yin@...el.com,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Matthew Wilcox <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
Alistair Popple <apopple@...dia.com>,
Johannes Weiner <hannes@...xchg.org>,
Rik van Riel <riel@...riel.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Yu Zhao <yuzhao@...gle.com>, Greg Thelen <gthelen@...gle.com>,
Shakeel Butt <shakeelb@...gle.com>,
Hillf Danton <hdanton@...a.com>, linux-mm@...ck.org
Subject: [mm/munlock] 237b445401: stress-ng.remap.ops_per_sec -62.6% regression
Greetings,
FYI, we noticed a -62.6% regression of stress-ng.remap.ops_per_sec due to commit:
commit: 237b4454014d3759acc6459eb329c5e3d55113ed ("[PATCH v2 07/13] mm/munlock: mlock_pte_range() when mlocking or munlocking")
url: https://github.com/0day-ci/linux/commits/Hugh-Dickins/mm-munlock-rework-of-mlock-munlock-page-handling/20220215-104421
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git ee28855a54493ce83bc2a3fbe30210be61b57bc7
patch link: https://lore.kernel.org/lkml/d39f6e4d-aa4f-731a-68ee-e77cdbf1d7bb@google.com
in testcase: stress-ng
on test machine: 128 threads, 2 sockets, Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz, 128G memory
with the following parameters:
nr_threads: 100%
testtime: 60s
class: memory
test: remap
cpufreq_governor: performance
ucode: 0xd000280
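
(For orientation, the perf profile further down suggests what each remap
stressor operation does: a populated, mlocked shared mapping is shuffled
with remap_file_pages() and then torn down. Below is a minimal user-space
sketch of that syscall pattern -- it is NOT stress-ng source; the anonymous
shared backing, mapping size, and loop count are illustrative assumptions.)

/*
 * Minimal sketch (not stress-ng source) of the syscall pattern the
 * remap stressor appears to exercise, judging from the call traces
 * below.  Mapping size, loop count, and the anonymous shared backing
 * are illustrative assumptions.
 */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t page = (size_t)sysconf(_SC_PAGESIZE);
	size_t npages = 512;              /* assumed working-set size */
	size_t len = npages * page;

	for (int i = 0; i < 100000; i++) {
		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			return 1;
		/* mlock() populates the range and sets VM_LOCKED, so
		 * the remap below repopulates via __mm_populate() */
		mlock(buf, len);
		/* remap the last slot of the VMA to file offset 0;
		 * the kernel implements this as munmap + mmap, which
		 * matches the do_mmap/__do_munmap entries in the
		 * profile */
		remap_file_pages(buf + (npages - 1) * page, page, 0, 0, 0);
		munlock(buf, len);
		munmap(buf, len);
	}
	return 0;
}
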
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@...el.com>
Details are below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
sudo bin/lkp install job.yaml # the job file is attached to this email
bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
sudo bin/lkp run generated-yaml-file
# If you come across any failure that blocks the test,
# please remove the ~/.lkp and /lkp directories to run from a clean state.
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
memory/gcc-9/performance/x86_64-rhel-8.3/100%/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp6/remap/stress-ng/60s/0xd000280
commit:
c479426e09 ("mm/munlock: maintain page->mlock_count while unevictable")
237b445401 ("mm/munlock: mlock_pte_range() when mlocking or munlocking")
c479426e09c8088d            237b4454014d3759acc6459eb32
----------------            ---------------------------
         %stddev      %change              %stddev
             \            |                    \
109459 -62.5% 41003 ± 2% stress-ng.remap.ops
1823 -62.6% 682.54 ± 2% stress-ng.remap.ops_per_sec
2.242e+08 -62.5% 83989157 ± 2% stress-ng.time.minor_page_faults
30.00 ± 2% -61.2% 11.65 ± 4% stress-ng.time.user_time
0.48 -0.2 0.26 ± 2% mpstat.cpu.all.usr%
439776 +35.9% 597819 meminfo.Inactive
439776 +35.9% 597819 meminfo.Inactive(anon)
157638 -99.8% 247.00 meminfo.Mlocked
289539 ± 17% +30.0% 376365 ± 5% numa-meminfo.node0.Inactive
289539 ± 17% +30.0% 376365 ± 5% numa-meminfo.node0.Inactive(anon)
77981 -99.8% 126.50 numa-meminfo.node0.Mlocked
79990 ± 2% -99.9% 119.50 numa-meminfo.node1.Mlocked
72605 ± 17% +29.8% 94255 ± 5% numa-vmstat.node0.nr_inactive_anon
19407 -99.8% 31.50 ± 2% numa-vmstat.node0.nr_mlock
72603 ± 17% +29.8% 94255 ± 5% numa-vmstat.node0.nr_zone_inactive_anon
20066 ± 2% -99.9% 29.50 numa-vmstat.node1.nr_mlock
110126 +35.6% 149376 proc-vmstat.nr_inactive_anon
61898 +1.2% 62642 proc-vmstat.nr_mapped
39411 -99.8% 61.33 proc-vmstat.nr_mlock
661395 -5.9% 622058 proc-vmstat.nr_unevictable
110126 +35.6% 149376 proc-vmstat.nr_zone_inactive_anon
661395 -5.9% 622058 proc-vmstat.nr_zone_unevictable
1039724 -17.6% 856602 proc-vmstat.numa_hit
928057 -20.2% 740906 proc-vmstat.numa_local
1039735 -17.7% 855482 proc-vmstat.pgalloc_normal
2.247e+08 -62.4% 84468472 ± 2% proc-vmstat.pgfault
802397 ± 4% -26.1% 593350 ± 7% proc-vmstat.pgfree
55865 ± 38% -100.0% 0.00 proc-vmstat.unevictable_pgs_cleared
1.121e+08 +49.8% 1.678e+08 ± 2% proc-vmstat.unevictable_pgs_culled
1.121e+08 +49.8% 1.678e+08 ± 2% proc-vmstat.unevictable_pgs_mlocked
1.12e+08 +49.9% 1.678e+08 ± 2% proc-vmstat.unevictable_pgs_munlocked
1.12e+08 +49.9% 1.678e+08 ± 2% proc-vmstat.unevictable_pgs_rescued
3.06 ± 11% +33.6% 4.09 ± 9% perf-stat.i.MPKI
1.555e+10 -29.1% 1.102e+10 perf-stat.i.branch-instructions
0.63 ± 3% -0.2 0.47 ± 4% perf-stat.i.branch-miss-rate%
86979814 -49.4% 44006612 ± 2% perf-stat.i.branch-misses
5.57 +45.0% 8.08 perf-stat.i.cpi
0.03 ± 8% -0.0 0.02 ± 12% perf-stat.i.dTLB-load-miss-rate%
4465030 -50.5% 2211973 ± 3% perf-stat.i.dTLB-load-misses
1.882e+10 -32.8% 1.265e+10 perf-stat.i.dTLB-loads
6.515e+09 -52.5% 3.095e+09 ± 2% perf-stat.i.dTLB-stores
7.012e+10 -30.9% 4.842e+10 perf-stat.i.instructions
0.20 ± 2% -26.4% 0.15 ± 2% perf-stat.i.ipc
321.02 -34.4% 210.63 perf-stat.i.metric.M/sec
2061534 -57.3% 879696 ± 2% perf-stat.i.node-loads
531446 ± 7% -31.1% 366181 ± 21% perf-stat.i.node-stores
2.89 ± 9% +37.5% 3.97 ± 10% perf-stat.overall.MPKI
0.56 -0.2 0.40 ± 2% perf-stat.overall.branch-miss-rate%
5.71 +45.1% 8.28 perf-stat.overall.cpi
0.02 -0.0 0.02 ± 2% perf-stat.overall.dTLB-load-miss-rate%
0.00 ± 11% +0.0 0.00 ± 11% perf-stat.overall.dTLB-store-miss-rate%
0.18 -31.1% 0.12 perf-stat.overall.ipc
1.532e+10 -29.1% 1.086e+10 perf-stat.ps.branch-instructions
85628106 -49.4% 43291977 ± 2% perf-stat.ps.branch-misses
4429976 -49.9% 2220756 ± 3% perf-stat.ps.dTLB-load-misses
1.854e+10 -32.8% 1.246e+10 perf-stat.ps.dTLB-loads
6.417e+09 -52.5% 3.049e+09 ± 2% perf-stat.ps.dTLB-stores
6.906e+10 -31.0% 4.769e+10 perf-stat.ps.instructions
2037560 -57.2% 872256 ± 2% perf-stat.ps.node-loads
522322 ± 7% -31.0% 360168 ± 21% perf-stat.ps.node-stores
4.39e+12 -31.2% 3.021e+12 perf-stat.total.instructions
98.28 -47.8 50.50 perf-profile.calltrace.cycles-pp.remap_file_pages
98.14 -47.7 50.44 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.remap_file_pages
98.07 -47.6 50.42 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.remap_file_pages
98.03 -47.6 50.40 perf-profile.calltrace.cycles-pp.__x64_sys_remap_file_pages.do_syscall_64.entry_SYSCALL_64_after_hwframe.remap_file_pages
50.17 -24.4 25.79 perf-profile.calltrace.cycles-pp.do_mmap.__x64_sys_remap_file_pages.do_syscall_64.entry_SYSCALL_64_after_hwframe.remap_file_pages
50.01 -24.3 25.72 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.__x64_sys_remap_file_pages.do_syscall_64.entry_SYSCALL_64_after_hwframe
48.46 -23.3 25.16 perf-profile.calltrace.cycles-pp.__do_munmap.mmap_region.do_mmap.__x64_sys_remap_file_pages.do_syscall_64
47.70 -23.1 24.56 perf-profile.calltrace.cycles-pp.__mm_populate.__x64_sys_remap_file_pages.do_syscall_64.entry_SYSCALL_64_after_hwframe.remap_file_pages
47.52 -23.0 24.49 perf-profile.calltrace.cycles-pp.populate_vma_page_range.__mm_populate.__x64_sys_remap_file_pages.do_syscall_64.entry_SYSCALL_64_after_hwframe
47.50 -23.0 24.49 perf-profile.calltrace.cycles-pp.__get_user_pages.populate_vma_page_range.__mm_populate.__x64_sys_remap_file_pages.do_syscall_64
47.80 -22.9 24.92 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.mmap_region.do_mmap.__x64_sys_remap_file_pages
47.14 -22.8 24.36 perf-profile.calltrace.cycles-pp.handle_mm_fault.__get_user_pages.populate_vma_page_range.__mm_populate.__x64_sys_remap_file_pages
47.06 -22.7 24.33 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__get_user_pages.populate_vma_page_range.__mm_populate
46.97 -22.7 24.30 perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.__get_user_pages.populate_vma_page_range
46.94 -22.7 24.29 perf-profile.calltrace.cycles-pp.filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault.__get_user_pages
46.56 -22.4 24.12 perf-profile.calltrace.cycles-pp.do_set_pte.filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault
46.91 -22.3 24.58 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.mmap_region.do_mmap
46.38 -22.3 24.06 perf-profile.calltrace.cycles-pp.mlock_page.do_set_pte.filemap_map_pages.do_fault.__handle_mm_fault
46.84 -22.3 24.55 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.mmap_region
46.72 -22.2 24.51 perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
45.95 -22.1 23.82 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irq.mlock_page.do_set_pte.filemap_map_pages.do_fault
45.82 -22.0 23.78 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.folio_lruvec_lock_irq.mlock_page.do_set_pte.filemap_map_pages
45.64 -21.9 23.72 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.folio_lruvec_lock_irq.mlock_page.do_set_pte
46.18 -21.9 24.32 perf-profile.calltrace.cycles-pp.munlock_page.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region
45.70 -21.7 24.00 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irq.munlock_page.zap_pte_range.unmap_page_range.unmap_vmas
45.61 -21.6 23.97 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.folio_lruvec_lock_irq.munlock_page.zap_pte_range.unmap_page_range
45.42 -21.5 23.91 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.folio_lruvec_lock_irq.munlock_page.zap_pte_range
1.05 ± 2% +23.7 24.78 perf-profile.calltrace.cycles-pp.mlock
0.93 ± 2% +23.8 24.74 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.mlock
0.91 ± 2% +23.8 24.73 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.mlock
0.88 ± 2% +23.8 24.72 perf-profile.calltrace.cycles-pp.__x64_sys_mlock.do_syscall_64.entry_SYSCALL_64_after_hwframe.mlock
0.87 ± 2% +23.8 24.71 perf-profile.calltrace.cycles-pp.do_mlock.__x64_sys_mlock.do_syscall_64.entry_SYSCALL_64_after_hwframe.mlock
0.00 +23.9 23.86 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.folio_lruvec_lock_irq.munlock_page.mlock_pte_range
0.00 +23.9 23.87 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.folio_lruvec_lock_irq.mlock_page.mlock_pte_range
0.00 +23.9 23.92 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.folio_lruvec_lock_irq.munlock_page.mlock_pte_range.walk_p4d_range
0.00 +23.9 23.92 perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.folio_lruvec_lock_irq.mlock_page.mlock_pte_range.walk_p4d_range
0.00 +24.0 23.96 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irq.munlock_page.mlock_pte_range.walk_p4d_range.__walk_page_range
0.00 +24.0 23.97 perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irq.mlock_page.mlock_pte_range.walk_p4d_range.__walk_page_range
0.54 +24.1 24.63 perf-profile.calltrace.cycles-pp.munlock
0.00 +24.2 24.22 perf-profile.calltrace.cycles-pp.mlock_page.mlock_pte_range.walk_p4d_range.__walk_page_range.walk_page_range
0.00 +24.3 24.28 perf-profile.calltrace.cycles-pp.munlock_page.mlock_pte_range.walk_p4d_range.__walk_page_range.walk_page_range
0.00 +24.3 24.33 perf-profile.calltrace.cycles-pp.__walk_page_range.walk_page_range.mlock_fixup.apply_vma_lock_flags.do_mlock
0.00 +24.3 24.35 perf-profile.calltrace.cycles-pp.walk_page_range.mlock_fixup.apply_vma_lock_flags.do_mlock.__x64_sys_mlock
0.00 +24.4 24.36 perf-profile.calltrace.cycles-pp.__walk_page_range.walk_page_range.mlock_fixup.apply_vma_lock_flags.__x64_sys_munlock
0.00 +24.4 24.38 perf-profile.calltrace.cycles-pp.walk_page_range.mlock_fixup.apply_vma_lock_flags.__x64_sys_munlock.do_syscall_64
0.00 +24.5 24.48 perf-profile.calltrace.cycles-pp.mlock_fixup.apply_vma_lock_flags.__x64_sys_munlock.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +24.5 24.48 perf-profile.calltrace.cycles-pp.mlock_fixup.apply_vma_lock_flags.do_mlock.__x64_sys_mlock.do_syscall_64
0.00 +24.5 24.50 perf-profile.calltrace.cycles-pp.apply_vma_lock_flags.__x64_sys_munlock.do_syscall_64.entry_SYSCALL_64_after_hwframe.munlock
0.00 +24.5 24.53 perf-profile.calltrace.cycles-pp.apply_vma_lock_flags.do_mlock.__x64_sys_mlock.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +24.6 24.56 perf-profile.calltrace.cycles-pp.__x64_sys_munlock.do_syscall_64.entry_SYSCALL_64_after_hwframe.munlock
0.00 +24.6 24.58 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munlock
0.00 +24.6 24.58 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munlock
0.00 +48.6 48.62 perf-profile.calltrace.cycles-pp.mlock_pte_range.walk_p4d_range.__walk_page_range.walk_page_range.mlock_fixup
0.00 +48.7 48.68 perf-profile.calltrace.cycles-pp.walk_p4d_range.__walk_page_range.walk_page_range.mlock_fixup.apply_vma_lock_flags
98.34 -47.8 50.51 perf-profile.children.cycles-pp.remap_file_pages
98.04 -47.6 50.41 perf-profile.children.cycles-pp.__x64_sys_remap_file_pages
50.18 -24.4 25.79 perf-profile.children.cycles-pp.do_mmap
50.03 -24.3 25.73 perf-profile.children.cycles-pp.mmap_region
48.04 -23.4 24.68 perf-profile.children.cycles-pp.__mm_populate
48.47 -23.3 25.17 perf-profile.children.cycles-pp.__do_munmap
47.76 -23.2 24.58 perf-profile.children.cycles-pp.populate_vma_page_range
47.74 -23.2 24.57 perf-profile.children.cycles-pp.__get_user_pages
47.81 -22.9 24.93 perf-profile.children.cycles-pp.unmap_region
47.14 -22.8 24.36 perf-profile.children.cycles-pp.handle_mm_fault
47.06 -22.7 24.34 perf-profile.children.cycles-pp.__handle_mm_fault
46.98 -22.7 24.31 perf-profile.children.cycles-pp.do_fault
46.96 -22.7 24.30 perf-profile.children.cycles-pp.filemap_map_pages
46.57 -22.4 24.13 perf-profile.children.cycles-pp.do_set_pte
46.92 -22.3 24.58 perf-profile.children.cycles-pp.unmap_vmas
46.85 -22.3 24.56 perf-profile.children.cycles-pp.unmap_page_range
46.74 -22.2 24.52 perf-profile.children.cycles-pp.zap_pte_range
0.60 -0.4 0.23 ± 4% perf-profile.children.cycles-pp.tlb_finish_mmu
0.57 -0.3 0.22 ± 4% perf-profile.children.cycles-pp.tlb_flush_mmu
0.45 ± 2% -0.3 0.16 ± 5% perf-profile.children.cycles-pp.perf_event_mmap
0.38 ± 2% -0.2 0.14 ± 6% perf-profile.children.cycles-pp.vma_link
0.38 ± 2% -0.2 0.14 ± 4% perf-profile.children.cycles-pp.flush_tlb_mm_range
0.37 -0.2 0.16 ± 4% perf-profile.children.cycles-pp.find_vma
0.31 ± 3% -0.2 0.11 ± 3% perf-profile.children.cycles-pp.__vma_adjust
0.29 ± 2% -0.2 0.10 ± 3% perf-profile.children.cycles-pp.__split_vma
0.30 ± 2% -0.2 0.11 ± 6% perf-profile.children.cycles-pp.flush_tlb_func
0.28 ± 2% -0.2 0.10 ± 4% perf-profile.children.cycles-pp.kmem_cache_alloc
0.28 ± 2% -0.2 0.10 ± 5% perf-profile.children.cycles-pp.remove_vma
0.26 ± 2% -0.2 0.10 ± 6% perf-profile.children.cycles-pp.native_flush_tlb_one_user
0.25 -0.2 0.09 ± 5% perf-profile.children.cycles-pp.vm_area_alloc
0.22 -0.1 0.08 ± 4% perf-profile.children.cycles-pp.vma_merge
0.20 ± 2% -0.1 0.06 perf-profile.children.cycles-pp.__mod_lruvec_page_state
0.22 ± 3% -0.1 0.08 ± 4% perf-profile.children.cycles-pp.vma_interval_tree_insert
0.19 ± 2% -0.1 0.06 perf-profile.children.cycles-pp.follow_page_pte
0.18 ± 4% -0.1 0.05 perf-profile.children.cycles-pp.page_remove_rmap
0.19 ± 2% -0.1 0.07 ± 6% perf-profile.children.cycles-pp.free_pgtables
0.18 ± 3% -0.1 0.07 ± 7% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.21 ± 3% -0.1 0.10 ± 5% perf-profile.children.cycles-pp.__might_resched
0.17 ± 3% -0.1 0.06 ± 9% perf-profile.children.cycles-pp.kmem_cache_free
0.18 ± 2% -0.1 0.07 ± 5% perf-profile.children.cycles-pp.sync_mm_rss
0.17 ± 2% -0.1 0.06 perf-profile.children.cycles-pp.__entry_text_start
0.15 ± 3% -0.1 0.04 ± 44% perf-profile.children.cycles-pp.page_add_file_rmap
0.16 ± 3% -0.1 0.06 perf-profile.children.cycles-pp.__cond_resched
0.12 -0.1 0.02 ± 99% perf-profile.children.cycles-pp.follow_page_mask
0.15 ± 2% -0.1 0.06 ± 8% perf-profile.children.cycles-pp.unlink_file_vma
0.14 ± 3% -0.1 0.05 ± 7% perf-profile.children.cycles-pp.perf_iterate_sb
0.14 ± 3% -0.1 0.05 perf-profile.children.cycles-pp.down_write_killable
0.12 ± 3% -0.1 0.05 perf-profile.children.cycles-pp.folio_unlock
0.12 ± 3% -0.1 0.06 ± 9% perf-profile.children.cycles-pp.vmacache_find
0.12 ± 3% -0.1 0.06 perf-profile.children.cycles-pp._raw_spin_lock
0.12 ± 3% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.next_uptodate_page
0.14 -0.0 0.12 ± 9% perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
0.12 ± 4% -0.0 0.11 ± 10% perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
0.12 ± 4% -0.0 0.11 ± 10% perf-profile.children.cycles-pp.hrtimer_interrupt
0.07 ± 5% -0.0 0.05 ± 7% perf-profile.children.cycles-pp.__mod_lruvec_state
0.12 ± 3% -0.0 0.10 ± 3% perf-profile.children.cycles-pp.up_write
0.07 +0.0 0.10 ± 4% perf-profile.children.cycles-pp.__list_add_valid
0.14 ± 4% +0.1 0.20 ± 7% perf-profile.children.cycles-pp.mem_cgroup_update_lru_size
0.00 +0.1 0.10 ± 5% perf-profile.children.cycles-pp.__list_del_entry_valid
99.53 +0.3 99.81 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
99.44 +0.3 99.78 perf-profile.children.cycles-pp.do_syscall_64
46.40 +1.9 48.30 perf-profile.children.cycles-pp.mlock_page
46.21 +2.4 48.62 perf-profile.children.cycles-pp.munlock_page
91.65 +4.1 95.75 perf-profile.children.cycles-pp.folio_lruvec_lock_irq
91.44 +4.2 95.60 perf-profile.children.cycles-pp._raw_spin_lock_irq
91.09 +4.3 95.38 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
1.09 ± 2% +23.7 24.80 perf-profile.children.cycles-pp.mlock
0.88 ± 2% +23.8 24.71 perf-profile.children.cycles-pp.do_mlock
0.88 ± 2% +23.8 24.72 perf-profile.children.cycles-pp.__x64_sys_mlock
0.58 ± 2% +24.1 24.64 perf-profile.children.cycles-pp.munlock
0.36 ± 2% +24.2 24.56 perf-profile.children.cycles-pp.__x64_sys_munlock
0.74 ± 2% +48.3 49.04 perf-profile.children.cycles-pp.apply_vma_lock_flags
0.59 ± 2% +48.4 48.98 perf-profile.children.cycles-pp.mlock_fixup
0.00 +48.6 48.62 perf-profile.children.cycles-pp.mlock_pte_range
0.00 +48.7 48.68 perf-profile.children.cycles-pp.walk_p4d_range
0.00 +48.7 48.70 perf-profile.children.cycles-pp.__walk_page_range
0.00 +48.7 48.74 perf-profile.children.cycles-pp.walk_page_range
0.32 ± 2% -0.2 0.12 ± 5% perf-profile.self.cycles-pp.mmap_region
0.26 ± 2% -0.2 0.10 ± 8% perf-profile.self.cycles-pp.native_flush_tlb_one_user
0.24 ± 2% -0.1 0.10 ± 6% perf-profile.self.cycles-pp.find_vma
0.21 ± 3% -0.1 0.08 ± 4% perf-profile.self.cycles-pp.vma_interval_tree_insert
0.37 ± 2% -0.1 0.24 ± 6% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.18 ± 2% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.18 ± 3% -0.1 0.07 ± 5% perf-profile.self.cycles-pp.sync_mm_rss
0.19 ± 3% -0.1 0.09 ± 5% perf-profile.self.cycles-pp.__might_resched
0.12 ± 4% -0.1 0.02 ± 99% perf-profile.self.cycles-pp.__do_munmap
0.12 ± 3% -0.1 0.05 perf-profile.self.cycles-pp.folio_unlock
0.10 ± 3% -0.1 0.03 ± 70% perf-profile.self.cycles-pp.vmacache_find
0.22 ± 4% -0.1 0.15 ± 2% perf-profile.self.cycles-pp.folio_lruvec_lock_irq
0.11 ± 3% -0.1 0.06 ± 6% perf-profile.self.cycles-pp._raw_spin_lock
0.12 ± 3% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.next_uptodate_page
0.24 ± 3% -0.0 0.21 ± 5% perf-profile.self.cycles-pp.mlock_page
0.07 +0.0 0.10 ± 4% perf-profile.self.cycles-pp.__list_add_valid
0.00 +0.1 0.05 perf-profile.self.cycles-pp.walk_p4d_range
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.mlock_pte_range
0.19 ± 2% +0.1 0.25 ± 6% perf-profile.self.cycles-pp.munlock_page
0.14 ± 4% +0.1 0.20 ± 7% perf-profile.self.cycles-pp.mem_cgroup_update_lru_size
0.00 +0.1 0.10 ± 7% perf-profile.self.cycles-pp.__list_del_entry_valid
91.08 +4.3 95.38 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
---
0DAY/LKP+ Test Infrastructure                   Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org       Intel Corporation
Thanks,
Oliver Sang
Attachments:
  config-5.17.0-rc2-00015-g237b4454014d (text/plain, 174672 bytes)
  job-script (text/plain, 8191 bytes)
  job.yaml (text/plain, 5504 bytes)
  reproduce (text/plain, 339 bytes)