Message-ID: <202407301049.5051dc19-oliver.sang@intel.com>
Date: Tue, 30 Jul 2024 13:00:17 +0800
From: kernel test robot <oliver.sang@...el.com>
To: Peter Xu <peterx@...hat.com>
CC: <oe-lkp@...ts.linux.dev>, <lkp@...el.com>, <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>, David Hildenbrand
<david@...hat.com>, Huacai Chen <chenhuacai@...nel.org>, Jason Gunthorpe
<jgg@...dia.com>, Matthew Wilcox <willy@...radead.org>, Nathan Chancellor
<nathan@...nel.org>, Ryan Roberts <ryan.roberts@....com>, WANG Xuerui
<kernel@...0n.name>, <linux-mm@...ck.org>, <ying.huang@...el.com>,
<feng.tang@...el.com>, <fengwei.yin@...el.com>, <oliver.sang@...el.com>
Subject: [linus:master] [mm] c0bff412e6: stress-ng.clone.ops_per_sec -2.9%
regression
Hello,
kernel test robot noticed a -2.9% regression of stress-ng.clone.ops_per_sec on:
commit: c0bff412e67b781d761e330ff9578aa9ed2be79e ("mm: allow anon exclusive check over hugetlb tail pages")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
testcase: stress-ng
test machine: 64 threads 2 sockets Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz (Ice Lake) with 256G memory
parameters:
nr_threads: 100%
testtime: 60s
test: clone
cpufreq_governor: performance
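
For orientation, the parameters above amount to one clone worker per CPU
(nr_threads: 100%) repeatedly creating and reaping short-lived children for
60 seconds. The sketch below is not the stress-ng source, only a rough,
hypothetical approximation of that clone+exit pattern; it illustrates why the
profile further down is dominated by the fork-side copy path
(dup_mmap/copy_page_range) and the teardown path (exit_mmap/unmap_vmas).

/*
 * Rough, hypothetical approximation of the stress-ng "clone" workload
 * pattern (not the stress-ng implementation): each iteration creates a
 * child that exits immediately, so the kernel alternates between the
 * fork-side page table copy and the exit-side teardown.
 */
#include <stdio.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	const int duration = 60;	/* matches testtime: 60s */
	time_t start = time(NULL);
	unsigned long ops = 0;

	while (time(NULL) - start < duration) {
		pid_t pid = fork();	/* stress-ng drives clone() with
					 * varying flags; plain fork() is the
					 * simplest stand-in here */
		if (pid == 0)
			_exit(0);	/* child tears down right away:
					 * exit_mmap()/unmap_vmas() side */
		if (pid > 0) {		/* parent paid for dup_mmap()/
					 * copy_page_range() inside fork() */
			waitpid(pid, NULL, 0);
			ops++;
		}
	}
	printf("%lu clone+exit ops in %ds (%.1f ops/sec)\n",
	       ops, duration, (double)ops / duration);
	return 0;
}

stress-ng itself drives clone() directly with varying flag combinations; the
reported ops_per_sec is, in the same spirit, completed clone+exit iterations
divided by the run time.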
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@...el.com>
| Closes: https://lore.kernel.org/oe-lkp/202407301049.5051dc19-oliver.sang@intel.com
Details are as below:
-------------------------------------------------------------------------------------------------->
The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240730/202407301049.5051dc19-oliver.sang@intel.com
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
gcc-13/performance/x86_64-rhel-8.3/100%/debian-12-x86_64-20240206.cgz/lkp-icl-2sp8/clone/stress-ng/60s
commit:
9cb28da546 ("mm/gup: handle hugetlb in the generic follow_page_mask code")
c0bff412e6 ("mm: allow anon exclusive check over hugetlb tail pages")
9cb28da54643ad46 c0bff412e67b781d761e330ff95
---------------- ---------------------------
(old) value ±%stddev      %change      (new) value ±%stddev      metric
37842 -3.4% 36554 vmstat.system.cs
0.00 ± 17% -86.4% 0.00 ±223% sched_debug.rt_rq:.rt_time.avg
0.19 ± 17% -86.4% 0.03 ±223% sched_debug.rt_rq:.rt_time.max
0.02 ± 17% -86.4% 0.00 ±223% sched_debug.rt_rq:.rt_time.stddev
24081 -3.7% 23200 proc-vmstat.nr_page_table_pages
399380 -2.3% 390288 proc-vmstat.nr_slab_reclaimable
1625589 -2.4% 1585989 proc-vmstat.nr_slab_unreclaimable
1.019e+08 -3.8% 98035999 proc-vmstat.numa_hit
1.018e+08 -3.9% 97870705 proc-vmstat.numa_local
1.092e+08 -3.8% 1.05e+08 proc-vmstat.pgalloc_normal
1.06e+08 -3.8% 1.019e+08 proc-vmstat.pgfree
2659199 -2.3% 2597978 proc-vmstat.pgreuse
2910 +3.4% 3010 stress-ng.clone.microsecs_per_clone
562874 -2.9% 546587 stress-ng.clone.ops
9298 -2.9% 9031 stress-ng.clone.ops_per_sec
686858 -2.8% 667416 stress-ng.time.involuntary_context_switches
9091031 -3.9% 8734352 stress-ng.time.minor_page_faults
4200 +2.4% 4299 stress-ng.time.percent_of_cpu_this_job_got
2543 +2.4% 2603 stress-ng.time.system_time
342849 -2.8% 333189 stress-ng.time.voluntary_context_switches
6.67 -6.1% 6.26 perf-stat.i.MPKI
6.388e+08 -5.4% 6.045e+08 perf-stat.i.cache-misses
1.558e+09 -4.6% 1.487e+09 perf-stat.i.cache-references
40791 -3.6% 39330 perf-stat.i.context-switches
353.55 +5.4% 372.76 perf-stat.i.cycles-between-cache-misses
7.95 ± 3% -6.5% 7.43 ± 3% perf-stat.i.metric.K/sec
251389 ± 3% -6.5% 235029 ± 3% perf-stat.i.minor-faults
251423 ± 3% -6.5% 235064 ± 3% perf-stat.i.page-faults
6.75 -6.1% 6.33 perf-stat.overall.MPKI
0.38 -0.0 0.37 perf-stat.overall.branch-miss-rate%
350.09 +5.8% 370.24 perf-stat.overall.cycles-between-cache-misses
68503488 -1.2% 67660585 perf-stat.ps.branch-misses
6.33e+08 -5.4% 5.987e+08 perf-stat.ps.cache-misses
1.518e+09 -4.6% 1.449e+09 perf-stat.ps.cache-references
38819 -3.3% 37542 perf-stat.ps.context-switches
3637 +1.2% 3680 perf-stat.ps.cpu-migrations
235473 ± 3% -6.3% 220601 ± 3% perf-stat.ps.minor-faults
235504 ± 3% -6.3% 220632 ± 3% perf-stat.ps.page-faults
45.55 -2.5 43.04 perf-profile.calltrace.cycles-pp.zap_pmd_range.unmap_page_range.unmap_vmas.exit_mmap.__mmput
44.86 -2.5 42.37 perf-profile.calltrace.cycles-pp.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas.exit_mmap
44.42 -2.1 42.37 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
44.42 -2.1 42.37 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
44.41 -2.1 42.36 perf-profile.calltrace.cycles-pp.__x64_sys_exit.do_syscall_64.entry_SYSCALL_64_after_hwframe
44.41 -2.1 42.36 perf-profile.calltrace.cycles-pp.do_exit.__x64_sys_exit.do_syscall_64.entry_SYSCALL_64_after_hwframe
39.08 -1.7 37.34 perf-profile.calltrace.cycles-pp.exit_mm.do_exit.__x64_sys_exit.do_syscall_64.entry_SYSCALL_64_after_hwframe
38.96 -1.7 37.22 perf-profile.calltrace.cycles-pp.exit_mmap.__mmput.exit_mm.do_exit.__x64_sys_exit
38.97 -1.7 37.24 perf-profile.calltrace.cycles-pp.__mmput.exit_mm.do_exit.__x64_sys_exit.do_syscall_64
36.16 -1.6 34.57 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.__mmput.exit_mm.do_exit
35.99 -1.6 34.40 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.__mmput.exit_mm
32.17 -1.5 30.62 perf-profile.calltrace.cycles-pp.zap_present_ptes.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
12.49 -1.0 11.52 perf-profile.calltrace.cycles-pp._compound_head.zap_present_ptes.zap_pte_range.zap_pmd_range.unmap_page_range
9.66 -0.9 8.74 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.__mmput.copy_process.kernel_clone
9.61 -0.9 8.69 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.__mmput.copy_process
10.71 -0.9 9.84 perf-profile.calltrace.cycles-pp.__mmput.copy_process.kernel_clone.__do_sys_clone3.do_syscall_64
10.70 -0.9 9.84 perf-profile.calltrace.cycles-pp.exit_mmap.__mmput.copy_process.kernel_clone.__do_sys_clone3
10.41 -0.8 9.58 perf-profile.calltrace.cycles-pp.__tlb_batch_free_encoded_pages.tlb_flush_mmu.zap_pte_range.zap_pmd_range.unmap_page_range
10.42 -0.8 9.59 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
10.21 -0.8 9.40 perf-profile.calltrace.cycles-pp.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_flush_mmu.zap_pte_range.zap_pmd_range
5.47 -0.4 5.04 perf-profile.calltrace.cycles-pp.folios_put_refs.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_flush_mmu.zap_pte_range
1.11 -0.3 0.79 ± 33% perf-profile.calltrace.cycles-pp.anon_vma_interval_tree_insert.anon_vma_clone.anon_vma_fork.dup_mmap.dup_mm
14.18 -0.3 13.87 perf-profile.calltrace.cycles-pp.folio_remove_rmap_ptes.zap_present_ptes.zap_pte_range.zap_pmd_range.unmap_page_range
5.17 -0.3 4.86 perf-profile.calltrace.cycles-pp.put_files_struct.do_exit.__x64_sys_exit.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.80 -0.3 4.53 perf-profile.calltrace.cycles-pp.filp_close.put_files_struct.do_exit.__x64_sys_exit.do_syscall_64
4.40 -0.3 4.14 perf-profile.calltrace.cycles-pp.filp_flush.filp_close.put_files_struct.do_exit.__x64_sys_exit
2.74 -0.2 2.58 perf-profile.calltrace.cycles-pp.anon_vma_fork.dup_mmap.dup_mm.copy_process.kernel_clone
2.25 -0.1 2.11 perf-profile.calltrace.cycles-pp.anon_vma_clone.anon_vma_fork.dup_mmap.dup_mm.copy_process
1.47 -0.1 1.34 perf-profile.calltrace.cycles-pp.put_files_struct.copy_process.kernel_clone.__do_sys_clone3.do_syscall_64
1.87 -0.1 1.76 perf-profile.calltrace.cycles-pp.dnotify_flush.filp_flush.filp_close.put_files_struct.do_exit
1.98 -0.1 1.88 perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.__mmput.exit_mm.do_exit
1.28 -0.1 1.18 perf-profile.calltrace.cycles-pp.filp_close.put_files_struct.copy_process.kernel_clone.__do_sys_clone3
1.19 -0.1 1.09 perf-profile.calltrace.cycles-pp.filp_flush.filp_close.put_files_struct.copy_process.kernel_clone
1.31 ± 2% -0.1 1.25 perf-profile.calltrace.cycles-pp.unlink_anon_vmas.free_pgtables.exit_mmap.__mmput.exit_mm
0.58 -0.0 0.55 perf-profile.calltrace.cycles-pp.vm_normal_page.zap_present_ptes.zap_pte_range.zap_pmd_range.unmap_page_range
33.54 +0.6 34.10 perf-profile.calltrace.cycles-pp.syscall
33.45 +0.6 34.01 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
33.45 +0.6 34.01 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.syscall
33.35 +0.6 33.90 perf-profile.calltrace.cycles-pp.__do_sys_clone3.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
33.34 +0.6 33.90 perf-profile.calltrace.cycles-pp.kernel_clone.__do_sys_clone3.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
33.30 +0.6 33.86 perf-profile.calltrace.cycles-pp.copy_process.kernel_clone.__do_sys_clone3.do_syscall_64.entry_SYSCALL_64_after_hwframe
20.63 +1.6 22.21 perf-profile.calltrace.cycles-pp.dup_mm.copy_process.kernel_clone.__do_sys_clone3.do_syscall_64
20.55 +1.6 22.14 perf-profile.calltrace.cycles-pp.dup_mmap.dup_mm.copy_process.kernel_clone.__do_sys_clone3
19.40 +1.8 21.19 perf-profile.calltrace.cycles-pp.__clone
19.24 +1.8 21.04 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__clone
19.24 +1.8 21.04 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__clone
19.14 +1.8 20.94 perf-profile.calltrace.cycles-pp.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe.__clone
19.14 +1.8 20.94 perf-profile.calltrace.cycles-pp.kernel_clone.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe.__clone
19.05 +1.8 20.85 perf-profile.calltrace.cycles-pp.copy_process.kernel_clone.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
18.74 +1.8 20.56 perf-profile.calltrace.cycles-pp.dup_mm.copy_process.kernel_clone.__do_sys_clone.do_syscall_64
18.67 +1.8 20.49 perf-profile.calltrace.cycles-pp.dup_mmap.dup_mm.copy_process.kernel_clone.__do_sys_clone
12.24 +3.1 15.35 perf-profile.calltrace.cycles-pp._compound_head.copy_present_ptes.copy_pte_range.copy_p4d_range.copy_page_range
34.37 +3.7 38.02 perf-profile.calltrace.cycles-pp.copy_page_range.dup_mmap.dup_mm.copy_process.kernel_clone
34.34 +3.7 38.00 perf-profile.calltrace.cycles-pp.copy_p4d_range.copy_page_range.dup_mmap.dup_mm.copy_process
30.99 +3.7 34.69 perf-profile.calltrace.cycles-pp.copy_present_ptes.copy_pte_range.copy_p4d_range.copy_page_range.dup_mmap
33.16 +3.7 36.88 perf-profile.calltrace.cycles-pp.copy_pte_range.copy_p4d_range.copy_page_range.dup_mmap.dup_mm
0.00 +3.9 3.90 perf-profile.calltrace.cycles-pp.folio_try_dup_anon_rmap_ptes.copy_present_ptes.copy_pte_range.copy_p4d_range.copy_page_range
49.67 -2.6 47.07 perf-profile.children.cycles-pp.exit_mmap
49.69 -2.6 47.08 perf-profile.children.cycles-pp.__mmput
45.84 -2.5 43.32 perf-profile.children.cycles-pp.unmap_vmas
45.56 -2.5 43.05 perf-profile.children.cycles-pp.zap_pmd_range
45.61 -2.5 43.10 perf-profile.children.cycles-pp.unmap_page_range
44.98 -2.5 42.48 perf-profile.children.cycles-pp.zap_pte_range
44.53 -2.1 42.48 perf-profile.children.cycles-pp.__x64_sys_exit
44.54 -2.1 42.48 perf-profile.children.cycles-pp.do_exit
39.10 -1.7 37.36 perf-profile.children.cycles-pp.exit_mm
32.99 -1.6 31.41 perf-profile.children.cycles-pp.zap_present_ptes
10.53 -0.8 9.71 perf-profile.children.cycles-pp.tlb_flush_mmu
10.91 -0.7 10.19 perf-profile.children.cycles-pp.__tlb_batch_free_encoded_pages
10.88 -0.7 10.16 perf-profile.children.cycles-pp.free_pages_and_swap_cache
6.64 -0.4 6.22 perf-profile.children.cycles-pp.put_files_struct
5.76 -0.4 5.38 perf-profile.children.cycles-pp.folios_put_refs
6.11 -0.4 5.73 perf-profile.children.cycles-pp.filp_close
5.62 -0.4 5.25 perf-profile.children.cycles-pp.filp_flush
14.28 -0.3 13.97 perf-profile.children.cycles-pp.folio_remove_rmap_ptes
2.75 -0.2 2.58 perf-profile.children.cycles-pp.anon_vma_fork
2.38 -0.2 2.22 perf-profile.children.cycles-pp.dnotify_flush
2.50 -0.1 2.36 perf-profile.children.cycles-pp.free_pgtables
2.25 -0.1 2.11 perf-profile.children.cycles-pp.anon_vma_clone
0.20 ± 33% -0.1 0.08 ± 58% perf-profile.children.cycles-pp.ordered_events__queue
0.20 ± 33% -0.1 0.08 ± 58% perf-profile.children.cycles-pp.queue_event
1.24 ± 4% -0.1 1.14 perf-profile.children.cycles-pp.down_write
1.67 ± 2% -0.1 1.58 perf-profile.children.cycles-pp.unlink_anon_vmas
1.59 -0.1 1.50 perf-profile.children.cycles-pp.__alloc_pages_noprof
1.55 -0.1 1.46 perf-profile.children.cycles-pp.alloc_pages_mpol_noprof
1.58 -0.1 1.50 perf-profile.children.cycles-pp.vm_normal_page
1.11 -0.1 1.04 perf-profile.children.cycles-pp.anon_vma_interval_tree_insert
1.33 -0.1 1.26 ± 2% perf-profile.children.cycles-pp.pte_alloc_one
0.47 ± 11% -0.1 0.40 ± 4% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
0.45 ± 11% -0.1 0.38 ± 4% perf-profile.children.cycles-pp.rwsem_optimistic_spin
1.00 -0.1 0.94 ± 2% perf-profile.children.cycles-pp.get_page_from_freelist
1.36 -0.1 1.31 perf-profile.children.cycles-pp.kmem_cache_free
1.08 -0.0 1.04 perf-profile.children.cycles-pp.kmem_cache_alloc_noprof
0.62 -0.0 0.58 ± 2% perf-profile.children.cycles-pp.dup_fd
0.63 -0.0 0.59 ± 3% perf-profile.children.cycles-pp.__pte_alloc
0.73 -0.0 0.69 perf-profile.children.cycles-pp.__tlb_remove_folio_pages_size
0.58 -0.0 0.54 perf-profile.children.cycles-pp.locks_remove_posix
0.90 -0.0 0.86 perf-profile.children.cycles-pp.copy_huge_pmd
0.54 -0.0 0.51 perf-profile.children.cycles-pp.__memcg_kmem_charge_page
0.76 -0.0 0.72 perf-profile.children.cycles-pp.vm_area_dup
0.31 ± 2% -0.0 0.28 ± 3% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.50 -0.0 0.47 perf-profile.children.cycles-pp.__anon_vma_interval_tree_remove
0.53 -0.0 0.50 perf-profile.children.cycles-pp.clear_page_erms
0.49 -0.0 0.46 perf-profile.children.cycles-pp.free_swap_cache
0.72 -0.0 0.69 perf-profile.children.cycles-pp.__memcg_slab_post_alloc_hook
0.37 ± 2% -0.0 0.34 ± 2% perf-profile.children.cycles-pp.unlink_file_vma
0.62 -0.0 0.60 perf-profile.children.cycles-pp.__memcg_slab_free_hook
0.42 -0.0 0.40 ± 2% perf-profile.children.cycles-pp.rmqueue
0.37 -0.0 0.35 ± 2% perf-profile.children.cycles-pp.__rmqueue_pcplist
0.28 -0.0 0.25 perf-profile.children.cycles-pp.__rb_insert_augmented
0.35 -0.0 0.33 ± 2% perf-profile.children.cycles-pp.rmqueue_bulk
0.56 -0.0 0.54 perf-profile.children.cycles-pp.fput
0.48 -0.0 0.46 perf-profile.children.cycles-pp._raw_spin_lock
0.51 -0.0 0.50 perf-profile.children.cycles-pp.free_unref_page
0.45 -0.0 0.43 perf-profile.children.cycles-pp.__x64_sys_unshare
0.44 -0.0 0.42 perf-profile.children.cycles-pp.free_unref_page_commit
0.45 -0.0 0.43 perf-profile.children.cycles-pp.ksys_unshare
0.31 -0.0 0.30 perf-profile.children.cycles-pp.memcg_account_kmem
0.27 -0.0 0.26 perf-profile.children.cycles-pp.__mod_memcg_state
0.44 -0.0 0.43 perf-profile.children.cycles-pp.__slab_free
0.28 -0.0 0.26 perf-profile.children.cycles-pp.__vm_area_free
0.22 ± 2% -0.0 0.21 perf-profile.children.cycles-pp.___slab_alloc
0.21 -0.0 0.20 ± 2% perf-profile.children.cycles-pp.__tlb_remove_folio_pages
0.13 -0.0 0.12 perf-profile.children.cycles-pp.__rb_erase_color
0.07 -0.0 0.06 perf-profile.children.cycles-pp.find_idlest_cpu
0.09 -0.0 0.08 perf-profile.children.cycles-pp.wake_up_new_task
0.06 -0.0 0.05 perf-profile.children.cycles-pp.kfree
0.06 -0.0 0.05 perf-profile.children.cycles-pp.update_sg_wakeup_stats
0.11 -0.0 0.10 perf-profile.children.cycles-pp.allocate_slab
0.44 ± 2% +0.1 0.53 ± 2% perf-profile.children.cycles-pp.tlb_finish_mmu
98.24 +0.2 98.46 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
98.24 +0.2 98.46 perf-profile.children.cycles-pp.do_syscall_64
33.55 +0.6 34.10 perf-profile.children.cycles-pp.syscall
33.35 +0.6 33.90 perf-profile.children.cycles-pp.__do_sys_clone3
19.41 +1.8 21.20 perf-profile.children.cycles-pp.__clone
19.14 +1.8 20.94 perf-profile.children.cycles-pp.__do_sys_clone
24.94 +2.1 27.07 perf-profile.children.cycles-pp._compound_head
52.48 +2.4 54.84 perf-profile.children.cycles-pp.kernel_clone
52.36 +2.4 54.72 perf-profile.children.cycles-pp.copy_process
39.38 +3.4 42.77 perf-profile.children.cycles-pp.dup_mm
39.24 +3.4 42.64 perf-profile.children.cycles-pp.dup_mmap
34.34 +3.7 38.00 perf-profile.children.cycles-pp.copy_p4d_range
34.37 +3.7 38.03 perf-profile.children.cycles-pp.copy_page_range
33.28 +3.7 36.98 perf-profile.children.cycles-pp.copy_pte_range
31.41 +3.8 35.18 perf-profile.children.cycles-pp.copy_present_ptes
0.00 +4.0 4.01 perf-profile.children.cycles-pp.folio_try_dup_anon_rmap_ptes
18.44 -3.2 15.24 perf-profile.self.cycles-pp.copy_present_ptes
5.66 -0.4 5.28 perf-profile.self.cycles-pp.folios_put_refs
4.78 -0.3 4.46 perf-profile.self.cycles-pp.free_pages_and_swap_cache
14.11 -0.3 13.80 perf-profile.self.cycles-pp.folio_remove_rmap_ptes
4.82 -0.2 4.59 perf-profile.self.cycles-pp.zap_present_ptes
2.66 -0.2 2.49 perf-profile.self.cycles-pp.filp_flush
2.36 -0.2 2.20 perf-profile.self.cycles-pp.dnotify_flush
0.20 ± 32% -0.1 0.08 ± 58% perf-profile.self.cycles-pp.queue_event
1.44 -0.1 1.36 perf-profile.self.cycles-pp.zap_pte_range
1.11 -0.1 1.03 perf-profile.self.cycles-pp.anon_vma_interval_tree_insert
1.26 -0.1 1.20 perf-profile.self.cycles-pp.vm_normal_page
0.56 -0.0 0.52 ± 2% perf-profile.self.cycles-pp.dup_fd
0.56 -0.0 0.53 perf-profile.self.cycles-pp.locks_remove_posix
0.31 -0.0 0.28 perf-profile.self.cycles-pp.put_files_struct
0.58 -0.0 0.55 perf-profile.self.cycles-pp.__tlb_remove_folio_pages_size
0.49 -0.0 0.46 ± 2% perf-profile.self.cycles-pp.__anon_vma_interval_tree_remove
0.30 ± 3% -0.0 0.28 ± 3% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.52 -0.0 0.49 ± 2% perf-profile.self.cycles-pp.clear_page_erms
0.31 -0.0 0.29 perf-profile.self.cycles-pp.free_swap_cache
0.33 -0.0 0.31 perf-profile.self.cycles-pp.__memcg_slab_free_hook
0.45 -0.0 0.43 perf-profile.self.cycles-pp._raw_spin_lock
0.55 -0.0 0.53 perf-profile.self.cycles-pp.fput
0.38 -0.0 0.36 perf-profile.self.cycles-pp.__memcg_slab_post_alloc_hook
0.47 -0.0 0.45 perf-profile.self.cycles-pp.up_write
0.26 -0.0 0.24 perf-profile.self.cycles-pp.__rb_insert_augmented
0.33 -0.0 0.32 perf-profile.self.cycles-pp.mod_objcg_state
0.31 -0.0 0.30 perf-profile.self.cycles-pp.__free_one_page
0.09 -0.0 0.08 perf-profile.self.cycles-pp.___slab_alloc
24.40 +2.1 26.55 perf-profile.self.cycles-pp._compound_head
0.00 +3.9 3.89 perf-profile.self.cycles-pp.folio_try_dup_anon_rmap_ptes
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki