Message-ID: <202404301149.2850aac-oliver.sang@intel.com>
Date: Tue, 30 Apr 2024 11:35:17 +0800
From: kernel test robot <oliver.sang@...el.com>
To: David Hildenbrand <david@...hat.com>
CC: <oe-lkp@...ts.linux.dev>, <lkp@...el.com>, <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>, "Darrick J. Wong"
	<djwong@...nel.org>, Hugh Dickins <hughd@...gle.com>, Jason Gunthorpe
	<jgg@...dia.com>, John Hubbard <jhubbard@...dia.com>, <linux-mm@...ck.org>,
	<ying.huang@...el.com>, <feng.tang@...el.com>, <fengwei.yin@...el.com>,
	<oliver.sang@...el.com>
Subject: [linus:master] [mm/madvise]  631426ba1d:
 stress-ng.shm-sysv.ops_per_sec 8.2% improvement



Hello,

kernel test robot noticed an 8.2% improvement of stress-ng.shm-sysv.ops_per_sec on:


commit: 631426ba1d45a8672b177ee85ad4cabe760dd131 ("mm/madvise: make MADV_POPULATE_(READ|WRITE) handle VM_FAULT_RETRY properly")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
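
As background, the commit changes how madvise(MADV_POPULATE_READ|WRITE) copes
with VM_FAULT_RETRY; the __madvise calltraces further below show that path.
A minimal, illustrative userspace sketch (not taken from the stressor or this
report; segment size and error handling are arbitrary) of the
shmget/shmat/madvise/shmdt sequence such a workload performs:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/mman.h>

    #ifndef MADV_POPULATE_WRITE
    #define MADV_POPULATE_WRITE 23   /* kernel uapi value, Linux 5.14+ */
    #endif

    int main(void)
    {
        size_t len = 8 * 1024 * 1024;                /* 8 MiB segment (arbitrary) */
        int id = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
        if (id < 0) { perror("shmget"); return 1; }

        char *addr = shmat(id, NULL, 0);             /* attach the segment */
        if (addr == (void *)-1) { perror("shmat"); return 1; }

        /* Prefault every page up front; this madvise(MADV_POPULATE_*) call is
           the code path the commit above changes. */
        if (madvise(addr, len, MADV_POPULATE_WRITE) != 0)
            perror("madvise(MADV_POPULATE_WRITE)");

        memset(addr, 0, len);                        /* touch the pages */
        shmdt(addr);                                 /* detach (shmdt path in the profile) */
        shmctl(id, IPC_RMID, NULL);                  /* remove the segment */
        return 0;
    }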

testcase: stress-ng
test machine: 224 threads 2 sockets Intel(R) Xeon(R) Platinum 8480CTDX (Sapphire Rapids) with 256G memory
parameters:

	nr_threads: 100%
	testtime: 60s
	test: shm-sysv
	cpufreq_governor: performance
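
For a rough idea of the workload outside LKP (an illustrative sketch, not the
exact job file; assumes a stress-ng build that provides the shm-sysv stressor):

    stress-ng --shm-sysv 0 --timeout 60s --metrics-brief

where 0 asks stress-ng to start one shm-sysv worker per online CPU, matching
nr_threads: 100%.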






Details are as follows:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240430/202404301149.2850aac-oliver.sang@intel.com

=========================================================================================
compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime:
  gcc-13/performance/x86_64-rhel-8.3/100%/debian-12-x86_64-20240206.cgz/lkp-spr-r02/shm-sysv/stress-ng/60s

commit: 
  v6.9-rc4
  631426ba1d ("mm/madvise: make MADV_POPULATE_(READ|WRITE) handle VM_FAULT_RETRY properly")

        v6.9-rc4 631426ba1d45a8672b177ee85ad 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
   5394576           +12.4%    6065916 ±  2%  cpuidle..usage
    301458 ±  8%     -15.9%     253467 ±  5%  numa-meminfo.node0.SUnreclaim
     75427 ±  8%     -16.0%      63343 ±  4%  numa-vmstat.node0.nr_slab_unreclaimable
    155389           +12.6%     174982        vmstat.system.cs
   3610346 ±  6%     -11.0%    3213619        sched_debug.cfs_rq:/.avg_vruntime.min
   3610346 ±  6%     -11.0%    3213619        sched_debug.cfs_rq:/.min_vruntime.min
     22086           +11.9%      24711        sched_debug.cpu.nr_switches.avg
    185.25 ± 10%     +59.6%     295.75 ± 44%  sched_debug.cpu.nr_uninterruptible.max
     84.67 ±  6%     +33.7%     113.22 ± 22%  sched_debug.cpu.nr_uninterruptible.stddev
   1800984           +17.4%    2113826        stress-ng.shm-sysv.nanosecs_per_shmdt_call
    478970            +8.1%     517685        stress-ng.shm-sysv.ops
      8047            +8.2%       8706        stress-ng.shm-sysv.ops_per_sec
    208527           +22.4%     255186        stress-ng.time.involuntary_context_switches
 6.355e+08            +8.0%  6.864e+08        stress-ng.time.minor_page_faults
     19049            -1.4%      18777        stress-ng.time.percent_of_cpu_this_job_got
     11326            -1.5%      11153        stress-ng.time.system_time
    156.02            +6.3%     165.84        stress-ng.time.user_time
   4855836           +10.3%    5358096        stress-ng.time.voluntary_context_switches
   1329171            +2.0%    1355214        proc-vmstat.nr_file_pages
    702918            +4.8%     736877        proc-vmstat.nr_inactive_anon
    560179            +4.6%     586223        proc-vmstat.nr_shmem
     43521            -6.0%      40920        proc-vmstat.nr_slab_reclaimable
    132019            -4.7%     125797        proc-vmstat.nr_slab_unreclaimable
    702919            +4.8%     736878        proc-vmstat.nr_zone_inactive_anon
 6.371e+08            +8.0%  6.883e+08        proc-vmstat.numa_hit
 6.363e+08            +8.0%  6.875e+08        proc-vmstat.numa_local
  60980733           +30.1%   79334617        proc-vmstat.pgactivate
 6.367e+08            +8.0%  6.875e+08        proc-vmstat.pgalloc_normal
 6.364e+08            +8.0%  6.873e+08        proc-vmstat.pgfault
 6.356e+08            +8.0%  6.863e+08        proc-vmstat.pgfree
 6.158e+08            +8.0%  6.651e+08        proc-vmstat.unevictable_pgs_scanned
      5.96            +3.5%       6.17        perf-stat.i.MPKI
 2.937e+10            +3.4%  3.038e+10        perf-stat.i.branch-instructions
     64.43            +1.3       65.77        perf-stat.i.cache-miss-rate%
 8.295e+08            +7.5%  8.921e+08        perf-stat.i.cache-misses
 1.281e+09            +5.3%  1.349e+09        perf-stat.i.cache-references
    160399           +11.6%     179016        perf-stat.i.context-switches
      4.04            -6.1%       3.80        perf-stat.i.cpi
 5.634e+11            -2.3%  5.505e+11        perf-stat.i.cpu-cycles
     31426            +7.9%      33916        perf-stat.i.cpu-migrations
    675.36            -9.3%     612.68        perf-stat.i.cycles-between-cache-misses
  1.38e+11            +3.9%  1.434e+11        perf-stat.i.instructions
      0.26            +6.7%       0.28        perf-stat.i.ipc
     83.99            +5.4%      88.56        perf-stat.i.metric.K/sec
   9516734            +4.5%    9947225        perf-stat.i.minor-faults
   9524443            +4.5%    9955476        perf-stat.i.page-faults
      6.02            +3.3%       6.22        perf-stat.overall.MPKI
     64.24            +1.6       65.86        perf-stat.overall.cache-miss-rate%
      4.08            -5.6%       3.85        perf-stat.overall.cpi
    678.05            -8.6%     619.59        perf-stat.overall.cycles-between-cache-misses
      0.25            +6.0%       0.26        perf-stat.overall.ipc
    217255            -8.1%     199679 ±  3%  perf-stat.ps.cpu-clock
 5.393e+11            -9.3%  4.891e+11 ±  4%  perf-stat.ps.cpu-cycles
    217255            -8.1%     199680 ±  3%  perf-stat.ps.task-clock
     21.82           -21.8        0.00        perf-profile.calltrace.cycles-pp.faultin_vma_page_range.madvise_vma_behavior.do_madvise.__x64_sys_madvise.do_syscall_64
     19.69           -19.7        0.00        perf-profile.calltrace.cycles-pp.lru_add_drain.faultin_vma_page_range.madvise_vma_behavior.do_madvise.__x64_sys_madvise
     19.69           -19.7        0.00        perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.faultin_vma_page_range.madvise_vma_behavior.do_madvise
     19.68           -19.7        0.00        perf-profile.calltrace.cycles-pp.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.faultin_vma_page_range.madvise_vma_behavior
     19.54           -19.5        0.00        perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.faultin_vma_page_range
     20.37           -18.8        1.59        perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain
     20.35           -18.8        1.58        perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu
     21.84           -17.6        4.26        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__madvise
     21.84           -17.6        4.26        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__madvise
     21.84           -17.6        4.26        perf-profile.calltrace.cycles-pp.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe.__madvise
     21.84           -17.6        4.26        perf-profile.calltrace.cycles-pp.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe.__madvise
     21.84           -17.6        4.26        perf-profile.calltrace.cycles-pp.madvise_vma_behavior.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
     21.84           -17.6        4.26        perf-profile.calltrace.cycles-pp.__madvise
      0.64            +0.0        0.67        perf-profile.calltrace.cycles-pp.shmem_add_to_page_cache.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fault.__do_fault
      0.67            +0.0        0.70 ±  2%  perf-profile.calltrace.cycles-pp.alloc_pages_mpol.shmem_alloc_folio.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fault
      0.62 ±  2%      +0.0        0.65 ±  2%  perf-profile.calltrace.cycles-pp.__alloc_pages.alloc_pages_mpol.shmem_alloc_folio.shmem_alloc_and_add_folio.shmem_get_folio_gfp
      0.52 ±  2%      +0.0        0.56 ±  3%  perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages.alloc_pages_mpol.shmem_alloc_folio.shmem_alloc_and_add_folio
      0.71            +0.0        0.75 ±  2%  perf-profile.calltrace.cycles-pp.shmem_alloc_folio.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fault.__do_fault
      0.55            +0.0        0.59        perf-profile.calltrace.cycles-pp.sync_regs.asm_exc_page_fault.stress_shm_sysv_child
      0.73            +0.0        0.78        perf-profile.calltrace.cycles-pp.clear_page_erms.shmem_get_folio_gfp.shmem_fault.__do_fault.do_fault
      0.57            +0.1        0.63        perf-profile.calltrace.cycles-pp.truncate_inode_folio.shmem_undo_range.shmem_evict_inode.evict.__dentry_kill
      0.60            +0.1        0.67 ±  2%  perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.__mmput.exit_mm.do_exit
      0.64            +0.1        0.74        perf-profile.calltrace.cycles-pp.anon_vma_clone.anon_vma_fork.dup_mmap.dup_mm.copy_process
      0.88            +0.1        1.02 ±  2%  perf-profile.calltrace.cycles-pp.anon_vma_fork.dup_mmap.dup_mm.copy_process.kernel_clone
      0.83            +0.2        1.05        perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.unmap_region
      0.84            +0.2        1.06        perf-profile.calltrace.cycles-pp.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.unmap_region.do_vmi_align_munmap
      0.84            +0.2        1.06        perf-profile.calltrace.cycles-pp.lru_add_drain.unmap_region.do_vmi_align_munmap.do_vma_munmap.ksys_shmdt
      0.84            +0.2        1.06        perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.unmap_region.do_vmi_align_munmap.do_vma_munmap
      1.64            +0.2        1.88 ±  2%  perf-profile.calltrace.cycles-pp.dup_mmap.dup_mm.copy_process.kernel_clone.__do_sys_clone
      1.79            +0.2        2.04 ±  2%  perf-profile.calltrace.cycles-pp.dup_mm.copy_process.kernel_clone.__do_sys_clone.do_syscall_64
      1.87            +0.3        2.13 ±  2%  perf-profile.calltrace.cycles-pp.copy_process.kernel_clone.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.95            +0.3        2.22 ±  2%  perf-profile.calltrace.cycles-pp.kernel_clone.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe._Fork
      1.95            +0.3        2.22 ±  2%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe._Fork
      1.95            +0.3        2.22 ±  2%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe._Fork
      1.95            +0.3        2.22 ±  2%  perf-profile.calltrace.cycles-pp.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe._Fork
      2.01            +0.3        2.29 ±  2%  perf-profile.calltrace.cycles-pp._Fork
      1.48            +0.4        1.85        perf-profile.calltrace.cycles-pp.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fault.__do_fault.do_read_fault
      1.72            +0.4        2.11        perf-profile.calltrace.cycles-pp.__do_fault.do_read_fault.do_fault.__handle_mm_fault.handle_mm_fault
      1.72            +0.4        2.11        perf-profile.calltrace.cycles-pp.shmem_get_folio_gfp.shmem_fault.__do_fault.do_read_fault.do_fault
      1.72            +0.4        2.11        perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.do_read_fault.do_fault.__handle_mm_fault
      1.76            +0.4        2.15        perf-profile.calltrace.cycles-pp.do_read_fault.do_fault.__handle_mm_fault.handle_mm_fault.__get_user_pages
      2.06 ±  7%      +0.4        2.51        perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages
      2.06 ±  7%      +0.4        2.51        perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs.free_pages_and_swap_cache
      2.07 ±  6%      +0.4        2.52        perf-profile.calltrace.cycles-pp.__page_cache_release.folios_put_refs.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_finish_mmu
      2.14 ±  6%      +0.5        2.60        perf-profile.calltrace.cycles-pp.folios_put_refs.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_finish_mmu.exit_mmap
      2.18 ±  6%      +0.5        2.64        perf-profile.calltrace.cycles-pp.__tlb_batch_free_encoded_pages.tlb_finish_mmu.exit_mmap.__mmput.exit_mm
      2.18 ±  6%      +0.5        2.64        perf-profile.calltrace.cycles-pp.free_pages_and_swap_cache.__tlb_batch_free_encoded_pages.tlb_finish_mmu.exit_mmap.__mmput
      2.18 ±  6%      +0.5        2.65        perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.__mmput.exit_mm.do_exit
      0.00            +0.5        0.52 ±  2%  perf-profile.calltrace.cycles-pp.unlink_anon_vmas.free_pgtables.exit_mmap.__mmput.exit_mm
      0.00            +0.5        0.54        perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.exit_mmap
      0.00            +0.5        0.54        perf-profile.calltrace.cycles-pp.folio_batch_move_lru.lru_add_drain_cpu.lru_add_drain.exit_mmap.__mmput
      0.00            +0.5        0.54        perf-profile.calltrace.cycles-pp.lru_add_drain.exit_mmap.__mmput.exit_mm.do_exit
      0.00            +0.5        0.54        perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.exit_mmap.__mmput.exit_mm
      3.88 ±  3%      +0.7        4.60        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.88 ±  3%      +0.7        4.60        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
      3.87 ±  3%      +0.7        4.59        perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.87 ±  3%      +0.7        4.59        perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.87 ±  3%      +0.7        4.59        perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.77 ±  3%      +0.7        4.48        perf-profile.calltrace.cycles-pp.exit_mmap.__mmput.exit_mm.do_exit.do_group_exit
      3.77 ±  3%      +0.7        4.49        perf-profile.calltrace.cycles-pp.__mmput.exit_mm.do_exit.do_group_exit.__x64_sys_exit_group
      3.77 ±  3%      +0.7        4.49        perf-profile.calltrace.cycles-pp.exit_mm.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      1.93            +1.1        3.00        perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_activate.folio_mark_accessed.zap_present_ptes
      1.93            +1.1        3.00        perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_activate.folio_mark_accessed
      1.93            +1.1        3.00        perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_activate
      2.02            +1.1        3.10        perf-profile.calltrace.cycles-pp.folio_batch_move_lru.folio_activate.folio_mark_accessed.zap_present_ptes.zap_pte_range
      2.02            +1.1        3.10        perf-profile.calltrace.cycles-pp.folio_activate.folio_mark_accessed.zap_present_ptes.zap_pte_range.zap_pmd_range
      2.22            +1.1        3.35        perf-profile.calltrace.cycles-pp.folio_mark_accessed.zap_present_ptes.zap_pte_range.zap_pmd_range.unmap_page_range
      2.54            +1.2        3.72        perf-profile.calltrace.cycles-pp.zap_present_ptes.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas
      2.97            +1.3        4.25        perf-profile.calltrace.cycles-pp.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region.do_vmi_align_munmap
      2.97            +1.3        4.26        perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_vmi_align_munmap.do_vma_munmap
      2.96            +1.3        4.25        perf-profile.calltrace.cycles-pp.zap_pte_range.zap_pmd_range.unmap_page_range.unmap_vmas.unmap_region
      2.97            +1.3        4.26        perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_vmi_align_munmap.do_vma_munmap.ksys_shmdt
      4.86            +1.3        6.20        perf-profile.calltrace.cycles-pp.task_work_run.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmdt
      4.86            +1.3        6.20        perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmdt
      3.82            +1.5        5.34        perf-profile.calltrace.cycles-pp.unmap_region.do_vmi_align_munmap.do_vma_munmap.ksys_shmdt.do_syscall_64
      4.04            +1.6        5.59        perf-profile.calltrace.cycles-pp.do_vma_munmap.ksys_shmdt.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmdt
      4.04            +1.6        5.59        perf-profile.calltrace.cycles-pp.do_vmi_align_munmap.do_vma_munmap.ksys_shmdt.do_syscall_64.entry_SYSCALL_64_after_hwframe
      4.08            +1.6        5.63        perf-profile.calltrace.cycles-pp.ksys_shmdt.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmdt
      0.00            +1.8        1.82        perf-profile.calltrace.cycles-pp.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault.__get_user_pages
      8.96            +2.9       11.84        perf-profile.calltrace.cycles-pp.shmdt
      8.95            +2.9       11.84        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmdt
      8.95            +2.9       11.84        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.shmdt
     15.46            +3.8       19.31        perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmctl
     15.46            +3.8       19.30        perf-profile.calltrace.cycles-pp.task_work_run.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmctl
      0.00            +4.0        4.03        perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.__get_user_pages.faultin_page_range
      0.00            +4.1        4.06        perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__get_user_pages.faultin_page_range.madvise_vma_behavior
      0.00            +4.1        4.07        perf-profile.calltrace.cycles-pp.handle_mm_fault.__get_user_pages.faultin_page_range.madvise_vma_behavior.do_madvise
     21.28            +4.2       25.44        perf-profile.calltrace.cycles-pp.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00            +4.2        4.18        perf-profile.calltrace.cycles-pp.__get_user_pages.faultin_page_range.madvise_vma_behavior.do_madvise.__x64_sys_madvise
     22.07            +4.2       26.29        perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
     22.46            +4.2       26.70        perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
      0.00            +4.3        4.26        perf-profile.calltrace.cycles-pp.faultin_page_range.madvise_vma_behavior.do_madvise.__x64_sys_madvise.do_syscall_64
     22.75            +4.3       27.02        perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.stress_shm_sysv_child
     23.28            +4.3       27.59        perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.stress_shm_sysv_child
     23.32            +4.3       27.63        perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.stress_shm_sysv_child
     25.52            +4.5       30.01        perf-profile.calltrace.cycles-pp.stress_shm_sysv_child
     26.13            +4.5       30.66        perf-profile.calltrace.cycles-pp.asm_exc_page_fault.stress_shm_sysv_child
     18.50            +5.0       23.47        perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs.shmem_undo_range
     18.50            +5.0       23.47        perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs.shmem_undo_range.shmem_evict_inode
     17.84            +5.0       22.82        perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.folio_lruvec_lock_irq.check_move_unevictable_folios.shmem_unlock_mapping
     17.87            +5.0       22.85        perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.folio_lruvec_lock_irq.check_move_unevictable_folios.shmem_unlock_mapping.shmctl_do_lock
     17.88            +5.0       22.86        perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irq.check_move_unevictable_folios.shmem_unlock_mapping.shmctl_do_lock.ksys_shmctl
     18.65            +5.0       23.65        perf-profile.calltrace.cycles-pp.__page_cache_release.folios_put_refs.shmem_undo_range.shmem_evict_inode.evict
     18.08            +5.0       23.10        perf-profile.calltrace.cycles-pp.check_move_unevictable_folios.shmem_unlock_mapping.shmctl_do_lock.ksys_shmctl.do_syscall_64
     19.17            +5.1       24.23        perf-profile.calltrace.cycles-pp.folios_put_refs.shmem_undo_range.shmem_evict_inode.evict.__dentry_kill
     18.76            +5.2       23.92        perf-profile.calltrace.cycles-pp.shmem_unlock_mapping.shmctl_do_lock.ksys_shmctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
     18.78            +5.2       23.94        perf-profile.calltrace.cycles-pp.shmctl_do_lock.ksys_shmctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmctl
     20.29            +5.2       25.47        perf-profile.calltrace.cycles-pp.__dentry_kill.dput.__fput.task_work_run.syscall_exit_to_user_mode
     20.29            +5.2       25.47        perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.syscall_exit_to_user_mode.do_syscall_64
     20.28            +5.2       25.46        perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dput.__fput.task_work_run
     20.27            +5.2       25.45        perf-profile.calltrace.cycles-pp.shmem_undo_range.shmem_evict_inode.evict.__dentry_kill.dput
     20.28            +5.2       25.46        perf-profile.calltrace.cycles-pp.shmem_evict_inode.evict.__dentry_kill.dput.__fput
     20.31            +5.2       25.49        perf-profile.calltrace.cycles-pp.__fput.task_work_run.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
     19.09            +5.2       24.28        perf-profile.calltrace.cycles-pp.ksys_shmctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmctl
     20.52            +5.4       25.95        perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.__page_cache_release.folios_put_refs
     18.42            +5.5       23.92        perf-profile.calltrace.cycles-pp.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fault.__do_fault.do_fault
     17.33            +5.5       22.87        perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_add_lru
     17.35            +5.5       22.89        perf-profile.calltrace.cycles-pp.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_add_lru.shmem_alloc_and_add_folio.shmem_get_folio_gfp
     17.35            +5.5       22.89        perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.folio_lruvec_lock_irqsave.folio_batch_move_lru.folio_add_lru.shmem_alloc_and_add_folio
     17.76            +5.6       23.38        perf-profile.calltrace.cycles-pp.folio_batch_move_lru.folio_add_lru.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fault
     17.81            +5.6       23.44        perf-profile.calltrace.cycles-pp.folio_add_lru.shmem_alloc_and_add_folio.shmem_get_folio_gfp.shmem_fault.__do_fault
     21.20            +6.0       27.16        perf-profile.calltrace.cycles-pp.shmem_get_folio_gfp.shmem_fault.__do_fault.do_fault.__handle_mm_fault
     21.25            +6.0       27.22        perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.do_fault.__handle_mm_fault.handle_mm_fault
     34.55            +9.0       43.59        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmctl
     34.55            +9.0       43.60        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.shmctl
     34.57            +9.0       43.62        perf-profile.calltrace.cycles-pp.shmctl
     21.82           -21.8        0.00        perf-profile.children.cycles-pp.faultin_vma_page_range
     21.15           -19.2        1.94        perf-profile.children.cycles-pp.lru_add_drain
     21.36           -19.2        2.16        perf-profile.children.cycles-pp.lru_add_drain_cpu
     21.84           -17.6        4.26        perf-profile.children.cycles-pp.__x64_sys_madvise
     21.84           -17.6        4.26        perf-profile.children.cycles-pp.do_madvise
     21.84           -17.6        4.26        perf-profile.children.cycles-pp.madvise_vma_behavior
     21.84           -17.6        4.26        perf-profile.children.cycles-pp.__madvise
     41.38           -12.4       28.96        perf-profile.children.cycles-pp.folio_batch_move_lru
     61.27            -6.9       54.34        perf-profile.children.cycles-pp.folio_lruvec_lock_irqsave
     61.71            -6.9       54.82        perf-profile.children.cycles-pp._raw_spin_lock_irqsave
     72.45            -4.5       67.90        perf-profile.children.cycles-pp.do_syscall_64
     72.45            -4.5       67.91        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     79.47            -1.9       77.57        perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
      0.18 ±  7%      -0.1        0.10 ± 12%  perf-profile.children.cycles-pp.smpboot_thread_fn
      0.17 ±  7%      -0.1        0.09 ± 15%  perf-profile.children.cycles-pp.run_ksoftirqd
      0.22 ±  5%      -0.1        0.15 ± 10%  perf-profile.children.cycles-pp.kthread
      0.26 ±  5%      -0.1        0.20 ±  8%  perf-profile.children.cycles-pp.ret_from_fork_asm
      0.26 ±  4%      -0.1        0.20 ±  6%  perf-profile.children.cycles-pp.ret_from_fork
      0.26 ±  6%      -0.0        0.22 ±  7%  perf-profile.children.cycles-pp.rcu_do_batch
      0.30 ±  5%      -0.0        0.27 ±  6%  perf-profile.children.cycles-pp.__do_softirq
      0.14            -0.0        0.13 ±  2%  perf-profile.children.cycles-pp.__dquot_alloc_space
      0.07            -0.0        0.06        perf-profile.children.cycles-pp.inode_add_bytes
      0.06            -0.0        0.05        perf-profile.children.cycles-pp.mem_cgroup_update_lru_size
      0.14            +0.0        0.15        perf-profile.children.cycles-pp.handle_pte_fault
      0.14            +0.0        0.15        perf-profile.children.cycles-pp.lock_mm_and_find_vma
      0.10            +0.0        0.11        perf-profile.children.cycles-pp.pte_offset_map_nolock
      0.10            +0.0        0.11        perf-profile.children.cycles-pp.try_charge_memcg
      0.05            +0.0        0.06        perf-profile.children.cycles-pp.folio_mark_dirty
      0.05            +0.0        0.06        perf-profile.children.cycles-pp.page_counter_uncharge
      0.07            +0.0        0.08        perf-profile.children.cycles-pp.find_idlest_cpu
      0.07            +0.0        0.08        perf-profile.children.cycles-pp.xas_descend
      0.12            +0.0        0.13        perf-profile.children.cycles-pp.__mem_cgroup_uncharge_folios
      0.18            +0.0        0.19        perf-profile.children.cycles-pp.__slab_free
      0.08            +0.0        0.09        perf-profile.children.cycles-pp.__vm_area_free
      0.06            +0.0        0.07        perf-profile.children.cycles-pp.fput
      0.09            +0.0        0.10        perf-profile.children.cycles-pp.up_read
      0.08            +0.0        0.09        perf-profile.children.cycles-pp.wake_up_new_task
      0.07 ±  6%      +0.0        0.08 ±  5%  perf-profile.children.cycles-pp.intel_idle
      0.12 ±  3%      +0.0        0.13        perf-profile.children.cycles-pp.do_mmap
      0.12            +0.0        0.13 ±  2%  perf-profile.children.cycles-pp.cgroup_rstat_updated
      0.13 ±  2%      +0.0        0.14 ±  3%  perf-profile.children.cycles-pp.__free_one_page
      0.10            +0.0        0.11 ±  4%  perf-profile.children.cycles-pp.mmap_region
      0.10 ±  4%      +0.0        0.11        perf-profile.children.cycles-pp.select_task_rq_fair
      0.21            +0.0        0.22        perf-profile.children.cycles-pp.___perf_sw_event
      0.12            +0.0        0.13 ±  3%  perf-profile.children.cycles-pp.__memcg_slab_post_alloc_hook
      0.14 ±  2%      +0.0        0.15 ±  3%  perf-profile.children.cycles-pp.filemap_map_pages
      0.08            +0.0        0.10 ±  5%  perf-profile.children.cycles-pp.shmctl_down
      0.14            +0.0        0.16 ±  3%  perf-profile.children.cycles-pp.tlb_flush_rmap_batch
      0.14            +0.0        0.16 ±  3%  perf-profile.children.cycles-pp.tlb_flush_rmaps
      0.11 ±  3%      +0.0        0.13 ±  5%  perf-profile.children.cycles-pp.__count_memcg_events
      0.24            +0.0        0.26        perf-profile.children.cycles-pp.__perf_sw_event
      0.19            +0.0        0.21        perf-profile.children.cycles-pp.lock_vma_under_rcu
      0.14 ±  2%      +0.0        0.16 ±  2%  perf-profile.children.cycles-pp.mod_objcg_state
      0.16            +0.0        0.18        perf-profile.children.cycles-pp.__pte_offset_map_lock
      0.30            +0.0        0.32        perf-profile.children.cycles-pp._raw_spin_lock
      0.21            +0.0        0.23 ±  2%  perf-profile.children.cycles-pp.vm_area_dup
      0.24 ±  2%      +0.0        0.26        perf-profile.children.cycles-pp.xas_store
      0.10            +0.0        0.12 ±  3%  perf-profile.children.cycles-pp.wake_up_q
      0.12 ±  4%      +0.0        0.14 ±  2%  perf-profile.children.cycles-pp.try_to_wake_up
      0.17 ±  3%      +0.0        0.20 ±  3%  perf-profile.children.cycles-pp.cpuidle_enter
      0.17 ±  3%      +0.0        0.20 ±  3%  perf-profile.children.cycles-pp.cpuidle_enter_state
      0.23            +0.0        0.26        perf-profile.children.cycles-pp.folio_add_file_rmap_ptes
      0.27            +0.0        0.30        perf-profile.children.cycles-pp.filemap_unaccount_folio
      0.18 ±  4%      +0.0        0.21 ±  4%  perf-profile.children.cycles-pp.cpuidle_idle_call
      0.17 ±  2%      +0.0        0.20 ±  2%  perf-profile.children.cycles-pp.__shm_close
      0.28            +0.0        0.30        perf-profile.children.cycles-pp.kmem_cache_alloc
      0.09 ±  5%      +0.0        0.12        perf-profile.children.cycles-pp.workingset_age_nonresident
      0.16            +0.0        0.19 ±  2%  perf-profile.children.cycles-pp.rwsem_wake
      0.30            +0.0        0.33 ±  2%  perf-profile.children.cycles-pp.anon_vma_interval_tree_insert
      0.25            +0.0        0.28        perf-profile.children.cycles-pp.filemap_get_folios_tag
      0.33            +0.0        0.36        perf-profile.children.cycles-pp.set_pte_range
      0.26            +0.0        0.30 ±  4%  perf-profile.children.cycles-pp.__mem_cgroup_charge
      0.30            +0.0        0.33 ±  2%  perf-profile.children.cycles-pp.fault_dirty_shared_page
      0.26            +0.0        0.30        perf-profile.children.cycles-pp.__mod_node_page_state
      0.27 ±  3%      +0.0        0.31 ±  3%  perf-profile.children.cycles-pp.common_startup_64
      0.27 ±  3%      +0.0        0.31 ±  3%  perf-profile.children.cycles-pp.cpu_startup_entry
      0.27 ±  3%      +0.0        0.31 ±  3%  perf-profile.children.cycles-pp.do_idle
      0.56            +0.0        0.61        perf-profile.children.cycles-pp.native_irq_return_iret
      0.26 ±  2%      +0.0        0.30 ±  4%  perf-profile.children.cycles-pp.start_secondary
      0.23 ±  2%      +0.0        0.27 ±  3%  perf-profile.children.cycles-pp.rwsem_spin_on_owner
      0.35            +0.0        0.39        perf-profile.children.cycles-pp.__mod_lruvec_state
      0.57            +0.0        0.61        perf-profile.children.cycles-pp.sync_regs
      0.10 ±  4%      +0.0        0.14        perf-profile.children.cycles-pp.workingset_activation
      0.27            +0.0        0.32 ±  2%  perf-profile.children.cycles-pp.find_lock_entries
      0.32            +0.0        0.36        perf-profile.children.cycles-pp.ipcget
      0.25 ±  2%      +0.0        0.30        perf-profile.children.cycles-pp.remove_vma
      0.32            +0.0        0.36 ±  2%  perf-profile.children.cycles-pp.__x64_sys_shmget
      0.45            +0.0        0.50        perf-profile.children.cycles-pp.finish_fault
      0.33            +0.0        0.37 ±  2%  perf-profile.children.cycles-pp.shmget
      0.75            +0.0        0.80        perf-profile.children.cycles-pp.shmem_add_to_page_cache
      0.36 ±  4%      +0.0        0.41 ±  5%  perf-profile.children.cycles-pp.rmqueue_bulk
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.activate_task
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.follow_page_pte
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.percpu_counter_add_batch
      0.48            +0.1        0.53        perf-profile.children.cycles-pp.__filemap_remove_folio
      0.13 ±  4%      +0.1        0.18 ±  4%  perf-profile.children.cycles-pp.irq_exit_rcu
      0.34            +0.1        0.39        perf-profile.children.cycles-pp.__x64_sys_shmat
      0.34            +0.1        0.39        perf-profile.children.cycles-pp.do_shmat
      0.27            +0.1        0.32 ±  2%  perf-profile.children.cycles-pp.up_write
      0.40 ±  3%      +0.1        0.45 ±  4%  perf-profile.children.cycles-pp.__rmqueue_pcplist
      0.51 ±  2%      +0.1        0.56 ±  4%  perf-profile.children.cycles-pp.rmqueue
      0.50            +0.1        0.56        perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
      0.35            +0.1        0.41 ±  2%  perf-profile.children.cycles-pp.shmat
      0.47            +0.1        0.52        perf-profile.children.cycles-pp.unlink_anon_vmas
      0.38 ±  2%      +0.1        0.44 ±  2%  perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
      0.42            +0.1        0.47        perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
      0.82            +0.1        0.87 ±  2%  perf-profile.children.cycles-pp.shmem_alloc_folio
      0.61            +0.1        0.67        perf-profile.children.cycles-pp.filemap_remove_folio
      0.61            +0.1        0.68 ±  2%  perf-profile.children.cycles-pp.free_pgtables
      0.66 ±  2%      +0.1        0.73 ±  3%  perf-profile.children.cycles-pp.get_page_from_freelist
      0.85            +0.1        0.92 ±  2%  perf-profile.children.cycles-pp.alloc_pages_mpol
      0.79            +0.1        0.86 ±  2%  perf-profile.children.cycles-pp.__alloc_pages
      0.76            +0.1        0.83        perf-profile.children.cycles-pp.truncate_inode_folio
      0.84            +0.1        0.93        perf-profile.children.cycles-pp.clear_page_erms
      0.39 ±  2%      +0.1        0.47        perf-profile.children.cycles-pp.__folio_batch_release
      0.24 ±  2%      +0.1        0.32 ±  7%  perf-profile.children.cycles-pp.tlb_flush_mmu
      0.88            +0.1        0.98        perf-profile.children.cycles-pp.__lruvec_stat_mod_folio
      0.64            +0.1        0.74        perf-profile.children.cycles-pp.anon_vma_clone
      0.30 ±  4%      +0.1        0.39 ±  6%  perf-profile.children.cycles-pp.osq_lock
      0.88            +0.1        1.02 ±  2%  perf-profile.children.cycles-pp.anon_vma_fork
      0.57 ±  3%      +0.1        0.72 ±  4%  perf-profile.children.cycles-pp.rwsem_optimistic_spin
      0.81 ±  2%      +0.2        0.97 ±  2%  perf-profile.children.cycles-pp.rwsem_down_write_slowpath
      1.05            +0.2        1.24 ±  2%  perf-profile.children.cycles-pp.down_write
      1.65            +0.2        1.88 ±  2%  perf-profile.children.cycles-pp.dup_mmap
      1.79            +0.2        2.04 ±  2%  perf-profile.children.cycles-pp.dup_mm
      1.87            +0.3        2.13 ±  2%  perf-profile.children.cycles-pp.copy_process
      1.95            +0.3        2.22 ±  2%  perf-profile.children.cycles-pp.__do_sys_clone
      1.95            +0.3        2.22 ±  2%  perf-profile.children.cycles-pp.kernel_clone
      2.02            +0.3        2.29 ±  2%  perf-profile.children.cycles-pp._Fork
      1.91            +0.4        2.32        perf-profile.children.cycles-pp.do_read_fault
      2.19 ±  6%      +0.5        2.66        perf-profile.children.cycles-pp.tlb_finish_mmu
      2.42 ±  5%      +0.6        2.98        perf-profile.children.cycles-pp.__tlb_batch_free_encoded_pages
      2.42 ±  5%      +0.6        2.98        perf-profile.children.cycles-pp.free_pages_and_swap_cache
      3.77 ±  3%      +0.7        4.49        perf-profile.children.cycles-pp.__mmput
      3.77 ±  3%      +0.7        4.49        perf-profile.children.cycles-pp.exit_mm
      3.77 ±  3%      +0.7        4.49        perf-profile.children.cycles-pp.exit_mmap
      3.91 ±  3%      +0.7        4.62        perf-profile.children.cycles-pp.__x64_sys_exit_group
      3.90 ±  3%      +0.7        4.62        perf-profile.children.cycles-pp.do_group_exit
      3.90 ±  3%      +0.7        4.62        perf-profile.children.cycles-pp.do_exit
      2.03            +1.1        3.11        perf-profile.children.cycles-pp.folio_activate
      2.23            +1.1        3.37        perf-profile.children.cycles-pp.folio_mark_accessed
      2.75            +1.2        3.96        perf-profile.children.cycles-pp.zap_present_ptes
      3.22            +1.3        4.54        perf-profile.children.cycles-pp.zap_pte_range
      3.25            +1.3        4.58        perf-profile.children.cycles-pp.unmap_page_range
      3.23            +1.3        4.56        perf-profile.children.cycles-pp.zap_pmd_range
      3.28            +1.3        4.61        perf-profile.children.cycles-pp.unmap_vmas
      3.84            +1.5        5.35        perf-profile.children.cycles-pp.unmap_region
      4.09            +1.6        5.64        perf-profile.children.cycles-pp.do_vmi_align_munmap
      4.04            +1.6        5.59        perf-profile.children.cycles-pp.do_vma_munmap
      4.08            +1.6        5.63        perf-profile.children.cycles-pp.ksys_shmdt
      2.13            +2.0        4.18        perf-profile.children.cycles-pp.__get_user_pages
      8.96            +2.9       11.85        perf-profile.children.cycles-pp.shmdt
      0.00            +4.3        4.26        perf-profile.children.cycles-pp.faultin_page_range
     23.56            +4.3       27.90        perf-profile.children.cycles-pp.do_user_addr_fault
     23.59            +4.3       27.94        perf-profile.children.cycles-pp.exc_page_fault
     25.36            +4.5       29.84        perf-profile.children.cycles-pp.asm_exc_page_fault
     26.09            +4.5       30.62        perf-profile.children.cycles-pp.stress_shm_sysv_child
     17.90            +5.0       22.88        perf-profile.children.cycles-pp.folio_lruvec_lock_irq
     18.08            +5.0       23.07        perf-profile.children.cycles-pp._raw_spin_lock_irq
     18.09            +5.0       23.11        perf-profile.children.cycles-pp.check_move_unevictable_folios
     18.78            +5.2       23.94        perf-profile.children.cycles-pp.shmctl_do_lock
     18.76            +5.2       23.92        perf-profile.children.cycles-pp.shmem_unlock_mapping
     20.28            +5.2       25.46        perf-profile.children.cycles-pp.shmem_evict_inode
     20.28            +5.2       25.46        perf-profile.children.cycles-pp.evict
     20.27            +5.2       25.45        perf-profile.children.cycles-pp.shmem_undo_range
     20.30            +5.2       25.48        perf-profile.children.cycles-pp.dput
     20.32            +5.2       25.51        perf-profile.children.cycles-pp.task_work_run
     20.29            +5.2       25.47        perf-profile.children.cycles-pp.__dentry_kill
     20.36            +5.2       25.55        perf-profile.children.cycles-pp.syscall_exit_to_user_mode
     20.31            +5.2       25.50        perf-profile.children.cycles-pp.__fput
     19.09            +5.2       24.28        perf-profile.children.cycles-pp.ksys_shmctl
     20.73            +5.4       26.18        perf-profile.children.cycles-pp.__page_cache_release
     21.51            +5.5       27.03        perf-profile.children.cycles-pp.folios_put_refs
     17.87            +5.6       23.50        perf-profile.children.cycles-pp.folio_add_lru
     20.09            +5.8       25.86        perf-profile.children.cycles-pp.shmem_alloc_and_add_folio
     23.21            +6.1       29.33        perf-profile.children.cycles-pp.shmem_fault
     23.24            +6.1       29.36        perf-profile.children.cycles-pp.shmem_get_folio_gfp
     23.25            +6.1       29.38        perf-profile.children.cycles-pp.__do_fault
     24.26            +6.2       30.49        perf-profile.children.cycles-pp.do_fault
     24.73            +6.3       31.00        perf-profile.children.cycles-pp.__handle_mm_fault
     25.04            +6.3       31.33        perf-profile.children.cycles-pp.handle_mm_fault
     34.58            +9.0       43.62        perf-profile.children.cycles-pp.shmctl
     79.47            -1.9       77.57        perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
      0.14 ±  3%      -0.0        0.13        perf-profile.self.cycles-pp.lru_add_fn
      0.10            +0.0        0.11        perf-profile.self.cycles-pp.mas_walk
      0.05            +0.0        0.06        perf-profile.self.cycles-pp.__mem_cgroup_charge
      0.05            +0.0        0.06        perf-profile.self.cycles-pp.filemap_remove_folio
      0.07            +0.0        0.08        perf-profile.self.cycles-pp.folio_unlock
      0.06            +0.0        0.07        perf-profile.self.cycles-pp.do_fault
      0.08            +0.0        0.09        perf-profile.self.cycles-pp.down_read_trylock
      0.12            +0.0        0.13        perf-profile.self.cycles-pp.xas_store
      0.16            +0.0        0.17 ±  2%  perf-profile.self.cycles-pp.___perf_sw_event
      0.09            +0.0        0.10 ±  3%  perf-profile.self.cycles-pp.cgroup_rstat_updated
      0.07 ±  6%      +0.0        0.08 ±  5%  perf-profile.self.cycles-pp.intel_idle
      0.15            +0.0        0.16 ±  2%  perf-profile.self.cycles-pp.shmem_alloc_and_add_folio
      0.08            +0.0        0.09 ±  4%  perf-profile.self.cycles-pp.dup_mmap
      0.15 ±  2%      +0.0        0.16 ±  2%  perf-profile.self.cycles-pp.check_move_unevictable_folios
      0.09            +0.0        0.10 ±  4%  perf-profile.self.cycles-pp.__count_memcg_events
      0.13 ±  3%      +0.0        0.15 ±  3%  perf-profile.self.cycles-pp.get_page_from_freelist
      0.13 ±  2%      +0.0        0.15 ±  3%  perf-profile.self.cycles-pp.rmqueue_bulk
      0.11 ±  3%      +0.0        0.12 ±  3%  perf-profile.self.cycles-pp.folio_mark_accessed
      0.28            +0.0        0.30        perf-profile.self.cycles-pp._raw_spin_lock
      0.19            +0.0        0.21 ±  2%  perf-profile.self.cycles-pp._raw_spin_lock_irq
      0.22            +0.0        0.24        perf-profile.self.cycles-pp.down_write
      0.17 ±  2%      +0.0        0.19        perf-profile.self.cycles-pp.zap_present_ptes
      0.11            +0.0        0.13        perf-profile.self.cycles-pp.up_write
      0.26            +0.0        0.28 ±  2%  perf-profile.self.cycles-pp.__handle_mm_fault
      0.14            +0.0        0.16 ±  2%  perf-profile.self.cycles-pp.anon_vma_clone
      0.25 ±  2%      +0.0        0.27 ±  2%  perf-profile.self.cycles-pp.__lruvec_stat_mod_folio
      0.22 ±  2%      +0.0        0.24 ±  2%  perf-profile.self.cycles-pp.filemap_get_folios_tag
      0.25            +0.0        0.28        perf-profile.self.cycles-pp.__mod_node_page_state
      0.09 ±  5%      +0.0        0.12        perf-profile.self.cycles-pp.workingset_age_nonresident
      0.30            +0.0        0.33        perf-profile.self.cycles-pp.anon_vma_interval_tree_insert
      0.23 ±  2%      +0.0        0.27 ±  2%  perf-profile.self.cycles-pp.find_lock_entries
      0.56            +0.0        0.61        perf-profile.self.cycles-pp.native_irq_return_iret
      0.22 ±  2%      +0.0        0.26 ±  3%  perf-profile.self.cycles-pp.rwsem_spin_on_owner
      0.42            +0.0        0.46        perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
      0.57            +0.0        0.61        perf-profile.self.cycles-pp.sync_regs
      0.00            +0.1        0.05        perf-profile.self.cycles-pp.free_pcppages_bulk
      0.00            +0.1        0.05        perf-profile.self.cycles-pp.lock_vma_under_rcu
      0.84            +0.1        0.92        perf-profile.self.cycles-pp.clear_page_erms
      0.30 ±  4%      +0.1        0.39 ±  6%  perf-profile.self.cycles-pp.osq_lock
      1.54            +0.1        1.66        perf-profile.self.cycles-pp.stress_shm_sysv_child
      2.20            +0.3        2.47        perf-profile.self.cycles-pp.shmem_get_folio_gfp




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

