Message-ID: <20220317090415.GE735@xsang-OptiPlex-9020>
Date:   Thu, 17 Mar 2022 17:04:15 +0800
From:   kernel test robot <oliver.sang@...el.com>
To:     Nadav Amit <namit@...are.com>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
        lkp@...el.com, ying.huang@...el.com, feng.tang@...el.com,
        zhengjun.xing@...ux.intel.com, fengwei.yin@...el.com
Subject: [x86/mm/tlb]  6035152d8e:  will-it-scale.per_thread_ops -13.2%
 regression



Greetings,

FYI, we noticed a 13.2% regression in will-it-scale.per_thread_ops due to the following commit:


commit: 6035152d8eebe16a5bb60398d3e05dc7799067b0 ("x86/mm/tlb: Open-code on_each_cpu_cond_mask() for tlb_is_not_lazy()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
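For context, the commit subject says it open-codes on_each_cpu_cond_mask() so that the tlb_is_not_lazy() check is folded directly into building the IPI target mask. A rough sketch of that pattern follows; it is not the commit's actual diff, and the function names flush_others_generic()/flush_others_open_coded() and the per-CPU flush_tlb_mask are illustrative stand-ins loosely modeled on arch/x86/mm/tlb.c:

/*
 * Sketch only: illustrates what "open-coding on_each_cpu_cond_mask()
 * for tlb_is_not_lazy()" amounts to.  Names are approximations, not
 * the exact upstream diff.
 */

/* Before: the generic helper evaluates the laziness check per CPU. */
static void flush_others_generic(const struct cpumask *cpumask,
				 const struct flush_tlb_info *info)
{
	on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func,
			      (void *)info, true, cpumask);
}

/*
 * After: build the mask of non-lazy CPUs by hand, then send the IPIs,
 * avoiding the indirect call through the condition callback (the real
 * code adjusts tlb_is_not_lazy()'s signature accordingly).
 */
static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);	/* assumed name */

static void flush_others_open_coded(const struct cpumask *cpumask,
				    const struct flush_tlb_info *info)
{
	struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);
	int cpu;

	cpumask_clear(cond_cpumask);
	for_each_cpu(cpu, cpumask) {
		if (tlb_is_not_lazy(cpu))
			__cpumask_set_cpu(cpu, cond_cpumask);
	}
	on_each_cpu_mask(cond_cpumask, flush_tlb_func, (void *)info, true);
}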

in testcase: will-it-scale
on test machine: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz with 128G memory
with the following parameters:

	nr_task: 100%
	mode: thread
	test: tlb_flush1
	cpufreq_governor: performance
	ucode: 0xd000331

test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process-based and a thread-based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
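Reconstructed from the call chains in the perf profile further down (not from the benchmark source; see the test-url above for the real code), each tlb_flush1 thread effectively touches every page of a private anonymous buffer and then discards it with madvise(MADV_DONTNEED), so both the resulting copy-on-write faults (wp_page_copy -> ptep_clear_flush) and the madvise teardown (zap_page_range -> tlb_finish_mmu) end in flush_tlb_mm_range() IPIs to the CPUs running the sibling threads. A minimal userspace sketch of that access pattern, with BUFSIZE and the read-then-write order chosen purely for illustration:

/*
 * Minimal sketch of the tlb_flush1-style access pattern, reconstructed
 * from the call chains in the profile below; not the will-it-scale
 * source.  BUFSIZE and the read-then-write order are assumptions.
 */
#include <assert.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUFSIZE (128UL << 20)

int main(void)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	unsigned long off;
	char *buf;

	buf = mmap(NULL, BUFSIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	assert(buf != MAP_FAILED);

	for (;;) {	/* will-it-scale runs one such loop per thread */
		for (off = 0; off < BUFSIZE; off += pagesize) {
			/* Read first: maps the shared zero page read-only. */
			(void)*(volatile char *)(buf + off);
			/* Write next: COW break, i.e. wp_page_copy() ->
			 * ptep_clear_flush() -> flush_tlb_mm_range(). */
			buf[off] = 1;
		}
		/* Drop the pages again: zap_page_range() ->
		 * tlb_finish_mmu() -> flush_tlb_mm_range(). */
		madvise(buf, BUFSIZE, MADV_DONTNEED);
	}
}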



If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@...el.com>


Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        sudo bin/lkp install job.yaml           # job file is attached in this email
        bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
        sudo bin/lkp run generated-yaml-file

        # if you come across any failure that blocks the test,
        # please remove the ~/.lkp and /lkp directories to run from a clean state.

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/thread/100%/debian-10.4-x86_64-20200603.cgz/lkp-icl-2sp5/tlb_flush1/will-it-scale/0xd000331

commit: 
  4c1ba3923e ("x86/mm/tlb: Unify flush_tlb_func_local() and flush_tlb_func_remote()")
  6035152d8e ("x86/mm/tlb: Open-code on_each_cpu_cond_mask() for tlb_is_not_lazy()")

4c1ba3923e6c8aa7            6035152d8eebe16a5bb60398d3e
(parent commit)             (this commit)
----------------            ---------------------------
  value ±%stddev   %change    value ±%stddev   metric
    823626           -13.2%     715243        will-it-scale.128.threads
      1.53           -13.2%       1.32        will-it-scale.128.threads_idle
      6434           -13.2%       5587        will-it-scale.per_thread_ops
    823626           -13.2%     715243        will-it-scale.workload
 6.342e+10           -12.9%  5.524e+10        turbostat.IRQ
     13455           -10.9%      11995        vmstat.system.cs
   8834119            -7.9%    8132489        vmstat.system.in
     31.26            -6.6       24.66 ±  2%  mpstat.cpu.all.irq%
     66.51            +6.7       73.23        mpstat.cpu.all.sys%
      0.30            -0.0        0.26        mpstat.cpu.all.usr%
 1.282e+08           -11.2%  1.139e+08        numa-numastat.node0.local_node
 1.282e+08           -11.1%  1.139e+08        numa-numastat.node0.numa_hit
 1.279e+08 ±  2%     -14.4%  1.095e+08 ±  2%  numa-numastat.node1.local_node
 1.279e+08 ±  2%     -14.3%  1.096e+08 ±  2%  numa-numastat.node1.numa_hit
  66398782           -10.5%   59416495        numa-vmstat.node0.numa_hit
  66355740           -10.6%   59333364        numa-vmstat.node0.numa_local
  66322224 ±  2%     -13.8%   57147061 ±  2%  numa-vmstat.node1.numa_hit
  65908198 ±  2%     -13.9%   56773100 ±  2%  numa-vmstat.node1.numa_local
  13146738           +11.3%   14628057        sched_debug.cfs_rq:/.min_vruntime.avg
  10371412 ±  3%     +12.7%   11683859 ±  3%  sched_debug.cfs_rq:/.min_vruntime.min
   2505980 ± 12%     -32.7%    1686413 ±  9%  sched_debug.cfs_rq:/.spread0.max
     18298            -9.0%      16648        sched_debug.cpu.nr_switches.avg
     15393           -10.8%      13737        sched_debug.cpu.nr_switches.min
      0.04 ± 71%     -69.1%       0.01 ± 30%  sched_debug.cpu.nr_uninterruptible.avg
    194329            +5.8%     205510        proc-vmstat.nr_inactive_anon
    194329            +5.8%     205510        proc-vmstat.nr_zone_inactive_anon
 2.562e+08           -12.7%  2.235e+08        proc-vmstat.numa_hit
  2.56e+08           -12.7%  2.234e+08        proc-vmstat.numa_local
 2.563e+08           -12.7%  2.236e+08        proc-vmstat.pgalloc_normal
 5.004e+08           -13.0%  4.354e+08        proc-vmstat.pgfault
 2.561e+08           -12.7%  2.234e+08        proc-vmstat.pgfree
 1.392e+10            -6.5%  1.302e+10        perf-stat.i.branch-instructions
 1.246e+08            -9.7%  1.125e+08        perf-stat.i.branch-misses
 4.507e+08            -4.6%    4.3e+08        perf-stat.i.cache-misses
 1.848e+09            -7.3%  1.713e+09        perf-stat.i.cache-references
     13512           -11.0%      12032        perf-stat.i.context-switches
      6.74            +7.0%       7.22        perf-stat.i.cpi
    182.97            -2.4%     178.63        perf-stat.i.cpu-migrations
    916.35            +5.4%     965.43 ±  2%  perf-stat.i.cycles-between-cache-misses
 1.646e+10            -8.3%   1.51e+10        perf-stat.i.dTLB-loads
      0.14            -0.0        0.13        perf-stat.i.dTLB-store-miss-rate%
   9784986           -13.2%    8495466        perf-stat.i.dTLB-store-misses
 7.083e+09           -10.0%  6.378e+09        perf-stat.i.dTLB-stores
 6.113e+10            -6.5%  5.714e+10        perf-stat.i.instructions
      0.15            -6.4%       0.14        perf-stat.i.ipc
    308.99            -7.9%     284.59        perf-stat.i.metric.M/sec
   1655373           -13.1%    1438884        perf-stat.i.minor-faults
  27419309            -5.5%   25923350        perf-stat.i.node-loads
   1655375           -13.1%    1438886        perf-stat.i.page-faults
      0.90            -0.0        0.86        perf-stat.overall.branch-miss-rate%
     24.40            +0.7       25.10        perf-stat.overall.cache-miss-rate%
      6.75            +7.1%       7.23        perf-stat.overall.cpi
    915.93            +4.9%     961.03        perf-stat.overall.cycles-between-cache-misses
      0.14            -0.0        0.13        perf-stat.overall.dTLB-store-miss-rate%
      0.15            -6.6%       0.14        perf-stat.overall.ipc
  22432987            +7.7%   24171448        perf-stat.overall.path-length
 1.387e+10            -6.5%  1.297e+10        perf-stat.ps.branch-instructions
 1.242e+08            -9.7%  1.121e+08        perf-stat.ps.branch-misses
 4.493e+08            -4.6%  4.286e+08        perf-stat.ps.cache-misses
 1.842e+09            -7.3%  1.708e+09        perf-stat.ps.cache-references
     13466           -10.9%      11992        perf-stat.ps.context-switches
    182.44            -2.4%     177.98        perf-stat.ps.cpu-migrations
 1.641e+10            -8.3%  1.505e+10        perf-stat.ps.dTLB-loads
   9753730           -13.2%    8467120        perf-stat.ps.dTLB-store-misses
 7.061e+09           -10.0%  6.357e+09        perf-stat.ps.dTLB-stores
 6.094e+10            -6.5%  5.695e+10        perf-stat.ps.instructions
   1649980           -13.1%    1433930        perf-stat.ps.minor-faults
  27334503            -5.4%   25845323        perf-stat.ps.node-loads
   1649982           -13.1%    1433932        perf-stat.ps.page-faults
 1.848e+13            -6.4%  1.729e+13        perf-stat.total.instructions
     41.88           -41.9        0.00        perf-profile.calltrace.cycles-pp.on_each_cpu_cond_mask.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy.__handle_mm_fault
     41.50           -41.5        0.00        perf-profile.calltrace.cycles-pp.smp_call_function_many_cond.on_each_cpu_cond_mask.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy
     16.03 ±  2%     -16.0        0.00        perf-profile.calltrace.cycles-pp.llist_add_batch.smp_call_function_many_cond.on_each_cpu_cond_mask.flush_tlb_mm_range.ptep_clear_flush
      6.79 ±  3%      -6.8        0.00        perf-profile.calltrace.cycles-pp.asm_sysvec_call_function.smp_call_function_many_cond.on_each_cpu_cond_mask.flush_tlb_mm_range.ptep_clear_flush
      6.52 ±  3%      -6.5        0.00        perf-profile.calltrace.cycles-pp.sysvec_call_function.asm_sysvec_call_function.smp_call_function_many_cond.on_each_cpu_cond_mask.flush_tlb_mm_range
      6.44 ±  3%      -6.4        0.00        perf-profile.calltrace.cycles-pp.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function.smp_call_function_many_cond.on_each_cpu_cond_mask
      5.21 ±  2%      -5.2        0.00        perf-profile.calltrace.cycles-pp.asm_sysvec_call_function.llist_add_batch.smp_call_function_many_cond.on_each_cpu_cond_mask.flush_tlb_mm_range
      4.97 ±  2%      -5.0        0.00        perf-profile.calltrace.cycles-pp.sysvec_call_function.asm_sysvec_call_function.llist_add_batch.smp_call_function_many_cond.on_each_cpu_cond_mask
     14.50 ±  4%      -4.2       10.28 ±  5%  perf-profile.calltrace.cycles-pp.flush_tlb_func.flush_smp_call_function_queue.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function
     12.85 ±  3%      -4.1        8.74 ±  3%  perf-profile.calltrace.cycles-pp.flush_smp_call_function_queue.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function.smp_call_function_many_cond
      6.79 ±  3%      -2.0        4.79 ±  3%  perf-profile.calltrace.cycles-pp.asm_sysvec_call_function.smp_call_function_many_cond.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu
      6.52 ±  3%      -2.0        4.57 ±  3%  perf-profile.calltrace.cycles-pp.sysvec_call_function.asm_sysvec_call_function.smp_call_function_many_cond.flush_tlb_mm_range.tlb_flush_mmu
     51.35            -1.8       49.55        perf-profile.calltrace.cycles-pp.__madvise
     51.14            -1.8       49.38        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__madvise
     51.13            -1.8       49.37        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__madvise
     51.11            -1.8       49.35        perf-profile.calltrace.cycles-pp.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe.__madvise
     51.11            -1.8       49.35        perf-profile.calltrace.cycles-pp.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe.__madvise
     50.10            -1.6       48.45        perf-profile.calltrace.cycles-pp.zap_page_range.do_madvise.__x64_sys_madvise.do_syscall_64.entry_SYSCALL_64_after_hwframe
     48.04            -1.4       46.69        perf-profile.calltrace.cycles-pp.tlb_finish_mmu.zap_page_range.do_madvise.__x64_sys_madvise.do_syscall_64
     47.80            -1.3       46.49        perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.zap_page_range.do_madvise.__x64_sys_madvise
     46.37            -1.1       45.24        perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.zap_page_range.do_madvise
      6.59 ±  2%      -0.8        5.74        perf-profile.calltrace.cycles-pp.asm_sysvec_call_function.llist_add_batch.smp_call_function_many_cond.flush_tlb_mm_range.tlb_flush_mmu
     44.91            -0.8       44.08        perf-profile.calltrace.cycles-pp.smp_call_function_many_cond.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.zap_page_range
      7.49 ±  2%      -0.5        7.01 ±  2%  perf-profile.calltrace.cycles-pp.llist_reverse_order.flush_smp_call_function_queue.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function
      1.77            -0.3        1.49 ±  2%  perf-profile.calltrace.cycles-pp.default_send_IPI_mask_sequence_phys.smp_call_function_many_cond.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu
      1.05            -0.2        0.81        perf-profile.calltrace.cycles-pp.cpumask_next.smp_call_function_many_cond.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu
      1.07            -0.2        0.90        perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
      0.98            -0.1        0.83        perf-profile.calltrace.cycles-pp.filemap_map_pages.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.66            -0.1        0.51        perf-profile.calltrace.cycles-pp._find_next_bit.cpumask_next.smp_call_function_many_cond.flush_tlb_mm_range.tlb_flush_mmu
      1.06 ±  2%      -0.1        0.98 ±  2%  perf-profile.calltrace.cycles-pp.__default_send_IPI_dest_field.default_send_IPI_mask_sequence_phys.smp_call_function_many_cond.flush_tlb_mm_range.tlb_flush_mmu
      0.00            +0.6        0.56 ±  3%  perf-profile.calltrace.cycles-pp.cpumask_next.native_flush_tlb_others.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy
      0.00            +0.8        0.76        perf-profile.calltrace.cycles-pp.cpumask_next.smp_call_function_many_cond.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy
      0.00            +0.8        0.83 ±  2%  perf-profile.calltrace.cycles-pp.__default_send_IPI_dest_field.default_send_IPI_mask_sequence_phys.smp_call_function_many_cond.flush_tlb_mm_range.ptep_clear_flush
      0.00            +1.3        1.27 ±  2%  perf-profile.calltrace.cycles-pp.default_send_IPI_mask_sequence_phys.smp_call_function_many_cond.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy
     20.24 ±  2%      +1.7       21.94 ±  2%  perf-profile.calltrace.cycles-pp.llist_add_batch.smp_call_function_many_cond.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu
     48.18            +1.8       50.02        perf-profile.calltrace.cycles-pp.testcase
     47.77            +1.9       49.69        perf-profile.calltrace.cycles-pp.asm_exc_page_fault.testcase
     47.66            +1.9       49.59        perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.testcase
     47.63            +1.9       49.58        perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
      0.00            +2.0        1.98 ±  3%  perf-profile.calltrace.cycles-pp.native_flush_tlb_others.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy.__handle_mm_fault
     45.24            +2.2       47.47        perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.testcase
     45.09            +2.3       47.34        perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
      6.44 ±  3%      +2.3        8.77 ±  3%  perf-profile.calltrace.cycles-pp.__sysvec_call_function.sysvec_call_function.asm_sysvec_call_function.smp_call_function_many_cond.flush_tlb_mm_range
     43.71            +2.5       46.23        perf-profile.calltrace.cycles-pp.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
     42.92            +2.7       45.57        perf-profile.calltrace.cycles-pp.ptep_clear_flush.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
     42.89            +2.7       45.55        perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy.__handle_mm_fault.handle_mm_fault
      6.31 ±  2%      +4.3       10.62 ±  2%  perf-profile.calltrace.cycles-pp.sysvec_call_function.asm_sysvec_call_function.llist_add_batch.smp_call_function_many_cond.flush_tlb_mm_range
      0.00            +4.3        4.32 ±  4%  perf-profile.calltrace.cycles-pp.sysvec_call_function.asm_sysvec_call_function.smp_call_function_many_cond.flush_tlb_mm_range.ptep_clear_flush
      0.00            +4.5        4.53 ±  3%  perf-profile.calltrace.cycles-pp.asm_sysvec_call_function.smp_call_function_many_cond.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy
      0.00            +5.6        5.56 ±  2%  perf-profile.calltrace.cycles-pp.asm_sysvec_call_function.llist_add_batch.smp_call_function_many_cond.flush_tlb_mm_range.ptep_clear_flush
      0.00           +21.7       21.66 ±  3%  perf-profile.calltrace.cycles-pp.llist_add_batch.smp_call_function_many_cond.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy
      0.00           +42.5       42.48        perf-profile.calltrace.cycles-pp.smp_call_function_many_cond.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy.__handle_mm_fault
     41.89           -41.9        0.00        perf-profile.children.cycles-pp.on_each_cpu_cond_mask
     30.59 ±  2%      -6.4       24.19 ±  2%  perf-profile.children.cycles-pp.flush_smp_call_function_queue
     31.91 ±  2%      -6.3       25.60 ±  2%  perf-profile.children.cycles-pp.asm_sysvec_call_function
     30.51 ±  2%      -6.3       24.20 ±  2%  perf-profile.children.cycles-pp.sysvec_call_function
     30.12 ±  2%      -6.3       23.84 ±  2%  perf-profile.children.cycles-pp.__sysvec_call_function
     19.62 ±  3%      -5.7       13.88 ±  5%  perf-profile.children.cycles-pp.flush_tlb_func
     51.37            -1.8       49.57        perf-profile.children.cycles-pp.__madvise
     51.33            -1.8       49.55        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     51.32            -1.8       49.54        perf-profile.children.cycles-pp.do_syscall_64
     51.11            -1.8       49.35        perf-profile.children.cycles-pp.__x64_sys_madvise
     51.11            -1.8       49.35        perf-profile.children.cycles-pp.do_madvise
     50.11            -1.6       48.46        perf-profile.children.cycles-pp.zap_page_range
     48.05            -1.4       46.69        perf-profile.children.cycles-pp.tlb_finish_mmu
     47.82            -1.3       46.50        perf-profile.children.cycles-pp.tlb_flush_mmu
      3.68            -0.8        2.84 ±  2%  perf-profile.children.cycles-pp.default_send_IPI_mask_sequence_phys
      9.19 ±  2%      -0.8        8.42        perf-profile.children.cycles-pp.llist_reverse_order
      2.65 ±  2%      -0.6        2.04 ±  2%  perf-profile.children.cycles-pp.native_flush_tlb_local
      2.15            -0.3        1.82 ±  2%  perf-profile.children.cycles-pp.__default_send_IPI_dest_field
      1.08            -0.2        0.91        perf-profile.children.cycles-pp.do_fault
      1.42 ±  7%      -0.2        1.25 ±  6%  perf-profile.children.cycles-pp.release_pages
      0.64 ± 10%      -0.2        0.49 ± 11%  perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
      1.00            -0.2        0.85        perf-profile.children.cycles-pp.filemap_map_pages
      0.53            -0.1        0.39        perf-profile.children.cycles-pp.asm_sysvec_call_function_single
      0.51            -0.1        0.37        perf-profile.children.cycles-pp.sysvec_call_function_single
      0.50            -0.1        0.36        perf-profile.children.cycles-pp.__sysvec_call_function_single
      0.47            -0.1        0.38        perf-profile.children.cycles-pp.copy_page
      0.47 ±  3%      -0.1        0.39 ±  2%  perf-profile.children.cycles-pp.unmap_page_range
      0.49            -0.1        0.41        perf-profile.children.cycles-pp.next_uptodate_page
      0.42 ±  2%      -0.1        0.35 ±  3%  perf-profile.children.cycles-pp.zap_pte_range
      0.17 ±  6%      -0.0        0.14 ±  4%  perf-profile.children.cycles-pp.tlb_gather_mmu
      0.12 ± 10%      -0.0        0.10 ±  5%  perf-profile.children.cycles-pp.sync_mm_rss
      0.27 ±  2%      -0.0        0.24 ±  2%  perf-profile.children.cycles-pp.irq_exit_rcu
      0.24 ±  3%      -0.0        0.22        perf-profile.children.cycles-pp.error_entry
      0.14 ±  4%      -0.0        0.12        perf-profile.children.cycles-pp.sync_regs
      0.13            -0.0        0.11        perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
      0.26            -0.0        0.24        perf-profile.children.cycles-pp.irqtime_account_irq
      0.12 ±  4%      -0.0        0.10 ±  7%  perf-profile.children.cycles-pp.rwsem_down_read_slowpath
      0.15 ±  3%      -0.0        0.13 ±  2%  perf-profile.children.cycles-pp.unlock_page
      0.10 ±  3%      -0.0        0.09 ±  8%  perf-profile.children.cycles-pp.__x64_sys_munmap
      0.10 ±  3%      -0.0        0.09 ±  8%  perf-profile.children.cycles-pp.__vm_munmap
      0.10            -0.0        0.09 ±  8%  perf-profile.children.cycles-pp.__munmap
      0.10            -0.0        0.09 ±  8%  perf-profile.children.cycles-pp.__do_munmap
      0.09 ±  4%      -0.0        0.08 ±  4%  perf-profile.children.cycles-pp.unmap_region
      0.09            -0.0        0.08 ±  5%  perf-profile.children.cycles-pp.unmap_vmas
      0.08 ±  5%      -0.0        0.07        perf-profile.children.cycles-pp.do_set_pte
      0.06 ±  5%      -0.0        0.05        perf-profile.children.cycles-pp.page_add_file_rmap
      0.06            -0.0        0.05        perf-profile.children.cycles-pp.___perf_sw_event
      0.08            -0.0        0.07        perf-profile.children.cycles-pp.__perf_sw_event
      1.54 ±  2%      +0.1        1.66 ±  2%  perf-profile.children.cycles-pp._find_next_bit
      2.31            +0.1        2.44        perf-profile.children.cycles-pp.cpumask_next
     89.30            +1.5       90.82        perf-profile.children.cycles-pp.flush_tlb_mm_range
     48.44            +1.8       50.25        perf-profile.children.cycles-pp.testcase
     48.01            +1.9       49.92        perf-profile.children.cycles-pp.asm_exc_page_fault
     47.67            +1.9       49.61        perf-profile.children.cycles-pp.do_user_addr_fault
     47.72            +1.9       49.67        perf-profile.children.cycles-pp.exc_page_fault
      0.00            +2.0        2.02 ±  3%  perf-profile.children.cycles-pp.native_flush_tlb_others
     45.26            +2.2       47.48        perf-profile.children.cycles-pp.handle_mm_fault
     45.10            +2.3       47.35        perf-profile.children.cycles-pp.__handle_mm_fault
     43.72            +2.5       46.24        perf-profile.children.cycles-pp.wp_page_copy
     42.92            +2.6       45.57        perf-profile.children.cycles-pp.ptep_clear_flush
     36.85 ±  2%      +7.5       44.33 ±  3%  perf-profile.children.cycles-pp.llist_add_batch
     16.95 ±  4%      -5.1       11.81 ±  6%  perf-profile.self.cycles-pp.flush_tlb_func
      9.19 ±  2%      -0.8        8.41        perf-profile.self.cycles-pp.llist_reverse_order
      2.61 ±  2%      -0.6        2.01 ±  2%  perf-profile.self.cycles-pp.native_flush_tlb_local
      2.14            -0.3        1.81 ±  2%  perf-profile.self.cycles-pp.__default_send_IPI_dest_field
      1.04 ±  5%      -0.2        0.82 ±  7%  perf-profile.self.cycles-pp.flush_tlb_mm_range
      0.17 ± 37%      -0.1        0.12 ±  7%  perf-profile.self.cycles-pp.__handle_mm_fault
      0.24 ±  2%      -0.1        0.19 ±  3%  perf-profile.self.cycles-pp.default_send_IPI_mask_sequence_phys
      0.32            -0.0        0.27        perf-profile.self.cycles-pp.testcase
      0.31            -0.0        0.27 ±  2%  perf-profile.self.cycles-pp.copy_page
      0.31 ±  2%      -0.0        0.29        perf-profile.self.cycles-pp.next_uptodate_page
      0.14 ±  4%      -0.0        0.12        perf-profile.self.cycles-pp.sync_regs
      0.16 ±  3%      -0.0        0.14        perf-profile.self.cycles-pp.filemap_map_pages
      0.08 ± 10%      -0.0        0.06 ±  7%  perf-profile.self.cycles-pp.sync_mm_rss
      0.23 ±  2%      +0.0        0.25        perf-profile.self.cycles-pp.find_next_bit
      0.35 ±  2%      +0.0        0.38        perf-profile.self.cycles-pp.cpumask_next
      1.00 ±  2%      +0.1        1.15        perf-profile.self.cycles-pp._find_next_bit
      0.00            +1.0        1.05 ±  4%  perf-profile.self.cycles-pp.native_flush_tlb_others
     24.77 ±  2%      +8.1       32.86 ±  3%  perf-profile.self.cycles-pp.llist_add_batch




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


---
0-DAY CI Kernel Test Service
https://lists.01.org/hyperkitty/list/lkp@lists.01.org

Thanks,
Oliver Sang


View attachment "config-5.12.0-rc2-00003-g6035152d8eeb" of type "text/plain" (169017 bytes)

View attachment "job-script" of type "text/plain" (7948 bytes)

View attachment "job.yaml" of type "text/plain" (5347 bytes)

View attachment "reproduce" of type "text/plain" (346 bytes)
