lists.openwall.net - Open Source and information security mailing list archives
 
Date:   Wed, 3 Mar 2021 15:20:03 +0800
From:   kernel test robot <oliver.sang@...el.com>
To:     Waiman Long <longman@...hat.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Davidlohr Bueso <dbueso@...e.de>,
        LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
        lkp@...el.com, ying.huang@...el.com, feng.tang@...el.com,
        zhengjun.xing@...el.com
Subject: [locking/rwsem]  1a728dff85:  stress-ng.fiemap.ops_per_sec 630.5%
 improvement


Greetings,

FYI, we noticed a 630.5% improvement in stress-ng.fiemap.ops_per_sec due to commit:


commit: 1a728dff855a318bb58bcc1259b1826a7ad9f0bd ("locking/rwsem: Enable reader optimistic lock stealing")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
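For context, the mechanism being benchmarked can be sketched with a toy, single-threaded model (names and bit layout here are hypothetical; the real count word lives in kernel/locking/rwsem.c and differs). The gist of reader optimistic lock stealing is that an arriving reader may join the current reader holders even when waiters are queued, as long as no writer holds the lock and no handoff is pending:

```python
# Toy model of an rwsem count word: flag bits low, reader count above.
# These constants are illustrative only, not the kernel's actual layout.
WRITER_LOCKED = 0x1    # a writer currently holds the lock
HANDOFF       = 0x2    # a starved waiter has claimed the next handoff
READER_BIAS   = 0x100  # each reader holder adds this to the count word

def reader_try_steal(count):
    """Reader fast path with optimistic stealing enabled.

    Without stealing, a reader that saw waiters queued always slept in
    the slowpath.  With stealing, it may still take the lock provided
    no writer holds it and no handoff is pending.
    Returns (acquired, new_count).
    """
    if count & (WRITER_LOCKED | HANDOFF):
        return False, count      # must fall back to the slowpath
    return True, count + READER_BIAS  # join the current reader holders

# Two readers already hold the lock: a new reader joins immediately.
print(reader_try_steal(2 * READER_BIAS))   # → (True, 768)
# A writer holds the lock: the reader cannot steal.
print(reader_try_steal(WRITER_LOCKED))     # → (False, 1)
```

For a read-mostly workload like fiemap (which takes the inode lock shared), this keeps readers out of the wait queue and off the context-switch path, which is consistent with the large ops and voluntary_context_switches deltas below.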


in testcase: stress-ng
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 512G memory
with following parameters:

	nr_threads: 10%
	disk: 1HDD
	testtime: 60s
	fs: xfs
	class: filesystem
	test: fiemap
	cpufreq_governor: performance
	ucode: 0x5003006


In addition, the commit also has a significant impact on the following tests:

+------------------+---------------------------------------------------------------------------+
| testcase: change | ebizzy: ebizzy.throughput 3.2% improvement                                |
| test machine     | 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory      |
| test parameters  | cpufreq_governor=performance                                              |
|                  | duration=10s                                                              |
|                  | iterations=100x                                                           |
|                  | memory.high=90%                                                           |
|                  | memory.low=50%                                                            |
|                  | memory.max=max                                                            |
|                  | nr_threads=200%                                                           |
|                  | pids.max=10000                                                            |
|                  | ucode=0x11                                                                |
+------------------+---------------------------------------------------------------------------+
| testcase: change | unixbench: boot-time.boot 11.4% regression                                |
| test machine     | 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory         |
| test parameters  | cpufreq_governor=performance                                              |
|                  | nr_task=30%                                                               |
|                  | runtime=300s                                                              |
|                  | test=shell8                                                               |
|                  | ucode=0xde                                                                |
+------------------+---------------------------------------------------------------------------+
| testcase: change | vm-scalability: boot-time.boot 5.9% regression                            |
| test machine     | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters  | cpufreq_governor=performance                                              |
|                  | runtime=300s                                                              |
|                  | size=8T                                                                   |
|                  | test=anon-w-seq-mt                                                        |
|                  | ucode=0x5003003                                                           |
+------------------+---------------------------------------------------------------------------+
| testcase: change | fio-basic: fio.latency_2us% 0.0% undefined                                |
| test machine     | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters  | bs=4k                                                                     |
|                  | cpufreq_governor=performance                                              |
|                  | disk=1SSD                                                                 |
|                  | fs=xfs                                                                    |
|                  | ioengine=sync                                                             |
|                  | nr_task=32                                                                |
|                  | runtime=300s                                                              |
|                  | rw=randwrite                                                              |
|                  | test_size=256g                                                            |
|                  | ucode=0x4003006                                                           |
+------------------+---------------------------------------------------------------------------+




Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install                job.yaml  # job file is attached in this email
        bin/lkp split-job --compatible job.yaml
        bin/lkp run                    compatible-job.yaml
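As a quick sanity check, the headline 630.5% figure follows from the rounded ops_per_sec values in the comparison table below (5534 → 40430); the robot presumably computed its percentage from unrounded per-run values, so recomputing from the table agrees only up to rounding:

```python
# Recompute the relative change from the rounded table values.
old, new = 5534, 40430          # stress-ng.fiemap.ops_per_sec, parent vs patched
change = (new - old) / old * 100
print(f"+{change:.1f}%")        # ~630.6% from rounded inputs, vs reported 630.5%
```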

=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/test/testcase/testtime/ucode:
  filesystem/gcc-9/performance/1HDD/xfs/x86_64-rhel-8.3/10%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp7/fiemap/stress-ng/60s/0x5003006

commit: 
  2f06f70292 ("locking/rwsem: Prevent potential lock starvation")
  1a728dff85 ("locking/rwsem: Enable reader optimistic lock stealing")

2f06f702925b512a 1a728dff855a318bb58bcc1259b 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    332128 ±  2%    +630.4%    2426017 ±  3%  stress-ng.fiemap.ops
      5534 ±  2%    +630.5%      40430 ±  3%  stress-ng.fiemap.ops_per_sec
    319.67 ±  3%     +46.4%     467.83 ±  3%  stress-ng.time.percent_of_cpu_this_job_got
    198.32 ±  3%     +45.9%     289.43 ±  3%  stress-ng.time.system_time
   1005643 ±  2%    +410.3%    5132152 ±  3%  stress-ng.time.voluntary_context_switches
      0.49 ±164%    +502.2%       2.97 ± 92%  perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
    250.68           +11.3%     279.06        pmeter.Average_Active_Power
      4014 ±  3%     +14.7%       4606 ±  2%  meminfo.Active
      3660 ±  4%     +16.2%       4254 ±  2%  meminfo.Active(anon)
  19989037 ±  5%    +215.6%   63093608 ±  4%  cpuidle.C1.time
    167687 ±  5%    +464.0%     945793 ±  4%  cpuidle.C1.usage
    223029 ± 12%     +27.2%     283740        cpuidle.POLL.time
     88.24            +2.6%      90.51        iostat.cpu.idle
      6.85 ±  2%     -46.7%       3.65 ±  6%  iostat.cpu.iowait
      4.77 ±  2%     +19.3%       5.70        iostat.cpu.system
     87.67            +2.7%      90.00        vmstat.cpu.id
      6.83 ±  5%     -51.2%       3.33 ± 14%  vmstat.procs.b
     38022 ±  2%    +329.2%     163191 ±  2%  vmstat.system.cs
      7.06 ±  2%      -3.3        3.76 ±  6%  mpstat.cpu.all.iowait%
      1.45 ±  5%      -0.4        1.04 ±  4%  mpstat.cpu.all.irq%
      0.10 ±  5%      -0.0        0.08 ±  3%  mpstat.cpu.all.soft%
      3.30 ±  4%      +1.4        4.69 ±  3%  mpstat.cpu.all.sys%
    915.17 ±  4%     +15.9%       1060        proc-vmstat.nr_active_anon
      3830            +3.8%       3977        proc-vmstat.nr_shmem
    915.17 ±  4%     +15.9%       1060        proc-vmstat.nr_zone_active_anon
      1130 ±  9%     +37.8%       1557 ±  3%  proc-vmstat.pgactivate
      2723 ±  7%    +204.8%       8303 ± 50%  sched_debug.cfs_rq:/.load.avg
      7452 ±  6%    +454.2%      41299 ± 90%  sched_debug.cfs_rq:/.load.stddev
    125.36 ± 14%     -33.9%      82.90 ± 32%  sched_debug.cfs_rq:/.load_avg.avg
      1866 ± 31%     -56.0%     821.25 ± 17%  sched_debug.cfs_rq:/.load_avg.max
    329.76 ± 15%     -38.8%     201.97 ± 28%  sched_debug.cfs_rq:/.load_avg.stddev
      3314 ± 10%     +84.4%       6110 ± 31%  sched_debug.cfs_rq:/.min_vruntime.stddev
     -5914          +176.2%     -16334        sched_debug.cfs_rq:/.spread0.min
      3322 ± 10%     +84.0%       6113 ± 31%  sched_debug.cfs_rq:/.spread0.stddev
     31862           +48.0%      47146 ± 31%  sched_debug.cpu.clock.avg
     31867           +48.0%      47151 ± 31%  sched_debug.cpu.clock.max
     31856           +48.0%      47141 ± 31%  sched_debug.cpu.clock.min
     31730           +47.6%      46849 ± 31%  sched_debug.cpu.clock_task.avg
     31853           +47.5%      46993 ± 31%  sched_debug.cpu.clock_task.max
    519.33 ± 18%     +45.4%     755.00 ± 22%  sched_debug.cpu.nr_switches.min
     31859           +48.0%      47143 ± 31%  sched_debug.cpu_clk
     31362           +48.7%      46646 ± 32%  sched_debug.ktime
     32213           +47.5%      47504 ± 31%  sched_debug.sched_clk
     17.19 ± 13%     -91.2%       1.52 ± 14%  perf-stat.i.MPKI
 1.005e+09          +419.2%  5.216e+09 ±  2%  perf-stat.i.branch-instructions
      1.90 ± 14%      -0.8        1.08 ±  2%  perf-stat.i.branch-miss-rate%
  19599818 ±  8%    +155.6%   50101044 ±  2%  perf-stat.i.branch-misses
      5.48 ± 31%      +3.2        8.73 ± 15%  perf-stat.i.cache-miss-rate%
  84426040 ±  9%     -59.2%   34429248 ±  6%  perf-stat.i.cache-references
     39333 ±  2%    +332.3%     170024 ±  2%  perf-stat.i.context-switches
      2.29 ±  4%     -70.4%       0.68 ±  2%  perf-stat.i.cpi
 1.112e+10 ±  3%     +52.6%  1.698e+10 ±  3%  perf-stat.i.cpu-cycles
      3668 ± 32%     +66.4%       6103 ± 15%  perf-stat.i.cycles-between-cache-misses
 1.287e+09 ±  2%    +426.6%  6.779e+09 ±  2%  perf-stat.i.dTLB-loads
 7.075e+08 ±  2%    +452.9%  3.912e+09 ±  2%  perf-stat.i.dTLB-stores
     69.43           -17.3       52.14        perf-stat.i.iTLB-load-miss-rate%
   6869997 ±  4%     -47.6%    3597110 ±  4%  perf-stat.i.iTLB-load-misses
   2798506 ±  3%     +18.4%    3313200 ±  2%  perf-stat.i.iTLB-loads
 5.081e+09          +428.3%  2.684e+10 ±  2%  perf-stat.i.instructions
    838.24 ±  3%    +756.9%       7182 ±  2%  perf-stat.i.instructions-per-iTLB-miss
      0.46 ±  4%    +233.5%       1.52        perf-stat.i.ipc
      0.12 ±  3%     +52.6%       0.18 ±  3%  perf-stat.i.metric.GHz
     32.18          +416.3%     166.12 ±  2%  perf-stat.i.metric.M/sec
     90.09 ±  3%      -6.1       84.01 ±  5%  perf-stat.i.node-store-miss-rate%
     16.63 ± 10%     -92.3%       1.29 ±  8%  perf-stat.overall.MPKI
      1.95 ± 10%      -1.0        0.96        perf-stat.overall.branch-miss-rate%
      4.77 ± 35%      +3.8        8.53 ± 15%  perf-stat.overall.cache-miss-rate%
      2.19 ±  4%     -71.1%       0.63        perf-stat.overall.cpi
      3173 ± 36%     +86.6%       5921 ± 14%  perf-stat.overall.cycles-between-cache-misses
      0.01 ± 90%      -0.0        0.00 ± 45%  perf-stat.overall.dTLB-store-miss-rate%
     71.04           -19.0       52.04        perf-stat.overall.iTLB-load-miss-rate%
    740.89 ±  3%    +908.0%       7467 ±  2%  perf-stat.overall.instructions-per-iTLB-miss
      0.46 ±  4%    +245.6%       1.58        perf-stat.overall.ipc
     91.43 ±  3%      -6.0       85.39 ±  4%  perf-stat.overall.node-store-miss-rate%
  9.89e+08          +419.1%  5.134e+09 ±  2%  perf-stat.ps.branch-instructions
  19305161 ±  8%    +155.5%   49318587 ±  2%  perf-stat.ps.branch-misses
  83065370 ±  9%     -59.2%   33887266 ±  6%  perf-stat.ps.cache-references
     38696 ±  2%    +332.4%     167343 ±  2%  perf-stat.ps.context-switches
 1.095e+10 ±  3%     +52.6%  1.671e+10 ±  3%  perf-stat.ps.cpu-cycles
 1.267e+09 ±  2%    +426.6%  6.673e+09 ±  2%  perf-stat.ps.dTLB-loads
 6.963e+08 ±  2%    +453.0%   3.85e+09 ±  2%  perf-stat.ps.dTLB-stores
   6759166 ±  4%     -47.6%    3540057 ±  4%  perf-stat.ps.iTLB-load-misses
   2753822 ±  3%     +18.4%    3260765 ±  2%  perf-stat.ps.iTLB-loads
 5.001e+09          +428.2%  2.642e+10 ±  2%  perf-stat.ps.instructions
 3.166e+11          +430.8%   1.68e+12 ±  3%  perf-stat.total.instructions
     92637 ±  6%     -11.1%      82347 ±  5%  interrupts.CAL:Function_call_interrupts
    283.33 ± 49%    +107.1%     586.83 ± 21%  interrupts.CPU10.NMI:Non-maskable_interrupts
    283.33 ± 49%    +107.1%     586.83 ± 21%  interrupts.CPU10.PMI:Performance_monitoring_interrupts
     33.67 ± 19%    +250.5%     118.00 ± 35%  interrupts.CPU11.RES:Rescheduling_interrupts
     36.50 ± 46%    +231.5%     121.00 ± 37%  interrupts.CPU12.RES:Rescheduling_interrupts
    977.17 ± 16%     -30.9%     675.50 ±  7%  interrupts.CPU14.CAL:Function_call_interrupts
      1209 ± 73%     -44.4%     673.00 ±  4%  interrupts.CPU16.CAL:Function_call_interrupts
    309.67 ± 40%     +72.3%     533.50 ± 21%  interrupts.CPU16.NMI:Non-maskable_interrupts
    309.67 ± 40%     +72.3%     533.50 ± 21%  interrupts.CPU16.PMI:Performance_monitoring_interrupts
      1191 ± 31%     -46.7%     635.50 ± 11%  interrupts.CPU18.CAL:Function_call_interrupts
      1283 ± 41%     -47.9%     668.33 ± 10%  interrupts.CPU19.CAL:Function_call_interrupts
     47.50 ± 40%    +241.8%     162.33 ± 56%  interrupts.CPU19.RES:Rescheduling_interrupts
     41.50 ± 30%    +189.2%     120.00 ± 40%  interrupts.CPU2.RES:Rescheduling_interrupts
    883.17 ± 10%     -28.9%     627.83 ±  8%  interrupts.CPU20.CAL:Function_call_interrupts
    897.50 ± 11%     -26.5%     660.00 ± 10%  interrupts.CPU21.CAL:Function_call_interrupts
      1207 ± 35%     -43.3%     684.50 ± 12%  interrupts.CPU22.CAL:Function_call_interrupts
     29.00 ± 62%    +299.4%     115.83 ± 27%  interrupts.CPU25.RES:Rescheduling_interrupts
      1203 ± 42%     -46.7%     641.00 ± 10%  interrupts.CPU34.CAL:Function_call_interrupts
     31.83 ± 44%    +158.1%      82.17 ± 33%  interrupts.CPU35.RES:Rescheduling_interrupts
     24.33 ± 54%    +178.8%      67.83 ± 42%  interrupts.CPU41.RES:Rescheduling_interrupts
     32.33 ± 40%    +235.6%     108.50 ± 49%  interrupts.CPU48.RES:Rescheduling_interrupts
      1378 ± 74%     -49.1%     701.33 ± 16%  interrupts.CPU49.CAL:Function_call_interrupts
     44.00 ± 48%    +134.8%     103.33 ± 18%  interrupts.CPU49.RES:Rescheduling_interrupts
     34.00 ± 29%    +192.6%      99.50 ± 63%  interrupts.CPU5.RES:Rescheduling_interrupts
    304.67 ± 45%    +104.8%     624.00 ± 33%  interrupts.CPU50.NMI:Non-maskable_interrupts
    304.67 ± 45%    +104.8%     624.00 ± 33%  interrupts.CPU50.PMI:Performance_monitoring_interrupts
    290.50 ± 44%     +94.5%     565.17 ± 43%  interrupts.CPU53.NMI:Non-maskable_interrupts
    290.50 ± 44%     +94.5%     565.17 ± 43%  interrupts.CPU53.PMI:Performance_monitoring_interrupts
    288.17 ± 22%     +73.7%     500.50 ± 30%  interrupts.CPU58.NMI:Non-maskable_interrupts
    288.17 ± 22%     +73.7%     500.50 ± 30%  interrupts.CPU58.PMI:Performance_monitoring_interrupts
     35.17 ± 35%    +280.6%     133.83 ± 50%  interrupts.CPU6.RES:Rescheduling_interrupts
    262.17 ± 43%     +88.9%     495.33 ± 33%  interrupts.CPU60.NMI:Non-maskable_interrupts
    262.17 ± 43%     +88.9%     495.33 ± 33%  interrupts.CPU60.PMI:Performance_monitoring_interrupts
    288.00 ± 32%    +134.4%     675.17 ± 33%  interrupts.CPU64.NMI:Non-maskable_interrupts
    288.00 ± 32%    +134.4%     675.17 ± 33%  interrupts.CPU64.PMI:Performance_monitoring_interrupts
     43.00 ± 37%    +241.5%     146.83 ± 23%  interrupts.CPU8.RES:Rescheduling_interrupts
    997.83 ± 18%     -38.8%     611.17 ± 13%  interrupts.CPU82.CAL:Function_call_interrupts
    935.00 ± 19%     -33.9%     618.50 ± 15%  interrupts.CPU83.CAL:Function_call_interrupts
    380.50 ± 42%     +71.3%     651.67 ± 27%  interrupts.CPU9.NMI:Non-maskable_interrupts
    380.50 ± 42%     +71.3%     651.67 ± 27%  interrupts.CPU9.PMI:Performance_monitoring_interrupts
     32.83 ± 40%    +210.7%     102.00 ± 40%  interrupts.CPU9.RES:Rescheduling_interrupts
     28573 ± 12%     +43.4%      40973 ±  3%  interrupts.NMI:Non-maskable_interrupts
     28573 ± 12%     +43.4%      40973 ±  3%  interrupts.PMI:Performance_monitoring_interrupts
      3498 ±  4%    +152.8%       8846 ± 12%  interrupts.RES:Rescheduling_interrupts
     56.01 ±  2%      -5.3       50.67        perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     56.01 ±  2%      -5.3       50.67        perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
     55.99 ±  2%      -5.3       50.66        perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     49.96 ±  3%      -5.2       44.74 ±  3%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
     56.73 ±  2%      -5.2       51.53        perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
     50.85 ±  3%      -5.2       45.70 ±  3%  perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     37.91 ±  3%      -3.9       34.03        perf-profile.calltrace.cycles-pp.xfs_read_iomap_begin.iomap_apply.iomap_fiemap.xfs_vn_fiemap.do_vfs_ioctl
      6.22 ±  2%      -3.6        2.60 ±  4%  perf-profile.calltrace.cycles-pp.up_read.xfs_iunlock.xfs_read_iomap_begin.iomap_apply.iomap_fiemap
     10.83 ±  3%      -3.1        7.76 ±  2%  perf-profile.calltrace.cycles-pp.xfs_ilock_for_iomap.xfs_read_iomap_begin.iomap_apply.iomap_fiemap.xfs_vn_fiemap
      5.38 ±  7%      -3.1        2.33 ±  6%  perf-profile.calltrace.cycles-pp.xfs_bmbt_to_iomap.xfs_read_iomap_begin.iomap_apply.iomap_fiemap.xfs_vn_fiemap
      6.40 ±  2%      -3.0        3.39 ±  4%  perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_read_iomap_begin.iomap_apply.iomap_fiemap.xfs_vn_fiemap
      7.75 ±  2%      -2.1        5.61 ±  2%  perf-profile.calltrace.cycles-pp.down_read.xfs_ilock_for_iomap.xfs_read_iomap_begin.iomap_apply.iomap_fiemap
      9.93 ±  9%      -1.7        8.26 ± 11%  perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
      2.26 ±  4%      -1.2        1.09 ±  7%  perf-profile.calltrace.cycles-pp.xfs_reflink_trim_around_shared.xfs_read_iomap_begin.iomap_apply.iomap_fiemap.xfs_vn_fiemap
      5.02 ±  8%      -0.9        4.08 ± 12%  perf-profile.calltrace.cycles-pp.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
      4.99 ±  8%      -0.9        4.06 ± 12%  perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
      4.89 ±  8%      -0.9        3.99 ± 12%  perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_sysvec_on_stack.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
      1.09 ±  9%      -0.4        0.68 ±  5%  perf-profile.calltrace.cycles-pp.vfs_fallocate.ksys_fallocate.__x64_sys_fallocate.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.09 ±  9%      -0.4        0.68 ±  5%  perf-profile.calltrace.cycles-pp.__x64_sys_fallocate.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.09 ±  9%      -0.4        0.68 ±  5%  perf-profile.calltrace.cycles-pp.ksys_fallocate.__x64_sys_fallocate.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.09 ±  8%      -0.4        0.68 ±  5%  perf-profile.calltrace.cycles-pp.xfs_file_fallocate.vfs_fallocate.ksys_fallocate.__x64_sys_fallocate.do_syscall_64
      0.00            +0.6        0.62 ±  4%  perf-profile.calltrace.cycles-pp.xfs_bmapi_update_map.xfs_bmapi_read.xfs_read_iomap_begin.iomap_apply.iomap_fiemap
      0.00            +0.7        0.66 ±  4%  perf-profile.calltrace.cycles-pp.copy_user_generic_unrolled._copy_to_user.fiemap_fill_next_extent.iomap_fiemap_actor.iomap_apply
      0.00            +0.7        0.71 ±  6%  perf-profile.calltrace.cycles-pp.try_to_wake_up.wake_up_q.rwsem_wake.xfs_iunlock.xfs_vn_fiemap
      0.00            +0.7        0.71 ±  8%  perf-profile.calltrace.cycles-pp.xfs_errortag_test.xfs_bmapi_read.xfs_read_iomap_begin.iomap_apply.iomap_fiemap
      0.00            +0.7        0.72 ±  6%  perf-profile.calltrace.cycles-pp.wake_up_q.rwsem_wake.xfs_iunlock.xfs_vn_fiemap.do_vfs_ioctl
      0.00            +0.7        0.74 ±  6%  perf-profile.calltrace.cycles-pp.__might_sleep.down_read.xfs_ilock_for_iomap.xfs_read_iomap_begin.iomap_apply
      0.00            +0.8        0.76 ±  5%  perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_ilock_for_iomap.xfs_read_iomap_begin.iomap_apply.iomap_fiemap
      0.00            +0.8        0.81 ±  2%  perf-profile.calltrace.cycles-pp.xfs_bmapi_trim_map.xfs_bmapi_read.xfs_read_iomap_begin.iomap_apply.iomap_fiemap
      0.00            +0.8        0.83 ±  5%  perf-profile.calltrace.cycles-pp.rwsem_wake.xfs_iunlock.xfs_vn_fiemap.do_vfs_ioctl.__x64_sys_ioctl
      0.00            +0.9        0.89 ±  6%  perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_vn_fiemap.do_vfs_ioctl.__x64_sys_ioctl.do_syscall_64
      0.00            +1.2        1.17 ±  3%  perf-profile.calltrace.cycles-pp.___might_sleep.down_read.xfs_ilock_for_iomap.xfs_read_iomap_begin.iomap_apply
      0.00            +1.8        1.78 ±  3%  perf-profile.calltrace.cycles-pp._copy_to_user.fiemap_fill_next_extent.iomap_fiemap_actor.iomap_apply.iomap_fiemap
      0.09 ±223%      +2.5        2.57 ±  3%  perf-profile.calltrace.cycles-pp.fiemap_fill_next_extent.iomap_fiemap_actor.iomap_apply.iomap_fiemap.xfs_vn_fiemap
      0.70 ± 10%      +3.2        3.95 ±  2%  perf-profile.calltrace.cycles-pp.iomap_fiemap_actor.iomap_apply.iomap_fiemap.xfs_vn_fiemap.do_vfs_ioctl
      5.21 ±  3%      +4.4        9.63        perf-profile.calltrace.cycles-pp.xfs_iext_lookup_extent.xfs_bmapi_read.xfs_read_iomap_begin.iomap_apply.iomap_fiemap
     39.92 ±  3%      +4.8       44.72        perf-profile.calltrace.cycles-pp.iomap_apply.iomap_fiemap.xfs_vn_fiemap.do_vfs_ioctl.__x64_sys_ioctl
     40.01 ±  3%      +5.2       45.20        perf-profile.calltrace.cycles-pp.iomap_fiemap.xfs_vn_fiemap.do_vfs_ioctl.__x64_sys_ioctl.do_syscall_64
     10.44 ±  4%      +5.4       15.80        perf-profile.calltrace.cycles-pp.xfs_bmapi_read.xfs_read_iomap_begin.iomap_apply.iomap_fiemap.xfs_vn_fiemap
     42.13 ±  2%      +5.5       47.64        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
     42.15 ±  2%      +5.5       47.66        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
     40.48 ±  3%      +5.9       46.33        perf-profile.calltrace.cycles-pp.xfs_vn_fiemap.do_vfs_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
     40.49 ±  3%      +5.9       46.39        perf-profile.calltrace.cycles-pp.do_vfs_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
     40.50 ±  3%      +5.9       46.41        perf-profile.calltrace.cycles-pp.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
     56.01 ±  2%      -5.3       50.67        perf-profile.children.cycles-pp.start_secondary
     56.73 ±  2%      -5.2       51.52        perf-profile.children.cycles-pp.do_idle
     56.73 ±  2%      -5.2       51.53        perf-profile.children.cycles-pp.secondary_startup_64_no_verify
     56.73 ±  2%      -5.2       51.53        perf-profile.children.cycles-pp.cpu_startup_entry
     51.50 ±  2%      -5.0       46.48 ±  3%  perf-profile.children.cycles-pp.cpuidle_enter_state
     51.53 ±  2%      -5.0       46.51 ±  3%  perf-profile.children.cycles-pp.cpuidle_enter
     37.99 ±  3%      -3.6       34.40        perf-profile.children.cycles-pp.xfs_read_iomap_begin
      6.23 ±  2%      -3.6        2.67 ±  4%  perf-profile.children.cycles-pp.up_read
      5.39 ±  7%      -3.0        2.37 ±  6%  perf-profile.children.cycles-pp.xfs_bmbt_to_iomap
     10.87 ±  3%      -2.9        7.96        perf-profile.children.cycles-pp.xfs_ilock_for_iomap
      6.66            -2.2        4.44 ±  4%  perf-profile.children.cycles-pp.xfs_iunlock
      7.81 ±  2%      -2.0        5.82 ±  2%  perf-profile.children.cycles-pp.down_read
      2.24 ± 17%      -1.8        0.45 ±  8%  perf-profile.children.cycles-pp.xfs_isilocked
      9.16 ±  8%      -1.5        7.68 ± 11%  perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
      6.80 ±  9%      -1.3        5.48 ± 11%  perf-profile.children.cycles-pp.asm_call_sysvec_on_stack
      2.27 ±  4%      -1.2        1.09 ±  7%  perf-profile.children.cycles-pp.xfs_reflink_trim_around_shared
      5.21 ±  8%      -0.9        4.29 ± 12%  perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
      5.10 ±  8%      -0.9        4.22 ± 12%  perf-profile.children.cycles-pp.hrtimer_interrupt
      1.09 ±  9%      -0.4        0.68 ±  5%  perf-profile.children.cycles-pp.vfs_fallocate
      1.09 ±  9%      -0.4        0.68 ±  5%  perf-profile.children.cycles-pp.__x64_sys_fallocate
      1.09 ±  9%      -0.4        0.68 ±  5%  perf-profile.children.cycles-pp.ksys_fallocate
      1.09 ±  8%      -0.4        0.68 ±  5%  perf-profile.children.cycles-pp.xfs_file_fallocate
      0.77 ±  7%      -0.3        0.43 ± 21%  perf-profile.children.cycles-pp.load_balance
      0.56 ± 12%      -0.3        0.22 ± 20%  perf-profile.children.cycles-pp.newidle_balance
      0.68 ±  8%      -0.3        0.36 ± 19%  perf-profile.children.cycles-pp.find_busiest_group
      0.66 ±  7%      -0.3        0.35 ± 19%  perf-profile.children.cycles-pp.update_sd_lb_stats
      0.62 ± 14%      -0.3        0.31 ± 14%  perf-profile.children.cycles-pp.kthread
      0.62 ± 14%      -0.3        0.31 ± 14%  perf-profile.children.cycles-pp.ret_from_fork
      0.49 ± 10%      -0.3        0.18 ± 14%  perf-profile.children.cycles-pp.rwsem_optimistic_spin
      1.86 ± 12%      -0.3        1.58 ± 12%  perf-profile.children.cycles-pp.tick_sched_timer
      0.49 ± 15%      -0.3        0.24 ± 13%  perf-profile.children.cycles-pp.worker_thread
      0.42 ± 10%      -0.2        0.17 ± 15%  perf-profile.children.cycles-pp.rwsem_down_read_slowpath
      0.63 ± 12%      -0.2        0.44 ± 12%  perf-profile.children.cycles-pp.pick_next_task_fair
      0.35 ± 16%      -0.2        0.17 ± 36%  perf-profile.children.cycles-pp.irq_work_run_list
      0.35 ± 17%      -0.2        0.16 ± 37%  perf-profile.children.cycles-pp.asm_sysvec_irq_work
      0.35 ± 17%      -0.2        0.16 ± 37%  perf-profile.children.cycles-pp.sysvec_irq_work
      0.35 ± 17%      -0.2        0.16 ± 37%  perf-profile.children.cycles-pp.__sysvec_irq_work
      0.35 ± 17%      -0.2        0.16 ± 37%  perf-profile.children.cycles-pp.irq_work_run
      0.35 ± 17%      -0.2        0.16 ± 37%  perf-profile.children.cycles-pp.irq_work_single
      0.35 ± 17%      -0.2        0.16 ± 37%  perf-profile.children.cycles-pp.printk
      0.35 ± 17%      -0.2        0.16 ± 37%  perf-profile.children.cycles-pp.vprintk_emit
      0.35 ± 17%      -0.2        0.16 ± 37%  perf-profile.children.cycles-pp.console_unlock
      0.33 ± 17%      -0.2        0.15 ± 37%  perf-profile.children.cycles-pp.serial8250_console_write
      0.37 ± 18%      -0.2        0.20 ±  9%  perf-profile.children.cycles-pp.do_writepages
      0.37 ± 18%      -0.2        0.20 ±  9%  perf-profile.children.cycles-pp.xfs_vm_writepages
      0.32 ± 16%      -0.2        0.15 ± 35%  perf-profile.children.cycles-pp.uart_console_write
      0.32 ± 16%      -0.2        0.15 ± 35%  perf-profile.children.cycles-pp.wait_for_xmitr
      0.30 ± 16%      -0.2        0.14 ± 36%  perf-profile.children.cycles-pp.serial8250_console_putchar
      0.31 ± 12%      -0.2        0.15 ± 11%  perf-profile.children.cycles-pp.xfs_free_file_space
      0.33 ± 17%      -0.2        0.18 ± 14%  perf-profile.children.cycles-pp.process_one_work
      0.36 ± 16%      -0.1        0.21 ±  5%  perf-profile.children.cycles-pp.xfs_flush_unmap_range
      0.32 ± 24%      -0.1        0.17 ± 11%  perf-profile.children.cycles-pp.iomap_writepages
      0.32 ± 24%      -0.1        0.17 ± 11%  perf-profile.children.cycles-pp.write_cache_pages
      0.32 ± 15%      -0.1        0.18 ±  7%  perf-profile.children.cycles-pp.filemap_write_and_wait_range
      0.29 ± 26%      -0.1        0.15 ± 13%  perf-profile.children.cycles-pp.iomap_writepage_map
      0.44 ± 12%      -0.1        0.31 ±  8%  perf-profile.children.cycles-pp.read_tsc
      0.52 ± 11%      -0.1        0.40 ± 10%  perf-profile.children.cycles-pp.native_irq_return_iret
      0.45 ±  9%      -0.1        0.33 ± 13%  perf-profile.children.cycles-pp.lapic_next_deadline
      0.19 ± 18%      -0.1        0.07 ± 18%  perf-profile.children.cycles-pp.iomap_zero_range
      0.18 ± 21%      -0.1        0.06 ± 47%  perf-profile.children.cycles-pp.iomap_zero_range_actor
      0.15 ± 22%      -0.1        0.04 ± 72%  perf-profile.children.cycles-pp.iomap_read_page_sync
      0.18 ± 21%      -0.1        0.07 ± 15%  perf-profile.children.cycles-pp.iomap_write_begin
      0.15 ± 21%      -0.1        0.04 ± 72%  perf-profile.children.cycles-pp.submit_bio_wait
      0.23 ± 17%      -0.1        0.12 ± 31%  perf-profile.children.cycles-pp.io_serial_in
      0.23 ± 31%      -0.1        0.13 ± 19%  perf-profile.children.cycles-pp.xfs_map_blocks
      0.26 ± 19%      -0.1        0.16 ±  6%  perf-profile.children.cycles-pp.__filemap_fdatawrite_range
      0.22 ± 31%      -0.1        0.13 ± 18%  perf-profile.children.cycles-pp.xfs_bmapi_convert_delalloc
      0.28 ± 13%      -0.1        0.19 ± 15%  perf-profile.children.cycles-pp.update_blocked_averages
      0.28 ± 15%      -0.1        0.19 ± 14%  perf-profile.children.cycles-pp.run_rebalance_domains
      0.12 ± 25%      -0.1        0.05 ± 45%  perf-profile.children.cycles-pp.submit_bio_noacct
      0.12 ± 23%      -0.1        0.05 ± 45%  perf-profile.children.cycles-pp.submit_bio
      0.11 ± 30%      -0.1        0.04 ± 44%  perf-profile.children.cycles-pp.blk_mq_submit_bio
      0.16 ± 14%      -0.1        0.10 ± 19%  perf-profile.children.cycles-pp.idle_cpu
      0.11 ± 12%      -0.1        0.05 ± 47%  perf-profile.children.cycles-pp.cpumask_next_and
      0.09 ± 25%      -0.1        0.03 ±102%  perf-profile.children.cycles-pp.rwsem_spin_on_owner
      0.09 ± 15%      -0.0        0.04 ± 73%  perf-profile.children.cycles-pp._find_next_bit
      0.16 ± 27%      -0.0        0.11 ± 22%  perf-profile.children.cycles-pp.__remove_hrtimer
      0.09 ± 20%      -0.0        0.05 ± 47%  perf-profile.children.cycles-pp.execve
      0.09 ± 20%      -0.0        0.05 ± 47%  perf-profile.children.cycles-pp.__x64_sys_execve
      0.09 ± 20%      -0.0        0.05 ± 47%  perf-profile.children.cycles-pp.do_execveat_common
      0.13 ± 15%      -0.0        0.10 ± 13%  perf-profile.children.cycles-pp.__xfs_trans_commit
      0.09 ± 16%      -0.0        0.06 ± 17%  perf-profile.children.cycles-pp.__hrtimer_get_next_event
      0.08 ± 17%      -0.0        0.06 ± 13%  perf-profile.children.cycles-pp.xfs_log_commit_cil
      0.01 ±223%      +0.1        0.06 ± 21%  perf-profile.children.cycles-pp.update_cfs_rq_h_load
      0.04 ± 71%      +0.1        0.09 ± 13%  perf-profile.children.cycles-pp.__update_load_avg_se
      0.00            +0.1        0.06 ± 13%  perf-profile.children.cycles-pp.wake_q_add
      0.05 ± 72%      +0.1        0.11 ± 15%  perf-profile.children.cycles-pp.update_curr
      0.01 ±223%      +0.1        0.07 ± 16%  perf-profile.children.cycles-pp.tick_nohz_idle_exit
      0.58 ±  5%      +0.1        0.65 ±  5%  perf-profile.children.cycles-pp.rwsem_down_write_slowpath
      0.00            +0.1        0.08 ± 14%  perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
      0.00            +0.1        0.08 ±  8%  perf-profile.children.cycles-pp.__switch_to
      0.05 ± 49%      +0.1        0.14 ±  9%  perf-profile.children.cycles-pp.select_task_rq_fair
      0.00            +0.1        0.09 ± 17%  perf-profile.children.cycles-pp.rwsem_mark_wake
      0.15 ± 20%      +0.1        0.25 ±  7%  perf-profile.children.cycles-pp.update_load_avg
      0.06 ± 47%      +0.1        0.17 ±  8%  perf-profile.children.cycles-pp.set_next_entity
      0.11 ± 12%      +0.1        0.23 ±  7%  perf-profile.children.cycles-pp.dequeue_entity
      0.12 ± 10%      +0.1        0.25 ±  7%  perf-profile.children.cycles-pp.dequeue_task_fair
      0.10 ± 25%      +0.2        0.26 ± 11%  perf-profile.children.cycles-pp.enqueue_entity
      0.96 ±  8%      +0.2        1.14 ±  6%  perf-profile.children.cycles-pp.__schedule
      0.10 ± 14%      +0.2        0.32 ±  4%  perf-profile.children.cycles-pp.xfs_fsb_to_db
      0.12 ± 21%      +0.2        0.34 ±  9%  perf-profile.children.cycles-pp.enqueue_task_fair
      0.12 ± 21%      +0.2        0.35 ±  9%  perf-profile.children.cycles-pp.ttwu_do_activate
      0.11 ± 11%      +0.2        0.36 ±  2%  perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
      0.06 ± 47%      +0.3        0.32 ±  9%  perf-profile.children.cycles-pp.rcu_all_qs
      0.17 ± 14%      +0.3        0.45 ±  6%  perf-profile.children.cycles-pp.schedule_idle
      0.09 ± 28%      +0.3        0.38 ±  8%  perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
      0.07 ± 19%      +0.4        0.42 ±  6%  perf-profile.children.cycles-pp.iomap_to_fiemap
      0.26 ±  8%      +0.4        0.63 ±  3%  perf-profile.children.cycles-pp.xfs_bmapi_update_map
      0.08 ± 10%      +0.4        0.50 ±  4%  perf-profile.children.cycles-pp.__might_fault
      0.13 ± 14%      +0.5        0.61 ±  4%  perf-profile.children.cycles-pp._cond_resched
      0.29 ± 17%      +0.5        0.81 ±  6%  perf-profile.children.cycles-pp.try_to_wake_up
      0.21 ± 19%      +0.5        0.76 ±  6%  perf-profile.children.cycles-pp.wake_up_q
      1.11 ±  4%      +0.6        1.66 ±  5%  perf-profile.children.cycles-pp.xfs_ilock
      0.14 ±  6%      +0.6        0.72 ±  8%  perf-profile.children.cycles-pp.xfs_errortag_test
      0.21 ± 28%      +0.6        0.81 ±  2%  perf-profile.children.cycles-pp.xfs_bmapi_trim_map
      0.17 ± 16%      +0.6        0.81 ±  2%  perf-profile.children.cycles-pp.copy_user_generic_unrolled
      0.22 ± 16%      +0.7        0.87 ±  5%  perf-profile.children.cycles-pp.rwsem_wake
      0.16 ± 11%      +0.8        0.98 ±  5%  perf-profile.children.cycles-pp.__might_sleep
      0.14 ± 11%      +1.3        1.43 ±  3%  perf-profile.children.cycles-pp.___might_sleep
      0.37 ± 10%      +1.5        1.85 ±  3%  perf-profile.children.cycles-pp._copy_to_user
      0.48 ± 10%      +2.1        2.63 ±  3%  perf-profile.children.cycles-pp.fiemap_fill_next_extent
      0.71 ± 10%      +3.3        4.02 ±  2%  perf-profile.children.cycles-pp.iomap_fiemap_actor
      5.23 ±  3%      +4.4        9.68        perf-profile.children.cycles-pp.xfs_iext_lookup_extent
     40.16 ±  3%      +4.8       44.97        perf-profile.children.cycles-pp.iomap_apply
     40.03 ±  3%      +5.2       45.26        perf-profile.children.cycles-pp.iomap_fiemap
     42.39 ±  2%      +5.4       47.84        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     42.37 ±  2%      +5.4       47.81        perf-profile.children.cycles-pp.do_syscall_64
     10.48 ±  4%      +5.5       15.96        perf-profile.children.cycles-pp.xfs_bmapi_read
     40.48 ±  3%      +5.9       46.33        perf-profile.children.cycles-pp.xfs_vn_fiemap
     40.49 ±  3%      +5.9       46.39        perf-profile.children.cycles-pp.do_vfs_ioctl
     40.50 ±  3%      +5.9       46.41        perf-profile.children.cycles-pp.__x64_sys_ioctl
      7.41 ±  2%      -4.0        3.40 ±  2%  perf-profile.self.cycles-pp.down_read
      6.14 ±  2%      -3.6        2.53 ±  4%  perf-profile.self.cycles-pp.up_read
      5.23 ±  7%      -3.2        2.06 ±  6%  perf-profile.self.cycles-pp.xfs_bmbt_to_iomap
      2.23 ± 17%      -1.8        0.43 ±  7%  perf-profile.self.cycles-pp.xfs_isilocked
      2.88 ±  8%      -1.5        1.39 ±  5%  perf-profile.self.cycles-pp.xfs_ilock_for_iomap
      2.24 ±  4%      -1.2        1.06 ±  7%  perf-profile.self.cycles-pp.xfs_reflink_trim_around_shared
      0.39 ±  9%      -0.3        0.14 ± 16%  perf-profile.self.cycles-pp.rwsem_optimistic_spin
      0.44 ± 22%      -0.2        0.20 ± 11%  perf-profile.self.cycles-pp.update_sd_lb_stats
      0.43 ± 12%      -0.1        0.30 ±  9%  perf-profile.self.cycles-pp.read_tsc
      0.52 ± 12%      -0.1        0.40 ±  9%  perf-profile.self.cycles-pp.native_irq_return_iret
      0.45 ± 10%      -0.1        0.33 ± 13%  perf-profile.self.cycles-pp.lapic_next_deadline
      0.23 ± 17%      -0.1        0.12 ± 31%  perf-profile.self.cycles-pp.io_serial_in
      0.16 ± 16%      -0.1        0.10 ± 17%  perf-profile.self.cycles-pp.idle_cpu
      0.18 ± 17%      -0.1        0.12 ± 14%  perf-profile.self.cycles-pp.__hrtimer_next_event_base
      0.15 ± 20%      -0.1        0.09 ± 15%  perf-profile.self.cycles-pp.rcu_sched_clock_irq
      0.10 ± 27%      -0.0        0.06 ± 13%  perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
      0.08 ± 20%      -0.0        0.04 ± 73%  perf-profile.self.cycles-pp._find_next_bit
      0.11 ± 26%      -0.0        0.08 ± 14%  perf-profile.self.cycles-pp.update_blocked_averages
      0.03 ±103%      +0.1        0.09 ±  8%  perf-profile.self.cycles-pp.update_load_avg
      0.01 ±223%      +0.1        0.06 ± 21%  perf-profile.self.cycles-pp.update_cfs_rq_h_load
      0.03 ± 70%      +0.1        0.09 ± 12%  perf-profile.self.cycles-pp.__update_load_avg_se
      0.00            +0.1        0.05 ±  9%  perf-profile.self.cycles-pp.wake_q_add
      0.00            +0.1        0.07 ± 14%  perf-profile.self.cycles-pp.rwsem_down_write_slowpath
      0.03 ± 99%      +0.1        0.10 ± 11%  perf-profile.self.cycles-pp.set_next_entity
      0.00            +0.1        0.08 ±  6%  perf-profile.self.cycles-pp.__switch_to
      0.08 ±  9%      +0.1        0.17 ±  7%  perf-profile.self.cycles-pp.__schedule
      0.00            +0.1        0.10 ± 14%  perf-profile.self.cycles-pp.__might_fault
      0.00            +0.1        0.11 ± 11%  perf-profile.self.cycles-pp.enqueue_entity
      0.08 ± 17%      +0.1        0.20 ± 10%  perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
      0.10 ± 16%      +0.2        0.28 ±  4%  perf-profile.self.cycles-pp.xfs_fsb_to_db
      0.04 ± 71%      +0.2        0.22 ±  8%  perf-profile.self.cycles-pp.rcu_all_qs
      0.04 ± 72%      +0.2        0.23 ±  5%  perf-profile.self.cycles-pp._copy_to_user
      0.04 ± 72%      +0.2        0.29 ±  9%  perf-profile.self.cycles-pp._cond_resched
      0.09 ± 29%      +0.3        0.35 ±  9%  perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
      0.10 ± 17%      +0.3        0.41 ±  9%  perf-profile.self.cycles-pp.iomap_fiemap
      0.07 ± 19%      +0.3        0.40 ±  9%  perf-profile.self.cycles-pp.iomap_to_fiemap
      0.25 ±  9%      +0.4        0.62 ±  3%  perf-profile.self.cycles-pp.xfs_bmapi_update_map
      0.13 ±  7%      +0.6        0.69 ±  5%  perf-profile.self.cycles-pp.xfs_errortag_test
      0.18 ± 12%      +0.6        0.78 ±  3%  perf-profile.self.cycles-pp.xfs_bmapi_trim_map
      0.13 ± 12%      +0.6        0.73 ±  5%  perf-profile.self.cycles-pp.xfs_ilock
      0.16 ± 16%      +0.6        0.78 ±  2%  perf-profile.self.cycles-pp.copy_user_generic_unrolled
      0.12 ± 19%      +0.7        0.80 ±  7%  perf-profile.self.cycles-pp.fiemap_fill_next_extent
      0.21 ± 13%      +0.7        0.93 ±  3%  perf-profile.self.cycles-pp.xfs_iunlock
      0.13 ± 11%      +0.7        0.85 ±  5%  perf-profile.self.cycles-pp.__might_sleep
      0.15 ± 18%      +0.8        0.91 ±  2%  perf-profile.self.cycles-pp.iomap_fiemap_actor
      2.59 ±  3%      +1.1        3.70 ±  2%  perf-profile.self.cycles-pp.xfs_read_iomap_begin
      0.14 ± 11%      +1.2        1.36 ±  3%  perf-profile.self.cycles-pp.___might_sleep
      2.42 ±  5%      +1.3        3.71 ±  3%  perf-profile.self.cycles-pp.xfs_bmapi_read
      5.13 ±  3%      +4.4        9.54        perf-profile.self.cycles-pp.xfs_iext_lookup_extent
      1.22 ±  9%      +5.1        6.36        perf-profile.self.cycles-pp.iomap_apply
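The large drops in down_read/up_read/rwsem_optimistic_spin self time above, paired with the growth in the actual fiemap work (xfs_iext_lookup_extent, iomap_apply), follow from the commit's reader optimistic lock stealing. As a rough illustration only (a toy userspace model, not the kernel's actual rwsem code, whose count encoding, handoff bit, and slowpaths are considerably more involved), the idea is that a reader may acquire the lock whenever no writer currently holds it, even while a would-be writer is optimistically spinning, rather than queueing behind that spinner:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model: the real kernel rwsem packs writer-locked, handoff, and
 * waiter bits into sem->count alongside the reader count. */
#define WRITER_LOCKED  1u            /* low bit: a writer holds the lock  */
#define READER_UNIT    2u            /* readers counted in the upper bits */

typedef struct { _Atomic unsigned count; } toy_rwsem;

/* Reader optimistic lock stealing (schematic): succeed for read as long
 * as no writer *holds* the lock, regardless of spinning writers. */
static bool toy_down_read_steal(toy_rwsem *sem)
{
    unsigned c = atomic_load(&sem->count);
    while (!(c & WRITER_LOCKED)) {
        if (atomic_compare_exchange_weak(&sem->count, &c, c + READER_UNIT))
            return true;             /* acquired (possibly "stolen")       */
    }
    return false;                    /* writer holds it: take the slowpath */
}

static void toy_up_read(toy_rwsem *sem)
{
    atomic_fetch_sub(&sem->count, READER_UNIT);
}
```

In the fiemap workload every op takes xfs_ilock shared, so letting readers slip past spinning writers converts lock-wait cycles into useful extent-lookup cycles, which is exactly the shift the self-profile shows.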


                                                                                
                            pmeter.Average_Active_Power                         
                                                                                
  295 +---------------------------------------------------------------------+   
  290 |-+                                                                   |   
      |         O                                    O O O  O O             |   
  285 |-+             O      O O   O O O O O O O O O            O     O     |   
  280 |-O O O O          O O     O                                O O   O O |   
  275 |-+         O O                                                       |   
  270 |-+                                                                   |   
      | +                                                                   |   
  265 |-::                                                                  |   
  260 |:+:                                                                  |   
  255 |:+ +.+.      +.                      .+    .+.         +.            |   
  250 |-+     +.+  :  +.. .+.       .+.   .+  + .+   +   +.. +  +. .+.+     |   
      |          + :     +   +.   .+   +.+     +      + +   +     +         |   
  245 |-+         +            +.+                     +                    |   
  240 +---------------------------------------------------------------------+   
                                                                                
                            stress-ng.time.system_time                          
                                                                                
  320 +---------------------------------------------------------------------+   
      |                  O                                                  |   
  300 |-+ O   O   O O O    O O         O O   O O   O   O              O     |   
  280 |-O   O   O              O     O     O     O   O   O  O O O O O     O |   
      |                          O O                                    O   |   
  260 |-+                                                                   |   
      |                                                                     |   
  240 |-+                                                                   |   
      |                                    +     +                          |   
  220 |-+           +.+.. .+.       .+.   + +   + +           +             |   
  200 |-+  .+   +. +     +   +.   .+   +.+   +.+   +         : +            |   
      |.+.+  : :  +            +.+                  :  +.+.. :  +.+.+.+     |   
  180 |-+    : :                                    : +     +               |   
      |       +                                      +                      |   
  160 +---------------------------------------------------------------------+   
                                                                                
                    stress-ng.time.percent_of_cpu_this_job_got                  
                                                                                
  500 +---------------------------------------------------------------------+   
      |   O   O   O O O    O O         O O     O       O              O     |   
      | O   O   O              O     O     O O   O O     O  O   O O O     O |   
  450 |-+                        O O                 O        O         O   |   
      |                                                                     |   
      |                                                                     |   
  400 |-+                                                                   |   
      |                                                                     |   
  350 |-+           +.+..                 .+.   .+.                         |   
      |    .+      +      .+.+    .+.+.+.+   +.+   +          +             |   
      |.+.+  :  +.+      +    + .+                  :        + + .+.+.+     |   
  300 |-+    : :               +                    :  +.+..+   +           |   
      |       ::                                     :+                     |   
      |       +                                      +                      |   
  250 +---------------------------------------------------------------------+   
                                                                                
                       stress-ng.time.voluntary_context_switches                
                                                                                
  5.5e+06 +-----------------------------------------------------------------+   
    5e+06 |-+     O O O O OO O O O     O   O O O O O O O O OO O O O O     O |   
          | O O O                  O O                                  O   |   
  4.5e+06 |-+                                                               |   
    4e+06 |-+                                                               |   
          |                                                                 |   
  3.5e+06 |-+                                                               |   
    3e+06 |-+                                                               |   
  2.5e+06 |-+                                                               |   
          |                                                                 |   
    2e+06 |-+                                                               |   
  1.5e+06 |-+                                                               |   
          |.+. .+.   .+.+.++.+.             .+.+. .+.+.          .+.        |   
    1e+06 |-+ +   +.+          +.+.+.+.+.+.+     +     +.+.++.+.+   +.+     |   
   500000 +-----------------------------------------------------------------+   
                                                                                
                                 stress-ng.fiemap.ops                           
                                                                                
    3e+06 +-----------------------------------------------------------------+   
          |                                                                 |   
  2.5e+06 |-+              O             O O   O O   O   O      O     O     |   
          | O O O O O O O O  O O O O O O     O     O   O   OO O   O O   O O |   
          |                                                                 |   
    2e+06 |-+                                                               |   
          |                                                                 |   
  1.5e+06 |-+                                                               |   
          |                                                                 |   
    1e+06 |-+                                                               |   
          |                                                                 |   
          |                                                                 |   
   500000 |.+.+.+. .+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+. .+. +.+. .+.+.+     |   
          |       +                                    +   +    +           |   
        0 +-----------------------------------------------------------------+   
                                                                                
                            stress-ng.fiemap.ops_per_sec                        
                                                                                
  45000 +-------------------------------------------------------------------+   
        |         O       O             O O   O O   O   O O     O   O O     |   
  40000 |-O O O O   O O O   O O O   O O     O     O   O     O O   O     O O |   
  35000 |-+                       O                                         |   
        |                                                                   |   
  30000 |-+                                                                 |   
        |                                                                   |   
  25000 |-+                                                                 |   
        |                                                                   |   
  20000 |-+                                                                 |   
  15000 |-+                                                                 |   
        |                                                                   |   
  10000 |-+                                                                 |   
        |                                                                   |   
   5000 +-------------------------------------------------------------------+   
                                                                                
                                                                                
[*] bisect-good sample
[O] bisect-bad  sample

***************************************************************************************************
lkp-knm01: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
=========================================================================================
compiler/cpufreq_governor/duration/iterations/kconfig/memory.high/memory.low/memory.max/nr_threads/pids.max/rootfs/tbox_group/testcase/ucode:
  gcc-9/performance/10s/100x/x86_64-rhel-8.3/90%/50%/max/200%/10000/debian-10.4-x86_64-20200603.cgz/lkp-knm01/ebizzy/0x11

commit: 
  2f06f70292 ("locking/rwsem: Prevent potential lock starvation")
  1a728dff85 ("locking/rwsem: Enable reader optimistic lock stealing")

2f06f702925b512a 1a728dff855a318bb58bcc1259b 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
     69441            +3.2%      71637        ebizzy.throughput
    439.00            +4.1%     457.20 ±  2%  ebizzy.throughput.per_thread.max
     21.00 ±  4%     +17.1%      24.60 ±  4%  ebizzy.throughput.per_thread.min
      0.13            -4.7%       0.13        ebizzy.throughput.per_thread.stddev_percent
   8482903 ±  2%     +12.4%    9538261 ±  3%  ebizzy.time.involuntary_context_switches
  25933441            +7.1%   27767937        ebizzy.time.minor_page_faults
     17643            +3.8%      18307 ±  2%  ebizzy.time.percent_of_cpu_this_job_got
     26.79 ±  3%      -8.6%      24.49 ±  2%  ebizzy.time.sys
      3029 ±  3%      -7.5%       2801 ±  2%  ebizzy.time.system_time
      1745            +4.1%       1817 ±  2%  ebizzy.time.user
    181404            +4.0%     188628 ±  2%  ebizzy.time.user_time
   6878926            +6.9%    7352695 ±  2%  ebizzy.time.voluntary_context_switches
    694415            +3.2%     716375        ebizzy.workload
 7.227e+10 ±  3%     -11.2%  6.419e+10 ±  3%  cpuidle.C1.time
     24.82 ±  3%      -2.8       22.03 ±  3%  mpstat.cpu.all.idle%
     95468 ±  3%      -9.1%      86794 ±  3%  uptime.idle
    123043 ±  7%      +8.1%     133026 ±  3%  numa-vmstat.node0.nr_active_anon
    123041 ±  7%      +8.1%     133024 ±  3%  numa-vmstat.node0.nr_zone_active_anon
     25.00 ±  3%     -11.2%      22.20 ±  4%  vmstat.cpu.id
     59.80            +4.7%      62.60 ±  2%  vmstat.cpu.us
     22711            +9.6%      24890 ±  2%  vmstat.system.cs
  15324294 ±  2%     +10.9%   16994441 ±  2%  proc-vmstat.numa_hint_faults
  15324294 ±  2%     +10.9%   16994441 ±  2%  proc-vmstat.numa_hint_faults_local
  13689889            +1.1%   13838364        proc-vmstat.numa_hit
  13689887            +1.1%   13838363        proc-vmstat.numa_local
  17454375 ±  2%      +7.4%   18754398 ±  2%  proc-vmstat.numa_pte_updates
  28788248            +6.3%   30592933        proc-vmstat.pgfault
    985.00 ±  6%      -8.1%     905.00 ±  3%  proc-vmstat.thp_deferred_split_page
      1033 ±  5%      -7.7%     953.60 ±  3%  proc-vmstat.thp_fault_alloc
    985.00 ±  6%      -8.1%     905.00 ±  3%  proc-vmstat.thp_split_pmd
      2.83 ± 99%     -86.1%       0.39 ± 59%  perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      1.26 ± 51%     -61.7%       0.48 ± 55%  perf-sched.sch_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
    584.47 ±108%     -95.0%      29.13 ±177%  perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
     11.14 ± 25%     -29.2%       7.89 ± 12%  perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
      5.39 ±  8%     +12.0%       6.04 ±  4%  perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      1750 ± 32%     -33.7%       1160 ± 19%  perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
      7.43 ± 87%    +144.7%      18.18 ± 15%  perf-sched.wait_time.avg.ms.preempt_schedule_common._cond_resched.remove_vma.exit_mmap.mmput
      6.59 ± 79%     +99.6%      13.16 ± 37%  perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.do_madvise.part.0.__x64_sys_madvise
     22.81 ± 20%     -29.1%      16.18 ± 13%  perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_common_interrupt.[unknown]
      1750 ± 32%     -33.7%       1159 ± 19%  perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
     14.55 ± 55%     -78.2%       3.18 ±200%  perf-sched.wait_time.max.ms.preempt_schedule_common._cond_resched.__anon_vma_prepare.do_anonymous_page.__handle_mm_fault
    199.86 ±183%     -95.3%       9.47 ± 64%  perf-sched.wait_time.max.ms.preempt_schedule_common._cond_resched.dput.step_into.path_openat
     11.06 ± 83%    +147.4%      27.35 ± 28%  perf-sched.wait_time.max.ms.preempt_schedule_common._cond_resched.remove_vma.exit_mmap.mmput
    267.71 ± 82%     -64.5%      94.95 ± 48%  perf-sched.wait_time.max.ms.rwsem_down_write_slowpath.down_write_killable.do_mprotect_pkey.__x64_sys_mprotect
   5407990 ±  3%      +8.0%    5841756 ±  5%  sched_debug.cfs_rq:/.min_vruntime.stddev
 -22333077           +24.9%  -27883340        sched_debug.cfs_rq:/.spread0.avg
 -36506634           +16.0%  -42349835        sched_debug.cfs_rq:/.spread0.min
   5383209 ±  3%      +8.0%    5814831 ±  5%  sched_debug.cfs_rq:/.spread0.stddev
     16.87 ± 23%     -23.3%      12.95 ±  7%  sched_debug.cfs_rq:/.util_est_enqueued.avg
    749.60 ±  8%     +13.7%     852.48 ±  9%  sched_debug.cfs_rq:/ebizzy.removed.runnable_avg.max
      3.35 ±  7%     +16.7%       3.91 ± 11%  sched_debug.cfs_rq:/ebizzy.removed.util_avg.avg
     34.88 ±  7%     +11.7%      38.97 ±  5%  sched_debug.cfs_rq:/ebizzy.removed.util_avg.stddev
   5417223 ±  3%      +7.7%    5833629 ±  5%  sched_debug.cfs_rq:/ebizzy.se->vruntime.stddev
-1.262e+08            +8.0% -1.363e+08        sched_debug.cfs_rq:/ebizzy.spread0.avg
-1.254e+08            +8.0% -1.355e+08        sched_debug.cfs_rq:/ebizzy.spread0.max
-1.263e+08            +8.0% -1.364e+08        sched_debug.cfs_rq:/ebizzy.spread0.min
      2661 ± 22%     +41.0%       3752 ± 15%  sched_debug.cfs_rq:/system.slice.min_vruntime.avg
      6270 ± 22%     +36.4%       8551 ± 16%  sched_debug.cfs_rq:/system.slice.min_vruntime.max
      2439 ± 16%     +38.9%       3388 ± 20%  sched_debug.cfs_rq:/system.slice.min_vruntime.stddev
      2663 ± 22%     +41.0%       3755 ± 15%  sched_debug.cfs_rq:/system.slice.se->sum_exec_runtime.avg
      6272 ± 22%     +36.4%       8553 ± 16%  sched_debug.cfs_rq:/system.slice.se->sum_exec_runtime.max
      2440 ± 16%     +38.9%       3388 ± 20%  sched_debug.cfs_rq:/system.slice.se->sum_exec_runtime.stddev
-1.264e+08            +7.9% -1.365e+08        sched_debug.cfs_rq:/system.slice.spread0.avg
-1.263e+08            +7.9% -1.363e+08        sched_debug.cfs_rq:/system.slice.spread0.max
-1.265e+08            +7.9% -1.366e+08        sched_debug.cfs_rq:/system.slice.spread0.min
     14.44 ± 65%     -71.6%       4.11 ± 91%  sched_debug.cfs_rq:/system.slice.tg_load_avg.stddev
      1053 ± 10%     -24.2%     798.80 ±  4%  sched_debug.cpu.nr_uninterruptible.max
    137.28 ±  6%     -15.1%     116.57 ±  3%  sched_debug.cpu.nr_uninterruptible.stddev
    224.91            +5.2%     236.61        perf-stat.i.MPKI
 2.237e+09 ±  2%      +3.3%   2.31e+09        perf-stat.i.branch-instructions
      9.18 ±  2%      -0.5        8.70        perf-stat.i.cache-miss-rate%
 1.614e+09            +5.3%  1.699e+09        perf-stat.i.cache-references
     27791 ±  2%     +16.0%      32229 ±  2%  perf-stat.i.context-switches
     41.49            +4.8%      43.48        perf-stat.i.cpi
 3.004e+11            +5.1%  3.157e+11        perf-stat.i.cpu-cycles
      1785 ±  3%      -8.5%       1633 ±  2%  perf-stat.i.cpu-migrations
 1.029e+10            +3.3%  1.063e+10        perf-stat.i.iTLB-loads
 1.027e+10            +3.3%  1.061e+10        perf-stat.i.instructions
      0.98            +4.5%       1.03        perf-stat.i.metric.GHz
      0.27           +10.3%       0.30        perf-stat.i.metric.K/sec
     54.40            +3.3%      56.22        perf-stat.i.metric.M/sec
     28401           +10.2%      31308        perf-stat.i.minor-faults
     28408           +10.2%      31311        perf-stat.i.page-faults
    164.92            +2.5%     169.01        perf-stat.overall.MPKI
     32.20            +2.6%      33.03        perf-stat.overall.cpi
      0.03            -2.5%       0.03        perf-stat.overall.ipc
  15354610            -2.6%   14949506 ±  2%  perf-stat.overall.path-length
 1.681e+09            +3.0%  1.732e+09        perf-stat.ps.cache-references
     22252            +9.5%      24360 ±  2%  perf-stat.ps.context-switches
 3.283e+11            +3.1%  3.386e+11        perf-stat.ps.cpu-cycles
      1293 ±  3%     -12.0%       1137 ±  3%  perf-stat.ps.cpu-migrations
     27422            +6.1%      29100        perf-stat.ps.minor-faults
     27432            +6.1%      29110        perf-stat.ps.page-faults
  35648455 ±  2%      +6.9%   38124075        interrupts.CAL:Function_call_interrupts
    117875 ±  2%      +8.0%     127250        interrupts.CPU118.CAL:Function_call_interrupts
    117833 ±  2%      +9.0%     128443        interrupts.CPU126.CAL:Function_call_interrupts
    358.40 ±  5%     +10.4%     395.80 ±  3%  interrupts.CPU14.RES:Rescheduling_interrupts
    118275 ±  3%      +8.3%     128037        interrupts.CPU140.CAL:Function_call_interrupts
    423.80 ± 11%     -21.2%     334.00 ± 14%  interrupts.CPU143.RES:Rescheduling_interrupts
    354.20 ±  8%     -21.5%     278.20 ± 14%  interrupts.CPU150.RES:Rescheduling_interrupts
    598.60 ± 31%     -35.7%     384.80 ± 12%  interrupts.CPU17.RES:Rescheduling_interrupts
    383.00 ± 12%     -24.6%     288.60 ±  7%  interrupts.CPU198.RES:Rescheduling_interrupts
    377.00 ±  5%     -24.2%     285.60 ± 16%  interrupts.CPU202.RES:Rescheduling_interrupts
    406.00 ± 13%     -25.4%     303.00 ± 10%  interrupts.CPU204.RES:Rescheduling_interrupts
    123729 ±  2%      +7.8%     133375        interrupts.CPU214.CAL:Function_call_interrupts
    372.40 ±  5%     +24.9%     465.00 ± 17%  interrupts.CPU217.RES:Rescheduling_interrupts
    469.00 ± 15%     -22.3%     364.60 ±  9%  interrupts.CPU22.RES:Rescheduling_interrupts
    129254 ±  2%      +7.9%     139462        interrupts.CPU220.CAL:Function_call_interrupts
    379.80 ±  4%     -28.5%     271.60 ± 11%  interrupts.CPU223.RES:Rescheduling_interrupts
    410.40 ± 19%     -37.2%     257.80 ± 20%  interrupts.CPU233.RES:Rescheduling_interrupts
    129819 ±  2%      +8.2%     140449        interrupts.CPU236.CAL:Function_call_interrupts
    414.20 ± 14%     -23.2%     318.00 ± 13%  interrupts.CPU248.RES:Rescheduling_interrupts
    435.20 ± 24%     -27.0%     317.60 ±  9%  interrupts.CPU266.RES:Rescheduling_interrupts
    361.20 ± 16%     -26.6%     265.00 ± 17%  interrupts.CPU272.RES:Rescheduling_interrupts
    405.00 ± 12%     -20.3%     322.60 ± 10%  interrupts.CPU277.RES:Rescheduling_interrupts
    148200 ±  2%      +8.1%     160199        interrupts.CPU285.CAL:Function_call_interrupts
    478.00 ± 34%     -39.5%     289.20 ± 11%  interrupts.CPU286.RES:Rescheduling_interrupts
    466.80 ± 10%     -20.9%     369.20 ±  3%  interrupts.CPU48.RES:Rescheduling_interrupts
    682.80 ±  8%     -15.8%     575.00 ±  6%  interrupts.CPU6.RES:Rescheduling_interrupts
    584.20 ± 10%     -19.0%     473.20 ± 10%  interrupts.CPU7.RES:Rescheduling_interrupts
    485.80 ± 28%     -41.2%     285.80 ± 10%  interrupts.CPU97.RES:Rescheduling_interrupts
     38727 ±  2%      -9.4%      35090 ±  2%  softirqs.CPU0.SCHED
     29480 ±  3%      -8.3%      27037 ±  2%  softirqs.CPU100.SCHED
     29759 ±  3%      -9.4%      26974 ±  5%  softirqs.CPU101.SCHED
     29473 ±  3%      -8.4%      27005 ±  3%  softirqs.CPU102.SCHED
     29767            -9.8%      26862 ±  4%  softirqs.CPU103.SCHED
     29738 ±  3%      -7.7%      27453 ±  4%  softirqs.CPU104.SCHED
     29810 ±  3%      -8.9%      27171 ±  3%  softirqs.CPU105.SCHED
     29456 ±  2%      -8.3%      27021 ±  4%  softirqs.CPU106.SCHED
     29566 ±  2%      -9.8%      26655 ±  4%  softirqs.CPU109.SCHED
     29943 ±  3%      -8.7%      27331 ±  2%  softirqs.CPU111.SCHED
     30038 ±  2%      -8.5%      27472 ±  3%  softirqs.CPU113.SCHED
     30141 ±  3%      -9.6%      27241 ±  3%  softirqs.CPU114.SCHED
     29966            -8.7%      27356 ±  4%  softirqs.CPU115.SCHED
     30180            -9.3%      27387 ±  3%  softirqs.CPU119.SCHED
     30359            -9.5%      27472 ±  3%  softirqs.CPU12.SCHED
     30323 ±  3%     -10.0%      27290 ±  3%  softirqs.CPU122.SCHED
     30255 ±  2%     -10.1%      27209 ±  3%  softirqs.CPU123.SCHED
     29788 ±  3%      -8.6%      27230 ±  2%  softirqs.CPU130.SCHED
     29789 ±  3%      -7.5%      27556 ±  2%  softirqs.CPU131.SCHED
     29371            -9.2%      26680 ±  3%  softirqs.CPU135.SCHED
     29356 ±  2%      -8.4%      26898 ±  5%  softirqs.CPU136.SCHED
     29975 ±  2%      -7.5%      27726 ±  4%  softirqs.CPU137.SCHED
     29730 ±  2%      -8.9%      27077 ±  3%  softirqs.CPU138.SCHED
     29858 ±  3%      -9.5%      27013 ±  2%  softirqs.CPU142.SCHED
     29380 ±  2%      -8.8%      26787 ±  3%  softirqs.CPU143.SCHED
     30048 ±  2%      -9.4%      27211 ±  3%  softirqs.CPU148.SCHED
     30090 ±  2%      -8.0%      27680 ±  3%  softirqs.CPU149.SCHED
     29788 ±  2%      -9.5%      26957 ±  2%  softirqs.CPU151.SCHED
     29992 ±  3%      -9.4%      27168 ±  4%  softirqs.CPU153.SCHED
     30148 ±  2%     -10.0%      27134 ±  5%  softirqs.CPU155.SCHED
     29901 ±  3%      -8.8%      27277 ±  3%  softirqs.CPU156.SCHED
     30178 ±  4%      -9.8%      27210 ±  4%  softirqs.CPU158.SCHED
     30117 ±  4%      -7.9%      27743 ±  3%  softirqs.CPU159.SCHED
     29649 ±  3%      -8.8%      27050 ±  6%  softirqs.CPU161.SCHED
     30050 ±  2%      -9.8%      27108 ±  2%  softirqs.CPU162.SCHED
     29844 ±  4%      -8.1%      27417 ±  2%  softirqs.CPU163.SCHED
     30158 ±  3%      -8.6%      27573 ±  2%  softirqs.CPU165.SCHED
     29879 ±  2%      -8.2%      27435 ±  2%  softirqs.CPU167.SCHED
     29684 ±  3%      -8.5%      27153 ±  4%  softirqs.CPU169.SCHED
     30443 ±  3%      -9.3%      27622 ±  2%  softirqs.CPU170.SCHED
     30339 ±  2%      -9.5%      27471 ±  3%  softirqs.CPU171.SCHED
     29990 ±  4%      -9.6%      27121 ±  5%  softirqs.CPU173.SCHED
     29645 ±  2%      -7.8%      27338 ±  4%  softirqs.CPU174.SCHED
     30096 ±  2%      -9.5%      27236 ±  2%  softirqs.CPU175.SCHED
     29290 ±  3%      -7.7%      27045 ±  3%  softirqs.CPU176.SCHED
     29834            -8.4%      27327 ±  4%  softirqs.CPU177.SCHED
     29755 ±  3%     -10.8%      26534 ±  5%  softirqs.CPU178.SCHED
     29934            -8.2%      27472 ±  5%  softirqs.CPU181.SCHED
     29781 ±  3%      -9.5%      26963 ±  3%  softirqs.CPU183.SCHED
     31004 ±  3%     -12.1%      27263 ±  2%  softirqs.CPU185.SCHED
     30258 ±  3%     -10.7%      27023 ±  3%  softirqs.CPU188.SCHED
     30280 ±  3%     -10.2%      27182 ±  4%  softirqs.CPU189.SCHED
     30360            -9.7%      27426 ±  2%  softirqs.CPU190.SCHED
     30862           -10.4%      27647 ±  2%  softirqs.CPU191.SCHED
     30190 ±  3%      -9.0%      27461 ±  3%  softirqs.CPU194.SCHED
     29625 ±  2%      -7.4%      27421 ±  4%  softirqs.CPU200.SCHED
     29711 ±  2%      -8.4%      27203 ±  2%  softirqs.CPU201.SCHED
     29566 ±  3%      -9.6%      26723 ±  2%  softirqs.CPU202.SCHED
     30234 ±  2%      -8.9%      27534 ±  4%  softirqs.CPU210.SCHED
     30660 ±  3%     -11.1%      27260 ±  4%  softirqs.CPU213.SCHED
     29629 ±  3%      -9.7%      26752 ±  2%  softirqs.CPU216.SCHED
     30415 ±  2%      -9.4%      27570 ±  3%  softirqs.CPU222.SCHED
     30594 ±  2%      -9.6%      27654 ±  3%  softirqs.CPU228.SCHED
     29315 ±  3%      -8.0%      26976 ±  2%  softirqs.CPU23.SCHED
     30136 ±  2%      -9.6%      27252 ±  5%  softirqs.CPU231.SCHED
     30814 ±  2%     -11.8%      27180 ±  4%  softirqs.CPU232.SCHED
     30792 ±  2%     -10.8%      27459 ±  4%  softirqs.CPU235.SCHED
     30797 ±  2%      -9.8%      27775 ±  2%  softirqs.CPU237.SCHED
     30488 ±  4%     -11.4%      26997 ±  3%  softirqs.CPU238.SCHED
     30106 ±  2%     -10.1%      27071 ±  3%  softirqs.CPU24.SCHED
     30537 ±  3%      -7.5%      28251 ±  3%  softirqs.CPU242.SCHED
     30424 ±  3%     -10.1%      27355 ±  3%  softirqs.CPU245.SCHED
     30559 ±  2%      -9.5%      27655 ±  3%  softirqs.CPU246.SCHED
     30424 ±  3%     -10.4%      27269 ±  4%  softirqs.CPU247.SCHED
     30618 ±  2%     -10.9%      27280 ±  3%  softirqs.CPU248.SCHED
     30218 ±  4%     -10.1%      27159 ±  3%  softirqs.CPU25.SCHED
     30392 ±  3%      -7.9%      28001 ±  4%  softirqs.CPU250.SCHED
     30464 ±  2%      -8.2%      27959 ±  3%  softirqs.CPU251.SCHED
     30292 ±  3%      -8.2%      27816 ±  4%  softirqs.CPU256.SCHED
     30472 ±  2%      -8.5%      27879 ±  2%  softirqs.CPU257.SCHED
     30030 ±  3%     -12.0%      26437 ±  2%  softirqs.CPU26.SCHED
     30714            -9.5%      27794 ±  4%  softirqs.CPU260.SCHED
     30711 ±  3%      -9.1%      27927 ±  4%  softirqs.CPU261.SCHED
     30670 ±  4%      -8.1%      28178        softirqs.CPU264.SCHED
     30594 ±  3%      -8.3%      28049 ±  4%  softirqs.CPU265.SCHED
     30253 ±  3%      -7.7%      27918 ±  3%  softirqs.CPU269.SCHED
     30517 ±  4%     -10.4%      27333 ±  4%  softirqs.CPU27.SCHED
     30300 ±  3%      -7.9%      27895 ±  3%  softirqs.CPU271.SCHED
     30083 ±  2%     -10.2%      27008 ±  3%  softirqs.CPU272.SCHED
     29850 ±  2%      -8.4%      27355 ±  3%  softirqs.CPU274.SCHED
     30215 ±  2%      -8.3%      27720 ±  3%  softirqs.CPU278.SCHED
     30452 ±  2%      -8.7%      27788 ±  3%  softirqs.CPU279.SCHED
     30541 ±  2%      -9.2%      27744 ±  3%  softirqs.CPU283.SCHED
     30472 ±  3%      -9.2%      27673 ±  4%  softirqs.CPU285.SCHED
     32047 ±  3%      -8.7%      29268 ±  3%  softirqs.CPU287.SCHED
     29721 ±  3%     -10.8%      26510 ±  3%  softirqs.CPU3.SCHED
     30367 ±  3%      -9.5%      27483 ±  2%  softirqs.CPU30.SCHED
     30380 ±  2%     -10.0%      27346 ±  2%  softirqs.CPU32.SCHED
     30029 ±  3%      -8.3%      27548 ±  2%  softirqs.CPU39.SCHED
     30114 ±  2%      -8.7%      27489 ±  3%  softirqs.CPU42.SCHED
     30307 ±  3%     -10.6%      27090 ±  3%  softirqs.CPU43.SCHED
     29833 ±  2%      -9.2%      27079 ±  3%  softirqs.CPU44.SCHED
     29975            -9.9%      27015 ±  5%  softirqs.CPU48.SCHED
     29789 ±  2%      -7.4%      27571 ±  3%  softirqs.CPU50.SCHED
     30049 ±  2%      -9.9%      27083 ±  4%  softirqs.CPU52.SCHED
     29929 ±  2%      -9.3%      27148 ±  2%  softirqs.CPU53.SCHED
     29558 ±  2%      -7.8%      27250 ±  4%  softirqs.CPU54.SCHED
     30103 ±  3%      -9.1%      27363        softirqs.CPU56.SCHED
     30001           -10.2%      26947 ±  3%  softirqs.CPU59.SCHED
     29905 ±  3%      -9.6%      27038 ±  3%  softirqs.CPU60.SCHED
     29804 ±  2%      -9.4%      27000 ±  2%  softirqs.CPU61.SCHED
     29763 ±  3%      -7.6%      27509 ±  4%  softirqs.CPU65.SCHED
     29689 ±  2%      -7.9%      27335 ±  2%  softirqs.CPU67.SCHED
     29616 ±  3%      -9.2%      26901        softirqs.CPU7.SCHED
     29761 ±  4%      -9.4%      26962 ±  4%  softirqs.CPU79.SCHED
     29962 ±  3%      -9.4%      27142 ±  4%  softirqs.CPU8.SCHED
     30014 ±  3%      -8.7%      27403 ±  2%  softirqs.CPU80.SCHED
     29827 ±  2%      -9.0%      27132 ±  3%  softirqs.CPU81.SCHED
     29704 ±  2%      -9.0%      27019 ±  3%  softirqs.CPU82.SCHED
     29972 ±  2%     -10.0%      26972 ±  5%  softirqs.CPU83.SCHED
     29847 ±  3%      -9.0%      27166 ±  2%  softirqs.CPU87.SCHED
     30018 ±  2%      -8.8%      27364 ±  2%  softirqs.CPU89.SCHED
     29622 ±  3%      -8.8%      27005 ±  3%  softirqs.CPU90.SCHED
     29522            -9.2%      26803 ±  3%  softirqs.CPU91.SCHED
     30119 ±  3%      -9.3%      27329 ±  4%  softirqs.CPU95.SCHED
     29845            -9.3%      27081 ±  4%  softirqs.CPU96.SCHED
     29574            -8.7%      26999 ±  3%  softirqs.CPU97.SCHED
     30053 ±  2%     -11.0%      26738 ±  3%  softirqs.CPU99.SCHED



***************************************************************************************************
lkp-cfl-e1: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/30%/debian-10.4-x86_64-20200603.cgz/300s/lkp-cfl-e1/shell8/unixbench/0xde

commit: 
  2f06f70292 ("locking/rwsem: Prevent potential lock starvation")
  1a728dff85 ("locking/rwsem: Enable reader optimistic lock stealing")

2f06f702925b512a 1a728dff855a318bb58bcc1259b 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      3653            -2.9%       3547        proc-vmstat.thp_fault_alloc
     32.95 ±  7%     +11.4%      36.70 ±  2%  boot-time.boot
    476.36 ±  8%     +12.5%     536.00 ±  2%  boot-time.idle
    187408 ± 13%     -31.5%     128458 ± 11%  cpuidle.C10.time
    548.50 ±  5%     -10.7%     489.67 ± 10%  cpuidle.C8.usage
      2016 ±  6%     -11.1%       1792 ± 12%  slabinfo.skbuff_head_cache.active_objs
      2016 ±  6%      -9.0%       1834 ± 11%  slabinfo.skbuff_head_cache.num_objs
  1.03e+08            -0.9%  1.021e+08        perf-stat.i.cache-misses
      5374            -0.8%       5333        perf-stat.i.instructions-per-iTLB-miss
      4.19            -0.0        4.15        perf-stat.overall.cache-miss-rate%
 1.014e+08            -0.9%  1.005e+08        perf-stat.ps.cache-misses
    169.00 ± 52%    +665.9%       1294 ± 71%  interrupts.132:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
    430.00 ± 68%     -72.1%     120.00 ± 56%  interrupts.134:IR-PCI-MSI.2097155-edge.eth1-TxRx-2
    178.50 ± 36%   +1729.3%       3265 ±107%  interrupts.135:IR-PCI-MSI.2097156-edge.eth1-TxRx-3
    169.00 ± 52%    +665.9%       1294 ± 71%  interrupts.CPU1.132:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
    564.50 ±  6%      -5.0%     536.33 ±  4%  interrupts.CPU15.TLB:TLB_shootdowns
    430.00 ± 68%     -72.1%     120.00 ± 56%  interrupts.CPU3.134:IR-PCI-MSI.2097155-edge.eth1-TxRx-2
      2140 ± 14%     -14.9%       1822 ±  3%  interrupts.CPU3.CAL:Function_call_interrupts
    178.50 ± 36%   +1729.3%       3265 ±107%  interrupts.CPU4.135:IR-PCI-MSI.2097156-edge.eth1-TxRx-3
      7168           -14.6%       6122 ± 11%  sched_debug.cfs_rq:/.min_vruntime.stddev
     63.98 ± 50%     -52.3%      30.49 ±  4%  sched_debug.cfs_rq:/.removed.load_avg.avg
    161.83 ± 23%     -27.0%     118.09 ±  4%  sched_debug.cfs_rq:/.removed.load_avg.stddev
    414.50 ± 12%     -35.6%     266.83 ± 31%  sched_debug.cfs_rq:/.runnable_avg.min
    245.54 ± 12%     +54.6%     379.52 ±  7%  sched_debug.cfs_rq:/.runnable_avg.stddev
     20706 ±  9%     -56.0%       9115 ±103%  sched_debug.cfs_rq:/.spread0.max
      7168           -14.6%       6122 ± 11%  sched_debug.cfs_rq:/.spread0.stddev
    395.25 ± 17%     -35.6%     254.50 ± 35%  sched_debug.cfs_rq:/.util_avg.min
    241.48 ± 15%     +53.8%     371.47 ±  9%  sched_debug.cfs_rq:/.util_avg.stddev
    668659 ±  3%     -18.7%     543490 ± 14%  sched_debug.cpu.avg_idle.max
    153919 ±  3%     -17.0%     127802 ± 16%  sched_debug.cpu.avg_idle.stddev
      0.61 ± 26%     -30.0%       0.42 ±  7%  sched_debug.cpu.clock.stddev
      0.00 ±  7%      -6.1%       0.00 ±  7%  sched_debug.cpu.next_balance.stddev
      1.14 ±  4%     +26.9%       1.45 ±  5%  sched_debug.cpu.nr_running.avg
      7625 ±  9%     -23.0%       5873 ±  4%  sched_debug.cpu.nr_switches.stddev
      0.01 ± 71%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      3.14 ± 99%     -99.1%       0.03 ± 17%  perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.10 ± 98%     -98.6%       0.00 ±141%  perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
      0.03 ± 88%    -100.0%       0.00        perf-sched.sch_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.02 ±100%    +259.1%       0.06 ± 55%  perf-sched.sch_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      4.78 ± 98%     -98.8%       0.06 ± 20%  perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      0.01 ± 60%  +51990.7%       6.51 ±108%  perf-sched.sch_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork
      1.20 ± 99%     -99.9%       0.00 ±141%  perf-sched.sch_delay.max.ms.worker_thread.kthread.ret_from_fork
    162.50 ± 51%     -53.4%      75.67 ± 11%  perf-sched.total_wait_and_delay.count.ms
      0.65 ± 99%    +389.4%       3.19 ± 38%  perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.65 ± 99%    +393.1%       3.22 ± 36%  perf-sched.wait_and_delay.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.27 ± 25%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.01 ± 50%    -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.01 ±100%  +21092.3%       2.76 ± 54%  perf-sched.wait_and_delay.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.16 ± 95%     -42.6%       0.09 ±136%  perf-sched.wait_and_delay.avg.ms.preempt_schedule_common._cond_resched.stop_one_cpu.__set_cpus_allowed_ptr.sched_setaffinity
      0.12 ± 99%     -98.9%       0.00 ±141%  perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork
      1.50 ± 33%    +211.1%       4.67 ± 20%  perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      1.50 ± 33%    +166.7%       4.00 ± 20%  perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read
      6.00 ± 66%    -100.0%       0.00        perf-sched.wait_and_delay.count.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      3.50 ± 42%    -100.0%       0.00        perf-sched.wait_and_delay.count.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      1.50 ±100%    +322.2%       6.33 ± 41%  perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
      7.50 ± 73%     -91.1%       0.67 ± 70%  perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork
      1.28 ± 99%    +609.9%       9.10 ± 37%  perf-sched.wait_and_delay.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      1.27 ± 99%    +520.4%       7.91 ± 31%  perf-sched.wait_and_delay.max.ms.do_syslog.part.0.kmsg_read.vfs_read
      0.75 ± 10%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.03 ±  7%    -100.0%       0.00        perf-sched.wait_and_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.31 ±100%   +2111.0%       6.83 ± 58%  perf-sched.wait_and_delay.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      3.79 ± 98%     -61.7%       1.45 ±138%  perf-sched.wait_and_delay.max.ms.preempt_schedule_common._cond_resched.stop_one_cpu.__set_cpus_allowed_ptr.sched_setaffinity
      1.20 ± 99%     -99.9%       0.00 ±141%  perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork
      0.62 ±100%    +407.7%       3.15 ± 38%  perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.26 ± 28%    -100.0%       0.00        perf-sched.wait_time.avg.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.01 ± 50%    -100.0%       0.00        perf-sched.wait_time.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.01 ±100%  +20866.7%       2.73 ± 54%  perf-sched.wait_time.avg.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      0.16 ± 95%     -42.6%       0.09 ±136%  perf-sched.wait_time.avg.ms.preempt_schedule_common._cond_resched.stop_one_cpu.__set_cpus_allowed_ptr.sched_setaffinity
      0.39 ± 46%    +473.1%       2.25 ± 35%  perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      1.24 ±100%    +630.4%       9.07 ± 37%  perf-sched.wait_time.max.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      0.74 ±  9%    -100.0%       0.00        perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      0.03 ±  7%    -100.0%       0.00        perf-sched.wait_time.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.31 ±100%   +2114.2%       6.79 ± 58%  perf-sched.wait_time.max.ms.pipe_read.new_sync_read.vfs_read.ksys_read
      3.79 ± 98%     -61.7%       1.45 ±138%  perf-sched.wait_time.max.ms.preempt_schedule_common._cond_resched.stop_one_cpu.__set_cpus_allowed_ptr.sched_setaffinity
      1.52 ± 58%    +384.0%       7.38 ± 42%  perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
      6.67 ±100%      -6.7        0.00        perf-profile.calltrace.cycles-pp.__libc_start_main
      6.67 ±100%      -6.7        0.00        perf-profile.calltrace.cycles-pp.main.__libc_start_main
      6.67 ±100%      -6.7        0.00        perf-profile.calltrace.cycles-pp.run_builtin.main.__libc_start_main
      6.67 ±100%      -6.7        0.00        perf-profile.calltrace.cycles-pp.cmd_sched.run_builtin.main.__libc_start_main
      6.67 ±100%      -6.7        0.00        perf-profile.calltrace.cycles-pp.cmd_record.cmd_sched.run_builtin.main.__libc_start_main
      6.67 ±100%      -6.7        0.00        perf-profile.calltrace.cycles-pp.__evlist__enable.cmd_record.cmd_sched.run_builtin.main
      3.73 ± 79%      -3.7        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.ioctl.__evlist__enable.cmd_record.cmd_sched
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__close
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.sched_setaffinity.__evlist__enable.cmd_record.cmd_sched
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.ioctl.__evlist__enable.cmd_record
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.sched_setaffinity.__evlist__enable.cmd_record
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.event_function_call.perf_event_for_each_child._perf_ioctl.perf_ioctl.__x64_sys_ioctl
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.smp_call_function_single.event_function_call.perf_event_for_each_child._perf_ioctl.perf_ioctl
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__close
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__close
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__close
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.__get_free_pages.pgd_alloc.mm_init.alloc_bprm.do_execveat_common
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.__get_free_pages.pgd_alloc.mm_init.alloc_bprm
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.__mod_zone_page_state.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.__get_free_pages
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.d_add.simple_lookup.path_openat.do_filp_open.do_sys_openat2
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.kfree.pipe_release.__fput.task_work_run.exit_to_user_mode_prepare
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.ioctl.__evlist__enable
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe.sched_setaffinity.__evlist__enable
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.__set_cpus_allowed_ptr.sched_setaffinity.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp._perf_ioctl.perf_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.perf_trace_sched_stat_runtime.update_curr.dequeue_entity.dequeue_task_fair.do_set_cpus_allowed
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.perf_trace_add.event_sched_in.merge_sched_in.visit_groups_merge.ctx_sched_in
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.alloc_bprm.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.dequeue_task_fair.do_set_cpus_allowed.__set_cpus_allowed_ptr.sched_setaffinity.__x64_sys_sched_setaffinity
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.do_set_cpus_allowed.__set_cpus_allowed_ptr.sched_setaffinity
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.ctx_resched.event_function.remote_function.generic_exec_single.smp_call_function_single
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.ctx_sched_in.ctx_resched.event_function.remote_function.generic_exec_single
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.do_execveat_common.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.do_set_cpus_allowed.__set_cpus_allowed_ptr.sched_setaffinity.__x64_sys_sched_setaffinity.do_syscall_64
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.generic_exec_single.smp_call_function_single.event_function_call.perf_event_for_each_child._perf_ioctl
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.event_function.remote_function.generic_exec_single.smp_call_function_single.event_function_call
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.merge_sched_in.visit_groups_merge.ctx_sched_in.ctx_resched.event_function
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.event_sched_in.merge_sched_in.visit_groups_merge.ctx_sched_in.ctx_resched
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.mm_init.alloc_bprm.do_execveat_common.__x64_sys_execve.do_syscall_64
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.__get_free_pages.pgd_alloc.mm_init
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.perf_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.ioctl
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.pgd_alloc.mm_init.alloc_bprm.do_execveat_common.__x64_sys_execve
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.pipe_release.__fput.task_work_run.exit_to_user_mode_prepare.syscall_exit_to_user_mode
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.__get_free_pages.pgd_alloc
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.ioctl.__evlist__enable.cmd_record.cmd_sched.run_builtin
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.sched_setaffinity.__evlist__enable.cmd_record.cmd_sched.run_builtin
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.sched_setaffinity.__x64_sys_sched_setaffinity.do_syscall_64.entry_SYSCALL_64_after_hwframe.sched_setaffinity
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.perf_event_for_each_child._perf_ioctl.perf_ioctl.__x64_sys_ioctl.do_syscall_64
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.simple_lookup.path_openat.do_filp_open.do_sys_openat2.do_sys_open
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.update_curr.dequeue_entity.dequeue_task_fair.do_set_cpus_allowed.__set_cpus_allowed_ptr
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.remote_function.generic_exec_single.smp_call_function_single.event_function_call.perf_event_for_each_child
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.visit_groups_merge.ctx_sched_in.ctx_resched.event_function.remote_function
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.__close
      3.33 ±100%      -3.3        0.00        perf-profile.calltrace.cycles-pp.open64
     43.18 ± 38%     -18.9       24.31 ± 97%  perf-profile.children.cycles-pp.do_syscall_64
      6.67 ±100%      -6.7        0.00        perf-profile.children.cycles-pp.sched_setaffinity
      6.67 ±100%      -6.7        0.00        perf-profile.children.cycles-pp.__libc_start_main
      6.67 ±100%      -6.7        0.00        perf-profile.children.cycles-pp.main
      6.67 ±100%      -6.7        0.00        perf-profile.children.cycles-pp.run_builtin
      6.67 ±100%      -6.7        0.00        perf-profile.children.cycles-pp.cmd_sched
      6.67 ±100%      -6.7        0.00        perf-profile.children.cycles-pp.cmd_record
      6.67 ±100%      -6.7        0.00        perf-profile.children.cycles-pp.__evlist__enable
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.__get_free_pages
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.__alloc_pages_nodemask
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.__mod_zone_page_state
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.d_add
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.kfree
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.poll_idle
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.__x64_sys_ioctl
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.__x64_sys_sched_setaffinity
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.__set_cpus_allowed_ptr
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp._perf_ioctl
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.perf_trace_sched_stat_runtime
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.perf_trace_add
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.__x64_sys_execve
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.alloc_bprm
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.dequeue_task_fair
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.dequeue_entity
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.ctx_resched
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.ctx_sched_in
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.do_execveat_common
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.do_filp_open
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.do_set_cpus_allowed
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.do_sys_open
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.do_sys_openat2
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.generic_exec_single
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.event_function
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.merge_sched_in
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.event_sched_in
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.mm_init
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.get_page_from_freelist
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.path_openat
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.perf_ioctl
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.pgd_alloc
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.pipe_release
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.rmqueue
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.ioctl
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.perf_event_for_each_child
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.simple_lookup
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.update_curr
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.remote_function
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.visit_groups_merge
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.__close
      3.33 ±100%      -3.3        0.00        perf-profile.children.cycles-pp.open64
     12.40 ±100%     -11.3        1.07 ±141%  perf-profile.self.cycles-pp.vprintk_emit
      3.33 ±100%      -3.3        0.00        perf-profile.self.cycles-pp.__mod_zone_page_state
      3.33 ±100%      -3.3        0.00        perf-profile.self.cycles-pp.d_add
      3.33 ±100%      -3.3        0.00        perf-profile.self.cycles-pp.kfree
      3.33 ±100%      -3.3        0.00        perf-profile.self.cycles-pp.poll_idle
      3.33 ±100%      -3.3        0.00        perf-profile.self.cycles-pp.perf_trace_sched_stat_runtime
      3.33 ±100%      -3.3        0.00        perf-profile.self.cycles-pp.perf_trace_add



***************************************************************************************************
lkp-csl-2ap2: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory


***************************************************************************************************
lkp-csl-2ap4: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/8T/lkp-csl-2ap4/anon-w-seq-mt/vm-scalability/0x5003003

commit: 
  2f06f70292 ("locking/rwsem: Prevent potential lock starvation")
  1a728dff85 ("locking/rwsem: Enable reader optimistic lock stealing")

2f06f702925b512a 1a728dff855a318bb58bcc1259b 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
   1718016            +6.1%    1823058        vm-scalability.time.voluntary_context_switches
     19546            +3.4%      20219        vmstat.system.cs
     31.86 ±  3%      +5.9%      33.74        boot-time.boot
      5245 ±  3%      +6.9%       5604        boot-time.idle
 3.987e+09 ± 97%    +133.5%  9.309e+09 ± 43%  cpuidle.C1E.time
  14584441 ± 52%     +64.7%   24018033 ± 22%  cpuidle.C1E.usage
   5154895 ±  4%     +35.6%    6991922 ± 10%  numa-numastat.node1.local_node
   5226081 ±  4%     +35.1%    7060230 ± 10%  numa-numastat.node1.numa_hit
     18155            +1.7%      18458        proc-vmstat.nr_page_table_pages
     95137            -2.1%      93129        proc-vmstat.nr_shmem
     18323 ± 14%     +53.7%      28153 ± 10%  proc-vmstat.numa_hint_faults_local
      2437 ±  5%     +12.5%       2743        slabinfo.PING.active_objs
      2437 ±  5%     +12.5%       2743        slabinfo.PING.num_objs
     14399 ±  5%      -7.9%      13261 ±  3%  slabinfo.skbuff_head_cache.active_objs
     14399 ±  5%      -7.3%      13341 ±  3%  slabinfo.skbuff_head_cache.num_objs
     20437            +1.7%      20786        perf-stat.i.context-switches
    799.34            +3.7%     829.28        perf-stat.i.cpu-migrations
   1122429 ±  3%      +5.5%    1184264 ±  2%  perf-stat.i.iTLB-loads
      1.23 ±  8%      -0.3        0.98 ± 16%  perf-stat.i.node-store-miss-rate%
   1377516           -15.9%    1158540 ± 17%  perf-stat.i.node-store-misses
     19456            +3.1%      20060        perf-stat.ps.context-switches
    758.99            +4.7%     794.90        perf-stat.ps.cpu-migrations
   1068686 ±  3%      +6.3%    1135784 ±  2%  perf-stat.ps.iTLB-loads
     18523 ±151%    +266.6%      67912 ± 18%  sched_debug.cfs_rq:/.MIN_vruntime.avg
   2017181 ±136%    +527.5%   12657399 ± 15%  sched_debug.cfs_rq:/.MIN_vruntime.max
    189080 ±144%    +388.8%     924185 ± 16%  sched_debug.cfs_rq:/.MIN_vruntime.stddev
      0.73 ± 39%     +63.6%       1.20 ± 32%  sched_debug.cfs_rq:/.load_avg.min
     62.25 ±  5%     +67.9%     104.50 ± 64%  sched_debug.cfs_rq:/.load_avg.stddev
     18523 ±151%    +266.6%      67913 ± 18%  sched_debug.cfs_rq:/.max_vruntime.avg
   2017181 ±136%    +527.5%   12657437 ± 15%  sched_debug.cfs_rq:/.max_vruntime.max
    189080 ±144%    +388.8%     924188 ± 16%  sched_debug.cfs_rq:/.max_vruntime.stddev
      1.14 ± 14%     +45.3%       1.66 ±  6%  sched_debug.cfs_rq:/.nr_running.max
    172.79 ± 22%     +68.7%     291.54 ± 33%  sched_debug.cfs_rq:/.runnable_avg.min
    825.49            -7.8%     761.19 ±  3%  sched_debug.cpu.clock_task.stddev
     23249            +6.3%      24708 ±  3%  softirqs.CPU128.RCU
     15177 ±  3%      +7.1%      16262 ±  3%  softirqs.CPU173.SCHED
     15397 ±  5%      +6.0%      16328 ±  5%  softirqs.CPU175.SCHED
     15437 ±  3%      +5.8%      16333 ±  3%  softirqs.CPU177.SCHED
     15302 ±  4%      +6.8%      16336 ±  4%  softirqs.CPU178.SCHED
     15410 ±  5%      +6.4%      16402 ±  6%  softirqs.CPU181.SCHED
     15553 ±  5%      +5.4%      16396 ±  4%  softirqs.CPU183.SCHED
     15229 ±  5%      +7.5%      16368 ±  4%  softirqs.CPU185.SCHED
     14115 ±  8%     +11.0%      15674 ±  4%  softirqs.CPU4.SCHED
     14951 ±  4%      +6.8%      15973 ±  4%  softirqs.CPU75.SCHED
     15009 ±  4%      +7.7%      16172 ±  4%  softirqs.CPU77.SCHED
     15260 ±  2%      +5.6%      16116 ±  3%  softirqs.CPU93.SCHED
      4071 ± 78%    +169.5%      10970 ± 46%  numa-meminfo.node1.Active
      3808 ± 86%    +181.1%      10708 ± 50%  numa-meminfo.node1.Active(anon)
   7778631           +17.2%    9117154 ±  3%  numa-meminfo.node1.AnonHugePages
  10272324           +14.5%   11761152 ±  2%  numa-meminfo.node1.AnonPages
  10294392           +14.7%   11805987 ±  3%  numa-meminfo.node1.Inactive
  10294392           +14.7%   11805987 ±  3%  numa-meminfo.node1.Inactive(anon)
      7119 ± 13%     +24.5%       8864 ±  3%  numa-meminfo.node1.KernelStack
  11180806           +14.2%   12763590 ±  2%  numa-meminfo.node1.MemUsed
     17000 ±  3%     +14.0%      19374 ±  3%  numa-meminfo.node1.PageTables
     68358 ±  8%     +25.6%      85872 ±  6%  numa-meminfo.node1.SUnreclaim
     96990 ± 12%     +26.5%     122735 ±  6%  numa-meminfo.node1.Slab
    181948 ± 16%     -30.5%     126366 ± 18%  numa-meminfo.node3.Active
    181698 ± 16%     -30.7%     125842 ± 17%  numa-meminfo.node3.Active(anon)
    505257 ±  8%     -20.2%     403068 ± 10%  numa-meminfo.node3.FilePages
    252415 ± 11%     -36.7%     159732 ± 26%  numa-meminfo.node3.Shmem
    952.00 ± 87%    +180.8%       2673 ± 50%  numa-vmstat.node1.nr_active_anon
   2526626           +15.5%    2919165 ±  3%  numa-vmstat.node1.nr_anon_pages
      3772 ±  2%     +17.5%       4431 ±  4%  numa-vmstat.node1.nr_anon_transparent_hugepages
   2532548           +15.7%    2930041 ±  3%  numa-vmstat.node1.nr_inactive_anon
      7116 ± 13%     +25.1%       8902 ±  3%  numa-vmstat.node1.nr_kernel_stack
      4223 ±  3%     +14.8%       4847 ±  3%  numa-vmstat.node1.nr_page_table_pages
     17089 ±  8%     +25.7%      21476 ±  6%  numa-vmstat.node1.nr_slab_unreclaimable
    952.00 ± 87%    +180.8%       2673 ± 50%  numa-vmstat.node1.nr_zone_active_anon
   2530238           +15.7%    2927655 ±  3%  numa-vmstat.node1.nr_zone_inactive_anon
   3070767 ±  5%     +27.6%    3918378 ± 14%  numa-vmstat.node1.numa_hit
   2931345 ±  6%     +29.6%    3799207 ± 14%  numa-vmstat.node1.numa_local
     45398 ± 16%     -30.9%      31375 ± 17%  numa-vmstat.node3.nr_active_anon
    126340 ±  8%     -20.3%     100683 ± 10%  numa-vmstat.node3.nr_file_pages
     63130 ± 11%     -36.9%      39849 ± 25%  numa-vmstat.node3.nr_shmem
     45398 ± 16%     -30.9%      31375 ± 17%  numa-vmstat.node3.nr_zone_active_anon
      0.81 ±152%     -92.9%       0.06 ± 11%  perf-sched.sch_delay.avg.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.01 ±122%    +570.0%       0.03 ± 56%  perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function.[unknown]
      0.68 ± 57%     -78.9%       0.14 ±127%  perf-sched.sch_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
      3.08 ± 67%     -79.4%       0.63 ±  5%  perf-sched.sch_delay.avg.ms.preempt_schedule_common._cond_resched.__alloc_pages_nodemask.pte_alloc_one.do_huge_pmd_anonymous_page
      0.01 ±128%  +34121.7%       1.97 ±171%  perf-sched.sch_delay.avg.ms.preempt_schedule_common._cond_resched.down_read.do_user_addr_fault.exc_page_fault
      0.42 ±  6%     +34.4%       0.57 ±  9%  perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork
    262.11 ±163%     -99.8%       0.54 ± 99%  perf-sched.sch_delay.max.ms.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
      0.04 ±139%    +223.3%       0.12 ± 40%  perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function.[unknown]
    255.33 ± 65%     -74.1%      66.07 ±146%  perf-sched.sch_delay.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_reschedule_ipi.[unknown]
    299.91 ± 65%     -91.5%      25.39 ± 27%  perf-sched.sch_delay.max.ms.preempt_schedule_common._cond_resched.__alloc_pages_nodemask.pte_alloc_one.do_huge_pmd_anonymous_page
    379.11 ± 48%     -58.7%     156.40 ±116%  perf-sched.sch_delay.max.ms.preempt_schedule_common._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault
      0.01 ±128%  +34221.7%       1.97 ±170%  perf-sched.sch_delay.max.ms.preempt_schedule_common._cond_resched.down_read.do_user_addr_fault.exc_page_fault
      0.13 ± 19%     -62.9%       0.05 ±100%  perf-sched.sch_delay.max.ms.preempt_schedule_common._cond_resched.wait_for_completion.stop_two_cpus.migrate_swap
      0.14 ± 28%    +114.2%       0.29 ± 59%  perf-sched.sch_delay.max.ms.schedule_hrtimeout_range_clock.poll_schedule_timeout.constprop.0.do_sys_poll
    252.28 ± 99%     -99.9%       0.18 ± 15%  perf-sched.sch_delay.max.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      7.28 ±  6%     +12.4%       8.18 ±  4%  perf-sched.wait_and_delay.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
     13.39 ± 12%     -29.1%       9.49 ± 21%  perf-sched.wait_and_delay.avg.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown]
      1.28 ± 10%     -22.6%       0.99 ±  7%  perf-sched.wait_and_delay.avg.ms.pipe_write.new_sync_write.vfs_write.ksys_write
     12.67 ± 21%     -36.6%       8.03 ± 24%  perf-sched.wait_and_delay.avg.ms.preempt_schedule_common._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault
    210.63 ± 33%     -33.2%     140.75 ± 26%  perf-sched.wait_and_delay.avg.ms.preempt_schedule_common._cond_resched.ww_mutex_lock.drm_gem_vram_vunmap.drm_client_buffer_vunmap
      1455 ±  5%     -15.1%       1236 ±  4%  perf-sched.wait_and_delay.count.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      1484 ±  5%     -19.4%       1197 ±  6%  perf-sched.wait_and_delay.count.do_syslog.part.0.kmsg_read.vfs_read
      8715 ± 10%     +17.8%      10262 ±  7%  perf-sched.wait_and_delay.count.pipe_write.new_sync_write.vfs_write.ksys_write
      1474 ±  5%     -18.8%       1198 ±  7%  perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
    596.58 ± 69%     -94.6%      32.22 ± 18%  perf-sched.wait_and_delay.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
    888.87 ± 80%     -85.7%     127.37 ± 89%  perf-sched.wait_and_delay.max.ms.pipe_write.new_sync_write.vfs_write.ksys_write
    680.18 ± 37%     -65.2%     236.40 ± 60%  perf-sched.wait_and_delay.max.ms.preempt_schedule_common._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault
    843.41 ± 16%     -18.3%     688.68 ± 20%  perf-sched.wait_and_delay.max.ms.preempt_schedule_common._cond_resched.ww_mutex_lock.drm_gem_vram_vunmap.drm_client_buffer_vunmap
      6.97 ±  5%     +12.1%       7.81 ±  4%  perf-sched.wait_time.avg.ms.devkmsg_read.vfs_read.ksys_read.do_syscall_64
      7.24 ±  6%     +12.2%       8.12 ±  4%  perf-sched.wait_time.avg.ms.do_syslog.part.0.kmsg_read.vfs_read
      1.23 ± 11%     -21.6%       0.97 ±  7%  perf-sched.wait_time.avg.ms.pipe_write.new_sync_write.vfs_write.ksys_write
     10.24 ± 23%     -26.6%       7.52 ± 25%  perf-sched.wait_time.avg.ms.preempt_schedule_common._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault
    204.66 ± 34%     -34.7%     133.64 ± 27%  perf-sched.wait_time.avg.ms.preempt_schedule_common._cond_resched.ww_mutex_lock.drm_gem_vram_vunmap.drm_client_buffer_vunmap
      1.18 ±  7%     +69.3%       2.00 ±  5%  perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.do_exit.__x64_sys_exit.do_syscall_64
      3.79 ±  6%     -13.9%       3.26 ±  6%  perf-sched.wait_time.avg.ms.rwsem_down_read_slowpath.do_madvise.part.0.__x64_sys_madvise
      4.77 ± 16%     -42.7%       2.73 ± 24%  perf-sched.wait_time.avg.ms.rwsem_down_write_slowpath.down_write_killable.do_mprotect_pkey.__x64_sys_mprotect
     13.67 ± 49%    +297.7%      54.37 ±108%  perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.irqentry_exit_to_user_mode.asm_sysvec_call_function_single.[unknown]
    596.46 ± 69%     -95.1%      29.15 ± 21%  perf-sched.wait_time.max.ms.exit_to_user_mode_prepare.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.[unknown]
    675.39 ± 57%     -81.1%     127.32 ± 89%  perf-sched.wait_time.max.ms.pipe_write.new_sync_write.vfs_write.ksys_write
    387.79 ± 28%     -53.4%     180.85 ± 58%  perf-sched.wait_time.max.ms.preempt_schedule_common._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault
    835.14 ± 16%     -18.2%     682.90 ± 20%  perf-sched.wait_time.max.ms.preempt_schedule_common._cond_resched.ww_mutex_lock.drm_gem_vram_vunmap.drm_client_buffer_vunmap
      8.17 ± 18%     -29.0%       5.81 ± 14%  perf-sched.wait_time.max.ms.rwsem_down_write_slowpath.down_write_killable.do_mprotect_pkey.__x64_sys_mprotect
    348.61 ± 49%     -76.4%      82.42 ± 98%  perf-sched.wait_time.max.ms.schedule_hrtimeout_range_clock.ep_poll.do_epoll_wait.__x64_sys_epoll_wait
    470.90 ± 57%     -73.1%     126.74 ±101%  perf-sched.wait_time.max.ms.schedule_timeout.rcu_gp_kthread.kthread.ret_from_fork
   5450629            -2.4%    5318631        interrupts.CAL:Function_call_interrupts
     27446 ±  4%     -13.9%      23636 ±  8%  interrupts.CPU0.CAL:Function_call_interrupts
     25330 ±  5%     -15.8%      21316 ±  9%  interrupts.CPU0.TLB:TLB_shootdowns
     28353 ±  7%     -14.8%      24159 ±  7%  interrupts.CPU1.TLB:TLB_shootdowns
     29726 ±  8%     -10.9%      26476 ±  4%  interrupts.CPU10.CAL:Function_call_interrupts
    268.00 ±  6%    +144.8%     656.00 ± 56%  interrupts.CPU10.RES:Rescheduling_interrupts
     28812 ±  9%     -17.2%      23847 ±  6%  interrupts.CPU10.TLB:TLB_shootdowns
     29180 ±  8%      -9.4%      26443 ±  5%  interrupts.CPU100.CAL:Function_call_interrupts
     28076 ±  9%     -12.7%      24522 ±  6%  interrupts.CPU100.TLB:TLB_shootdowns
     29573 ±  8%     -10.1%      26577 ±  5%  interrupts.CPU102.CAL:Function_call_interrupts
     28335 ±  9%     -14.0%      24378 ±  7%  interrupts.CPU102.TLB:TLB_shootdowns
     29722 ±  8%     -10.6%      26580 ±  5%  interrupts.CPU103.CAL:Function_call_interrupts
     28765 ± 10%     -14.6%      24578 ±  5%  interrupts.CPU103.TLB:TLB_shootdowns
     30577 ±  7%     -13.8%      26360 ±  5%  interrupts.CPU104.CAL:Function_call_interrupts
     28903 ±  9%     -15.7%      24362 ±  6%  interrupts.CPU104.TLB:TLB_shootdowns
    110.50 ±  6%    +313.3%     456.75 ± 69%  interrupts.CPU105.RES:Rescheduling_interrupts
     28119 ±  7%     -14.0%      24182 ±  4%  interrupts.CPU105.TLB:TLB_shootdowns
     30100 ±  8%     -12.4%      26360 ±  3%  interrupts.CPU106.CAL:Function_call_interrupts
    121.75 ± 16%    +210.5%     378.00 ± 75%  interrupts.CPU106.RES:Rescheduling_interrupts
     29156 ±  9%     -16.3%      24404 ±  3%  interrupts.CPU106.TLB:TLB_shootdowns
     30005 ±  8%     -12.8%      26175 ±  4%  interrupts.CPU107.CAL:Function_call_interrupts
      4542 ± 27%     +39.4%       6329 ±  2%  interrupts.CPU107.NMI:Non-maskable_interrupts
      4542 ± 27%     +39.4%       6329 ±  2%  interrupts.CPU107.PMI:Performance_monitoring_interrupts
    110.75 ±  5%    +367.0%     517.25 ± 80%  interrupts.CPU107.RES:Rescheduling_interrupts
     28976 ±  9%     -17.4%      23937 ±  6%  interrupts.CPU107.TLB:TLB_shootdowns
     30959 ±  7%     -12.8%      26995 ±  5%  interrupts.CPU108.CAL:Function_call_interrupts
     29758 ±  9%     -16.2%      24949 ±  6%  interrupts.CPU108.TLB:TLB_shootdowns
     30866 ±  9%     -14.7%      26342 ±  4%  interrupts.CPU109.CAL:Function_call_interrupts
     29340 ± 11%     -17.3%      24273 ±  5%  interrupts.CPU109.TLB:TLB_shootdowns
     29831 ±  7%     -15.5%      25217 ±  6%  interrupts.CPU11.CAL:Function_call_interrupts
     28834 ±  9%     -19.5%      23212 ±  7%  interrupts.CPU11.TLB:TLB_shootdowns
     30624 ±  9%     -12.0%      26950 ±  3%  interrupts.CPU110.CAL:Function_call_interrupts
     29560 ± 10%     -17.8%      24311 ±  5%  interrupts.CPU110.TLB:TLB_shootdowns
     30699 ±  9%     -13.5%      26542 ±  3%  interrupts.CPU111.CAL:Function_call_interrupts
     29618 ±  9%     -17.8%      24343 ±  4%  interrupts.CPU111.TLB:TLB_shootdowns
     30487 ±  8%     -15.1%      25879 ±  5%  interrupts.CPU112.CAL:Function_call_interrupts
     29308 ±  8%     -18.8%      23793 ±  6%  interrupts.CPU112.TLB:TLB_shootdowns
     30696 ±  7%     -15.2%      26019 ±  3%  interrupts.CPU113.CAL:Function_call_interrupts
     29452 ±  9%     -18.8%      23927 ±  4%  interrupts.CPU113.TLB:TLB_shootdowns
     30551 ± 10%     -13.2%      26530 ±  4%  interrupts.CPU114.CAL:Function_call_interrupts
     29649 ± 11%     -17.3%      24522 ±  4%  interrupts.CPU114.TLB:TLB_shootdowns
     30784 ±  7%     -12.5%      26947 ±  7%  interrupts.CPU115.CAL:Function_call_interrupts
    136.25 ± 26%    +324.6%     578.50 ± 44%  interrupts.CPU115.RES:Rescheduling_interrupts
     29742 ±  8%     -18.2%      24316 ±  4%  interrupts.CPU115.TLB:TLB_shootdowns
     30699 ±  9%     -13.6%      26521 ±  3%  interrupts.CPU116.CAL:Function_call_interrupts
     29724 ± 10%     -18.3%      24295 ±  4%  interrupts.CPU116.TLB:TLB_shootdowns
     30528 ±  9%     -13.5%      26404 ±  3%  interrupts.CPU117.CAL:Function_call_interrupts
     29418 ± 11%     -17.7%      24219 ±  4%  interrupts.CPU117.TLB:TLB_shootdowns
     31792 ±  8%     -16.0%      26712 ±  6%  interrupts.CPU118.CAL:Function_call_interrupts
     29972 ± 11%     -18.9%      24320 ±  5%  interrupts.CPU118.TLB:TLB_shootdowns
     30507 ± 10%     -14.6%      26039 ±  4%  interrupts.CPU119.CAL:Function_call_interrupts
     29477 ± 11%     -19.4%      23760 ±  5%  interrupts.CPU119.TLB:TLB_shootdowns
     30443 ±  7%     -12.3%      26697 ±  4%  interrupts.CPU12.CAL:Function_call_interrupts
     29451 ±  7%     -17.6%      24271 ±  6%  interrupts.CPU12.TLB:TLB_shootdowns
    634.50 ± 47%     -73.9%     165.75 ± 16%  interrupts.CPU120.RES:Rescheduling_interrupts
     26140 ±  5%     +13.6%      29693 ±  4%  interrupts.CPU121.CAL:Function_call_interrupts
     24373 ±  5%     +14.3%      27852 ±  4%  interrupts.CPU121.TLB:TLB_shootdowns
     26032 ±  6%     +14.2%      29718 ±  3%  interrupts.CPU122.CAL:Function_call_interrupts
    597.50 ± 43%     -76.4%     141.00 ±  9%  interrupts.CPU122.RES:Rescheduling_interrupts
     24256 ±  5%     +14.8%      27837 ±  3%  interrupts.CPU122.TLB:TLB_shootdowns
     26581 ±  4%     +12.4%      29888 ±  4%  interrupts.CPU123.CAL:Function_call_interrupts
    520.25 ± 49%     -72.9%     140.75 ± 16%  interrupts.CPU123.RES:Rescheduling_interrupts
     24638 ±  5%     +14.0%      28097 ±  3%  interrupts.CPU123.TLB:TLB_shootdowns
     25875 ±  5%     +17.5%      30396 ±  2%  interrupts.CPU124.CAL:Function_call_interrupts
     24386 ±  5%     +14.3%      27871 ±  2%  interrupts.CPU124.TLB:TLB_shootdowns
     25848 ±  6%     +16.7%      30158 ±  3%  interrupts.CPU125.CAL:Function_call_interrupts
      5928 ±  4%     +11.4%       6602 ±  3%  interrupts.CPU125.NMI:Non-maskable_interrupts
      5928 ±  4%     +11.4%       6602 ±  3%  interrupts.CPU125.PMI:Performance_monitoring_interrupts
     24424 ±  6%     +16.0%      28322 ±  4%  interrupts.CPU125.TLB:TLB_shootdowns
     25678 ±  3%     +14.7%      29459 ±  3%  interrupts.CPU126.CAL:Function_call_interrupts
     24232 ±  4%     +14.1%      27643 ±  3%  interrupts.CPU126.TLB:TLB_shootdowns
     26170 ±  2%     +12.8%      29522        interrupts.CPU127.CAL:Function_call_interrupts
    482.25 ± 50%     -71.7%     136.25 ± 23%  interrupts.CPU127.RES:Rescheduling_interrupts
     24353 ±  3%     +13.9%      27727        interrupts.CPU127.TLB:TLB_shootdowns
     25722 ±  6%     +17.8%      30298 ±  3%  interrupts.CPU128.CAL:Function_call_interrupts
     24379 ±  7%     +15.9%      28251 ±  3%  interrupts.CPU128.TLB:TLB_shootdowns
     25646 ±  5%     +17.3%      30093 ±  2%  interrupts.CPU129.CAL:Function_call_interrupts
    546.25 ± 45%     -81.0%     104.00 ± 18%  interrupts.CPU129.RES:Rescheduling_interrupts
     24288 ±  6%     +17.0%      28405 ±  3%  interrupts.CPU129.TLB:TLB_shootdowns
     30286 ±  8%     -15.2%      25690 ±  4%  interrupts.CPU13.CAL:Function_call_interrupts
     29251 ± 10%     -18.9%      23719 ±  5%  interrupts.CPU13.TLB:TLB_shootdowns
     25771 ±  4%     +16.5%      30029 ±  4%  interrupts.CPU130.CAL:Function_call_interrupts
     24433 ±  4%     +14.4%      27952 ±  3%  interrupts.CPU130.TLB:TLB_shootdowns
     25852 ±  4%     +16.1%      30006 ±  4%  interrupts.CPU131.CAL:Function_call_interrupts
      5384 ± 27%     +23.3%       6638 ±  3%  interrupts.CPU131.NMI:Non-maskable_interrupts
      5384 ± 27%     +23.3%       6638 ±  3%  interrupts.CPU131.PMI:Performance_monitoring_interrupts
     24638 ±  4%     +13.5%      27958 ±  5%  interrupts.CPU131.TLB:TLB_shootdowns
     25957 ±  7%     +15.4%      29948 ±  2%  interrupts.CPU132.CAL:Function_call_interrupts
      4631 ± 32%     +42.4%       6597 ±  4%  interrupts.CPU132.NMI:Non-maskable_interrupts
      4631 ± 32%     +42.4%       6597 ±  4%  interrupts.CPU132.PMI:Performance_monitoring_interrupts
     26030 ±  5%     +12.9%      29392 ±  2%  interrupts.CPU133.CAL:Function_call_interrupts
     24616 ±  5%     +12.1%      27590 ±  2%  interrupts.CPU133.TLB:TLB_shootdowns
     26410 ±  4%     +16.1%      30667 ±  2%  interrupts.CPU134.CAL:Function_call_interrupts
     24834 ±  5%     +15.6%      28700 ±  4%  interrupts.CPU134.TLB:TLB_shootdowns
     25560 ±  4%     +17.0%      29895 ±  3%  interrupts.CPU135.CAL:Function_call_interrupts
     24262 ±  4%     +15.4%      27986 ±  3%  interrupts.CPU135.TLB:TLB_shootdowns
     26161 ±  5%     +13.8%      29783 ±  3%  interrupts.CPU136.CAL:Function_call_interrupts
     24771 ±  6%     +13.0%      27982 ±  3%  interrupts.CPU136.TLB:TLB_shootdowns
     26012 ±  6%     +14.1%      29691 ±  4%  interrupts.CPU137.CAL:Function_call_interrupts
      5269 ± 25%     +25.8%       6629 ±  3%  interrupts.CPU137.NMI:Non-maskable_interrupts
      5269 ± 25%     +25.8%       6629 ±  3%  interrupts.CPU137.PMI:Performance_monitoring_interrupts
     24071 ±  5%     +15.7%      27850 ±  4%  interrupts.CPU137.TLB:TLB_shootdowns
     25884 ±  7%     +16.1%      30045 ±  2%  interrupts.CPU138.CAL:Function_call_interrupts
    693.75 ± 30%     -76.8%     160.75 ± 20%  interrupts.CPU138.RES:Rescheduling_interrupts
     24152 ±  5%     +16.8%      28214 ±  2%  interrupts.CPU138.TLB:TLB_shootdowns
     25618 ±  6%     +20.6%      30885        interrupts.CPU139.CAL:Function_call_interrupts
     24321 ±  6%     +17.1%      28485 ±  3%  interrupts.CPU139.TLB:TLB_shootdowns
     29486 ±  8%     -11.8%      26004 ±  4%  interrupts.CPU14.CAL:Function_call_interrupts
     28670 ±  8%     -16.8%      23864 ±  5%  interrupts.CPU14.TLB:TLB_shootdowns
     25992 ±  4%     +16.3%      30222 ±  4%  interrupts.CPU140.CAL:Function_call_interrupts
     24518 ±  4%     +15.8%      28387 ±  4%  interrupts.CPU140.TLB:TLB_shootdowns
     26055 ±  5%     +16.6%      30391        interrupts.CPU141.CAL:Function_call_interrupts
     24493 ±  5%     +16.5%      28525        interrupts.CPU141.TLB:TLB_shootdowns
     26363 ±  3%     +16.8%      30795 ±  3%  interrupts.CPU142.CAL:Function_call_interrupts
     24628 ±  4%     +17.0%      28806 ±  3%  interrupts.CPU142.TLB:TLB_shootdowns
     25914 ±  5%     +17.6%      30482 ±  3%  interrupts.CPU143.CAL:Function_call_interrupts
     24338 ±  5%     +17.2%      28513 ±  3%  interrupts.CPU143.TLB:TLB_shootdowns
     28172 ±  7%     -13.7%      24302 ±  5%  interrupts.CPU144.TLB:TLB_shootdowns
     29921 ±  7%     -12.5%      26172 ±  5%  interrupts.CPU145.CAL:Function_call_interrupts
    139.50 ± 20%    +336.4%     608.75 ± 38%  interrupts.CPU145.RES:Rescheduling_interrupts
     28743 ±  7%     -17.7%      23655 ±  5%  interrupts.CPU145.TLB:TLB_shootdowns
     29570 ±  6%     -10.6%      26447 ±  3%  interrupts.CPU146.CAL:Function_call_interrupts
     28331 ±  6%     -15.2%      24030 ±  5%  interrupts.CPU146.TLB:TLB_shootdowns
     28339 ±  8%     -14.6%      24193 ±  7%  interrupts.CPU147.TLB:TLB_shootdowns
     29264 ±  8%      -9.5%      26476 ±  3%  interrupts.CPU148.CAL:Function_call_interrupts
     28244 ±  8%     -13.8%      24347 ±  3%  interrupts.CPU148.TLB:TLB_shootdowns
     28539 ± 11%     -16.9%      23726 ±  5%  interrupts.CPU15.TLB:TLB_shootdowns
     28300 ± 10%     -14.3%      24259 ±  6%  interrupts.CPU150.TLB:TLB_shootdowns
     27945 ±  8%     -13.4%      24196 ±  3%  interrupts.CPU152.TLB:TLB_shootdowns
     28619 ±  9%     -14.9%      24345 ±  5%  interrupts.CPU153.TLB:TLB_shootdowns
    114.25 ± 36%    +288.8%     444.25 ± 72%  interrupts.CPU154.RES:Rescheduling_interrupts
     28760 ±  9%     -15.6%      24269 ±  4%  interrupts.CPU156.TLB:TLB_shootdowns
     28492 ±  9%     -16.5%      23790 ±  7%  interrupts.CPU157.TLB:TLB_shootdowns
     29360 ±  9%     -10.2%      26362 ±  4%  interrupts.CPU159.CAL:Function_call_interrupts
     28503 ±  9%     -14.5%      24359 ±  5%  interrupts.CPU159.TLB:TLB_shootdowns
     28564 ±  9%     -15.0%      24274 ±  6%  interrupts.CPU160.TLB:TLB_shootdowns
    154.25 ± 17%    +164.7%     408.25 ± 61%  interrupts.CPU162.RES:Rescheduling_interrupts
     28789 ±  7%     -14.4%      24636 ±  5%  interrupts.CPU162.TLB:TLB_shootdowns
     28479 ±  9%     -13.6%      24598 ±  4%  interrupts.CPU165.TLB:TLB_shootdowns
    127.25 ±  7%    +308.4%     519.75 ± 62%  interrupts.CPU166.RES:Rescheduling_interrupts
     31976 ±  7%     -17.0%      26547 ±  6%  interrupts.CPU167.CAL:Function_call_interrupts
     29008 ± 10%     -15.8%      24419 ±  7%  interrupts.CPU167.TLB:TLB_shootdowns
     29467 ±  7%     -11.0%      26214 ±  2%  interrupts.CPU17.CAL:Function_call_interrupts
     28584 ±  7%     -16.4%      23889 ±  4%  interrupts.CPU17.TLB:TLB_shootdowns
    424.00 ± 77%     -70.3%     126.00 ± 11%  interrupts.CPU172.RES:Rescheduling_interrupts
    272.75 ± 49%     -56.1%     119.75 ±  7%  interrupts.CPU174.RES:Rescheduling_interrupts
    359.00 ± 93%     -65.6%     123.50 ± 16%  interrupts.CPU176.RES:Rescheduling_interrupts
      6664 ±  5%     -27.3%       4842 ± 32%  interrupts.CPU177.NMI:Non-maskable_interrupts
      6664 ±  5%     -27.3%       4842 ± 32%  interrupts.CPU177.PMI:Performance_monitoring_interrupts
     29846 ±  8%     -13.1%      25937 ±  5%  interrupts.CPU18.CAL:Function_call_interrupts
     28931 ± 10%     -16.8%      24081 ±  5%  interrupts.CPU18.TLB:TLB_shootdowns
     29203 ±  5%     -10.7%      26080 ±  2%  interrupts.CPU19.CAL:Function_call_interrupts
     28260 ±  5%     -15.9%      23769 ±  5%  interrupts.CPU19.TLB:TLB_shootdowns
     27743 ±  7%     -14.0%      23865 ±  7%  interrupts.CPU2.TLB:TLB_shootdowns
     30112 ±  7%     -13.4%      26086 ±  4%  interrupts.CPU20.CAL:Function_call_interrupts
     28869 ± 10%     -17.3%      23876 ±  6%  interrupts.CPU20.TLB:TLB_shootdowns
     29898 ± 10%     -14.1%      25673 ±  4%  interrupts.CPU21.CAL:Function_call_interrupts
     29010 ± 11%     -18.4%      23681 ±  5%  interrupts.CPU21.TLB:TLB_shootdowns
     30968 ±  9%     -14.7%      26414 ±  4%  interrupts.CPU22.CAL:Function_call_interrupts
    159.75 ± 20%    +202.2%     482.75 ± 63%  interrupts.CPU22.RES:Rescheduling_interrupts
     29704 ± 11%     -18.3%      24261 ±  5%  interrupts.CPU22.TLB:TLB_shootdowns
     26117 ±  4%     +11.9%      29219 ±  3%  interrupts.CPU24.CAL:Function_call_interrupts
     24304 ±  5%     +11.6%      27111 ±  3%  interrupts.CPU24.TLB:TLB_shootdowns
     25367 ±  7%     +14.7%      29099 ±  3%  interrupts.CPU25.CAL:Function_call_interrupts
     25348 ±  4%     +15.2%      29207 ±  2%  interrupts.CPU26.CAL:Function_call_interrupts
     23792 ±  5%     +15.4%      27454 ±  3%  interrupts.CPU26.TLB:TLB_shootdowns
     25720 ±  3%     +15.5%      29701 ±  2%  interrupts.CPU27.CAL:Function_call_interrupts
     24213 ±  4%     +15.4%      27948 ±  2%  interrupts.CPU27.TLB:TLB_shootdowns
     25508 ±  5%     +15.3%      29407        interrupts.CPU28.CAL:Function_call_interrupts
     23994 ±  5%     +14.5%      27478 ±  2%  interrupts.CPU28.TLB:TLB_shootdowns
     26138 ±  5%     +12.4%      29372 ±  3%  interrupts.CPU29.CAL:Function_call_interrupts
     24422 ±  6%     +13.3%      27663 ±  3%  interrupts.CPU29.TLB:TLB_shootdowns
     30243 ±  7%     -11.4%      26810 ±  3%  interrupts.CPU3.CAL:Function_call_interrupts
     29002 ±  8%     -16.0%      24374 ±  5%  interrupts.CPU3.TLB:TLB_shootdowns
     25918 ±  4%     +12.7%      29215        interrupts.CPU30.CAL:Function_call_interrupts
     24138 ±  5%     +14.0%      27527 ±  2%  interrupts.CPU30.TLB:TLB_shootdowns
     25906 ±  3%     +12.9%      29260 ±  2%  interrupts.CPU31.CAL:Function_call_interrupts
    725.00 ± 51%     -78.2%     158.25 ± 14%  interrupts.CPU31.RES:Rescheduling_interrupts
     24290 ±  3%     +12.5%      27325 ±  3%  interrupts.CPU31.TLB:TLB_shootdowns
     25795 ±  5%     +14.1%      29427 ±  2%  interrupts.CPU32.CAL:Function_call_interrupts
      5933 ±  6%     +11.2%       6596 ±  4%  interrupts.CPU32.NMI:Non-maskable_interrupts
      5933 ±  6%     +11.2%       6596 ±  4%  interrupts.CPU32.PMI:Performance_monitoring_interrupts
     24330 ±  5%     +14.2%      27782 ±  3%  interrupts.CPU32.TLB:TLB_shootdowns
     25697 ±  4%     +14.8%      29508 ±  2%  interrupts.CPU33.CAL:Function_call_interrupts
     24149 ±  5%     +15.6%      27907 ±  2%  interrupts.CPU33.TLB:TLB_shootdowns
     25303 ±  4%     +17.4%      29716 ±  2%  interrupts.CPU34.CAL:Function_call_interrupts
    802.00 ± 55%     -81.3%     150.25 ±  6%  interrupts.CPU34.RES:Rescheduling_interrupts
     23581 ±  6%     +18.3%      27888        interrupts.CPU34.TLB:TLB_shootdowns
     25676 ±  5%     +14.6%      29414 ±  3%  interrupts.CPU35.CAL:Function_call_interrupts
      6050 ±  2%      +9.2%       6608 ±  4%  interrupts.CPU35.NMI:Non-maskable_interrupts
      6050 ±  2%      +9.2%       6608 ±  4%  interrupts.CPU35.PMI:Performance_monitoring_interrupts
     24201 ±  5%     +14.3%      27668 ±  4%  interrupts.CPU35.TLB:TLB_shootdowns
     25430 ±  7%     +14.8%      29187 ±  2%  interrupts.CPU36.CAL:Function_call_interrupts
    732.75 ± 45%     -79.2%     152.25 ±  8%  interrupts.CPU36.RES:Rescheduling_interrupts
     23799 ±  8%     +16.5%      27736 ±  2%  interrupts.CPU36.TLB:TLB_shootdowns
     25728 ±  4%     +14.5%      29454 ±  2%  interrupts.CPU37.CAL:Function_call_interrupts
     23819 ±  6%     +16.3%      27690 ±  2%  interrupts.CPU37.TLB:TLB_shootdowns
     25920 ±  7%     +17.0%      30321 ±  3%  interrupts.CPU38.CAL:Function_call_interrupts
    657.50 ± 46%     -72.9%     178.00 ± 34%  interrupts.CPU38.RES:Rescheduling_interrupts
     24472 ±  9%     +15.0%      28146 ±  2%  interrupts.CPU38.TLB:TLB_shootdowns
     26067 ±  3%     +12.9%      29429 ±  3%  interrupts.CPU39.CAL:Function_call_interrupts
     24368 ±  4%     +13.8%      27729 ±  3%  interrupts.CPU39.TLB:TLB_shootdowns
     29788 ±  7%     -12.2%      26150 ±  5%  interrupts.CPU4.CAL:Function_call_interrupts
     28740 ±  8%     -16.0%      24129 ±  6%  interrupts.CPU4.TLB:TLB_shootdowns
     25685 ±  6%     +24.5%      31972 ±  8%  interrupts.CPU40.CAL:Function_call_interrupts
      5314 ± 26%     +24.3%       6604 ±  3%  interrupts.CPU40.NMI:Non-maskable_interrupts
      5314 ± 26%     +24.3%       6604 ±  3%  interrupts.CPU40.PMI:Performance_monitoring_interrupts
     24146 ±  7%     +16.5%      28131 ±  3%  interrupts.CPU40.TLB:TLB_shootdowns
     25868 ±  4%     +13.9%      29454 ±  2%  interrupts.CPU41.CAL:Function_call_interrupts
      6072 ±  2%      +8.8%       6605        interrupts.CPU41.NMI:Non-maskable_interrupts
      6072 ±  2%      +8.8%       6605        interrupts.CPU41.PMI:Performance_monitoring_interrupts
     24167 ±  6%     +14.3%      27623 ±  2%  interrupts.CPU41.TLB:TLB_shootdowns
     25587 ±  4%     +16.7%      29854 ±  4%  interrupts.CPU42.CAL:Function_call_interrupts
    717.00 ± 45%     -81.2%     135.00 ± 11%  interrupts.CPU42.RES:Rescheduling_interrupts
     23796 ±  5%     +18.3%      28160 ±  4%  interrupts.CPU42.TLB:TLB_shootdowns
     25710 ±  4%     +15.0%      29559 ±  2%  interrupts.CPU43.CAL:Function_call_interrupts
      3721 ± 36%     +51.4%       5635 ± 23%  interrupts.CPU43.NMI:Non-maskable_interrupts
      3721 ± 36%     +51.4%       5635 ± 23%  interrupts.CPU43.PMI:Performance_monitoring_interrupts
     24281 ±  5%     +14.3%      27747 ±  2%  interrupts.CPU43.TLB:TLB_shootdowns
     25978 ±  3%     +15.2%      29919 ±  4%  interrupts.CPU44.CAL:Function_call_interrupts
    764.75 ± 45%     -81.9%     138.50 ±  3%  interrupts.CPU44.RES:Rescheduling_interrupts
     24141 ±  5%     +16.5%      28133 ±  5%  interrupts.CPU44.TLB:TLB_shootdowns
     26141 ±  5%     +13.2%      29590 ±  2%  interrupts.CPU45.CAL:Function_call_interrupts
     24466 ±  6%     +13.8%      27846 ±  2%  interrupts.CPU45.TLB:TLB_shootdowns
     25667 ±  6%     +23.5%      31689 ± 18%  interrupts.CPU46.CAL:Function_call_interrupts
    698.75 ± 47%     -79.2%     145.50 ±  7%  interrupts.CPU46.RES:Rescheduling_interrupts
     25224 ±  6%     +15.9%      29229 ±  5%  interrupts.CPU47.CAL:Function_call_interrupts
    604.50 ± 42%     -79.4%     124.75 ± 12%  interrupts.CPU47.RES:Rescheduling_interrupts
     23723 ±  7%     +16.0%      27518 ±  5%  interrupts.CPU47.TLB:TLB_shootdowns
     27660 ±  7%     -13.6%      23910 ±  6%  interrupts.CPU48.TLB:TLB_shootdowns
     29106 ±  6%     -10.6%      26021 ±  5%  interrupts.CPU5.CAL:Function_call_interrupts
     27937 ±  8%     -13.7%      24113 ±  5%  interrupts.CPU5.TLB:TLB_shootdowns
     29009 ±  9%     -16.4%      24265 ±  6%  interrupts.CPU53.TLB:TLB_shootdowns
     28324 ±  7%     -15.1%      24043 ±  8%  interrupts.CPU55.TLB:TLB_shootdowns
     30255 ± 10%     -13.8%      26084 ±  2%  interrupts.CPU57.CAL:Function_call_interrupts
     28116 ±  9%     -15.2%      23834 ±  4%  interrupts.CPU57.TLB:TLB_shootdowns
    169.75 ± 18%    +128.0%     387.00 ± 37%  interrupts.CPU58.RES:Rescheduling_interrupts
     29965 ±  7%     -12.9%      26105 ±  5%  interrupts.CPU6.CAL:Function_call_interrupts
    307.75 ±  9%     +73.8%     534.75 ± 40%  interrupts.CPU6.RES:Rescheduling_interrupts
     28910 ±  8%     -16.9%      24011 ±  6%  interrupts.CPU6.TLB:TLB_shootdowns
     27892 ±  5%     -14.2%      23934 ±  6%  interrupts.CPU60.TLB:TLB_shootdowns
    141.00 ± 22%    +287.4%     546.25 ± 72%  interrupts.CPU61.RES:Rescheduling_interrupts
    129.75 ± 16%    +236.4%     436.50 ± 69%  interrupts.CPU62.RES:Rescheduling_interrupts
     27672 ±  7%     -12.7%      24163 ±  5%  interrupts.CPU63.TLB:TLB_shootdowns
     29787 ±  8%     -11.5%      26355 ±  3%  interrupts.CPU64.CAL:Function_call_interrupts
     28841 ±  8%     -15.9%      24259 ±  5%  interrupts.CPU64.TLB:TLB_shootdowns
     28058 ±  6%     -15.5%      23705 ±  6%  interrupts.CPU66.TLB:TLB_shootdowns
     27955 ±  8%     -14.1%      24003 ±  6%  interrupts.CPU67.TLB:TLB_shootdowns
     29515 ±  8%     -11.1%      26250 ±  3%  interrupts.CPU7.CAL:Function_call_interrupts
     28331 ±  9%     -15.2%      24026 ±  5%  interrupts.CPU7.TLB:TLB_shootdowns
    127.00 ±  2%    +371.1%     598.25 ± 74%  interrupts.CPU70.RES:Rescheduling_interrupts
     29637 ±  8%     -12.2%      26024 ±  5%  interrupts.CPU71.CAL:Function_call_interrupts
     28588 ±  9%     -17.7%      23535 ±  7%  interrupts.CPU71.TLB:TLB_shootdowns
      4991 ± 30%     +35.3%       6753 ±  4%  interrupts.CPU75.NMI:Non-maskable_interrupts
      4991 ± 30%     +35.3%       6753 ±  4%  interrupts.CPU75.PMI:Performance_monitoring_interrupts
      4599 ± 31%     +36.1%       6259 ±  3%  interrupts.CPU8.NMI:Non-maskable_interrupts
      4599 ± 31%     +36.1%       6259 ±  3%  interrupts.CPU8.PMI:Performance_monitoring_interrupts
     28179 ±  4%     -14.3%      24148 ±  5%  interrupts.CPU8.TLB:TLB_shootdowns
      3995 ± 34%     +68.6%       6736 ±  5%  interrupts.CPU86.NMI:Non-maskable_interrupts
      3995 ± 34%     +68.6%       6736 ±  5%  interrupts.CPU86.PMI:Performance_monitoring_interrupts
     26829 ±  4%     -14.4%      22975 ±  6%  interrupts.CPU9.TLB:TLB_shootdowns
    512.00 ± 56%     -66.2%     173.00 ± 24%  interrupts.CPU90.RES:Rescheduling_interrupts
     27124 ±  7%     -14.9%      23086 ±  5%  interrupts.CPU96.TLB:TLB_shootdowns
     28241 ±  8%     -14.9%      24026 ±  5%  interrupts.CPU97.TLB:TLB_shootdowns
     28551 ± 10%     -16.1%      23965 ±  6%  interrupts.CPU99.TLB:TLB_shootdowns

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Oliver Sang


View attachment "config-5.10.0-rc6-00078-g1a728dff855a" of type "text/plain" (171050 bytes)

View attachment "job-script" of type "text/plain" (8336 bytes)

View attachment "job.yaml" of type "text/plain" (5746 bytes)

View attachment "reproduce" of type "text/plain" (553 bytes)
