Date:   Mon, 14 Sep 2020 10:43:22 +0800
From:   kernel test robot <rong.a.chen@...el.com>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Peter Xu <peterx@...hat.com>, LKML <linux-kernel@...r.kernel.org>,
        lkp@...ts.01.org, lkp@...el.com, ying.huang@...el.com,
        feng.tang@...el.com, zhengjun.xing@...el.com
Subject: [mm] 09854ba94c: vm-scalability.throughput 31.4% improvement

Greetings,

FYI, we noticed a 31.4% improvement in vm-scalability.throughput due to commit:


commit: 09854ba94c6aad7886996bfbee2530b3d8a7f4f4 ("mm: do_wp_page() simplification")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
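
For context, this change drops the old reuse_swap_page()-based reuse heuristics from the write-protect fault path and reuses the existing page only when the faulting task is its sole reference holder, copying otherwise; that is consistent with the profiles below, where reuse_swap_page() all but disappears while wp_page_copy() grows. The snippet below is only a rough user-space model of that decision, with made-up names; it is not the kernel code (see do_wp_page() in mm/memory.c for the real implementation):

/*
 * Illustrative user-space model of the post-commit COW reuse decision.
 * All names here are hypothetical; this is a sketch, not mm/memory.c.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
    atomic_int refcount;     /* stands in for page_count()  */
    atomic_flag locked;      /* stands in for the page lock */
};

/* Reuse the page only if the lock can be taken without sleeping and we
 * are the only reference holder; otherwise fall back to copying, which
 * is what wp_page_copy() does in the profiles below. */
static bool cow_should_reuse(struct fake_page *page)
{
    if (atomic_flag_test_and_set(&page->locked))
        return false;                                   /* trylock failed -> copy */
    bool reuse = atomic_load(&page->refcount) == 1;     /* sole owner? */
    atomic_flag_clear(&page->locked);
    return reuse;
}

int main(void)
{
    struct fake_page shared    = { .refcount = 2, .locked = ATOMIC_FLAG_INIT };
    struct fake_page exclusive = { .refcount = 1, .locked = ATOMIC_FLAG_INIT };

    printf("shared page:    %s\n", cow_should_reuse(&shared)    ? "reuse" : "copy");
    printf("exclusive page: %s\n", cow_should_reuse(&exclusive) ? "reuse" : "copy");
    return 0;
}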


in testcase: vm-scalability
on test machine: 104-thread Skylake with 192G memory
with the following parameters:

	runtime: 300s
	size: 8T
	test: anon-cow-seq
	cpufreq_governor: performance
	ucode: 0x2006906

test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
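
For orientation: the anon-cow-seq case forks worker processes that write sequentially through an anonymous region pre-populated by the parent, so nearly every write takes the COW write-fault path (do_wp_page()/wp_page_copy()) that dominates the profiles below. The following stand-alone sketch only illustrates that access pattern; it is not the actual vm-scalability case script, which drives usemem with one worker per CPU and sizes the region from the parameters above:

/* Minimal sketch of an "anon-cow-seq"-style workload: the parent
 * populates an anonymous region, then a forked child writes through it
 * sequentially, so the first write to each page takes a COW fault. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    size_t size = 1UL << 30;            /* 1 GiB here; the real job uses size=8T */
    char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(buf, 1, size);               /* parent pre-populates the pages */

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: sequential writes; every page is shared with the parent,
         * so the first touch of each page triggers a write-protect (COW) fault. */
        for (size_t off = 0; off < size; off += 4096)
            buf[off] = 2;
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    munmap(buf, size);
    return 0;
}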

In addition, the commit has a significant impact on the following tests:

+------------------+-------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 7.7% improvement        |
| test machine     | 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory |
| test parameters  | cpufreq_governor=performance                                      |
|                  | runtime=300s                                                      |
|                  | size=512G                                                         |
|                  | test=anon-cow-rand                                                |
|                  | ucode=0xd6                                                        |
+------------------+-------------------------------------------------------------------+




Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/8T/lkp-skl-fpga01/anon-cow-seq/vm-scalability/0x2006906

commit: 
  v5.8
  09854ba94c ("mm: do_wp_page() simplification")

            v5.8 09854ba94c6aad7886996bfbee2 
---------------- --------------------------- 
       fail:runs  %reproduction    fail:runs
           |             |             |    
          4:8           64%           9:16    perf-profile.calltrace.cycles-pp.sync_regs.error_entry.do_access
          9:8          120%          19:16    perf-profile.calltrace.cycles-pp.error_entry.do_access
         11:8          157%          23:16    perf-profile.children.cycles-pp.error_entry
          5:8           79%          11:16    perf-profile.self.cycles-pp.error_entry
         %stddev     %change         %stddev
             \          |                \  
    268551           +35.1%     362826        vm-scalability.median
  28753307           +31.4%   37793854        vm-scalability.throughput
    134481 ±  6%     +23.5%     166124 ±  2%  vm-scalability.time.involuntary_context_switches
 1.403e+09           +26.2%   1.77e+09        vm-scalability.time.minor_page_faults
      6444           +34.1%       8640        vm-scalability.time.percent_of_cpu_this_job_got
     14353           +34.2%      19255        vm-scalability.time.system_time
      5188           +32.9%       6893 ±  4%  vm-scalability.time.user_time
 1.555e+08 ±  4%     -99.2%    1279562 ±  5%  vm-scalability.time.voluntary_context_switches
 6.302e+09           +26.2%  7.953e+09        vm-scalability.workload
 6.874e+08 ±  3%     +27.3%  8.749e+08        numa-numastat.node0.local_node
 6.874e+08 ±  3%     +27.3%  8.749e+08        numa-numastat.node0.numa_hit
 7.174e+08 ±  3%     +25.2%  8.978e+08        numa-numastat.node1.local_node
 7.174e+08 ±  3%     +25.2%  8.978e+08        numa-numastat.node1.numa_hit
 1.096e+09 ±  5%     -95.5%   49493536 ±159%  cpuidle.C1.time
  39622278 ±  4%     -97.9%     827336 ± 80%  cpuidle.C1.usage
  31389280 ± 10%     -75.1%    7813790 ± 22%  cpuidle.C1E.usage
 5.883e+08 ±  5%     -98.9%    6340935 ±  8%  cpuidle.POLL.time
  88921823 ±  6%     -99.3%     609604 ±  8%  cpuidle.POLL.usage
     32.68           -17.4       15.24 ±  3%  mpstat.cpu.all.idle%
      4.01 ±  7%      -4.0        0.03 ± 16%  mpstat.cpu.all.iowait%
      1.54            -0.3        1.24 ±  3%  mpstat.cpu.all.irq%
      0.11            -0.1        0.04 ±  3%  mpstat.cpu.all.soft%
     45.27           +16.2       61.49        mpstat.cpu.all.sys%
     16.40            +5.6       21.96 ±  4%  mpstat.cpu.all.usr%
  34719024           +29.1%   44835140        meminfo.Active
  34718776           +29.1%   44834810        meminfo.Active(anon)
  34487360           +28.5%   44314921        meminfo.AnonPages
  78184290           +12.4%   87858835 ±  2%  meminfo.Committed_AS
  37213454           +27.4%   47409435        meminfo.Memused
    158668           +12.0%     177734        meminfo.PageTables
    252730           +28.5%     324817        meminfo.max_used_kB
     32.62           -53.1%      15.31 ±  3%  vmstat.cpu.id
     46.00           +34.4%      61.81        vmstat.cpu.sy
     16.00           +33.6%      21.38 ±  4%  vmstat.cpu.us
   5252099 ±129%    -100.0%       0.00        vmstat.procs.b
     65.38           +40.2%      91.62        vmstat.procs.r
   1016936 ±  4%     -98.9%      11412 ±  3%  vmstat.system.cs
    304498           -32.0%     207124 ±  3%  vmstat.system.in
  17153583           +29.0%   22136482 ±  4%  numa-meminfo.node0.Active
  17153413           +29.0%   22136310 ±  4%  numa-meminfo.node0.Active(anon)
  17047962           +28.6%   21924106 ±  4%  numa-meminfo.node0.AnonPages
  18352394 ±  2%     +27.7%   23432911 ±  4%  numa-meminfo.node0.MemUsed
  17592839           +29.4%   22761355 ±  4%  numa-meminfo.node1.Active
  17592761           +29.4%   22761198 ±  4%  numa-meminfo.node1.Active(anon)
  17466514           +28.6%   22454452 ±  4%  numa-meminfo.node1.AnonPages
  18888900 ±  2%     +27.3%   24039768 ±  4%  numa-meminfo.node1.MemUsed
   4279901 ±  2%     +30.1%    5568775 ±  4%  numa-vmstat.node0.nr_active_anon
   4253727           +29.7%    5515159 ±  4%  numa-vmstat.node0.nr_anon_pages
   4279762 ±  2%     +30.1%    5568563 ±  4%  numa-vmstat.node0.nr_zone_active_anon
 3.435e+08 ±  3%     +27.5%  4.381e+08        numa-vmstat.node0.numa_hit
 3.434e+08 ±  3%     +27.6%  4.381e+08        numa-vmstat.node0.numa_local
   4394975           +30.0%    5713855 ±  4%  numa-vmstat.node1.nr_active_anon
   4363511           +29.1%    5635227 ±  4%  numa-vmstat.node1.nr_anon_pages
   4394841           +30.0%    5713610 ±  4%  numa-vmstat.node1.nr_zone_active_anon
 3.605e+08 ±  3%     +24.9%  4.503e+08        numa-vmstat.node1.numa_hit
 3.604e+08 ±  3%     +24.9%  4.502e+08        numa-vmstat.node1.numa_local
     72302 ±  2%     +16.1%      83948 ±  2%  slabinfo.anon_vma.active_objs
      1572 ±  2%     +16.1%       1825 ±  2%  slabinfo.anon_vma.active_slabs
     72343 ±  2%     +16.1%      84013 ±  2%  slabinfo.anon_vma.num_objs
      1572 ±  2%     +16.1%       1825 ±  2%  slabinfo.anon_vma.num_slabs
    139153 ±  3%     +10.3%     153501        slabinfo.anon_vma_chain.active_objs
      2176 ±  3%     +10.3%       2400        slabinfo.anon_vma_chain.active_slabs
    139330 ±  3%     +10.3%     153676        slabinfo.anon_vma_chain.num_objs
      2176 ±  3%     +10.3%       2400        slabinfo.anon_vma_chain.num_slabs
      3574 ±  4%     +14.9%       4107 ±  4%  slabinfo.khugepaged_mm_slot.active_objs
      3574 ±  4%     +14.9%       4107 ±  4%  slabinfo.khugepaged_mm_slot.num_objs
      7595 ±  3%     +11.8%       8490 ±  4%  slabinfo.signal_cache.active_objs
      7614 ±  3%     +11.7%       8503 ±  3%  slabinfo.signal_cache.num_objs
   8676099           +29.4%   11226006        proc-vmstat.nr_active_anon
   8618051           +28.8%   11096095        proc-vmstat.nr_anon_pages
    186.00           +63.4%     303.94 ± 76%  proc-vmstat.nr_dirtied
   3960653            -6.5%    3703377        proc-vmstat.nr_dirty_background_threshold
   7930992            -6.5%    7415811        proc-vmstat.nr_dirty_threshold
    275794            +3.8%     286144        proc-vmstat.nr_file_pages
  39882917            -6.4%   37312878        proc-vmstat.nr_free_pages
     28737            +1.8%      29250        proc-vmstat.nr_inactive_anon
    387.38            +5.8%     409.81 ±  3%  proc-vmstat.nr_inactive_file
      9494            +4.5%       9919        proc-vmstat.nr_mapped
     39670           +12.8%      44752 ±  2%  proc-vmstat.nr_page_table_pages
     38106            +7.6%      41012        proc-vmstat.nr_shmem
    237268            +3.1%     244670        proc-vmstat.nr_unevictable
    173.00           +67.1%     289.12 ± 80%  proc-vmstat.nr_written
   8676099           +29.4%   11226005        proc-vmstat.nr_zone_active_anon
     28737            +1.8%      29250        proc-vmstat.nr_zone_inactive_anon
    387.38            +5.8%     409.81 ±  3%  proc-vmstat.nr_zone_inactive_file
    237268            +3.1%     244670        proc-vmstat.nr_zone_unevictable
   1377947           +25.9%    1734779        proc-vmstat.numa_hint_faults
    728235           +38.0%    1004832        proc-vmstat.numa_hint_faults_local
 1.405e+09           +26.2%  1.773e+09        proc-vmstat.numa_hit
   1331719           +26.6%    1686290        proc-vmstat.numa_huge_pte_updates
 1.405e+09           +26.2%  1.773e+09        proc-vmstat.numa_local
 1.002e+09            +4.9%  1.051e+09 ±  2%  proc-vmstat.numa_pte_updates
     14101 ±  3%     +50.0%      21151 ±  7%  proc-vmstat.pgactivate
 1.659e+09           +25.8%  2.087e+09        proc-vmstat.pgalloc_normal
 1.403e+09           +26.2%  1.771e+09        proc-vmstat.pgfault
 1.658e+09           +25.6%  2.083e+09        proc-vmstat.pgfree
 2.407e+08 ±  3%     +23.7%  2.977e+08 ±  5%  proc-vmstat.pgmigrate_fail
     26254           +26.2%      33130        proc-vmstat.thp_fault_alloc
   2729468           +26.2%    3444694        proc-vmstat.thp_split_pmd
   4932939 ± 19%     -69.3%    1513787 ± 96%  sched_debug.cfs_rq:/.MIN_vruntime.max
     84033 ± 11%     +41.8%     119159 ±  9%  sched_debug.cfs_rq:/.exec_clock.avg
     71200 ± 10%     +54.4%     109919 ±  9%  sched_debug.cfs_rq:/.exec_clock.min
   4932940 ± 19%     -69.3%    1513787 ± 96%  sched_debug.cfs_rq:/.max_vruntime.max
   6123507 ± 11%     +88.9%   11566046 ±  9%  sched_debug.cfs_rq:/.min_vruntime.avg
   6855360 ± 12%     +76.2%   12079127 ±  9%  sched_debug.cfs_rq:/.min_vruntime.max
   5364188 ± 10%     +99.8%   10715451 ±  9%  sched_debug.cfs_rq:/.min_vruntime.min
      0.35 ±  8%     -49.7%       0.18 ± 21%  sched_debug.cfs_rq:/.nr_running.stddev
     54.01 ± 12%     +52.0%      82.10 ±  9%  sched_debug.cfs_rq:/.nr_spread_over.avg
    191.68 ± 39%    +118.7%     419.26 ± 14%  sched_debug.cfs_rq:/.nr_spread_over.max
     27.61 ± 22%    +119.7%      60.66 ± 12%  sched_debug.cfs_rq:/.nr_spread_over.min
     19.60 ± 30%     +85.2%      36.29 ± 14%  sched_debug.cfs_rq:/.nr_spread_over.stddev
      1331 ± 13%     +23.3%       1641 ± 10%  sched_debug.cfs_rq:/.runnable_avg.max
     53.31 ± 73%    +279.8%     202.45 ± 36%  sched_debug.cfs_rq:/.runnable_avg.min
     48.95 ± 81%    +280.1%     186.09 ± 35%  sched_debug.cfs_rq:/.util_avg.min
    396.93 ± 11%     +48.4%     589.24 ± 17%  sched_debug.cfs_rq:/.util_est_enqueued.avg
      0.02 ±264%  +1.5e+05%      31.03 ±198%  sched_debug.cfs_rq:/.util_est_enqueued.min
    218292 ± 12%     +32.3%     288869 ± 13%  sched_debug.cpu.avg_idle.stddev
      5646 ±  8%     +38.6%       7826 ± 18%  sched_debug.cpu.curr->pid.avg
      2974 ± 10%     -43.5%       1681 ± 28%  sched_debug.cpu.curr->pid.stddev
   1333544 ± 12%     -98.7%      17020 ±  9%  sched_debug.cpu.nr_switches.avg
   1950550 ± 14%     -98.0%      38090 ±  9%  sched_debug.cpu.nr_switches.max
    789316 ±  9%     -99.0%       8155 ± 28%  sched_debug.cpu.nr_switches.min
    446932 ± 27%     -98.5%       6869 ± 23%  sched_debug.cpu.nr_switches.stddev
      0.16 ± 48%     -94.3%       0.01 ±157%  sched_debug.cpu.nr_uninterruptible.avg
     24.52 ±  9%     -20.2%      19.57 ± 14%  sched_debug.cpu.nr_uninterruptible.stddev
   1332157 ± 12%     -98.8%      15736 ± 10%  sched_debug.cpu.sched_count.avg
   1947528 ± 14%     -98.3%      33519 ± 11%  sched_debug.cpu.sched_count.max
    787480 ±  9%     -99.1%       7471 ± 29%  sched_debug.cpu.sched_count.min
    446663 ± 27%     -98.6%       6318 ± 26%  sched_debug.cpu.sched_count.stddev
    664464 ± 12%     -99.0%       6592 ± 10%  sched_debug.cpu.sched_goidle.avg
    972338 ± 14%     -98.5%      15040 ± 12%  sched_debug.cpu.sched_goidle.max
    391974 ±  9%     -99.3%       2756 ± 34%  sched_debug.cpu.sched_goidle.min
    223557 ± 27%     -98.7%       2997 ± 29%  sched_debug.cpu.sched_goidle.stddev
    667116 ± 12%     -98.9%       7572 ± 10%  sched_debug.cpu.ttwu_count.avg
    937077 ± 13%     -98.2%      17318 ± 13%  sched_debug.cpu.ttwu_count.max
    428250 ±  9%     -99.3%       3178 ± 29%  sched_debug.cpu.ttwu_count.min
    183076 ± 26%     -98.1%       3391 ± 23%  sched_debug.cpu.ttwu_count.stddev
      1669 ±  9%    +300.6%       6686 ± 19%  sched_debug.cpu.ttwu_local.max
    672.88 ± 12%     -29.3%     475.56 ± 11%  sched_debug.cpu.ttwu_local.min
    184.37 ± 14%    +363.6%     854.73 ± 20%  sched_debug.cpu.ttwu_local.stddev
     10.09            -5.9%       9.50        perf-stat.i.MPKI
 3.452e+10           +13.1%  3.904e+10        perf-stat.i.branch-instructions
      0.21 ±  2%      -0.1        0.11 ±  2%  perf-stat.i.branch-miss-rate%
  59081166 ±  2%     -44.2%   32984841        perf-stat.i.branch-misses
     35.55           -12.2       23.31 ±  4%  perf-stat.i.cache-miss-rate%
 4.767e+08           -36.0%  3.052e+08 ±  4%  perf-stat.i.cache-misses
 1.411e+09            -7.7%  1.302e+09        perf-stat.i.cache-references
   1115456 ±  4%     -98.8%      13912 ±  4%  perf-stat.i.context-switches
      1.50           +41.0%       2.11 ±  2%  perf-stat.i.cpi
    105792           +10.7%     117110        perf-stat.i.cpu-clock
 1.933e+11           +46.4%   2.83e+11        perf-stat.i.cpu-cycles
      2384 ±  4%     -81.2%     448.09 ±  6%  perf-stat.i.cpu-migrations
    431.76          +126.4%     977.49 ±  4%  perf-stat.i.cycles-between-cache-misses
 2.883e+10           +19.6%  3.449e+10        perf-stat.i.dTLB-loads
      0.15            +0.0        0.18        perf-stat.i.dTLB-store-miss-rate%
  15666838           +38.2%   21653648        perf-stat.i.dTLB-store-misses
 9.651e+09           +24.2%  1.199e+10        perf-stat.i.dTLB-stores
     14.00            +4.1       18.07        perf-stat.i.iTLB-load-miss-rate%
   9001555 ±  3%     +72.9%   15566626 ±  2%  perf-stat.i.iTLB-load-misses
  53605109           +30.2%   69780056        perf-stat.i.iTLB-loads
     15977           -42.8%       9133 ±  2%  perf-stat.i.instructions-per-iTLB-miss
      0.71           -29.1%       0.50        perf-stat.i.ipc
      1.77           +30.0%       2.30        perf-stat.i.metric.GHz
      0.05 ± 24%    +151.1%       0.12 ±  4%  perf-stat.i.metric.K/sec
    685.48            +8.4%     743.38        perf-stat.i.metric.M/sec
   5007741           +39.6%    6991127        perf-stat.i.minor-faults
  75936116 ±  2%     -57.0%   32653994 ±  6%  perf-stat.i.node-load-misses
  36986631 ±  2%     -61.5%   14242141 ± 20%  perf-stat.i.node-loads
     16.90 ±  3%      -5.8       11.10 ±  8%  perf-stat.i.node-store-miss-rate%
  28049976           +33.8%   37519554        perf-stat.i.node-stores
   5007740           +39.6%    6991122        perf-stat.i.page-faults
    105792           +10.7%     117110        perf-stat.i.task-clock
     10.44            -8.5%       9.56        perf-stat.overall.MPKI
      0.17 ±  2%      -0.1        0.09        perf-stat.overall.branch-miss-rate%
     33.97           -10.6       23.34 ±  3%  perf-stat.overall.cache-miss-rate%
      1.47           +42.7%       2.09        perf-stat.overall.cpi
    413.26          +127.3%     939.14 ±  3%  perf-stat.overall.cycles-between-cache-misses
      0.16            +0.0        0.18        perf-stat.overall.dTLB-store-miss-rate%
     14.35 ±  2%      +3.8       18.18        perf-stat.overall.iTLB-load-miss-rate%
     15053 ±  2%     -41.7%       8779        perf-stat.overall.instructions-per-iTLB-miss
      0.68           -29.9%       0.48        perf-stat.overall.ipc
     14.56 ±  2%      -4.1       10.48 ±  6%  perf-stat.overall.node-store-miss-rate%
      6008           -28.1%       4321        perf-stat.overall.path-length
 3.197e+10            +2.0%   3.26e+10        perf-stat.ps.branch-instructions
  55126365 ±  2%     -49.1%   28060044        perf-stat.ps.branch-misses
 4.421e+08           -42.7%  2.534e+08 ±  3%  perf-stat.ps.cache-misses
 1.302e+09           -16.5%  1.086e+09        perf-stat.ps.cache-references
   1022856 ±  4%     -98.9%      11408 ±  3%  perf-stat.ps.context-switches
 1.827e+11           +30.1%  2.377e+11        perf-stat.ps.cpu-cycles
      2222 ±  4%     -81.9%     402.70 ±  5%  perf-stat.ps.cpu-migrations
 2.681e+10            +7.6%  2.884e+10        perf-stat.ps.dTLB-loads
  14423754           +25.4%   18081863        perf-stat.ps.dTLB-store-misses
  9.01e+09           +11.5%  1.005e+10        perf-stat.ps.dTLB-stores
   8286270 ±  3%     +56.3%   12953590 ±  2%  perf-stat.ps.iTLB-load-misses
  49469188           +17.9%   58313749        perf-stat.ps.iTLB-loads
 1.246e+11            -8.8%  1.137e+11        perf-stat.ps.instructions
   4608940           +26.5%    5828391        perf-stat.ps.minor-faults
  70645873           -61.8%   27020155 ±  7%  perf-stat.ps.node-load-misses
  34484923           -65.9%   11769356 ± 17%  perf-stat.ps.node-loads
   4389253 ±  2%     -14.4%    3759325 ±  6%  perf-stat.ps.node-store-misses
  25749167           +24.8%   32132643        perf-stat.ps.node-stores
   4608940           +26.5%    5828391        perf-stat.ps.page-faults
 3.787e+13            -9.2%  3.437e+13        perf-stat.total.instructions
     56161 ±  8%     -75.0%      14034 ± 24%  softirqs.CPU0.SCHED
     54147 ±  8%     -77.4%      12259 ± 21%  softirqs.CPU1.SCHED
     55175 ±  9%     -79.2%      11491 ± 22%  softirqs.CPU10.SCHED
     55228 ±  7%     -77.9%      12193 ± 20%  softirqs.CPU100.SCHED
     55982 ± 10%     -78.1%      12234 ± 19%  softirqs.CPU101.SCHED
     55442 ± 11%     -77.9%      12241 ± 19%  softirqs.CPU102.SCHED
     55518 ±  8%     -78.5%      11947 ± 19%  softirqs.CPU103.SCHED
     53922 ± 10%     -78.7%      11489 ± 23%  softirqs.CPU11.SCHED
     55586 ±  8%     -79.6%      11321 ± 24%  softirqs.CPU12.SCHED
     55173 ± 10%     -79.4%      11392 ± 24%  softirqs.CPU13.SCHED
     55394 ±  8%     -79.0%      11629 ± 24%  softirqs.CPU14.SCHED
     55491 ±  8%     -79.1%      11593 ± 24%  softirqs.CPU15.SCHED
     55674 ±  9%     -79.3%      11531 ± 23%  softirqs.CPU16.SCHED
     55917 ±  9%     -79.3%      11563 ± 23%  softirqs.CPU17.SCHED
     54978 ±  8%     -78.9%      11582 ± 24%  softirqs.CPU18.SCHED
     55355 ±  9%     -79.1%      11566 ± 25%  softirqs.CPU19.SCHED
     54475 ±  9%     -77.7%      12164 ± 21%  softirqs.CPU2.SCHED
     55032 ±  9%     -79.0%      11582 ± 25%  softirqs.CPU20.SCHED
     55773 ±  8%     -79.5%      11455 ± 24%  softirqs.CPU21.SCHED
     55528 ±  8%     -78.9%      11695 ± 25%  softirqs.CPU22.SCHED
     54925 ±  8%     -79.1%      11472 ± 24%  softirqs.CPU23.SCHED
     55020 ±  7%     -79.2%      11451 ± 25%  softirqs.CPU24.SCHED
     56676 ±  8%     -79.6%      11554 ± 27%  softirqs.CPU25.SCHED
     52001 ±  9%     -77.5%      11677 ± 17%  softirqs.CPU26.SCHED
     11388 ±  9%     +19.1%      13562 ± 10%  softirqs.CPU27.RCU
     52240 ± 13%     -76.4%      12324 ± 21%  softirqs.CPU27.SCHED
     52709 ± 14%     -76.9%      12173 ± 21%  softirqs.CPU28.SCHED
     53661 ± 15%     -76.6%      12567 ± 20%  softirqs.CPU29.SCHED
     54832 ± 11%     -79.2%      11383 ± 23%  softirqs.CPU3.SCHED
     53860 ± 13%     -76.6%      12598 ± 19%  softirqs.CPU30.SCHED
     53010 ± 14%     -76.1%      12651 ± 21%  softirqs.CPU31.SCHED
     54592 ± 11%     -77.0%      12560 ± 19%  softirqs.CPU32.SCHED
     53792 ± 12%     -77.1%      12344 ± 21%  softirqs.CPU33.SCHED
     53907 ± 10%     -76.9%      12438 ± 20%  softirqs.CPU34.SCHED
     54309 ± 12%     -77.1%      12447 ± 20%  softirqs.CPU35.SCHED
     54252 ± 11%     -76.7%      12637 ± 20%  softirqs.CPU36.SCHED
     54257 ± 10%     -77.0%      12462 ± 20%  softirqs.CPU37.SCHED
     54904 ± 10%     -77.2%      12497 ± 20%  softirqs.CPU38.SCHED
     53990 ± 10%     -77.0%      12405 ± 20%  softirqs.CPU39.SCHED
     54821 ± 10%     -78.9%      11543 ± 24%  softirqs.CPU4.SCHED
     53301 ± 12%     -76.4%      12555 ± 20%  softirqs.CPU40.SCHED
     54519 ± 11%     -76.9%      12600 ± 18%  softirqs.CPU41.SCHED
     53100 ± 10%     -76.4%      12529 ± 20%  softirqs.CPU42.SCHED
     54225 ± 10%     -76.8%      12587 ± 20%  softirqs.CPU43.SCHED
     55777 ± 11%     -77.4%      12591 ± 21%  softirqs.CPU44.SCHED
     55576 ± 11%     -77.8%      12341 ± 21%  softirqs.CPU45.SCHED
     54704 ± 11%     -77.2%      12462 ± 20%  softirqs.CPU46.SCHED
     54317 ± 11%     -76.5%      12775 ± 20%  softirqs.CPU47.SCHED
     54594 ± 12%     -77.2%      12470 ± 20%  softirqs.CPU48.SCHED
     11759 ±  8%     +15.0%      13522 ±  9%  softirqs.CPU49.RCU
     54531 ± 10%     -77.1%      12512 ± 20%  softirqs.CPU49.SCHED
     54946 ± 10%     -79.3%      11365 ± 24%  softirqs.CPU5.SCHED
     55452 ± 10%     -77.7%      12373 ± 21%  softirqs.CPU50.SCHED
     52610 ±  8%     -76.5%      12358 ± 21%  softirqs.CPU51.SCHED
     54711 ±  7%     -80.5%      10649 ± 20%  softirqs.CPU52.SCHED
     55017 ±  7%     -79.5%      11276 ± 22%  softirqs.CPU53.SCHED
     56543 ±  8%     -80.2%      11210 ± 20%  softirqs.CPU54.SCHED
     11937 ±  7%     +15.0%      13731 ± 10%  softirqs.CPU55.RCU
     55301 ±  8%     -80.1%      10992 ± 20%  softirqs.CPU55.SCHED
     55110 ±  6%     -80.2%      10900 ± 22%  softirqs.CPU56.SCHED
     56739 ±  9%     -80.5%      11072 ± 22%  softirqs.CPU57.SCHED
     56275 ±  7%     -80.1%      11197 ± 21%  softirqs.CPU58.SCHED
     11751 ±  7%     +15.7%      13597 ± 10%  softirqs.CPU59.RCU
     55990 ±  7%     -80.0%      11195 ± 21%  softirqs.CPU59.SCHED
     54866 ±  9%     -78.9%      11571 ± 22%  softirqs.CPU6.SCHED
     56393 ±  7%     -80.3%      11129 ± 23%  softirqs.CPU60.SCHED
     55101 ±  8%     -79.5%      11322 ± 21%  softirqs.CPU61.SCHED
     55478 ±  8%     -79.6%      11336 ± 21%  softirqs.CPU62.SCHED
     55893 ±  7%     -79.9%      11249 ± 21%  softirqs.CPU63.SCHED
     56486 ±  7%     -80.2%      11180 ± 22%  softirqs.CPU64.SCHED
     56751 ±  7%     -80.4%      11140 ± 23%  softirqs.CPU65.SCHED
     56496 ±  7%     -80.0%      11281 ± 22%  softirqs.CPU66.SCHED
     55843 ± 10%     -79.7%      11346 ± 22%  softirqs.CPU67.SCHED
     56365 ±  8%     -79.8%      11384 ± 22%  softirqs.CPU68.SCHED
     56332 ±  8%     -80.0%      11245 ± 22%  softirqs.CPU69.SCHED
     55551 ±  9%     -79.8%      11246 ± 22%  softirqs.CPU7.SCHED
     56376 ±  8%     -79.8%      11412 ± 20%  softirqs.CPU70.SCHED
     57543 ±  7%     -80.5%      11221 ± 22%  softirqs.CPU71.SCHED
     56590 ±  8%     -80.1%      11250 ± 22%  softirqs.CPU72.SCHED
     56141 ±  8%     -80.0%      11223 ± 23%  softirqs.CPU73.SCHED
     57131 ±  8%     -80.1%      11376 ± 22%  softirqs.CPU74.SCHED
     56374 ±  8%     -79.9%      11306 ± 23%  softirqs.CPU75.SCHED
     57457 ±  6%     -80.5%      11181 ± 23%  softirqs.CPU76.SCHED
     10833 ±  8%     +16.9%      12667 ±  9%  softirqs.CPU77.RCU
     57177 ±  7%     -80.3%      11243 ± 25%  softirqs.CPU77.SCHED
     54421 ±  8%     -78.7%      11579 ± 18%  softirqs.CPU78.SCHED
     55021 ± 10%     -78.0%      12098 ± 17%  softirqs.CPU79.SCHED
     54123 ± 10%     -79.1%      11327 ± 26%  softirqs.CPU8.SCHED
     54622 ± 10%     -77.3%      12389 ± 17%  softirqs.CPU80.SCHED
     55160 ± 10%     -77.5%      12387 ± 17%  softirqs.CPU81.SCHED
     54067 ± 11%     -77.1%      12372 ± 19%  softirqs.CPU82.SCHED
     54866 ± 10%     -77.7%      12240 ± 19%  softirqs.CPU83.SCHED
     54955 ±  9%     -77.6%      12283 ± 17%  softirqs.CPU84.SCHED
     54322 ± 10%     -77.7%      12117 ± 20%  softirqs.CPU85.SCHED
     53794 ± 12%     -77.0%      12374 ± 19%  softirqs.CPU86.SCHED
     54479 ± 11%     -77.7%      12164 ± 18%  softirqs.CPU87.SCHED
     54409 ± 10%     -77.4%      12321 ± 18%  softirqs.CPU88.SCHED
     55144 ± 11%     -77.7%      12305 ± 18%  softirqs.CPU89.SCHED
     54151 ±  9%     -78.5%      11620 ± 23%  softirqs.CPU9.SCHED
     55803 ±  8%     -77.9%      12348 ± 20%  softirqs.CPU90.SCHED
     55109 ± 11%     -77.9%      12202 ± 17%  softirqs.CPU91.SCHED
     55000 ±  9%     -77.5%      12370 ± 19%  softirqs.CPU92.SCHED
     55289 ± 10%     -77.9%      12245 ± 19%  softirqs.CPU93.SCHED
     54811 ± 10%     -77.4%      12404 ± 19%  softirqs.CPU94.SCHED
     55395 ± 10%     -77.7%      12370 ± 19%  softirqs.CPU95.SCHED
     55795 ± 11%     -78.0%      12256 ± 19%  softirqs.CPU96.SCHED
     55877 ±  8%     -78.2%      12162 ± 20%  softirqs.CPU97.SCHED
     55228 ± 10%     -78.2%      12064 ± 19%  softirqs.CPU98.SCHED
     55294 ± 10%     -77.6%      12376 ± 19%  softirqs.CPU99.SCHED
   5730596           -78.4%    1236237 ±  2%  softirqs.SCHED
     41.84 ±  2%     -40.3        1.54 ± 24%  perf-profile.calltrace.cycles-pp.do_wp_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
     27.67 ±  4%     -27.7        0.00        perf-profile.calltrace.cycles-pp.wp_page_copy.do_wp_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
     15.43 ±  8%     -13.7        1.76 ± 27%  perf-profile.calltrace.cycles-pp.secondary_startup_64
     15.28 ±  8%     -13.6        1.73 ± 26%  perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
     15.28 ±  8%     -13.6        1.73 ± 26%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
     15.27 ±  8%     -13.5        1.73 ± 26%  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
     13.06 ±  9%     -11.4        1.68 ± 27%  perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
     13.04 ±  9%     -11.4        1.67 ± 27%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
     10.34 ± 12%     -10.3        0.00        perf-profile.calltrace.cycles-pp.lru_cache_add.wp_page_copy.do_wp_page.__handle_mm_fault.handle_mm_fault
     10.20 ± 13%     -10.2        0.00        perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_cache_add.wp_page_copy.do_wp_page.__handle_mm_fault
      8.94 ± 14%      -8.9        0.00        perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_cache_add.wp_page_copy.do_wp_page
     10.29 ± 12%      -8.7        1.58 ± 28%  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
      8.65 ±  2%      -8.6        0.00        perf-profile.calltrace.cycles-pp.reuse_swap_page.do_wp_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      7.08 ±  3%      -7.1        0.00        perf-profile.calltrace.cycles-pp.copy_page.wp_page_copy.do_wp_page.__handle_mm_fault.handle_mm_fault
      0.00            +0.7        0.70 ± 13%  perf-profile.calltrace.cycles-pp.__mod_lruvec_state.page_add_new_anon_rmap.wp_page_copy.__handle_mm_fault.handle_mm_fault
      0.00            +0.8        0.84 ± 13%  perf-profile.calltrace.cycles-pp.try_charge.mem_cgroup_charge.wp_page_copy.__handle_mm_fault.handle_mm_fault
      0.00            +0.9        0.94 ± 11%  perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy.__handle_mm_fault
      0.00            +0.9        0.94 ± 11%  perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.lru_cache_add.wp_page_copy.__handle_mm_fault
      0.00            +1.1        1.12 ± 23%  perf-profile.calltrace.cycles-pp.page_remove_rmap.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00            +1.2        1.21 ± 15%  perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy.__handle_mm_fault.handle_mm_fault
      0.00            +1.2        1.25 ± 14%  perf-profile.calltrace.cycles-pp.page_add_new_anon_rmap.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00            +1.3        1.31 ± 15%  perf-profile.calltrace.cycles-pp.alloc_pages_vma.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00            +1.6        1.62 ± 16%  perf-profile.calltrace.cycles-pp.get_mem_cgroup_from_mm.mem_cgroup_charge.wp_page_copy.__handle_mm_fault.handle_mm_fault
      0.00            +2.4        2.38 ± 13%  perf-profile.calltrace.cycles-pp.flush_tlb_func_common.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy.__handle_mm_fault
      0.00            +2.8        2.78 ± 12%  perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy.__handle_mm_fault.handle_mm_fault
      0.00            +2.9        2.90 ± 12%  perf-profile.calltrace.cycles-pp.ptep_clear_flush.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00            +4.4        4.40 ± 12%  perf-profile.calltrace.cycles-pp.mem_cgroup_charge.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      8.90 ± 14%      +6.0       14.92 ±  6%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_cache_add.wp_page_copy
      0.00            +6.1        6.07 ±  8%  perf-profile.calltrace.cycles-pp.copy_page.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00           +15.0       14.97 ±  6%  perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_cache_add.wp_page_copy.__handle_mm_fault
     11.96 ± 12%     +15.1       27.01 ± 14%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.zap_pte_range
     11.97 ± 12%     +15.1       27.03 ± 14%  perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.zap_pte_range.unmap_page_range
     14.65 ± 11%     +15.1       29.76 ± 14%  perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
     14.65 ± 11%     +15.1       29.76 ± 14%  perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
     14.64 ± 11%     +15.1       29.75 ± 14%  perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap.mmput
     14.75 ± 11%     +15.2       29.91 ± 14%  perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
     14.75 ± 11%     +15.2       29.91 ± 14%  perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
     14.75 ± 11%     +15.2       29.91 ± 14%  perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
     14.74 ± 11%     +15.2       29.90 ± 14%  perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
     14.74 ± 11%     +15.2       29.90 ± 14%  perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
     14.76 ± 11%     +15.2       29.92 ± 14%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
     14.76 ± 11%     +15.2       29.92 ± 14%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
     13.89 ± 11%     +15.2       29.14 ± 14%  perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.zap_pte_range.unmap_page_range.unmap_vmas
     13.98 ± 11%     +15.3       29.23 ± 14%  perf-profile.calltrace.cycles-pp.tlb_flush_mmu.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap
      0.00           +16.4       16.39 ±  6%  perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_cache_add.wp_page_copy.__handle_mm_fault.handle_mm_fault
      0.00           +16.6       16.56 ±  6%  perf-profile.calltrace.cycles-pp.lru_cache_add.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00           +36.4       36.40 ±  7%  perf-profile.calltrace.cycles-pp.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
     41.85 ±  2%     -40.2        1.68 ± 19%  perf-profile.children.cycles-pp.do_wp_page
     15.43 ±  8%     -13.7        1.76 ± 27%  perf-profile.children.cycles-pp.secondary_startup_64
     15.43 ±  8%     -13.7        1.76 ± 27%  perf-profile.children.cycles-pp.cpu_startup_entry
     15.42 ±  8%     -13.7        1.76 ± 27%  perf-profile.children.cycles-pp.do_idle
     15.28 ±  8%     -13.6        1.73 ± 26%  perf-profile.children.cycles-pp.start_secondary
     13.19 ±  9%     -11.5        1.71 ± 27%  perf-profile.children.cycles-pp.cpuidle_enter
     13.19 ±  9%     -11.5        1.71 ± 27%  perf-profile.children.cycles-pp.cpuidle_enter_state
     10.40 ± 11%      -8.8        1.61 ± 29%  perf-profile.children.cycles-pp.intel_idle
      8.77 ±  2%      -8.7        0.05 ± 27%  perf-profile.children.cycles-pp.reuse_swap_page
      7.15 ±  3%      -1.0        6.14 ±  7%  perf-profile.children.cycles-pp.copy_page
      1.03 ±  3%      -0.6        0.44 ±  6%  perf-profile.children.cycles-pp.asm_call_on_stack
      0.52 ± 10%      -0.4        0.13 ± 16%  perf-profile.children.cycles-pp._raw_spin_lock_irq
      0.37 ±  5%      -0.3        0.06 ± 10%  perf-profile.children.cycles-pp.update_load_avg
      0.84 ±  3%      -0.2        0.65 ±  7%  perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
      1.02 ±  2%      -0.2        0.85 ±  6%  perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
      0.49 ±  4%      -0.2        0.34 ±  5%  perf-profile.children.cycles-pp.__list_del_entry_valid
      0.20 ±  6%      -0.1        0.05 ± 28%  perf-profile.children.cycles-pp.ktime_get
      0.19 ±  7%      -0.1        0.07 ± 18%  perf-profile.children.cycles-pp.irq_exit_rcu
      0.13 ± 10%      -0.1        0.07 ±  9%  perf-profile.children.cycles-pp.clockevents_program_event
      0.20 ±  4%      -0.1        0.15 ±  9%  perf-profile.children.cycles-pp._find_next_bit
      0.15 ±  4%      -0.0        0.11 ± 12%  perf-profile.children.cycles-pp.up_read
      0.10 ± 13%      -0.0        0.07 ± 17%  perf-profile.children.cycles-pp.update_cfs_group
      0.19 ±  5%      +0.1        0.24 ±  8%  perf-profile.children.cycles-pp.scheduler_tick
      0.09 ± 11%      +0.1        0.15 ± 12%  perf-profile.children.cycles-pp.tlb_finish_mmu
      0.06 ± 10%      +0.1        0.13 ± 11%  perf-profile.children.cycles-pp.__unlock_page_memcg
      0.12 ±  7%      +0.1        0.19 ±  8%  perf-profile.children.cycles-pp.task_tick_fair
      0.00            +0.1        0.09 ± 23%  perf-profile.children.cycles-pp.unlock_page_memcg
      0.00            +0.2        0.20 ±  9%  perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
      1.70 ±  2%      +0.2        1.93 ±  7%  perf-profile.children.cycles-pp.prepare_exit_to_usermode
      0.22 ±  4%      +0.5        0.75 ± 22%  perf-profile.children.cycles-pp.lock_page_memcg
      1.13 ±  9%      +0.6        1.77 ± 10%  perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
      0.82 ±  6%      +0.8        1.60 ± 13%  perf-profile.children.cycles-pp.page_remove_rmap
      3.05 ±  7%      +1.4        4.47 ± 10%  perf-profile.children.cycles-pp.mem_cgroup_charge
     10.54 ± 11%      +6.1       16.61 ±  6%  perf-profile.children.cycles-pp.pagevec_lru_move_fn
     10.67 ± 11%      +6.1       16.76 ±  6%  perf-profile.children.cycles-pp.lru_cache_add
     27.68 ±  4%      +8.7       36.42 ±  7%  perf-profile.children.cycles-pp.wp_page_copy
     14.65 ± 11%     +15.1       29.76 ± 14%  perf-profile.children.cycles-pp.unmap_vmas
     14.65 ± 11%     +15.1       29.76 ± 14%  perf-profile.children.cycles-pp.unmap_page_range
     14.64 ± 11%     +15.1       29.76 ± 14%  perf-profile.children.cycles-pp.zap_pte_range
     14.75 ± 11%     +15.2       29.91 ± 14%  perf-profile.children.cycles-pp.__x64_sys_exit_group
     14.75 ± 11%     +15.2       29.91 ± 14%  perf-profile.children.cycles-pp.do_group_exit
     14.75 ± 11%     +15.2       29.91 ± 14%  perf-profile.children.cycles-pp.do_exit
     14.75 ± 11%     +15.2       29.91 ± 14%  perf-profile.children.cycles-pp.mmput
     14.75 ± 11%     +15.2       29.91 ± 14%  perf-profile.children.cycles-pp.exit_mmap
     15.15 ± 11%     +15.2       30.35 ± 14%  perf-profile.children.cycles-pp.do_syscall_64
     15.15 ± 11%     +15.2       30.35 ± 14%  perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     14.07 ± 11%     +15.3       29.38 ± 14%  perf-profile.children.cycles-pp.tlb_flush_mmu
     14.09 ± 11%     +15.3       29.41 ± 14%  perf-profile.children.cycles-pp.release_pages
     22.48 ± 10%     +19.8       42.33 ±  8%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
     23.12 ± 10%     +20.0       43.17 ±  8%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
     10.39 ± 11%      -8.8        1.61 ± 29%  perf-profile.self.cycles-pp.intel_idle
      8.69 ±  2%      -8.6        0.05 ± 38%  perf-profile.self.cycles-pp.reuse_swap_page
      7.08 ±  3%      -1.0        6.09 ±  7%  perf-profile.self.cycles-pp.copy_page
      0.45 ±  4%      -0.4        0.07 ±  7%  perf-profile.self.cycles-pp._raw_spin_lock_irqsave
      0.48 ±  4%      -0.2        0.33 ±  5%  perf-profile.self.cycles-pp.__list_del_entry_valid
      0.33 ±  4%      -0.1        0.24 ±  9%  perf-profile.self.cycles-pp._raw_spin_lock
      0.17 ± 10%      -0.1        0.11 ± 11%  perf-profile.self.cycles-pp.zap_pte_range
      0.20 ±  5%      -0.1        0.15 ±  9%  perf-profile.self.cycles-pp._find_next_bit
      0.14 ±  5%      -0.0        0.11 ± 11%  perf-profile.self.cycles-pp.up_read
      0.16 ±  7%      -0.0        0.13 ±  9%  perf-profile.self.cycles-pp.vmacache_find
      0.14 ±  3%      -0.0        0.12 ±  7%  perf-profile.self.cycles-pp.___might_sleep
      0.13 ±  7%      +0.0        0.16 ±  8%  perf-profile.self.cycles-pp.lru_cache_add
      0.06 ±  9%      +0.1        0.13 ± 10%  perf-profile.self.cycles-pp.__unlock_page_memcg
      0.00            +0.1        0.09 ± 21%  perf-profile.self.cycles-pp.unlock_page_memcg
      0.47 ±  3%      +0.1        0.57 ±  8%  perf-profile.self.cycles-pp.page_add_new_anon_rmap
      0.36 ±  3%      +0.3        0.65 ± 11%  perf-profile.self.cycles-pp.page_remove_rmap
      0.22 ±  5%      +0.5        0.74 ± 22%  perf-profile.self.cycles-pp.lock_page_memcg
      0.44 ± 15%      +0.6        1.04 ± 12%  perf-profile.self.cycles-pp.mem_cgroup_charge
      1.12 ±  9%      +0.6        1.75 ± 10%  perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
      0.84 ±  3%      +0.8        1.65 ± 19%  perf-profile.self.cycles-pp.do_wp_page
      0.62 ±  2%      +1.0        1.61 ± 20%  perf-profile.self.cycles-pp.wp_page_copy
     23.12 ± 10%     +20.0       43.17 ±  8%  perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
  28859127           -99.3%     196613 ±  3%  interrupts.CAL:Function_call_interrupts
    236575 ± 11%     -99.1%       2094 ± 19%  interrupts.CPU0.CAL:Function_call_interrupts
      5873 ± 35%     -81.9%       1062 ± 14%  interrupts.CPU0.RES:Rescheduling_interrupts
    254146 ±  9%     -99.2%       2128 ± 22%  interrupts.CPU1.CAL:Function_call_interrupts
      5669 ± 38%     -89.5%     595.38 ± 39%  interrupts.CPU1.RES:Rescheduling_interrupts
    281815 ± 14%     -99.4%       1764 ± 10%  interrupts.CPU10.CAL:Function_call_interrupts
      6227 ± 41%     -91.3%     538.88 ± 26%  interrupts.CPU10.RES:Rescheduling_interrupts
    345264 ± 45%     -74.5%      88030 ± 45%  interrupts.CPU10.TLB:TLB_shootdowns
    257795 ± 11%     -99.2%       1989 ± 10%  interrupts.CPU100.CAL:Function_call_interrupts
      4587 ± 24%     +52.7%       7006 ±  5%  interrupts.CPU100.NMI:Non-maskable_interrupts
      4587 ± 24%     +52.7%       7006 ±  5%  interrupts.CPU100.PMI:Performance_monitoring_interrupts
      6223 ± 42%     -92.0%     495.31 ± 21%  interrupts.CPU100.RES:Rescheduling_interrupts
    254459 ± 13%     -99.2%       1927 ±  7%  interrupts.CPU101.CAL:Function_call_interrupts
      6364 ± 44%     -91.7%     526.31 ± 24%  interrupts.CPU101.RES:Rescheduling_interrupts
    262103 ± 16%     -99.3%       1919 ±  5%  interrupts.CPU102.CAL:Function_call_interrupts
      6446 ± 43%     -91.2%     565.69 ± 39%  interrupts.CPU102.RES:Rescheduling_interrupts
    207081 ± 37%     -69.1%      64047 ± 70%  interrupts.CPU102.TLB:TLB_shootdowns
    247274 ± 13%     -99.2%       1883 ±  8%  interrupts.CPU103.CAL:Function_call_interrupts
      6352 ± 42%     -91.6%     536.12 ± 24%  interrupts.CPU103.RES:Rescheduling_interrupts
    212327 ± 45%     -64.2%      76050 ± 59%  interrupts.CPU103.TLB:TLB_shootdowns
    272351 ± 16%     -99.3%       1916 ± 31%  interrupts.CPU11.CAL:Function_call_interrupts
      5962 ± 40%     -90.7%     555.50 ± 26%  interrupts.CPU11.RES:Rescheduling_interrupts
    377485 ± 35%     -76.1%      90096 ± 43%  interrupts.CPU11.TLB:TLB_shootdowns
    274253 ± 12%     -99.4%       1720 ± 11%  interrupts.CPU12.CAL:Function_call_interrupts
      3712 ± 37%     +61.6%       6001 ± 23%  interrupts.CPU12.NMI:Non-maskable_interrupts
      3712 ± 37%     +61.6%       6001 ± 23%  interrupts.CPU12.PMI:Performance_monitoring_interrupts
      5921 ± 36%     -90.8%     546.50 ± 27%  interrupts.CPU12.RES:Rescheduling_interrupts
    403722 ± 49%     -78.8%      85577 ± 44%  interrupts.CPU12.TLB:TLB_shootdowns
    270063 ± 18%     -99.3%       1775 ± 11%  interrupts.CPU13.CAL:Function_call_interrupts
      5957 ± 41%     -91.2%     524.12 ± 25%  interrupts.CPU13.RES:Rescheduling_interrupts
    363588 ± 40%     -78.3%      78783 ± 44%  interrupts.CPU13.TLB:TLB_shootdowns
    279866 ± 15%     -99.4%       1780 ± 10%  interrupts.CPU14.CAL:Function_call_interrupts
      6083 ± 39%     -91.3%     530.19 ± 28%  interrupts.CPU14.RES:Rescheduling_interrupts
    354391 ± 53%     -79.9%      71350 ± 60%  interrupts.CPU14.TLB:TLB_shootdowns
    278040 ± 14%     -99.3%       1809 ± 12%  interrupts.CPU15.CAL:Function_call_interrupts
      6099 ± 38%     -91.1%     544.56 ± 35%  interrupts.CPU15.RES:Rescheduling_interrupts
    272619 ± 17%     -99.3%       1855 ± 25%  interrupts.CPU16.CAL:Function_call_interrupts
      6278 ± 39%     -91.7%     518.44 ± 27%  interrupts.CPU16.RES:Rescheduling_interrupts
    276781 ± 15%     -99.3%       1858 ± 25%  interrupts.CPU17.CAL:Function_call_interrupts
      5994 ± 40%     -91.3%     520.69 ± 26%  interrupts.CPU17.RES:Rescheduling_interrupts
    354528 ± 48%     -79.6%      72213 ± 59%  interrupts.CPU17.TLB:TLB_shootdowns
    273028 ± 18%     -99.3%       1790 ± 13%  interrupts.CPU18.CAL:Function_call_interrupts
      6020 ± 37%     -91.4%     516.12 ± 28%  interrupts.CPU18.RES:Rescheduling_interrupts
    270901 ± 17%     -99.4%       1744 ± 11%  interrupts.CPU19.CAL:Function_call_interrupts
      6012 ± 37%     -91.1%     537.19 ± 35%  interrupts.CPU19.RES:Rescheduling_interrupts
    258570 ± 12%     -99.3%       1728 ± 14%  interrupts.CPU2.CAL:Function_call_interrupts
      5867 ± 39%     -89.4%     622.50 ± 53%  interrupts.CPU2.RES:Rescheduling_interrupts
    274427 ± 16%     -99.4%       1762 ± 14%  interrupts.CPU20.CAL:Function_call_interrupts
      5928 ± 38%     -91.0%     530.75 ± 33%  interrupts.CPU20.RES:Rescheduling_interrupts
    270596 ± 16%     -99.3%       1828 ± 13%  interrupts.CPU21.CAL:Function_call_interrupts
      6193 ± 37%     -92.0%     494.88 ± 26%  interrupts.CPU21.RES:Rescheduling_interrupts
    266280 ± 16%     -99.3%       1835 ± 13%  interrupts.CPU22.CAL:Function_call_interrupts
      6198 ± 38%     -91.7%     512.44 ± 31%  interrupts.CPU22.RES:Rescheduling_interrupts
    344139 ± 45%     -78.6%      73709 ± 44%  interrupts.CPU22.TLB:TLB_shootdowns
    262026 ± 13%     -99.3%       1869 ± 24%  interrupts.CPU23.CAL:Function_call_interrupts
      6176 ± 37%     -92.2%     483.81 ± 26%  interrupts.CPU23.RES:Rescheduling_interrupts
    258980 ± 15%     -99.3%       1754 ± 12%  interrupts.CPU24.CAL:Function_call_interrupts
      6205 ± 39%     -92.3%     480.81 ± 31%  interrupts.CPU24.RES:Rescheduling_interrupts
    333871 ± 37%     -74.2%      85996 ± 45%  interrupts.CPU24.TLB:TLB_shootdowns
    283396 ± 15%     -99.4%       1765 ± 13%  interrupts.CPU25.CAL:Function_call_interrupts
      6184 ± 37%     -92.1%     489.75 ± 28%  interrupts.CPU25.RES:Rescheduling_interrupts
    247688 ± 13%     -99.4%       1584 ±  6%  interrupts.CPU26.CAL:Function_call_interrupts
      5499 ± 37%     -89.2%     594.88 ± 23%  interrupts.CPU26.RES:Rescheduling_interrupts
    500203 ± 53%     -67.5%     162801 ± 96%  interrupts.CPU26.TLB:TLB_shootdowns
    259091 ± 19%     -99.3%       1874 ±  9%  interrupts.CPU27.CAL:Function_call_interrupts
      5973 ± 43%     -89.8%     606.62 ± 21%  interrupts.CPU27.RES:Rescheduling_interrupts
    444527 ± 70%     -78.1%      97165 ± 78%  interrupts.CPU27.TLB:TLB_shootdowns
    269264 ± 21%     -99.3%       2000 ± 20%  interrupts.CPU28.CAL:Function_call_interrupts
      5820 ± 45%     -89.3%     622.50 ± 27%  interrupts.CPU28.RES:Rescheduling_interrupts
    406902 ± 48%     -80.0%      81212 ± 72%  interrupts.CPU28.TLB:TLB_shootdowns
    268200 ± 20%     -99.3%       1945 ±  8%  interrupts.CPU29.CAL:Function_call_interrupts
      5921 ± 46%     -90.1%     588.50 ± 25%  interrupts.CPU29.RES:Rescheduling_interrupts
    373606 ± 59%     -84.8%      56709 ± 81%  interrupts.CPU29.TLB:TLB_shootdowns
    269262 ± 14%     -99.3%       1764 ± 11%  interrupts.CPU3.CAL:Function_call_interrupts
      5926 ± 37%     -91.1%     527.19 ± 19%  interrupts.CPU3.RES:Rescheduling_interrupts
    271528 ± 13%     -99.2%       2069 ± 21%  interrupts.CPU30.CAL:Function_call_interrupts
      5789 ± 43%     -90.0%     578.81 ± 19%  interrupts.CPU30.RES:Rescheduling_interrupts
    406837 ± 63%     -84.2%      64217 ± 77%  interrupts.CPU30.TLB:TLB_shootdowns
    261885 ± 19%     -99.2%       1984 ±  9%  interrupts.CPU31.CAL:Function_call_interrupts
      5815 ± 45%     -89.9%     587.12 ± 25%  interrupts.CPU31.RES:Rescheduling_interrupts
    449728 ± 54%     -87.3%      57267 ± 78%  interrupts.CPU31.TLB:TLB_shootdowns
    281457 ± 18%     -99.3%       2029 ± 10%  interrupts.CPU32.CAL:Function_call_interrupts
      6231 ± 45%     -90.1%     615.69 ± 37%  interrupts.CPU32.RES:Rescheduling_interrupts
    414606 ± 57%     -87.2%      53170 ± 65%  interrupts.CPU32.TLB:TLB_shootdowns
    276473 ± 18%     -99.3%       1925 ±  9%  interrupts.CPU33.CAL:Function_call_interrupts
      5979 ± 44%     -89.9%     606.62 ± 32%  interrupts.CPU33.RES:Rescheduling_interrupts
    340514 ± 52%     -84.6%      52396 ± 92%  interrupts.CPU33.TLB:TLB_shootdowns
    277365 ± 16%     -99.3%       1981 ±  9%  interrupts.CPU34.CAL:Function_call_interrupts
      6088 ± 43%     -91.0%     545.31 ± 24%  interrupts.CPU34.RES:Rescheduling_interrupts
    339744 ± 61%     -81.0%      64621 ± 63%  interrupts.CPU34.TLB:TLB_shootdowns
    280125 ± 17%     -99.3%       1911 ± 10%  interrupts.CPU35.CAL:Function_call_interrupts
      6203 ± 42%     -91.2%     546.25 ± 27%  interrupts.CPU35.RES:Rescheduling_interrupts
    369904 ± 59%     -87.1%      47869 ± 47%  interrupts.CPU35.TLB:TLB_shootdowns
    281759 ± 16%     -99.3%       2089 ± 10%  interrupts.CPU36.CAL:Function_call_interrupts
      6238 ± 42%     -90.8%     573.38 ± 32%  interrupts.CPU36.RES:Rescheduling_interrupts
    376171 ± 57%     -86.4%      51069 ± 63%  interrupts.CPU36.TLB:TLB_shootdowns
    289651 ± 16%     -99.3%       1930 ±  8%  interrupts.CPU37.CAL:Function_call_interrupts
      6270 ± 42%     -91.5%     532.56 ± 25%  interrupts.CPU37.RES:Rescheduling_interrupts
    341623 ± 45%     -84.3%      53804 ± 60%  interrupts.CPU37.TLB:TLB_shootdowns
    287179 ± 16%     -99.3%       2010 ± 23%  interrupts.CPU38.CAL:Function_call_interrupts
      5988 ± 42%     -91.0%     537.69 ± 27%  interrupts.CPU38.RES:Rescheduling_interrupts
    379793 ± 54%     -88.4%      44230 ± 60%  interrupts.CPU38.TLB:TLB_shootdowns
    270672 ± 15%     -99.3%       1926 ±  7%  interrupts.CPU39.CAL:Function_call_interrupts
      5956 ± 41%     -89.3%     636.00 ± 51%  interrupts.CPU39.RES:Rescheduling_interrupts
    272921 ± 14%     -99.4%       1751 ± 13%  interrupts.CPU4.CAL:Function_call_interrupts
      5961 ± 39%     -90.5%     564.62 ± 29%  interrupts.CPU4.RES:Rescheduling_interrupts
    270867 ± 16%     -99.3%       1966 ±  8%  interrupts.CPU40.CAL:Function_call_interrupts
      5974 ± 42%     -90.7%     553.56 ± 29%  interrupts.CPU40.RES:Rescheduling_interrupts
    363839 ± 55%     -87.1%      47093 ± 59%  interrupts.CPU40.TLB:TLB_shootdowns
    291528 ± 17%     -99.3%       2058 ± 11%  interrupts.CPU41.CAL:Function_call_interrupts
      6193 ± 41%     -90.5%     588.38 ± 36%  interrupts.CPU41.RES:Rescheduling_interrupts
    350894 ± 55%     -86.7%      46677 ± 66%  interrupts.CPU41.TLB:TLB_shootdowns
    267317 ± 14%     -99.2%       2071 ± 12%  interrupts.CPU42.CAL:Function_call_interrupts
      5873 ± 43%     -90.6%     553.19 ± 24%  interrupts.CPU42.RES:Rescheduling_interrupts
    317772 ± 45%     -84.0%      50967 ± 52%  interrupts.CPU42.TLB:TLB_shootdowns
    294788 ± 16%     -99.3%       2120 ± 33%  interrupts.CPU43.CAL:Function_call_interrupts
      6345 ± 45%     -91.2%     555.75 ± 25%  interrupts.CPU43.RES:Rescheduling_interrupts
    324788 ± 41%     -86.5%      43783 ± 55%  interrupts.CPU43.TLB:TLB_shootdowns
    295855 ± 15%     -99.3%       2028 ± 13%  interrupts.CPU44.CAL:Function_call_interrupts
      6121 ± 43%     -90.9%     559.75 ± 33%  interrupts.CPU44.RES:Rescheduling_interrupts
    325984 ± 56%     -86.9%      42813 ± 62%  interrupts.CPU44.TLB:TLB_shootdowns
    293053 ± 16%     -99.4%       1858 ± 10%  interrupts.CPU45.CAL:Function_call_interrupts
      6088 ± 42%     -91.6%     510.94 ± 25%  interrupts.CPU45.RES:Rescheduling_interrupts
    305496 ± 49%     -85.6%      43977 ± 73%  interrupts.CPU45.TLB:TLB_shootdowns
    283413 ± 16%     -99.3%       1980 ± 17%  interrupts.CPU46.CAL:Function_call_interrupts
      6263 ± 41%     -91.6%     527.81 ± 23%  interrupts.CPU46.RES:Rescheduling_interrupts
    339644 ± 46%     -85.9%      48054 ± 62%  interrupts.CPU46.TLB:TLB_shootdowns
    283190 ± 16%     -99.2%       2152 ± 42%  interrupts.CPU47.CAL:Function_call_interrupts
      6090 ± 42%     -91.3%     530.81 ± 21%  interrupts.CPU47.RES:Rescheduling_interrupts
    305344 ± 31%     -80.6%      59186 ± 62%  interrupts.CPU47.TLB:TLB_shootdowns
    285498 ± 17%     -99.3%       2000 ±  9%  interrupts.CPU48.CAL:Function_call_interrupts
      4830 ± 24%     +44.3%       6971 ±  4%  interrupts.CPU48.NMI:Non-maskable_interrupts
      4830 ± 24%     +44.3%       6971 ±  4%  interrupts.CPU48.PMI:Performance_monitoring_interrupts
      6163 ± 47%     -90.6%     579.38 ± 31%  interrupts.CPU48.RES:Rescheduling_interrupts
    374123 ± 40%     -87.2%      47885 ± 74%  interrupts.CPU48.TLB:TLB_shootdowns
    283342 ± 12%     -99.3%       1992 ±  8%  interrupts.CPU49.CAL:Function_call_interrupts
      6182 ± 42%     -91.5%     528.00 ± 31%  interrupts.CPU49.RES:Rescheduling_interrupts
    296317 ± 56%     -84.9%      44774 ± 80%  interrupts.CPU49.TLB:TLB_shootdowns
    269975 ± 15%     -99.3%       1773 ± 16%  interrupts.CPU5.CAL:Function_call_interrupts
      6231 ± 40%     -91.7%     520.00 ± 22%  interrupts.CPU5.RES:Rescheduling_interrupts
    279242 ± 14%     -99.3%       1984 ±  8%  interrupts.CPU50.CAL:Function_call_interrupts
      6206 ± 43%     -91.2%     543.81 ± 26%  interrupts.CPU50.RES:Rescheduling_interrupts
    287745 ± 53%     -80.0%      57453 ± 95%  interrupts.CPU50.TLB:TLB_shootdowns
    284642 ± 10%     -99.3%       1949 ±  7%  interrupts.CPU51.CAL:Function_call_interrupts
      6153 ± 41%     -91.1%     546.06 ± 25%  interrupts.CPU51.RES:Rescheduling_interrupts
    350304 ± 47%     -86.0%      49176 ± 61%  interrupts.CPU51.TLB:TLB_shootdowns
    269390 ± 11%     -99.3%       1809 ± 30%  interrupts.CPU52.CAL:Function_call_interrupts
      5893 ± 39%     -92.5%     443.12 ± 29%  interrupts.CPU52.RES:Rescheduling_interrupts
    278682 ± 14%     -99.2%       2144 ± 71%  interrupts.CPU53.CAL:Function_call_interrupts
      6089 ± 40%     -92.0%     484.62 ± 34%  interrupts.CPU53.RES:Rescheduling_interrupts
    282560 ± 13%     -99.4%       1767 ± 14%  interrupts.CPU54.CAL:Function_call_interrupts
      6112 ± 40%     -91.9%     495.56 ± 27%  interrupts.CPU54.RES:Rescheduling_interrupts
    278386 ± 13%     -99.4%       1807 ± 11%  interrupts.CPU55.CAL:Function_call_interrupts
      5862 ± 37%     -92.2%     458.06 ± 19%  interrupts.CPU55.RES:Rescheduling_interrupts
    278461 ±  8%     -99.3%       1912 ± 33%  interrupts.CPU56.CAL:Function_call_interrupts
      6093 ± 37%     -92.8%     439.75 ± 25%  interrupts.CPU56.RES:Rescheduling_interrupts
    205528 ± 20%     -54.0%      94511 ± 48%  interrupts.CPU56.TLB:TLB_shootdowns
    296955 ± 15%     -99.4%       1805 ± 19%  interrupts.CPU57.CAL:Function_call_interrupts
      6130 ± 42%     -92.6%     453.56 ± 28%  interrupts.CPU57.RES:Rescheduling_interrupts
    282967 ± 12%     -99.4%       1772 ± 15%  interrupts.CPU58.CAL:Function_call_interrupts
      6156 ± 38%     -92.4%     469.56 ± 26%  interrupts.CPU58.RES:Rescheduling_interrupts
    278009 ± 11%     -99.4%       1762 ± 14%  interrupts.CPU59.CAL:Function_call_interrupts
      6070 ± 36%     -92.6%     448.19 ± 27%  interrupts.CPU59.RES:Rescheduling_interrupts
    219405 ± 50%     -64.5%      77887 ± 50%  interrupts.CPU59.TLB:TLB_shootdowns
    270342 ± 14%     -99.4%       1745 ± 16%  interrupts.CPU6.CAL:Function_call_interrupts
      5904 ± 38%     -91.1%     528.25 ± 26%  interrupts.CPU6.RES:Rescheduling_interrupts
    283384 ± 12%     -99.4%       1825 ± 13%  interrupts.CPU60.CAL:Function_call_interrupts
      6235 ± 37%     -92.8%     450.44 ± 26%  interrupts.CPU60.RES:Rescheduling_interrupts
    245711 ± 42%     -68.2%      78157 ± 57%  interrupts.CPU60.TLB:TLB_shootdowns
    275351 ± 11%     -99.3%       1831 ±  9%  interrupts.CPU61.CAL:Function_call_interrupts
      6107 ± 39%     -92.2%     479.00 ± 34%  interrupts.CPU61.RES:Rescheduling_interrupts
    280718 ± 13%     -99.4%       1769 ± 11%  interrupts.CPU62.CAL:Function_call_interrupts
      6319 ± 41%     -92.9%     450.06 ± 28%  interrupts.CPU62.RES:Rescheduling_interrupts
    282203 ± 12%     -99.4%       1746 ± 13%  interrupts.CPU63.CAL:Function_call_interrupts
      6069 ± 36%     -92.8%     439.25 ± 32%  interrupts.CPU63.RES:Rescheduling_interrupts
    298698 ± 13%     -99.4%       1710 ± 11%  interrupts.CPU64.CAL:Function_call_interrupts
      6409 ± 37%     -92.8%     459.75 ± 34%  interrupts.CPU64.RES:Rescheduling_interrupts
    278523 ± 13%     -99.4%       1763 ± 15%  interrupts.CPU65.CAL:Function_call_interrupts
      6290 ± 38%     -92.8%     451.38 ± 27%  interrupts.CPU65.RES:Rescheduling_interrupts
    266089 ± 31%     -69.7%      80607 ± 52%  interrupts.CPU65.TLB:TLB_shootdowns
    287640 ± 11%     -99.4%       1737 ± 12%  interrupts.CPU66.CAL:Function_call_interrupts
      6394 ± 39%     -93.0%     446.25 ± 31%  interrupts.CPU66.RES:Rescheduling_interrupts
    292804 ± 20%     -99.4%       1814 ± 18%  interrupts.CPU67.CAL:Function_call_interrupts
      6318 ± 41%     -93.0%     441.12 ± 32%  interrupts.CPU67.RES:Rescheduling_interrupts
    277669 ± 16%     -99.3%       1807 ± 14%  interrupts.CPU68.CAL:Function_call_interrupts
      6346 ± 38%     -92.4%     485.12 ± 43%  interrupts.CPU68.RES:Rescheduling_interrupts
    221011 ± 37%     -64.9%      77605 ± 51%  interrupts.CPU68.TLB:TLB_shootdowns
    283218 ± 16%     -99.4%       1732 ± 13%  interrupts.CPU69.CAL:Function_call_interrupts
      6024 ± 38%     -92.5%     449.38 ± 30%  interrupts.CPU69.RES:Rescheduling_interrupts
    273986 ± 15%     -99.4%       1749 ± 11%  interrupts.CPU7.CAL:Function_call_interrupts
      5865 ± 41%     -91.0%     527.56 ± 22%  interrupts.CPU7.RES:Rescheduling_interrupts
    383011 ± 40%     -72.6%     104799 ± 46%  interrupts.CPU7.TLB:TLB_shootdowns
    284794 ± 16%     -99.4%       1715 ± 11%  interrupts.CPU70.CAL:Function_call_interrupts
      6067 ± 41%     -93.1%     420.88 ± 27%  interrupts.CPU70.RES:Rescheduling_interrupts
    301229 ± 14%     -99.4%       1760 ± 13%  interrupts.CPU71.CAL:Function_call_interrupts
      6022 ± 37%     -92.9%     427.94 ± 30%  interrupts.CPU71.RES:Rescheduling_interrupts
    206452 ± 46%     -62.3%      77816 ± 39%  interrupts.CPU71.TLB:TLB_shootdowns
    285872 ± 17%     -99.4%       1737 ± 11%  interrupts.CPU72.CAL:Function_call_interrupts
      6208 ± 39%     -93.1%     431.00 ± 26%  interrupts.CPU72.RES:Rescheduling_interrupts
    225654 ± 44%     -65.2%      78634 ± 49%  interrupts.CPU72.TLB:TLB_shootdowns
    267068 ± 15%     -99.3%       1958 ± 40%  interrupts.CPU73.CAL:Function_call_interrupts
      6509 ± 41%     -92.0%     518.50 ± 40%  interrupts.CPU73.RES:Rescheduling_interrupts
    255758 ± 12%     -99.3%       1751 ±  9%  interrupts.CPU74.CAL:Function_call_interrupts
      6423 ± 40%     -92.1%     509.50 ± 41%  interrupts.CPU74.RES:Rescheduling_interrupts
    214026 ± 51%     -59.0%      87732 ± 45%  interrupts.CPU74.TLB:TLB_shootdowns
    261002 ± 13%     -99.3%       1766 ±  9%  interrupts.CPU75.CAL:Function_call_interrupts
      6359 ± 39%     -92.8%     458.00 ± 31%  interrupts.CPU75.RES:Rescheduling_interrupts
    255352 ± 10%     -99.3%       1721 ± 16%  interrupts.CPU76.CAL:Function_call_interrupts
      6310 ± 38%     -93.1%     437.00 ± 31%  interrupts.CPU76.RES:Rescheduling_interrupts
    262224 ± 12%     -99.3%       1934 ± 36%  interrupts.CPU77.CAL:Function_call_interrupts
      6459 ± 41%     -92.0%     515.56 ± 39%  interrupts.CPU77.RES:Rescheduling_interrupts
    282389 ± 12%     -99.4%       1587 ± 14%  interrupts.CPU78.CAL:Function_call_interrupts
      5996 ± 43%     -92.3%     459.44 ± 22%  interrupts.CPU78.RES:Rescheduling_interrupts
    199766 ± 58%     -78.4%      43049 ± 60%  interrupts.CPU78.TLB:TLB_shootdowns
    291051 ± 17%     -99.3%       1996 ±  9%  interrupts.CPU79.CAL:Function_call_interrupts
      6044 ± 42%     -91.3%     525.19 ± 20%  interrupts.CPU79.RES:Rescheduling_interrupts
    180645 ± 51%     -80.1%      35874 ± 49%  interrupts.CPU79.TLB:TLB_shootdowns
    258588 ± 14%     -99.2%       1953 ± 32%  interrupts.CPU8.CAL:Function_call_interrupts
      5952 ± 39%     -91.0%     534.56 ± 24%  interrupts.CPU8.RES:Rescheduling_interrupts
    286661 ± 15%     -99.3%       1929 ±  7%  interrupts.CPU80.CAL:Function_call_interrupts
      6102 ± 45%     -91.2%     538.94 ± 25%  interrupts.CPU80.RES:Rescheduling_interrupts
    229221 ± 54%     -81.0%      43445 ± 72%  interrupts.CPU80.TLB:TLB_shootdowns
    294718 ± 14%     -99.3%       1983 ±  7%  interrupts.CPU81.CAL:Function_call_interrupts
      6417 ± 42%     -92.0%     510.94 ± 18%  interrupts.CPU81.RES:Rescheduling_interrupts
    228030 ± 57%     -83.8%      36977 ± 54%  interrupts.CPU81.TLB:TLB_shootdowns
    282427 ± 16%     -99.3%       2103 ± 20%  interrupts.CPU82.CAL:Function_call_interrupts
      6011 ± 45%     -90.1%     595.38 ± 41%  interrupts.CPU82.RES:Rescheduling_interrupts
    247992 ± 34%     -83.5%      40801 ± 67%  interrupts.CPU82.TLB:TLB_shootdowns
    285126 ± 16%     -99.3%       1940 ±  9%  interrupts.CPU83.CAL:Function_call_interrupts
      4710 ± 23%     +49.0%       7016 ±  5%  interrupts.CPU83.NMI:Non-maskable_interrupts
      4710 ± 23%     +49.0%       7016 ±  5%  interrupts.CPU83.PMI:Performance_monitoring_interrupts
      6066 ± 44%     -91.8%     494.50 ± 23%  interrupts.CPU83.RES:Rescheduling_interrupts
    212594 ± 45%     -83.9%      34328 ± 87%  interrupts.CPU83.TLB:TLB_shootdowns
    294133 ± 13%     -99.3%       1973 ± 12%  interrupts.CPU84.CAL:Function_call_interrupts
      6151 ± 43%     -91.7%     511.25 ± 22%  interrupts.CPU84.RES:Rescheduling_interrupts
    170484 ± 75%     -81.7%      31255 ± 72%  interrupts.CPU84.TLB:TLB_shootdowns
    296658 ± 20%     -99.3%       1949 ±  7%  interrupts.CPU85.CAL:Function_call_interrupts
      6073 ± 45%     -91.5%     513.88 ± 20%  interrupts.CPU85.RES:Rescheduling_interrupts
    209309 ± 51%     -79.7%      42480 ± 60%  interrupts.CPU85.TLB:TLB_shootdowns
    283092 ± 18%     -99.3%       1988 ± 10%  interrupts.CPU86.CAL:Function_call_interrupts
      6049 ± 46%     -90.7%     562.12 ± 34%  interrupts.CPU86.RES:Rescheduling_interrupts
    233435 ± 47%     -83.7%      38026 ± 66%  interrupts.CPU86.TLB:TLB_shootdowns
    274265 ± 16%     -99.3%       1957 ±  8%  interrupts.CPU87.CAL:Function_call_interrupts
      6042 ± 43%     -91.7%     500.75 ± 24%  interrupts.CPU87.RES:Rescheduling_interrupts
    194705 ± 53%     -76.6%      45647 ± 69%  interrupts.CPU87.TLB:TLB_shootdowns
    284202 ± 13%     -99.3%       1992 ± 10%  interrupts.CPU88.CAL:Function_call_interrupts
      4866 ± 25%     +43.1%       6962 ±  5%  interrupts.CPU88.NMI:Non-maskable_interrupts
      4866 ± 25%     +43.1%       6962 ±  5%  interrupts.CPU88.PMI:Performance_monitoring_interrupts
      6041 ± 41%     -91.6%     505.06 ± 18%  interrupts.CPU88.RES:Rescheduling_interrupts
    228907 ± 34%     -80.1%      45542 ± 49%  interrupts.CPU88.TLB:TLB_shootdowns
    292926 ± 14%     -99.3%       1948 ±  8%  interrupts.CPU89.CAL:Function_call_interrupts
      5920 ± 42%     -91.0%     531.94 ± 27%  interrupts.CPU89.RES:Rescheduling_interrupts
    177476 ± 37%     -77.7%      39596 ± 77%  interrupts.CPU89.TLB:TLB_shootdowns
    260528 ± 14%     -99.3%       1798 ± 13%  interrupts.CPU9.CAL:Function_call_interrupts
      6035 ± 38%     -91.2%     531.56 ± 23%  interrupts.CPU9.RES:Rescheduling_interrupts
    299905 ± 15%     -99.3%       2151 ± 48%  interrupts.CPU90.CAL:Function_call_interrupts
      6125 ± 46%     -92.0%     487.00 ± 25%  interrupts.CPU90.RES:Rescheduling_interrupts
    216493 ± 45%     -78.3%      46905 ± 59%  interrupts.CPU90.TLB:TLB_shootdowns
    280441 ± 17%     -99.3%       1944 ±  7%  interrupts.CPU91.CAL:Function_call_interrupts
      6320 ± 46%     -91.8%     518.31 ± 28%  interrupts.CPU91.RES:Rescheduling_interrupts
    210306 ± 43%     -80.2%      41673 ± 78%  interrupts.CPU91.TLB:TLB_shootdowns
    284700 ± 16%     -99.3%       2034 ±  8%  interrupts.CPU92.CAL:Function_call_interrupts
      6027 ± 42%     -91.0%     542.06 ± 29%  interrupts.CPU92.RES:Rescheduling_interrupts
    157865 ± 45%     -67.1%      51966 ± 70%  interrupts.CPU92.TLB:TLB_shootdowns
    289762 ± 18%     -99.3%       1999 ±  9%  interrupts.CPU93.CAL:Function_call_interrupts
      6257 ± 43%     -91.8%     512.62 ± 22%  interrupts.CPU93.RES:Rescheduling_interrupts
    193767 ± 52%     -70.6%      56953 ± 58%  interrupts.CPU93.TLB:TLB_shootdowns
    291026 ± 14%     -99.3%       2135 ± 17%  interrupts.CPU94.CAL:Function_call_interrupts
      6160 ± 46%     -90.9%     559.62 ± 27%  interrupts.CPU94.RES:Rescheduling_interrupts
    192657 ± 34%     -73.3%      51503 ± 69%  interrupts.CPU94.TLB:TLB_shootdowns
    295831 ± 16%     -99.3%       1927 ±  7%  interrupts.CPU95.CAL:Function_call_interrupts
      6151 ± 43%     -91.8%     504.56 ± 23%  interrupts.CPU95.RES:Rescheduling_interrupts
    191279 ± 37%     -77.0%      43973 ± 55%  interrupts.CPU95.TLB:TLB_shootdowns
    291597 ± 20%     -99.3%       1949 ±  9%  interrupts.CPU96.CAL:Function_call_interrupts
      6064 ± 44%     -91.9%     490.88 ± 22%  interrupts.CPU96.RES:Rescheduling_interrupts
    189614 ± 48%     -73.2%      50736 ± 78%  interrupts.CPU96.TLB:TLB_shootdowns
    298823 ± 14%     -99.4%       1890 ± 11%  interrupts.CPU97.CAL:Function_call_interrupts
      6053 ± 41%     -91.2%     532.81 ± 28%  interrupts.CPU97.RES:Rescheduling_interrupts
    261476 ± 38%     -83.3%      43773 ± 61%  interrupts.CPU97.TLB:TLB_shootdowns
    290625 ± 17%     -99.3%       1929 ±  7%  interrupts.CPU98.CAL:Function_call_interrupts
      6051 ± 44%     -91.6%     510.56 ± 22%  interrupts.CPU98.RES:Rescheduling_interrupts
    220709 ± 48%     -78.6%      47320 ± 71%  interrupts.CPU98.TLB:TLB_shootdowns
    282786 ± 14%     -99.3%       1968 ±  9%  interrupts.CPU99.CAL:Function_call_interrupts
      6157 ± 42%     -91.9%     497.19 ± 18%  interrupts.CPU99.RES:Rescheduling_interrupts
    509956 ±  3%     +29.7%     661249 ±  4%  interrupts.NMI:Non-maskable_interrupts
    509956 ±  3%     +29.7%     661249 ±  4%  interrupts.PMI:Performance_monitoring_interrupts
    635122 ±  3%     -91.4%      54578 ±  3%  interrupts.RES:Rescheduling_interrupts
  30211373 ± 39%     -74.6%    7677486 ± 39%  interrupts.TLB:TLB_shootdowns


                                                                                
                         vm-scalability.time.minor_page_faults                  
                                                                                
  1.9e+09 +-----------------------------------------------------------------+   
          |                                                                 |   
  1.8e+09 |OOOOOOOOO OOOOOO OOOO O OOOOO O OOOOOO O O  OO  O OOOOOO         |   
          |         O      O    O O     O O      O O OO OOO O               |   
  1.7e+09 |-+                                                               |   
          |                                                                 |   
  1.6e+09 |-+                                                               |   
          |                                                                 |   
  1.5e+09 |-+                                                               |   
          |                                       + +        +   + +        |   
  1.4e+09 |-+                                  +++ + ++++++++ +++ + ++++++++|   
          |                                    :                            |   
  1.3e+09 |-+                                 :                             |   
          |+ + ++    + + ++++++++++++ +++++++++                             |   
  1.2e+09 +-----------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                    vm-scalability.time.voluntary_context_switches              
                                                                                
  1.8e+08 +-----------------------------------------------------------------+   
          |                                    +              +             |   
  1.6e+08 |-+                                  :++ +++++++++++ ++ ++++ +++++|   
  1.4e+08 |-+                                  :  +     +        +    +     |   
          |                                    :                            |   
  1.2e+08 |-+                                 :                             |   
    1e+08 |-+                      +          :                             |   
          |++++++++++++++++++++++++ +++++++++++                             |   
    8e+07 |-+                                                               |   
    6e+07 |-+                                                               |   
          |                                                                 |   
    4e+07 |-+                                                               |   
    2e+07 |-+                                                               |   
          |                                                                 |   
        0 +-----------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                               vm-scalability.throughput                        
                                                                                
    4e+07 +-----------------------------------------------------------------+   
          |O O OOOOO O O OO O                                               |   
  3.8e+07 |-O O     O O O  O OOO OOOOOOO OOOOOOOO OOOO OOOOO OOOOOO         |   
          |                     O       O        O    O O   O               |   
  3.6e+07 |-+                                                               |   
          |                                                                 |   
  3.4e+07 |-+                                                               |   
          |                                                                 |   
  3.2e+07 |-+                                                               |   
          |                                                                 |   
    3e+07 |-+                                     +          +   +          |   
          |                                    +++ ++++++++++ +++ ++++++++++|   
  2.8e+07 |-+            ++      +  +         +                             |   
          |++++++++++++++  ++++++ ++ +++++++++                              |   
  2.6e+07 +-----------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                                vm-scalability.median                           
                                                                                
  380000 +------------------------------------------------------------------+   
         | O OO   OOOO O OO OOOO OOOOOO OOOOOOOOO OO  O O    OOOOOO         |   
  360000 |-+                    O      O         O  OO O OOOO               |   
  340000 |-+                                                                |   
         |                                                                  |   
  320000 |-+                                                                |   
         |                                                                  |   
  300000 |-+                                                                |   
         |                                                                  |   
  280000 |-+                                      ++++ +     +   + ++ +     |   
  260000 |-+                                   +++ +  + +++++ +++ +  + +++++|   
         |                                     :                            |   
  240000 |-++       + + ++ + +  ++ + +    +++ +                             |   
         |++ +++++++ + +  + + ++  + + ++++   +                              |   
  220000 +------------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                                vm-scalability.workload                         
                                                                                
  8.5e+09 +-----------------------------------------------------------------+   
          |                                                                 |   
    8e+09 |OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO OOOOOOOOOOOO         |   
          |                                           O                     |   
  7.5e+09 |-+                                                               |   
          |                                                                 |   
    7e+09 |-+                                                               |   
          |                                                                 |   
  6.5e+09 |-+                                     + +        +   + +        |   
          |                                    +++ + ++++++++ +++ + ++++++++|   
    6e+09 |-+                                  :                            |   
          |                                   :                             |   
  5.5e+09 |++++++++++++++++++++++++++++++++++++                             |   
          |                                                                 |   
    5e+09 +-----------------------------------------------------------------+   
                                                                                
                                                                                
[*] bisect-good sample
[O] bisect-bad  sample

***************************************************************************************************
lkp-cfl-e1: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/512G/lkp-cfl-e1/anon-cow-rand/vm-scalability/0xd6

commit: 
  v5.8
  09854ba94c ("mm: do_wp_page() simplification")

            v5.8 09854ba94c6aad7886996bfbee2 
---------------- --------------------------- 
       fail:runs  %reproduction    fail:runs
           |             |             |    
         34:7         -488%            :16    perf-profile.calltrace.cycles-pp.error_entry
         35:7         -386%           8:16    perf-profile.children.cycles-pp.error_entry
         %stddev     %change         %stddev
             \          |                \  
     53578            +7.6%      57674        vm-scalability.median
    857728            +7.7%     923526        vm-scalability.throughput
  57967803            +6.7%   61831024        vm-scalability.time.minor_page_faults
    716.06           -40.6%     425.06        vm-scalability.time.system_time
      4173            +6.3%       4435        vm-scalability.time.user_time
   1036150           -99.5%       4761        vm-scalability.time.voluntary_context_switches
 2.606e+08            +6.7%   2.78e+08        vm-scalability.workload
     14.46            -5.8        8.69        mpstat.cpu.all.sys%
     22.66            +1.6%      23.03        boot-time.boot
    305.13            +2.5%     312.76        boot-time.idle
      1250           -11.9%       1101        meminfo.Inactive(file)
     18156           -11.1%      16149        meminfo.VmallocUsed
     82.00            +7.3%      88.00        vmstat.cpu.us
      7411           -86.6%     992.31 ±  3%  vmstat.system.cs
     34030            -2.8%      33088        vmstat.system.in
    159333 ±  2%     -97.1%       4626 ±109%  cpuidle.C1.usage
    247724 ±  5%     -53.1%     116196 ± 12%  cpuidle.C1E.time
     14950 ±  8%     -87.5%       1865 ±  7%  cpuidle.C1E.usage
   2940981           -99.5%      15906 ± 20%  cpuidle.POLL.time
    662285           -99.4%       3870 ±  9%  cpuidle.POLL.usage
    104.14 ±  9%    -100.0%       0.00        slabinfo.btrfs_inode.active_objs
    104.14 ±  9%    -100.0%       0.00        slabinfo.btrfs_inode.num_objs
    150.43 ± 16%     -74.1%      39.00        slabinfo.buffer_head.active_objs
    150.43 ± 16%     -74.1%      39.00        slabinfo.buffer_head.num_objs
     88.86 ± 46%    +200.2%     266.75 ±  9%  slabinfo.xfs_buf.active_objs
     88.86 ± 46%    +200.2%     266.75 ±  9%  slabinfo.xfs_buf.num_objs
     20730 ±  4%     -76.3%       4918 ± 14%  softirqs.CPU0.SCHED
     19150           -77.7%       4269 ± 12%  softirqs.CPU1.SCHED
     18436           -83.3%       3082 ±  8%  softirqs.CPU10.SCHED
     18360           -83.9%       2957 ±  4%  softirqs.CPU11.SCHED
     18413           -84.0%       2948 ±  8%  softirqs.CPU12.SCHED
     18439 ±  2%     -84.2%       2911 ±  8%  softirqs.CPU13.SCHED
     18210           -84.7%       2786 ±  9%  softirqs.CPU14.SCHED
     18399 ±  2%     -83.5%       3029 ±  9%  softirqs.CPU15.SCHED
     18361           -82.0%       3305 ±  6%  softirqs.CPU2.SCHED
     18560 ±  3%     -83.0%       3153 ± 12%  softirqs.CPU3.SCHED
     18497 ±  2%     -83.0%       3144 ± 16%  softirqs.CPU4.SCHED
     18260           -84.5%       2839 ± 10%  softirqs.CPU5.SCHED
     18194           -83.4%       3019 ±  9%  softirqs.CPU6.SCHED
     18213           -84.0%       2917 ±  9%  softirqs.CPU7.SCHED
     18503 ±  2%     -82.4%       3247 ± 11%  softirqs.CPU8.SCHED
     18262           -83.2%       3061 ±  8%  softirqs.CPU9.SCHED
    296998           -82.6%      51595        softirqs.SCHED
    284640           -98.3%       4863 ± 22%  interrupts.CAL:Function_call_interrupts
     17552 ±  4%     -97.8%     388.25 ± 47%  interrupts.CPU0.CAL:Function_call_interrupts
     17492 ±  4%     -96.8%     555.81 ± 64%  interrupts.CPU1.CAL:Function_call_interrupts
     17333 ±  4%     -98.9%     188.88 ± 47%  interrupts.CPU10.CAL:Function_call_interrupts
     17308 ±  3%     -98.9%     195.94 ± 47%  interrupts.CPU11.CAL:Function_call_interrupts
     18119 ±  5%     -99.0%     189.06 ± 42%  interrupts.CPU12.CAL:Function_call_interrupts
     18200 ±  2%     -98.9%     201.12 ± 57%  interrupts.CPU13.CAL:Function_call_interrupts
     18330 ±  2%     -98.9%     200.19 ± 44%  interrupts.CPU14.CAL:Function_call_interrupts
     18143 ±  3%     -99.1%     169.06 ± 51%  interrupts.CPU15.CAL:Function_call_interrupts
     17191 ±  4%     -97.5%     426.38 ± 43%  interrupts.CPU2.CAL:Function_call_interrupts
     17100 ±  3%     -97.8%     370.50 ± 31%  interrupts.CPU3.CAL:Function_call_interrupts
     17880 ±  4%     -98.0%     358.12 ± 31%  interrupts.CPU4.CAL:Function_call_interrupts
     18097 ±  6%     -97.9%     387.44 ± 41%  interrupts.CPU5.CAL:Function_call_interrupts
     18283 ±  3%     -97.6%     437.06 ± 37%  interrupts.CPU6.CAL:Function_call_interrupts
     18517 ±  2%     -97.8%     407.75 ± 33%  interrupts.CPU7.CAL:Function_call_interrupts
     17901 ±  2%     -99.1%     169.56 ± 35%  interrupts.CPU8.CAL:Function_call_interrupts
     17189 ±  4%     -98.7%     218.56 ± 45%  interrupts.CPU9.CAL:Function_call_interrupts
     44209 ±  5%     -16.3%      36983 ±  4%  interrupts.RES:Rescheduling_interrupts
    173.86 ± 48%     -73.4%      46.19 ± 16%  interrupts.TLB:TLB_shootdowns
   3810358            +3.1%    3929986        proc-vmstat.nr_active_anon
   3805080            +3.2%    3924954        proc-vmstat.nr_anon_pages
    401615            -3.1%     388972        proc-vmstat.nr_dirty_background_threshold
    804213            -3.1%     778897        proc-vmstat.nr_dirty_threshold
    246012            +3.0%     253497        proc-vmstat.nr_file_pages
   4088131            -3.1%    3963245        proc-vmstat.nr_free_pages
    312.29           -12.0%     274.88        proc-vmstat.nr_inactive_file
      8771            +2.7%       9007        proc-vmstat.nr_shmem
     19793            -6.6%      18483        proc-vmstat.nr_slab_unreclaimable
    236921            +3.1%     244216        proc-vmstat.nr_unevictable
   3810358            +3.1%    3929986        proc-vmstat.nr_zone_active_anon
    312.29           -12.0%     274.88        proc-vmstat.nr_zone_inactive_file
    236921            +3.1%     244216        proc-vmstat.nr_zone_unevictable
  58418825            +6.6%   62295794        proc-vmstat.numa_hit
  58418825            +6.6%   62295794        proc-vmstat.numa_local
      4252 ±  3%     +10.0%       4679 ±  4%  proc-vmstat.pgactivate
  62058733            +6.6%   66174824        proc-vmstat.pgalloc_normal
  58391274            +6.6%   62255253        proc-vmstat.pgfault
  60704133            +7.0%   64973361 ±  2%  proc-vmstat.pgfree
      7054            +6.7%       7526        proc-vmstat.thp_fault_alloc
    112868            +6.7%     120421        proc-vmstat.thp_split_pmd
     26275 ±  3%     +13.5%      29811 ±  4%  sched_debug.cfs_rq:/.min_vruntime.stddev
     26260 ±  3%     +13.5%      29798 ±  4%  sched_debug.cfs_rq:/.spread0.stddev
    820.18           -70.3%     243.76 ±  3%  sched_debug.cfs_rq:/.util_est_enqueued.avg
      1019 ±  6%     -38.4%     628.00 ±  2%  sched_debug.cfs_rq:/.util_est_enqueued.max
    767.76           -86.2%     105.58 ± 16%  sched_debug.cfs_rq:/.util_est_enqueued.min
     70.19 ± 20%     +90.0%     133.34 ±  5%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
     62894 ± 26%    +250.8%     220614 ±  5%  sched_debug.cpu.avg_idle.avg
      7223 ± 45%    +531.2%      45599 ± 18%  sched_debug.cpu.avg_idle.min
     76934           -82.0%      13864 ±  2%  sched_debug.cpu.nr_switches.avg
     87426           -68.6%      27489 ± 12%  sched_debug.cpu.nr_switches.max
     69676           -91.2%       6097 ± 12%  sched_debug.cpu.nr_switches.min
     73009           -86.3%       9994 ±  3%  sched_debug.cpu.sched_count.avg
     80302           -77.6%      18022 ±  8%  sched_debug.cpu.sched_count.max
     67053 ±  2%     -93.7%       4253 ± 14%  sched_debug.cpu.sched_count.min
     32175           -98.6%     436.32 ±  6%  sched_debug.cpu.sched_goidle.avg
     33641           -95.5%       1509 ±  7%  sched_debug.cpu.sched_goidle.max
     30062           -99.3%     199.53 ±  9%  sched_debug.cpu.sched_goidle.min
    801.65 ± 13%     -57.4%     341.57 ± 11%  sched_debug.cpu.sched_goidle.stddev
     36206           -87.8%       4433 ±  4%  sched_debug.cpu.ttwu_count.avg
     39349           -80.0%       7852 ±  9%  sched_debug.cpu.ttwu_count.max
     33384           -94.4%       1857 ± 22%  sched_debug.cpu.ttwu_count.min
     71.96            -7.2%      66.74        perf-stat.i.MPKI
 2.388e+09            -1.6%  2.349e+09        perf-stat.i.branch-instructions
   2607749 ±  3%      -9.1%    2371691 ±  2%  perf-stat.i.branch-misses
     76.52            +6.2       82.72        perf-stat.i.cache-miss-rate%
  7.81e+08           -13.2%  6.775e+08        perf-stat.i.cache-references
      7401           -86.8%     975.06 ±  3%  perf-stat.i.context-switches
      6.42            +4.3%       6.69        perf-stat.i.cpi
 2.748e+09            +3.1%  2.833e+09        perf-stat.i.dTLB-loads
      8.35            +0.5        8.82        perf-stat.i.dTLB-store-miss-rate%
  94899654            +7.5%   1.02e+08        perf-stat.i.dTLB-store-misses
 1.008e+09            +6.3%  1.072e+09        perf-stat.i.dTLB-stores
     76.89            +4.0       80.86        perf-stat.i.iTLB-load-miss-rate%
    189171           -32.8%     127059        perf-stat.i.iTLB-load-misses
   1018881            +5.7%    1077007        perf-stat.i.iTLB-loads
 1.066e+10            -4.6%  1.017e+10        perf-stat.i.instructions
      0.16            -5.4%       0.15        perf-stat.i.ipc
      0.51            +8.6%       0.56        perf-stat.i.metric.K/sec
    184985            +5.6%     195420        perf-stat.i.minor-faults
  81588669           -10.4%   73110690        perf-stat.i.node-loads
 2.024e+08            +7.4%  2.174e+08        perf-stat.i.node-stores
    184985            +5.6%     195420        perf-stat.i.page-faults
     73.30            -9.1%      66.62        perf-stat.overall.MPKI
      0.11 ±  3%      -0.0        0.10 ±  2%  perf-stat.overall.branch-miss-rate%
     71.94           +11.1       82.99        perf-stat.overall.cache-miss-rate%
      6.33            +5.5%       6.68        perf-stat.overall.cpi
     15.63            -5.2       10.42        perf-stat.overall.iTLB-load-miss-rate%
     56015           +41.2%      79073        perf-stat.overall.instructions-per-iTLB-miss
      0.16            -5.2%       0.15        perf-stat.overall.ipc
     12800           -11.4%      11346        perf-stat.overall.path-length
 2.379e+09            -1.7%  2.339e+09        perf-stat.ps.branch-instructions
   2617135 ±  3%      -8.8%    2385532 ±  2%  perf-stat.ps.branch-misses
  7.79e+08           -13.4%  6.749e+08        perf-stat.ps.cache-references
      7434           -86.9%     974.26 ±  3%  perf-stat.ps.context-switches
 2.738e+09            +3.1%  2.822e+09        perf-stat.ps.dTLB-loads
  94387865            +7.4%  1.014e+08        perf-stat.ps.dTLB-store-misses
 1.005e+09            +6.4%  1.069e+09        perf-stat.ps.dTLB-stores
    189717           -32.5%     128138        perf-stat.ps.iTLB-load-misses
   1023815            +7.6%    1101383        perf-stat.ps.iTLB-loads
 1.063e+10            -4.7%  1.013e+10        perf-stat.ps.instructions
    185866            +7.5%     199819        perf-stat.ps.minor-faults
  81382471           -10.4%   72935146        perf-stat.ps.node-loads
 2.014e+08            +7.4%  2.164e+08        perf-stat.ps.node-stores
    185866            +7.5%     199819        perf-stat.ps.page-faults
 3.336e+12            -5.5%  3.154e+12        perf-stat.total.instructions
     55.87 ±  4%     -55.9        0.00        perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt
     53.40 ±  4%     -53.4        0.00        perf-profile.calltrace.cycles-pp.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
     41.92 ±  5%     -41.9        0.00        perf-profile.calltrace.cycles-pp.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
     39.23 ±  5%     -39.2        0.00        perf-profile.calltrace.cycles-pp.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
     32.69 ±  5%     -32.7        0.00        perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
     24.23 ±  9%     -24.2        0.00        perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.sysvec_apic_timer_interrupt
     21.98 ± 10%     -22.0        0.00        perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt
     21.91 ± 10%     -21.9        0.00        perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
     13.44 ± 13%     -13.4        0.00        perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
     11.32 ±  5%     -11.3        0.00        perf-profile.calltrace.cycles-pp.printk.irq_work_single.irq_work_run_list.irq_work_run.__sysvec_irq_work
     11.32 ±  5%     -11.3        0.00        perf-profile.calltrace.cycles-pp.vprintk_emit.printk.irq_work_single.irq_work_run_list.irq_work_run
     11.17 ±  6%     -11.2        0.00        perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.printk.irq_work_single.irq_work_run_list
     10.23 ±  5%     -10.2        0.00        perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.printk.irq_work_single
     10.23 ± 17%     -10.2        0.00        perf-profile.calltrace.cycles-pp.irq_work_run_list.irq_work_run.__sysvec_irq_work.sysvec_irq_work.asm_sysvec_irq_work
     10.23 ± 17%     -10.2        0.00        perf-profile.calltrace.cycles-pp.asm_sysvec_irq_work
     10.23 ± 17%     -10.2        0.00        perf-profile.calltrace.cycles-pp.sysvec_irq_work.asm_sysvec_irq_work
     10.23 ± 17%     -10.2        0.00        perf-profile.calltrace.cycles-pp.__sysvec_irq_work.sysvec_irq_work.asm_sysvec_irq_work
     10.23 ± 17%     -10.2        0.00        perf-profile.calltrace.cycles-pp.irq_work_run.__sysvec_irq_work.sysvec_irq_work.asm_sysvec_irq_work
     10.23 ± 17%     -10.2        0.00        perf-profile.calltrace.cycles-pp.irq_work_single.irq_work_run_list.irq_work_run.__sysvec_irq_work.sysvec_irq_work
      9.99 ±  5%     -10.0        0.00        perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.printk
      9.40 ± 16%      -9.4        0.00        perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
      9.26 ± 12%      -9.3        0.00        perf-profile.calltrace.cycles-pp.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
      9.15 ±  8%      -9.2        0.00        perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
      9.15 ±  8%      -9.2        0.00        perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
      8.18 ±  9%      -8.2        0.00        perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
      7.25 ± 17%      -7.3        0.00        perf-profile.calltrace.cycles-pp.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
      7.08 ± 17%      -7.1        0.00        perf-profile.calltrace.cycles-pp.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
      6.73 ± 16%      -6.7        0.00        perf-profile.calltrace.cycles-pp.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt
      0.00            +0.8        0.77 ±  4%  perf-profile.calltrace.cycles-pp.native_flush_tlb_one_user.flush_tlb_func_common.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy
      0.00            +0.8        0.77 ±  4%  perf-profile.calltrace.cycles-pp.rmqueue_bulk.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
      0.00            +0.8        0.81 ±  4%  perf-profile.calltrace.cycles-pp.flush_tlb_func_common.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy.__handle_mm_fault
      0.00            +0.8        0.83 ±  5%  perf-profile.calltrace.cycles-pp.try_charge.mem_cgroup_charge.wp_page_copy.__handle_mm_fault.handle_mm_fault
      0.23 ±161%      +0.9        1.14 ±  3%  perf-profile.calltrace.cycles-pp.do_wp_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
      0.00            +0.9        0.92 ±  7%  perf-profile.calltrace.cycles-pp._raw_spin_lock.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
      0.00            +1.0        0.98 ±  4%  perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.ptep_clear_flush.wp_page_copy.__handle_mm_fault.handle_mm_fault
      0.00            +1.0        1.04 ±  4%  perf-profile.calltrace.cycles-pp.ptep_clear_flush.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00            +1.1        1.06 ±  4%  perf-profile.calltrace.cycles-pp.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy
      0.00            +1.1        1.11 ±  4%  perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy.__handle_mm_fault
      0.00            +1.2        1.21 ±  4%  perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.wp_page_copy.__handle_mm_fault.handle_mm_fault
      0.00            +1.2        1.23 ±  5%  perf-profile.calltrace.cycles-pp.mem_cgroup_charge.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00            +1.3        1.28 ±  4%  perf-profile.calltrace.cycles-pp.alloc_pages_vma.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00            +1.4        1.37 ±  3%  perf-profile.calltrace.cycles-pp.nrand48_r
      0.00            +1.8        1.76 ±  3%  perf-profile.calltrace.cycles-pp.do_rw_once
      0.00            +2.5        2.49 ± 13%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages
      0.00            +2.5        2.49 ± 13%  perf-profile.calltrace.cycles-pp._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu
      0.00            +3.1        3.15 ±  9%  perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu.zap_pte_range
      0.00            +3.3        3.26 ±  9%  perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.tlb_flush_mmu.zap_pte_range.unmap_page_range
      0.00            +4.3        4.28 ± 10%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.zap_pte_range
      0.00            +4.3        4.29 ± 10%  perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.zap_pte_range.unmap_page_range
      3.18 ± 19%      +6.3        9.49 ±  7%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
      3.17 ± 18%      +6.3        9.49 ±  7%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.49 ± 20%      +8.0        9.46 ±  7%  perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.49 ± 20%      +8.0        9.46 ±  7%  perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.47 ± 21%      +8.0        9.46 ±  7%  perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.11 ± 18%      +8.3        9.45 ±  7%  perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
      1.11 ± 18%      +8.3        9.45 ±  7%  perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
      0.00            +8.4        8.38 ±  8%  perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.zap_pte_range.unmap_page_range.unmap_vmas
      0.00            +8.5        8.49 ±  8%  perf-profile.calltrace.cycles-pp.tlb_flush_mmu.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap
      0.62 ± 57%      +8.5        9.13 ±  7%  perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.exit_mmap.mmput
      0.58 ± 44%      +8.6        9.14 ±  7%  perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
      0.57 ± 45%      +8.6        9.13 ±  7%  perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
      0.00           +11.4       11.42 ±  4%  perf-profile.calltrace.cycles-pp.copy_page.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      0.00           +16.2       16.22 ±  3%  perf-profile.calltrace.cycles-pp.wp_page_copy.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
      1.32 ± 25%     +17.5       18.84 ±  3%  perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
      0.00           +19.0       19.02 ±  3%  perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
      0.00           +19.6       19.58 ±  3%  perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault.do_access
      0.00           +19.8       19.83 ±  3%  perf-profile.calltrace.cycles-pp.exc_page_fault.asm_exc_page_fault.do_access
      0.00           +23.4       23.36 ±  3%  perf-profile.calltrace.cycles-pp.asm_exc_page_fault.do_access
      0.00           +84.2       84.18        perf-profile.calltrace.cycles-pp.do_access
     56.15 ±  4%     -55.3        0.88 ±  5%  perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
     53.60 ±  4%     -52.9        0.74 ±  5%  perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
     41.95 ±  5%     -41.4        0.58 ±  5%  perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
     39.34 ±  5%     -38.8        0.54 ±  5%  perf-profile.children.cycles-pp.hrtimer_interrupt
     32.81 ±  5%     -32.4        0.44 ±  6%  perf-profile.children.cycles-pp.__hrtimer_run_queues
     24.29 ±  9%     -24.0        0.33 ±  8%  perf-profile.children.cycles-pp.tick_sched_timer
     22.04 ± 10%     -21.7        0.30 ±  8%  perf-profile.children.cycles-pp.tick_sched_handle
     22.01 ± 10%     -21.7        0.30 ±  8%  perf-profile.children.cycles-pp.update_process_times
     13.48 ± 13%     -13.3        0.19 ± 11%  perf-profile.children.cycles-pp.scheduler_tick
     11.39 ±  5%     -11.2        0.21 ± 13%  perf-profile.children.cycles-pp.irq_work_run_list
     11.32 ±  5%     -11.1        0.21 ± 13%  perf-profile.children.cycles-pp.asm_sysvec_irq_work
     11.32 ±  5%     -11.1        0.21 ± 13%  perf-profile.children.cycles-pp.sysvec_irq_work
     11.32 ±  5%     -11.1        0.21 ± 13%  perf-profile.children.cycles-pp.__sysvec_irq_work
     11.32 ±  5%     -11.1        0.21 ± 13%  perf-profile.children.cycles-pp.irq_work_run
     11.32 ±  5%     -11.1        0.21 ± 13%  perf-profile.children.cycles-pp.irq_work_single
     11.32 ±  5%     -11.1        0.21 ± 13%  perf-profile.children.cycles-pp.printk
     11.32 ±  5%     -10.9        0.39 ± 20%  perf-profile.children.cycles-pp.vprintk_emit
     11.17 ±  6%     -10.8        0.38 ± 20%  perf-profile.children.cycles-pp.console_unlock
     10.23 ±  5%      -9.9        0.33 ± 20%  perf-profile.children.cycles-pp.serial8250_console_write
      9.99 ±  5%      -9.7        0.32 ± 20%  perf-profile.children.cycles-pp.uart_console_write
      9.73 ± 15%      -9.6        0.14 ± 19%  perf-profile.children.cycles-pp.irq_exit_rcu
      9.49 ± 16%      -9.3        0.14 ± 13%  perf-profile.children.cycles-pp.task_tick_fair
      9.38 ±  8%      -9.1        0.30 ± 19%  perf-profile.children.cycles-pp.wait_for_xmitr
      9.15 ±  8%      -8.9        0.29 ± 19%  perf-profile.children.cycles-pp.serial8250_console_putchar
      9.04 ± 18%      -8.8        0.27 ± 20%  perf-profile.children.cycles-pp.io_serial_in
      8.67 ± 37%      -8.3        0.32 ± 14%  perf-profile.children.cycles-pp.asm_call_on_stack
      7.67 ± 21%      -7.6        0.11 ± 22%  perf-profile.children.cycles-pp.do_softirq_own_stack
      7.17 ± 21%      -7.1        0.11 ± 23%  perf-profile.children.cycles-pp.__softirqentry_text_start
      5.87 ± 23%      -5.2        0.71 ±  5%  perf-profile.children.cycles-pp.native_irq_return_iret
      4.45 ± 19%      -4.4        0.06 ± 13%  perf-profile.children.cycles-pp.update_load_avg
      4.59 ± 39%      -4.3        0.27 ± 11%  perf-profile.children.cycles-pp.ret_from_fork
      4.76 ± 18%      -4.3        0.45 ±  3%  perf-profile.children.cycles-pp.sync_regs
      4.41 ± 41%      -4.1        0.27 ± 12%  perf-profile.children.cycles-pp.kthread
      3.84 ± 21%      -3.8        0.03 ± 78%  perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
      3.70 ± 48%      -3.4        0.25 ± 12%  perf-profile.children.cycles-pp.worker_thread
      3.55 ± 49%      -3.3        0.25 ± 12%  perf-profile.children.cycles-pp.process_one_work
      3.31 ± 52%      -3.1        0.25 ± 12%  perf-profile.children.cycles-pp.memcpy_erms
      3.24 ± 54%      -3.0        0.25 ± 12%  perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
      2.92 ± 22%      -2.9        0.03 ± 77%  perf-profile.children.cycles-pp.__x64_sys_execve
      2.90 ± 23%      -2.9        0.03 ± 77%  perf-profile.children.cycles-pp.__do_execve_file
      2.88 ± 22%      -2.8        0.03 ± 77%  perf-profile.children.cycles-pp.execve
      1.40 ± 19%      -1.2        0.21 ± 31%  perf-profile.children.cycles-pp.ksys_write
      1.39 ± 19%      -1.2        0.21 ± 31%  perf-profile.children.cycles-pp.vfs_write
      1.30 ± 18%      -1.1        0.21 ± 31%  perf-profile.children.cycles-pp.new_sync_write
      0.75 ± 44%      -0.7        0.07 ± 29%  perf-profile.children.cycles-pp.vm_mmap_pgoff
      0.53 ± 55%      -0.5        0.07 ± 29%  perf-profile.children.cycles-pp.ksys_mmap_pgoff
      0.63 ± 40%      -0.4        0.19 ± 34%  perf-profile.children.cycles-pp.write
      0.30 ± 46%      -0.2        0.05 ± 39%  perf-profile.children.cycles-pp.clear_page_erms
      0.26 ± 48%      -0.2        0.06 ± 27%  perf-profile.children.cycles-pp.__mmap
      0.21 ± 33%      -0.1        0.06 ± 27%  perf-profile.children.cycles-pp.__get_user_pages
      0.00            +0.1        0.07 ± 14%  perf-profile.children.cycles-pp.__split_huge_pmd
      0.00            +0.1        0.07 ± 14%  perf-profile.children.cycles-pp.__split_huge_pmd_locked
      0.02 ±158%      +0.1        0.09 ± 16%  perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
      0.00            +0.1        0.08 ±  9%  perf-profile.children.cycles-pp.__mod_node_page_state
      0.00            +0.1        0.08 ± 17%  perf-profile.children.cycles-pp.cpumask_any_but
      0.01 ±244%      +0.1        0.11 ± 16%  perf-profile.children.cycles-pp.__mod_memcg_state
      0.01 ±244%      +0.1        0.12 ± 19%  perf-profile.children.cycles-pp.__count_memcg_events
      0.00            +0.1        0.12 ± 11%  perf-profile.children.cycles-pp.vmacache_find
      0.00            +0.1        0.12 ±  9%  perf-profile.children.cycles-pp.reuse_swap_page
      0.00            +0.2        0.17 ±  4%  perf-profile.children.cycles-pp.lrand48_r@plt
      0.00            +0.2        0.18 ± 36%  perf-profile.children.cycles-pp.devkmsg_write.cold
      0.00            +0.2        0.18 ± 36%  perf-profile.children.cycles-pp.devkmsg_emit
      0.00            +0.2        0.20 ±  7%  perf-profile.children.cycles-pp.do_huge_pmd_wp_page
      0.02 ±158%      +0.2        0.24 ±  8%  perf-profile.children.cycles-pp.page_add_new_anon_rmap
      0.01 ±244%      +0.2        0.25 ±  9%  perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
      0.16 ± 90%      +0.2        0.41 ± 34%  perf-profile.children.cycles-pp.intel_idle
      0.02 ±158%      +0.2        0.27 ±  7%  perf-profile.children.cycles-pp.__pagevec_lru_add_fn
      0.16 ± 90%      +0.3        0.42 ± 33%  perf-profile.children.cycles-pp.cpuidle_enter
      0.16 ± 90%      +0.3        0.42 ± 33%  perf-profile.children.cycles-pp.cpuidle_enter_state
      0.11 ±120%      +0.3        0.38 ± 35%  perf-profile.children.cycles-pp.start_secondary
      0.16 ± 90%      +0.3        0.43 ± 32%  perf-profile.children.cycles-pp.secondary_startup_64
      0.16 ± 90%      +0.3        0.43 ± 32%  perf-profile.children.cycles-pp.cpu_startup_entry
      0.16 ± 90%      +0.3        0.43 ± 32%  perf-profile.children.cycles-pp.do_idle
      0.02 ±244%      +0.3        0.32 ±  8%  perf-profile.children.cycles-pp.__mod_lruvec_state
      0.06 ±101%      +0.3        0.39 ±  8%  perf-profile.children.cycles-pp.__perf_sw_event
      0.13 ± 54%      +0.3        0.47 ±  7%  perf-profile.children.cycles-pp.pagevec_lru_move_fn
      0.00            +0.4        0.44 ±  6%  perf-profile.children.cycles-pp.lrand48_r
      0.05 ± 87%      +0.5        0.53 ±  7%  perf-profile.children.cycles-pp.lru_cache_add
      0.58 ± 31%      +0.6        1.23 ±  4%  perf-profile.children.cycles-pp.__alloc_pages_nodemask
      0.47 ± 40%      +0.7        1.13 ±  4%  perf-profile.children.cycles-pp.get_page_from_freelist
      0.09 ± 72%      +0.7        0.78 ±  4%  perf-profile.children.cycles-pp.native_flush_tlb_one_user
      0.12 ± 91%      +0.7        0.83 ±  5%  perf-profile.children.cycles-pp.try_charge
      0.09 ± 72%      +0.7        0.81 ±  4%  perf-profile.children.cycles-pp.flush_tlb_func_common
      0.03 ±158%      +0.8        0.78 ±  3%  perf-profile.children.cycles-pp.rmqueue_bulk
      0.18 ± 62%      +0.9        1.03 ±  6%  perf-profile.children.cycles-pp.__list_del_entry_valid
      0.12 ± 85%      +0.9        0.98 ±  4%  perf-profile.children.cycles-pp.flush_tlb_mm_range
      0.18 ± 42%      +0.9        1.08 ±  4%  perf-profile.children.cycles-pp.rmqueue
      0.03 ±115%      +1.0        1.04 ±  4%  perf-profile.children.cycles-pp.ptep_clear_flush
      0.23 ± 73%      +1.1        1.29 ±  4%  perf-profile.children.cycles-pp.alloc_pages_vma
      0.11 ± 74%      +1.1        1.23 ±  5%  perf-profile.children.cycles-pp.mem_cgroup_charge
      0.00            +1.6        1.58 ±  2%  perf-profile.children.cycles-pp.nrand48_r
      0.00            +1.9        1.90 ±  3%  perf-profile.children.cycles-pp.do_rw_once
      0.82 ± 13%      +3.0        3.82 ± 10%  perf-profile.children.cycles-pp._raw_spin_lock
      0.00            +3.4        3.40 ±  9%  perf-profile.children.cycles-pp.free_pcppages_bulk
      0.02 ±158%      +3.5        3.53 ±  9%  perf-profile.children.cycles-pp.free_unref_page_list
      0.80 ± 44%      +3.6        4.41 ± 10%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
      0.29 ± 49%      +6.9        7.22 ± 10%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
      1.82 ± 20%      +7.6        9.46 ±  7%  perf-profile.children.cycles-pp.mmput
      1.80 ± 20%      +7.7        9.46 ±  7%  perf-profile.children.cycles-pp.exit_mmap
      1.65 ± 16%      +7.8        9.46 ±  7%  perf-profile.children.cycles-pp.__x64_sys_exit_group
      1.65 ± 16%      +7.8        9.46 ±  7%  perf-profile.children.cycles-pp.do_group_exit
      1.63 ± 18%      +7.8        9.46 ±  7%  perf-profile.children.cycles-pp.do_exit
      0.95 ± 26%      +8.2        9.14 ±  7%  perf-profile.children.cycles-pp.unmap_vmas
      0.88 ± 29%      +8.3        9.14 ±  7%  perf-profile.children.cycles-pp.unmap_page_range
      0.85 ± 29%      +8.3        9.14 ±  7%  perf-profile.children.cycles-pp.zap_pte_range
      0.26 ± 84%      +8.5        8.80 ±  8%  perf-profile.children.cycles-pp.tlb_flush_mmu
      0.16 ± 72%      +8.6        8.72 ±  8%  perf-profile.children.cycles-pp.release_pages
      0.49 ± 42%     +11.0       11.48 ±  4%  perf-profile.children.cycles-pp.copy_page
      0.66 ± 31%     +15.6       16.24 ±  3%  perf-profile.children.cycles-pp.wp_page_copy
      2.30 ± 24%     +16.6       18.94 ±  3%  perf-profile.children.cycles-pp.__handle_mm_fault
      2.34 ± 23%     +16.8       19.12 ±  3%  perf-profile.children.cycles-pp.handle_mm_fault
      2.41 ± 24%     +17.2       19.63 ±  3%  perf-profile.children.cycles-pp.do_user_addr_fault
      2.42 ± 23%     +17.5       19.88 ±  3%  perf-profile.children.cycles-pp.exc_page_fault
      2.61 ± 22%     +19.1       21.69 ±  3%  perf-profile.children.cycles-pp.asm_exc_page_fault
      0.00           +85.3       85.35        perf-profile.children.cycles-pp.do_access
      9.04 ± 18%      -8.8        0.27 ± 20%  perf-profile.self.cycles-pp.io_serial_in
      5.87 ± 23%      -5.2        0.71 ±  5%  perf-profile.self.cycles-pp.native_irq_return_iret
      4.75 ± 19%      -4.3        0.45 ±  3%  perf-profile.self.cycles-pp.sync_regs
      2.22 ± 17%      -2.0        0.24 ± 12%  perf-profile.self.cycles-pp.memcpy_erms
      0.30 ± 46%      -0.2        0.05 ± 39%  perf-profile.self.cycles-pp.clear_page_erms
      0.00            +0.1        0.06 ± 15%  perf-profile.self.cycles-pp.ptep_clear_flush
      0.00            +0.1        0.06 ± 19%  perf-profile.self.cycles-pp.__split_huge_pmd_locked
      0.00            +0.1        0.08 ±  9%  perf-profile.self.cycles-pp.__mod_node_page_state
      0.02 ±158%      +0.1        0.10 ± 17%  perf-profile.self.cycles-pp.rmqueue
      0.03 ±116%      +0.1        0.11 ± 11%  perf-profile.self.cycles-pp.handle_mm_fault
      0.01 ±244%      +0.1        0.10 ± 16%  perf-profile.self.cycles-pp.__mod_memcg_state
      0.01 ±244%      +0.1        0.12 ± 20%  perf-profile.self.cycles-pp.__count_memcg_events
      0.00            +0.1        0.11 ± 11%  perf-profile.self.cycles-pp.vmacache_find
      0.00            +0.1        0.12 ±  9%  perf-profile.self.cycles-pp.reuse_swap_page
      0.01 ±244%      +0.1        0.14 ±  8%  perf-profile.self.cycles-pp.__pagevec_lru_add_fn
      0.01 ±244%      +0.1        0.14 ± 11%  perf-profile.self.cycles-pp.__mod_lruvec_state
      0.00            +0.1        0.15 ±  6%  perf-profile.self.cycles-pp.lrand48_r@plt
      0.07 ± 78%      +0.2        0.24 ±  8%  perf-profile.self.cycles-pp.release_pages
      0.01 ±244%      +0.2        0.21 ±  8%  perf-profile.self.cycles-pp.wp_page_copy
      0.01 ±244%      +0.2        0.25 ±  9%  perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
      0.16 ± 90%      +0.2        0.41 ± 34%  perf-profile.self.cycles-pp.intel_idle
      0.00            +0.3        0.29 ±  9%  perf-profile.self.cycles-pp.lrand48_r
      0.00            +0.3        0.30 ±  6%  perf-profile.self.cycles-pp.free_pcppages_bulk
      0.01 ±244%      +0.6        0.58 ±  3%  perf-profile.self.cycles-pp.rmqueue_bulk
      0.09 ± 72%      +0.7        0.78 ±  4%  perf-profile.self.cycles-pp.native_flush_tlb_one_user
      0.10 ±101%      +0.7        0.79 ±  5%  perf-profile.self.cycles-pp.try_charge
      0.18 ± 62%      +0.9        1.03 ±  6%  perf-profile.self.cycles-pp.__list_del_entry_valid
      0.06 ±128%      +1.1        1.13 ±  3%  perf-profile.self.cycles-pp.do_wp_page
      0.00            +1.4        1.38 ±  3%  perf-profile.self.cycles-pp.nrand48_r
      0.00            +1.8        1.76 ±  3%  perf-profile.self.cycles-pp.do_rw_once
      0.29 ± 49%      +6.9        7.22 ± 10%  perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
      0.49 ± 42%     +10.9       11.39 ±  4%  perf-profile.self.cycles-pp.copy_page
      0.00           +63.9       63.86 ±  2%  perf-profile.self.cycles-pp.do_access
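
Note for readers working through the comparison tables: the %change column is simply the relative difference between the v5.8 mean and the patched-kernel mean for each metric. A minimal illustrative sketch (not part of the lkp tooling) that reproduces a few of the lkp-cfl-e1 figures above:

    # Illustrative only -- not part of lkp-tests.
    # Shows how the %change column relates the two per-metric means.
    def pct_change(base, patched):
        return (patched - base) / base * 100.0

    # Values taken from the lkp-cfl-e1 comparison table above.
    print(f"{pct_change(53578, 57674):+.1f}%")    # vm-scalability.median      -> +7.6%
    print(f"{pct_change(857728, 923526):+.1f}%")  # vm-scalability.throughput  -> +7.7%
    print(f"{pct_change(1036150, 4761):+.1f}%")   # voluntary_context_switches -> -99.5%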

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Rong Chen


View attachment "config-5.8.0-00001-g09854ba94c6aa" of type "text/plain" (169434 bytes)

View attachment "job-script" of type "text/plain" (7565 bytes)

View attachment "job.yaml" of type "text/plain" (5240 bytes)

View attachment "reproduce" of type "text/plain" (6840 bytes)
