lists.openwall.net
Open Source and information security mailing list archives
Message-ID: <20190328161513.GG7277@shao2-debian>
Date:   Fri, 29 Mar 2019 00:15:13 +0800
From:   kernel test robot <rong.a.chen@...el.com>
To:     David Howells <dhowells@...hat.com>
Cc:     Hugh Dickins <hughd@...gle.com>,
        LKML <linux-kernel@...r.kernel.org>,
        David Howells <dhowells@...hat.com>, lkp@...org
Subject: [vfs]  27eb9d500d:  vm-scalability.median -19.4% regression

Greetings,

FYI, we noticed a -19.4% regression of vm-scalability.median due to commit:

commit: 27eb9d500d71d93e2b2f55c226bc1cc4ba53e9b0 ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")
https://git.kernel.org/cgit/linux/kernel/git/dhowells/linux-fs.git mount-api-viro

in testcase: vm-scalability
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with following parameters:

	runtime: 300s
	size: 16G
	test: shm-pread-rand
	cpufreq_governor: performance

test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/

In addition, this commit has a significant impact on the following tests:

+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability:                                                       |
| test machine     | 104 threads Skylake with 192G memory                                  |
| test parameters  | cpufreq_governor=performance                                          |
|                  | runtime=300s                                                          |
|                  | size=2T                                                               |
|                  | test=shm-xread-seq-mt                                                 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median -6.9% regression                |
| test machine     | 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory  |
| test parameters  | cpufreq_governor=performance                                          |
|                  | runtime=300s                                                          |
|                  | size=1T                                                               |
|                  | test=lru-shm                                                          |
|                  | ucode=0x200005a                                                       |
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median 17.4% improvement               |
| test machine     | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters  | cpufreq_governor=performance                                          |
|                  | runtime=300s                                                          |
|                  | size=256G                                                             |
|                  | test=lru-shm-rand                                                     |
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median -33.5% regression               |
| test machine     | 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory  |
| test parameters  | cpufreq_governor=performance                                          |
|                  | runtime=300s                                                          |
|                  | size=16G                                                              |
|                  | test=shm-pread-rand                                                   |
|                  | ucode=0x200005a                                                       |
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median 5.9% improvement                |
| test machine     | 104 threads Skylake with 192G memory                                  |
| test parameters  | cpufreq_governor=performance                                          |
|                  | runtime=300s                                                          |
|                  | size=1T                                                               |
|                  | test=lru-shm                                                          |
+------------------+-----------------------------------------------------------------------+
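The change columns in these tables (and the headline -19.4%) are relative differences against the parent commit's value. A minimal sketch in Python reproducing the arithmetic, using the vm-scalability.median and vm-scalability.throughput figures reported in the detailed comparison later in this mail:

```python
def pct_change(base, new):
    """Relative change of `new` against `base`, in percent."""
    return (new - base) / base * 100.0

# vm-scalability.median, f568cf93ca vs. 27eb9d500d
print(f"{pct_change(58648, 47257):+.1f}%")      # -19.4%
# vm-scalability.throughput
print(f"{pct_change(5152784, 4135803):+.1f}%")  # -19.7%
```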


Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
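The cpufreq_governor=performance parameter means each run expects every CPU pinned to the performance governor. bin/lkp applies this itself as part of job setup; for reference, a minimal sketch of the same step using the standard Linux cpufreq sysfs interface (an illustration, not part of lkp-tests; writes require root):

```python
from pathlib import Path

# Pin every CPU to the "performance" governor via the cpufreq sysfs
# interface, then report what is actually in effect.
cpu_root = Path("/sys/devices/system/cpu")

for gov in sorted(cpu_root.glob("cpu[0-9]*/cpufreq/scaling_governor")):
    try:
        gov.write_text("performance\n")
    except (PermissionError, OSError):
        pass  # not root, or switching unsupported; just report below
    print(gov, "=", gov.read_text().strip())
```

On machines without cpufreq support the glob matches nothing and the loop prints nothing.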

=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
  gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2018-04-03.cgz/300s/16G/lkp-bdw-ep2/shm-pread-rand/vm-scalability

commit: 
  f568cf93ca ("vfs: Convert smackfs to use the new mount API")
  27eb9d500d ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")

f568cf93ca0923a0 27eb9d500d71d93e2b2f55c226b 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
     35.48           -48.9%      18.13        vm-scalability.free_time
     58648           -19.4%      47257        vm-scalability.median
   5152784           -19.7%    4135803        vm-scalability.throughput
    345.44            -6.3%     323.69        vm-scalability.time.elapsed_time
    345.44            -6.3%     323.69        vm-scalability.time.elapsed_time.max
  65947509           -50.0%   32974469        vm-scalability.time.maximum_resident_set_size
 1.362e+08           -50.0%   68087693        vm-scalability.time.minor_page_faults
      8549            +1.3%       8656        vm-scalability.time.percent_of_cpu_this_job_got
      5070 ±  2%     -49.9%       2538        vm-scalability.time.system_time
     24465            +4.2%      25484        vm-scalability.time.user_time
 1.548e+09           -19.7%  1.244e+09        vm-scalability.workload
   1584808 ± 72%     -53.2%     742467 ±  8%  cpuidle.C1.time
    785475 ± 36%     -53.3%     366683 ± 31%  cpuidle.C6.usage
      2.96 ±  3%      -1.1        1.82        mpstat.cpu.all.idle%
     16.68            -7.8        8.92        mpstat.cpu.all.sys%
     80.35            +8.9       89.25        mpstat.cpu.all.usr%
   9965020 ±  2%     -44.0%    5578712 ±  5%  numa-numastat.node0.local_node
   9977872           -44.1%    5580748 ±  5%  numa-numastat.node0.numa_hit
  10122101           -52.6%    4801652 ±  6%  numa-numastat.node1.local_node
  10126532           -52.4%    4816911 ±  6%  numa-numastat.node1.numa_hit
     79.25           +11.0%      88.00        vmstat.cpu.us
  64507804           -48.1%   33498560        vmstat.memory.cache
  55730730           +65.3%   92097498        vmstat.memory.free
      1073            -4.2%       1028        vmstat.system.cs
     31.50 ±  8%     -19.0%      25.50 ±  9%  sched_debug.cpu.cpu_load[3].max
      3.10 ± 19%     -34.0%       2.04 ± 11%  sched_debug.cpu.cpu_load[3].stddev
     59.79 ±  6%     -18.4%      48.78 ±  7%  sched_debug.cpu.cpu_load[4].max
      5.63 ±  8%     -22.5%       4.36 ±  8%  sched_debug.cpu.cpu_load[4].stddev
    517.67 ±  5%     -19.3%     417.56 ±  7%  sched_debug.cpu.nr_switches.min
      2709            +1.2%       2742        turbostat.Avg_MHz
     22091 ± 95%     -63.2%       8124 ± 27%  turbostat.C1
    774586 ± 36%     -54.5%     352400 ± 33%  turbostat.C6
      1.23 ± 23%     -35.1%       0.80 ± 15%  turbostat.CPU%c1
      0.85 ± 15%     -46.8%       0.45 ± 11%  turbostat.Pkg%pc2
     26.30            +3.3%      27.17        turbostat.RAMWatt
   4589533           -68.6%    1440882        meminfo.Active
   4589118           -68.6%    1440465        meminfo.Active(anon)
  64456373           -48.0%   33491887        meminfo.Cached
  64265169           -49.2%   32665733        meminfo.Committed_AS
  58888990           -47.2%   31073080        meminfo.Inactive
  58887681           -47.2%   31071756        meminfo.Inactive(anon)
    223071           -33.5%     148432        meminfo.KReclaimable
  58833647           -47.3%   31017550        meminfo.Mapped
  55029628           +66.1%   91387958        meminfo.MemAvailable
  55536115           +65.5%   91931756        meminfo.MemFree
  76356407           -47.7%   39960766        meminfo.Memused
  10888435           -49.1%    5544617        meminfo.PageTables
    223071           -33.5%     148432        meminfo.SReclaimable
  63233823           -49.0%   32269411        meminfo.Shmem
    353083           -21.7%     276412        meminfo.Slab
    238010           -45.0%     131007        meminfo.max_used_kB
      6.97 ± 11%      -1.3        5.69 ±  5%  perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
      1.93 ± 50%      -1.2        0.70 ± 93%  perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
      1.94 ± 50%      -1.2        0.72 ± 92%  perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
      2.25 ± 41%      -1.1        1.17 ± 42%  perf-profile.calltrace.cycles-pp.ret_from_fork
      2.25 ± 41%      -1.1        1.17 ± 42%  perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
      5.80 ± 10%      -1.0        4.75 ±  7%  perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
      1.97 ± 11%      -0.6        1.42 ±  8%  perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
      2.75 ±  4%      -0.5        2.28 ±  5%  perf-profile.calltrace.cycles-pp.native_write_msr.lapic_next_deadline.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt
      3.34 ±  4%      -0.4        2.90 ±  5%  perf-profile.calltrace.cycles-pp.lapic_next_deadline.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
      1.77 ±  4%      -0.2        1.54 ± 11%  perf-profile.calltrace.cycles-pp.run_timer_softirq.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
      1.67 ± 10%      -0.2        1.44 ± 13%  perf-profile.calltrace.cycles-pp.rcu_sched_clock_irq.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
      0.76 ±  5%      -0.1        0.62 ± 10%  perf-profile.calltrace.cycles-pp.ktime_get.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
      0.51 ± 58%      +0.7        1.25 ±  8%  perf-profile.calltrace.cycles-pp.rb_next.timerqueue_del.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt
      1.43 ± 20%      +0.9        2.35 ±  8%  perf-profile.calltrace.cycles-pp.timerqueue_del.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
      2.17 ± 16%      +1.0        3.18 ±  9%  perf-profile.calltrace.cycles-pp.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
      2.16 ± 19%      +1.0        3.19 ± 13%  perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
      1696 ± 13%     -15.2%       1438 ±  3%  slabinfo.UNIX.active_objs
      1696 ± 13%     -15.2%       1438 ±  3%  slabinfo.UNIX.num_objs
      3934 ±  4%     -15.1%       3340 ±  7%  slabinfo.eventpoll_pwq.active_objs
      3934 ±  4%     -15.1%       3340 ±  7%  slabinfo.eventpoll_pwq.num_objs
     21089 ±  5%      -7.5%      19504 ±  2%  slabinfo.filp.active_objs
     21107 ±  5%      -7.2%      19587 ±  2%  slabinfo.filp.num_objs
      9028 ±  5%      -6.8%       8417 ±  2%  slabinfo.kmalloc-512.active_objs
      4303 ±  5%     -15.3%       3646 ±  5%  slabinfo.kmalloc-rcl-64.active_objs
      4303 ±  5%     -15.3%       3646 ±  5%  slabinfo.kmalloc-rcl-64.num_objs
      1911 ±  4%     -10.7%       1708 ±  4%  slabinfo.kmalloc-rcl-96.active_objs
      1911 ±  4%     -10.7%       1708 ±  4%  slabinfo.kmalloc-rcl-96.num_objs
    272197           -46.6%     145406        slabinfo.radix_tree_node.active_objs
      4921           -46.9%       2612        slabinfo.radix_tree_node.active_slabs
    275638           -46.9%     146318        slabinfo.radix_tree_node.num_objs
      4921           -46.9%       2612        slabinfo.radix_tree_node.num_slabs
    362.50 ±  9%     -24.7%     273.00 ± 17%  slabinfo.skbuff_fclone_cache.active_objs
    362.50 ±  9%     -24.7%     273.00 ± 17%  slabinfo.skbuff_fclone_cache.num_objs
   1142643           -68.5%     360168        proc-vmstat.nr_active_anon
   1371004           +66.3%    2279988        proc-vmstat.nr_dirty_background_threshold
   2745366           +66.3%    4565553        proc-vmstat.nr_dirty_threshold
  16116153           -48.1%    8369140        proc-vmstat.nr_file_pages
  13884480           +65.6%   22987632        proc-vmstat.nr_free_pages
  14728364           -47.3%    7763800        proc-vmstat.nr_inactive_anon
  14714990           -47.3%    7750437        proc-vmstat.nr_mapped
   2720411           -49.1%    1385255        proc-vmstat.nr_page_table_pages
  15810254           -49.0%    8063260        proc-vmstat.nr_shmem
     55779           -33.5%      37096        proc-vmstat.nr_slab_reclaimable
     32502            -1.6%      31995        proc-vmstat.nr_slab_unreclaimable
   1142643           -68.5%     360168        proc-vmstat.nr_zone_active_anon
  14728364           -47.3%    7763800        proc-vmstat.nr_zone_inactive_anon
  20128841           -48.2%   10422506        proc-vmstat.numa_hit
  20111549           -48.3%   10405203        proc-vmstat.numa_local
  16498703           -50.0%    8254888        proc-vmstat.pgactivate
  20220083           -48.1%   10494413        proc-vmstat.pgalloc_normal
 1.371e+08           -49.7%   68917706        proc-vmstat.pgfault
  19856424           -49.8%    9959314 ±  3%  proc-vmstat.pgfree
    576965 ±  2%     -66.5%     193251 ±  4%  numa-vmstat.node0.nr_active_anon
   7984459           -46.0%    4314633 ±  3%  numa-vmstat.node0.nr_file_pages
   6994491 ±  2%     +60.4%   11221928 ±  2%  numa-vmstat.node0.nr_free_pages
   7288285           -45.1%    4000865 ±  3%  numa-vmstat.node0.nr_inactive_anon
   7280756           -45.2%    3990345 ±  3%  numa-vmstat.node0.nr_mapped
   1339535 ±  7%     -41.5%     783256 ± 18%  numa-vmstat.node0.nr_page_table_pages
   7829898           -46.9%    4159908 ±  3%  numa-vmstat.node0.nr_shmem
    576964 ±  2%     -66.5%     193251 ±  4%  numa-vmstat.node0.nr_zone_active_anon
   7288285           -45.1%    4000865 ±  3%  numa-vmstat.node0.nr_zone_inactive_anon
  10058590 ±  2%     -42.2%    5818668 ±  5%  numa-vmstat.node0.numa_hit
  10045466 ±  2%     -42.1%    5816443 ±  5%  numa-vmstat.node0.numa_local
    579988           -70.6%     170402 ±  3%  numa-vmstat.node1.nr_active_anon
   8125385           -50.1%    4052654 ±  3%  numa-vmstat.node1.nr_file_pages
   6897539 ±  2%     +70.6%   11767440 ±  2%  numa-vmstat.node1.nr_free_pages
   7419455           -49.4%    3757603 ±  3%  numa-vmstat.node1.nr_inactive_anon
   7413582           -49.4%    3754757 ±  3%  numa-vmstat.node1.nr_mapped
   1379651 ±  6%     -56.4%     601611 ± 23%  numa-vmstat.node1.nr_page_table_pages
   7974047           -51.1%    3901499 ±  3%  numa-vmstat.node1.nr_shmem
     29269 ± 31%     -55.5%      13024 ± 53%  numa-vmstat.node1.nr_slab_reclaimable
    579988           -70.6%     170402 ±  3%  numa-vmstat.node1.nr_zone_active_anon
   7419455           -49.4%    3757603 ±  3%  numa-vmstat.node1.nr_zone_inactive_anon
  10114627 ±  2%     -50.5%    5007129 ±  6%  numa-vmstat.node1.numa_hit
   9935234 ±  2%     -51.5%    4816878 ±  6%  numa-vmstat.node1.numa_local
   2309540 ±  2%     -66.6%     772196 ±  4%  numa-meminfo.node0.Active
   2309333 ±  2%     -66.6%     771828 ±  4%  numa-meminfo.node0.Active(anon)
    110938 ± 15%     -22.2%      86295 ± 16%  numa-meminfo.node0.AnonHugePages
  31924208           -46.0%   17242697 ±  3%  numa-meminfo.node0.FilePages
  29138719           -45.1%   15989448 ±  3%  numa-meminfo.node0.Inactive
  29138061           -45.1%   15988774 ±  3%  numa-meminfo.node0.Inactive(anon)
  29107588           -45.2%   15946271 ±  3%  numa-meminfo.node0.Mapped
  27989020 ±  2%     +60.5%   44909537 ±  2%  numa-meminfo.node0.MemFree
  37878262           -44.7%   20957746 ±  5%  numa-meminfo.node0.MemUsed
   5360132 ±  7%     -41.6%    3129116 ± 18%  numa-meminfo.node0.PageTables
  31305964           -46.9%   16623796 ±  3%  numa-meminfo.node0.Shmem
   2321613           -70.7%     680488 ±  3%  numa-meminfo.node1.Active
   2321405           -70.7%     680440 ±  3%  numa-meminfo.node1.Active(anon)
  32488297           -50.2%   16194282 ±  3%  numa-meminfo.node1.FilePages
  29663768           -49.4%   15015880 ±  3%  numa-meminfo.node1.Inactive
  29663115           -49.4%   15015232 ±  3%  numa-meminfo.node1.Inactive(anon)
    117054 ± 31%     -55.5%      52077 ± 53%  numa-meminfo.node1.KReclaimable
  29639530           -49.4%   15003596 ±  3%  numa-meminfo.node1.Mapped
  27600077 ±  2%     +70.6%   47091753 ±  2%  numa-meminfo.node1.MemFree
  38425162           -50.7%   18933485 ±  6%  numa-meminfo.node1.MemUsed
   5521649 ±  6%     -56.5%    2402059 ± 23%  numa-meminfo.node1.PageTables
    117054 ± 31%     -55.5%      52077 ± 53%  numa-meminfo.node1.SReclaimable
  31882947           -51.1%   15589660 ±  3%  numa-meminfo.node1.Shmem
    176591 ± 22%     -40.0%     105918 ± 28%  numa-meminfo.node1.Slab
    224.75 ± 14%     -21.7%     176.00 ±  3%  interrupts.35:IR-PCI-MSI.1572865-edge.eth0-TxRx-1
    383.00 ± 30%     -36.4%     243.67 ± 43%  interrupts.39:IR-PCI-MSI.1572869-edge.eth0-TxRx-5
    213.50 ± 10%      -9.6%     193.00 ± 14%  interrupts.47:IR-PCI-MSI.1572877-edge.eth0-TxRx-13
    227.50 ± 41%     -30.0%     159.33        interrupts.56:IR-PCI-MSI.1572886-edge.eth0-TxRx-22
    313942            -5.2%     297766        interrupts.CAL:Function_call_interrupts
    224.75 ± 14%     -21.7%     176.00 ±  3%  interrupts.CPU1.35:IR-PCI-MSI.1572865-edge.eth0-TxRx-1
    430.25 ± 40%    +125.2%     969.00 ± 20%  interrupts.CPU1.RES:Rescheduling_interrupts
    213.50 ± 10%      -9.6%     193.00 ± 14%  interrupts.CPU13.47:IR-PCI-MSI.1572877-edge.eth0-TxRx-13
    124.25 ± 39%   +1451.7%       1928 ± 26%  interrupts.CPU17.RES:Rescheduling_interrupts
    358.75 ± 31%    +365.7%       1670 ± 20%  interrupts.CPU18.RES:Rescheduling_interrupts
    227.50 ± 41%     -30.0%     159.33        interrupts.CPU22.56:IR-PCI-MSI.1572886-edge.eth0-TxRx-22
      2867 ±149%     -94.5%     157.67 ± 57%  interrupts.CPU25.RES:Rescheduling_interrupts
    444.50 ± 72%     -79.5%      91.00 ± 45%  interrupts.CPU27.RES:Rescheduling_interrupts
    387.25 ± 39%     -51.8%     186.67 ± 83%  interrupts.CPU29.RES:Rescheduling_interrupts
    671.50 ± 85%     -73.9%     175.33 ± 19%  interrupts.CPU31.RES:Rescheduling_interrupts
    392.00 ± 20%     -74.1%     101.67 ± 68%  interrupts.CPU35.RES:Rescheduling_interrupts
    689.00 ± 58%     -63.7%     250.00 ± 19%  interrupts.CPU38.RES:Rescheduling_interrupts
    713.75 ± 46%     -65.2%     248.33 ± 67%  interrupts.CPU41.RES:Rescheduling_interrupts
    403.25 ± 36%     -67.8%     130.00 ± 48%  interrupts.CPU42.RES:Rescheduling_interrupts
      4547 ± 10%     -15.5%       3841 ±  2%  interrupts.CPU43.RES:Rescheduling_interrupts
    295.00 ± 64%     -83.2%      49.67 ± 39%  interrupts.CPU48.RES:Rescheduling_interrupts
    383.00 ± 30%     -36.4%     243.67 ± 43%  interrupts.CPU5.39:IR-PCI-MSI.1572869-edge.eth0-TxRx-5
    104.75 ± 44%    +242.7%     359.00 ± 69%  interrupts.CPU52.RES:Rescheduling_interrupts
      7873           -33.0%       5276 ± 34%  interrupts.CPU57.NMI:Non-maskable_interrupts
      7873           -33.0%       5276 ± 34%  interrupts.CPU57.PMI:Performance_monitoring_interrupts
      7836           -49.6%       3950        interrupts.CPU58.NMI:Non-maskable_interrupts
      7836           -49.6%       3950        interrupts.CPU58.PMI:Performance_monitoring_interrupts
      7880           -49.8%       3957        interrupts.CPU59.NMI:Non-maskable_interrupts
      7880           -49.8%       3957        interrupts.CPU59.PMI:Performance_monitoring_interrupts
      6881 ± 24%     -42.4%       3965        interrupts.CPU61.NMI:Non-maskable_interrupts
      6881 ± 24%     -42.4%       3965        interrupts.CPU61.PMI:Performance_monitoring_interrupts
      6894 ± 24%     -42.7%       3948        interrupts.CPU63.NMI:Non-maskable_interrupts
      6894 ± 24%     -42.7%       3948        interrupts.CPU63.PMI:Performance_monitoring_interrupts
     34.75 ± 48%   +1010.8%     386.00 ±113%  interrupts.CPU63.RES:Rescheduling_interrupts
      7879           -49.7%       3959        interrupts.CPU64.NMI:Non-maskable_interrupts
      7879           -49.7%       3959        interrupts.CPU64.PMI:Performance_monitoring_interrupts
      7828           -49.4%       3960        interrupts.CPU65.NMI:Non-maskable_interrupts
      7828           -49.4%       3960        interrupts.CPU65.PMI:Performance_monitoring_interrupts
      7889           -49.6%       3977        interrupts.CPU66.NMI:Non-maskable_interrupts
      7889           -49.6%       3977        interrupts.CPU66.PMI:Performance_monitoring_interrupts
      7862           -49.6%       3964        interrupts.CPU67.NMI:Non-maskable_interrupts
      7862           -49.6%       3964        interrupts.CPU67.PMI:Performance_monitoring_interrupts
    658.00 ±109%     -97.8%      14.33 ± 52%  interrupts.CPU67.RES:Rescheduling_interrupts
      7845           -49.3%       3976        interrupts.CPU68.NMI:Non-maskable_interrupts
      7845           -49.3%       3976        interrupts.CPU68.PMI:Performance_monitoring_interrupts
      7875           -49.6%       3966        interrupts.CPU69.NMI:Non-maskable_interrupts
      7875           -49.6%       3966        interrupts.CPU69.PMI:Performance_monitoring_interrupts
    348.75 ± 33%    +140.0%     837.00 ± 56%  interrupts.CPU7.RES:Rescheduling_interrupts
      7881           -49.9%       3952        interrupts.CPU70.NMI:Non-maskable_interrupts
      7881           -49.9%       3952        interrupts.CPU70.PMI:Performance_monitoring_interrupts
     27.33            +3.0%      28.16        perf-stat.i.MPKI
 8.834e+09            -1.8%  8.673e+09        perf-stat.i.branch-instructions
      0.15 ±  9%      -0.1        0.10 ±  7%  perf-stat.i.branch-miss-rate%
  10588799           -28.6%    7555287 ±  2%  perf-stat.i.branch-misses
     63.58            +4.2       67.78        perf-stat.i.cache-miss-rate%
 6.596e+08            +8.0%  7.125e+08        perf-stat.i.cache-misses
 9.611e+08            +5.0%  1.009e+09        perf-stat.i.cache-references
      1041            -4.5%     994.48        perf-stat.i.context-switches
 2.378e+11            +1.3%   2.41e+11        perf-stat.i.cpu-cycles
     50.46 ±  2%     +22.5%      61.81 ±  7%  perf-stat.i.cpu-migrations
    662.21 ±  2%     -23.0%     509.63        perf-stat.i.cycles-between-cache-misses
      4.42            +0.4        4.87        perf-stat.i.dTLB-load-miss-rate%
 4.707e+08           +12.4%  5.293e+08        perf-stat.i.dTLB-load-misses
 3.295e+09            +5.5%  3.477e+09        perf-stat.i.dTLB-stores
     81.54 ±  5%      -6.0       75.55        perf-stat.i.iTLB-load-miss-rate%
    883475 ±  2%     -45.9%     477683        perf-stat.i.iTLB-load-misses
 3.788e+10            -1.7%  3.723e+10        perf-stat.i.instructions
    532634 ± 30%     +47.4%     784855        perf-stat.i.instructions-per-iTLB-miss
      0.17            -5.1%       0.16        perf-stat.i.ipc
    394946           -46.4%     211876        perf-stat.i.minor-faults
 1.827e+08           +19.8%  2.189e+08 ±  8%  perf-stat.i.node-load-misses
   2240951           -44.9%    1234844        perf-stat.i.node-store-misses
   1555912           -44.5%     864154 ±  2%  perf-stat.i.node-stores
    394952           -46.4%     211884        perf-stat.i.page-faults
     25.29            +7.0%      27.07        perf-stat.overall.MPKI
      0.12 ±  2%      -0.0        0.09 ±  3%  perf-stat.overall.branch-miss-rate%
     68.60            +2.0       70.57        perf-stat.overall.cache-miss-rate%
      6.27            +3.2%       6.47        perf-stat.overall.cpi
    361.41            -6.3%     338.69        perf-stat.overall.cycles-between-cache-misses
      4.21            +0.5        4.74        perf-stat.overall.dTLB-load-miss-rate%
     96.84            -1.6       95.28        perf-stat.overall.iTLB-load-miss-rate%
     42701 ±  2%     +81.4%      77459        perf-stat.overall.instructions-per-iTLB-miss
      0.16            -3.1%       0.15        perf-stat.overall.ipc
     28.32            +2.8       31.14 ±  9%  perf-stat.overall.node-load-miss-rate%
      8452           +14.5%       9674        perf-stat.overall.path-length
  8.82e+09            -1.9%  8.652e+09        perf-stat.ps.branch-instructions
  10660193           -28.7%    7603578 ±  2%  perf-stat.ps.branch-misses
 6.561e+08            +8.1%  7.094e+08        perf-stat.ps.cache-misses
 9.565e+08            +5.1%  1.005e+09        perf-stat.ps.cache-references
      1039            -4.6%     992.30        perf-stat.ps.context-switches
 2.371e+11            +1.3%  2.402e+11        perf-stat.ps.cpu-cycles
     50.29 ±  2%     +22.5%      61.62 ±  7%  perf-stat.ps.cpu-migrations
 4.681e+08           +12.6%  5.269e+08        perf-stat.ps.dTLB-load-misses
 3.281e+09            +5.6%  3.463e+09        perf-stat.ps.dTLB-stores
    886017 ±  2%     -45.9%     479504        perf-stat.ps.iTLB-load-misses
 3.782e+10            -1.8%  3.714e+10        perf-stat.ps.instructions
    395992           -46.3%     212654        perf-stat.ps.minor-faults
 1.817e+08           +19.9%  2.179e+08 ±  8%  perf-stat.ps.node-load-misses
   2253281           -44.9%    1242231        perf-stat.ps.node-store-misses
   1565323           -44.4%     869772 ±  2%  perf-stat.ps.node-stores
    395992           -46.3%     212654        perf-stat.ps.page-faults
 1.309e+13            -8.1%  1.203e+13        perf-stat.total.instructions
    146357 ±  6%     -14.5%     125182        softirqs.CPU0.TIMER
    141184 ±  5%     -11.7%     124672 ±  2%  softirqs.CPU1.TIMER
    147886 ± 14%     -20.0%     118358 ±  2%  softirqs.CPU10.TIMER
    136397 ±  2%     -10.9%     121532        softirqs.CPU11.TIMER
    137112 ±  3%     -11.2%     121805        softirqs.CPU12.TIMER
    136851 ±  3%     -11.8%     120641 ±  2%  softirqs.CPU13.TIMER
    154185 ± 12%     -18.7%     125328 ±  2%  softirqs.CPU14.TIMER
    146182 ±  6%     -11.6%     129251 ±  2%  softirqs.CPU15.TIMER
    137975 ±  3%     -11.6%     121954        softirqs.CPU17.TIMER
    139759 ±  3%     -12.1%     122847 ±  2%  softirqs.CPU18.TIMER
    136434 ±  3%     -10.1%     122633        softirqs.CPU19.TIMER
    140178 ±  3%     -13.9%     120756        softirqs.CPU2.TIMER
    142345 ±  2%     -10.5%     127466        softirqs.CPU20.TIMER
    142140 ±  4%     -13.0%     123682 ±  2%  softirqs.CPU21.TIMER
    140219 ±  3%     -11.9%     123543        softirqs.CPU3.TIMER
     39237 ±  5%      -8.5%      35897        softirqs.CPU33.RCU
     39794 ±  6%      -7.9%      36635        softirqs.CPU35.RCU
     43475 ±  5%     -10.0%      39132        softirqs.CPU36.RCU
    137545 ±  3%     -13.3%     119226        softirqs.CPU4.TIMER
     44254 ±  6%     -11.4%      39230 ±  4%  softirqs.CPU42.RCU
     43098 ±  4%     -14.8%      36703 ± 10%  softirqs.CPU43.RCU
    150777 ± 12%     -14.8%     128387 ±  4%  softirqs.CPU44.TIMER
    149865 ±  4%     -13.8%     129214 ±  5%  softirqs.CPU45.TIMER
    139358 ±  2%     -12.6%     121744 ±  2%  softirqs.CPU46.TIMER
    140711 ±  3%     -11.8%     124137        softirqs.CPU47.TIMER
    137353 ±  2%     -12.7%     119924        softirqs.CPU48.TIMER
    136333 ±  3%     -11.4%     120724        softirqs.CPU49.TIMER
    137656 ±  4%     -12.8%     120006        softirqs.CPU5.TIMER
    142562 ±  2%     -11.5%     126238 ±  4%  softirqs.CPU50.TIMER
    139806 ±  2%      -9.4%     126671 ±  2%  softirqs.CPU51.TIMER
    135091 ±  2%     -10.2%     121320 ±  2%  softirqs.CPU52.TIMER
    139116 ±  3%     -10.6%     124324        softirqs.CPU53.TIMER
    137160           -14.2%     117742 ±  3%  softirqs.CPU54.TIMER
    135798 ±  2%     -12.1%     119408        softirqs.CPU55.TIMER
    136756 ±  3%     -11.6%     120847        softirqs.CPU56.TIMER
    135013 ±  2%     -11.3%     119817        softirqs.CPU57.TIMER
    146878 ±  7%     -15.1%     124630 ±  3%  softirqs.CPU58.TIMER
    152234 ± 13%     -15.3%     128998 ±  2%  softirqs.CPU59.TIMER
    142749 ±  3%     -13.5%     123549 ±  2%  softirqs.CPU6.TIMER
    136796 ±  3%     -11.8%     120619 ±  2%  softirqs.CPU61.TIMER
    138150 ±  3%     -12.3%     121108 ±  3%  softirqs.CPU62.TIMER
    136393 ±  3%     -11.4%     120810        softirqs.CPU63.TIMER
    141686 ±  2%     -10.2%     127274        softirqs.CPU64.TIMER
    140700 ±  3%     -12.0%     123754 ±  2%  softirqs.CPU65.TIMER
    143677 ±  3%     -11.9%     126609 ±  2%  softirqs.CPU7.TIMER
     34821 ±  5%      -9.7%      31437        softirqs.CPU77.RCU
     34876 ±  5%      -8.2%      32008        softirqs.CPU79.RCU
    134723 ±  2%     -10.2%     120958        softirqs.CPU8.TIMER
     35408 ±  5%      -7.7%      32681 ±  3%  softirqs.CPU83.RCU
     38518 ±  5%      -9.5%      34861 ±  5%  softirqs.CPU86.RCU
     38060 ±  9%     -20.0%      30464 ± 13%  softirqs.CPU87.RCU
    135950 ±  2%      -9.8%     122596        softirqs.CPU9.TIMER
    347353 ±  3%     -21.9%     271232 ±  3%  softirqs.SCHED
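One derived metric above worth a sanity check: perf-stat.overall.path-length appears to be total instructions divided by vm-scalability.workload (instructions per unit of work) — an assumption about how LKP derives it, not a definition stated in this report. Recomputing from the rounded figures above lands within a fraction of a percent of the reported 8452 and 9674:

```python
# Assumed derivation:
#   path-length = perf-stat.total.instructions / vm-scalability.workload
base_pl = 1.309e13 / 1.548e9   # parent f568cf93ca; report shows 8452
new_pl  = 1.203e13 / 1.244e9   # 27eb9d500d;        report shows 9674

print(round(base_pl), round(new_pl))            # close to reported values
print(f"{(new_pl / base_pl - 1) * 100:+.1f}%")  # report shows +14.5%
```

The small residual differences come from the rounding of the displayed instruction and workload counts.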


                                                                                
                            vm-scalability.time.user_time                       
                                                                                
  30000 +-+-----------------------------------------------------------------+   
        |                                                                   |   
  25000 O-O.+.O      O   O O      O O  .O.  O O O .O.O O   O O.O         .+.|   
        |.+   +..+.+.+.+   +   +    +.+   +.+.+.+.   +.+   +.+ :    +.+.+   |   
        |              :   :   :    :                  :   :   :    :       |   
  20000 +-+            :   :   ::   :                  :   :    :  :        |   
        |               : : : : :  :                    : :     :  :        |   
  15000 +-+             : : : : :  :                    : :     :  :        |   
        |               : : : : :  :                    : :     :  :        |   
  10000 +-+             : : : :  : :                    : :     : :         |   
        |               : : : :  : :                    : :     : :         |   
        |               : : : :  : :                    : :      ::         |   
   5000 +-+              :   :   ::                      :       ::         |   
        |                :   :    :                      :       :          |   
      0 +-+-O----O-O---O-----O-O------O---O--------------O------------------+   
                                                                                
                                                                                                                                                                
                          vm-scalability.time.system_time                       
                                                                                
  6000 +-+------------------------------------------------------------------+   
       |                                                                    |   
  5000 +-+. .+..+.+.+.+   +    +   +.+.  .+.+.+.+. .+.+    +.+.    +..+.+. .|   
       |   +          :   :    :   :   +.         +   :    :   +   :      + |   
       |              :   ::   :   :                   :   :   :   :        |   
  4000 +-+             :  ::   ::  :                   :   :   :   :        |   
       |               : : :  : : :                    :  :     : :         |   
  3000 +-+             : : :  : : :                    :  :     : :         |   
       O O   O      O  :O:O : : :O:O   O    O O O O O O : :O O O: :         |   
  2000 +-+             : :  : : : :                     : :     : :         |   
       |               : :  : : : :                     : :     : :         |   
       |                ::  : :  ::                     : :     : :         |   
  1000 +-+              :    :   :                       :       :          |   
       |                :    :   :                       :       :          |   
     0 +-+-O----O-O---O------O-O-----O----O--------------O------------------+   
                                                                                
                                                                                                                                                                
                  vm-scalability.time.percent_of_cpu_this_job_got               
                                                                                
  9000 +-+------------------------------------------------------------------+   
       O.O.+.O..+.+.O.+ O O    + O O.+.O..+.O.O.O.O.O.O    O.O.O   +..+.+.+.|   
  8000 +-+            :   :    :   :                  :    :   :   :        |   
  7000 +-+            :   :    :   :                  :    :   :   :        |   
       |              :   ::   :   :                   :   :   :   :        |   
  6000 +-+             : : :  : : :                    :  :     : :         |   
  5000 +-+             : : :  : : :                    :  :     : :         |   
       |               : : :  : : :                    :  :     : :         |   
  4000 +-+             : :  : : : :                     : :     : :         |   
  3000 +-+             : :  : : : :                     : :     : :         |   
       |               : :  : : : :                     : :     : :         |   
  2000 +-+              :   ::   :                      ::       :          |   
  1000 +-+              :    :   :                       :       :          |   
       |                :    :   :                       :       :          |   
     0 +-+-O----O-O---O------O-O-----O----O--------------O------------------+   
                                                                                
                                                                                                                                                                
                         vm-scalability.time.elapsed_time                       
                                                                                
  350 +-+-------------------------------------------------------------------+   
      O O    O     O :  O O   : O O    O   O O  O O O O    O O O   :        |   
  300 +-+            :    :   :   :                   :    :   :   :        |   
      |               :   :   :   :                   :   :    :   :        |   
  250 +-+             :  : : : : :                     :  :     : :         |   
      |               :  : : : : :                     :  :     : :         |   
  200 +-+             :  : : : : :                     :  :     : :         |   
      |                : : : : : :                     :  :     : :         |   
  150 +-+              : : : : : :                     : :      : :         |   
      |                : : : : : :                     : :      : :         |   
  100 +-+              : : : : : :                     : :      : :         |   
      |                ::   :   :                       ::       :          |   
   50 +-+               :   :   :                       :        :          |   
      |                 :   :   :                       :        :          |   
    0 +-+-O----O-O---O------O-O------O---O--------------O-------------------+   
                                                                                
                                                                                                                                                                
                       vm-scalability.time.elapsed_time.max                     
                                                                                
  350 +-+-------------------------------------------------------------------+   
      O O    O     O :  O O   : O O    O   O O  O O O O    O O O   :        |   
  300 +-+            :    :   :   :                   :    :   :   :        |   
      |               :   :   :   :                   :   :    :   :        |   
  250 +-+             :  : : : : :                     :  :     : :         |   
      |               :  : : : : :                     :  :     : :         |   
  200 +-+             :  : : : : :                     :  :     : :         |   
      |                : : : : : :                     :  :     : :         |   
  150 +-+              : : : : : :                     : :      : :         |   
      |                : : : : : :                     : :      : :         |   
  100 +-+              : : : : : :                     : :      : :         |   
      |                ::   :   :                       ::       :          |   
   50 +-+               :   :   :                       :        :          |   
      |                 :   :   :                       :        :          |   
    0 +-+-O----O-O---O------O-O------O---O--------------O-------------------+   
                                                                                
                                                                                                                                                                
                    vm-scalability.time.maximum_resident_set_size               
                                                                                
  7e+07 +-+-----------------------------------------------------------------+   
        |.+.+.+..+.+.+.+   +   +    +.+.+.+.+.+.+..+.+.+   +.+.+    +.+.+.+.|   
  6e+07 +-+            :   :   :    :                  :   :   :    :       |   
        |              :   :   :    :                  :   :   :    :       |   
  5e+07 +-+            :   :   ::   :                  :   :   :   :        |   
        |               : : : : :  :                    : :     :  :        |   
  4e+07 +-+             : : : : :  :                    : :     :  :        |   
        O O   O      O  :O:O: : : O:O   O   O O O  O O O: :O O O:  :        |   
  3e+07 +-+             : : : :  : :                    : :     : :         |   
        |               : : : :  : :                    : :     : :         |   
  2e+07 +-+             : : : :  : :                    : :     : :         |   
        |                :   :   ::                      :       ::         |   
  1e+07 +-+              :   :    :                      :       :          |   
        |                :   :    :                      :       :          |   
      0 +-+-O----O-O---O-----O-O------O---O--------------O------------------+   
                                                                                
                                                                                                                                                                
                         vm-scalability.time.minor_page_faults                  
                                                                                
  1.4e+08 +-+---------------------------------------------------------------+   
          |             :    :   :   :                 :    :   :   :       |   
  1.2e+08 +-+           :    :   :   :                 :    :   :   :       |   
          |              :   :   :   :                 :   :    :   :       |   
    1e+08 +-+            :  : : : : :                   :  :     : :        |   
          |              :  : : : : :                   :  :     : :        |   
    8e+07 +-+            :  : : : : :                   :  :     : :        |   
          O O   O     O   :O:O: : :O:O   O   O O O O O O:  :O O O: :        |   
    6e+07 +-+             : : : : : :                   : :      : :        |   
          |               : : : : : :                   : :      : :        |   
    4e+07 +-+             : : : : : :                   : :      : :        |   
          |               ::   :   :                     ::       :         |   
    2e+07 +-+              :   :   :                     :        :         |   
          |                :   :   :                     :        :         |   
        0 +-+-O---O-O---O------O-O-----O---O-------------O------------------+   
                                                                                
                                                                                                                                                                
                              vm-scalability.throughput                         
                                                                                
  6e+06 +-+-----------------------------------------------------------------+   
        |                                                                   |   
  5e+06 +-+.+.+..+.+.+.+   +   +    +.+.+.+.+.+.+..+.+.+   +.+.+    +.+.+.+.|   
        |              :   :   :    :                  :   :   :    :       |   
        O O   O      O : O O   :: O O   O   O O O  O O O   O O O    :       |   
  4e+06 +-+             :  ::  ::   :                   :  :    :  :        |   
        |               : : : : :  :                    : :     :  :        |   
  3e+06 +-+             : : : : :  :                    : :     :  :        |   
        |               : : : :  : :                    : :     :  :        |   
  2e+06 +-+             : : : :  : :                    : :     : :         |   
        |               : : : :  : :                    : :     : :         |   
        |                ::  ::  : :                     ::      ::         |   
  1e+06 +-+              :   :    :                      :       ::         |   
        |                :   :    :                      :       :          |   
      0 +-+-O----O-O---O-----O-O------O---O--------------O------------------+   
                                                                                
                                                                                                                                                                
                            vm-scalability.free_time                            
                                                                                
  40 +-+--------------------------------------------------------------------+   
     |.    .+.+.    .+   +   +    +.    .+.+.+.    .+.+                     |   
  35 +-+.+.     +.+. :   :   :    : +.+.       +.+.   :   +.+..+   +.+..+.+.|   
  30 +-+             :   :   :    :                   :   :    :   :        |   
     |               :   :   ::   :                   :   :    :   :        |   
  25 +-+              : : : : :  :                     :  :     :  :        |   
     |                : : : : :  :                     : :      : :         |   
  20 +-+              : : : : :  :         O O O O     : :      : :         |   
     O O    O     O   :O:O: :  :O:O   O             O O: :O O  O: :         |   
  15 +-+              : : : :  : :                     : :      : :         |   
  10 +-+              : : : :  : :                     : :      : :         |   
     |                 :   :   ::                       ::       ::         |   
   5 +-+               :   :    :                       :        :          |   
     |                 :   :    :                       :        :          |   
   0 +-+-O----O-O----O-----O-O------O----O--------------O-------------------+   
                                                                                
                                                                                                                                                                
                                vm-scalability.median                           
                                                                                
  60000 +-+-----------------------------------------------------------------+   
        |              :   :   :    :                  :   :   :    :       |   
  50000 +-+            :   :   :    :                  :   :   :    :       |   
        O O   O      O : O O   :: O O   O   O O O  O O O   O O O   :        |   
        |               : : : : :  :                    : :     :  :        |   
  40000 +-+             : : : : :  :                    : :     :  :        |   
        |               : : : : :  :                    : :     :  :        |   
  30000 +-+             : : : :  : :                    : :     :  :        |   
        |               : : : :  : :                    : :     : :         |   
  20000 +-+             : : : :  : :                    : :     : :         |   
        |               : : : :  : :                    : :     : :         |   
        |                :   :   ::                      :       ::         |   
  10000 +-+              :   :    :                      :       :          |   
        |                :   :    :                      :       :          |   
      0 +-+-O----O-O---O-----O-O------O---O--------------O------------------+   
                                                                                
                                                                                                                                                                
                                vm-scalability.workload                         
                                                                                
  1.6e+09 +-+---------------------------------------------------------------+   
          |             +    :   :   +         +       :    :   :   : +     |   
  1.4e+09 +-+           :    :   :   :                 :    :   :   :       |   
  1.2e+09 O-O   O     O :  O O   : O O   O   O O O O O O   :O O O   :       |   
          |              :  : : : :  :                  :  :     : :        |   
    1e+09 +-+            :  : : : : :                   :  :     : :        |   
          |              :  : : : : :                   :  :     : :        |   
    8e+08 +-+            :  : : : : :                   :  :     : :        |   
          |               : : : : : :                   : :      : :        |   
    6e+08 +-+             : : : : : :                   : :      : :        |   
    4e+08 +-+             : : : : : :                   : :      : :        |   
          |               ::   :   :                     ::       :         |   
    2e+08 +-+              :   :   :                     :        :         |   
          |                :   :   :                     :        :         |   
        0 +-+-O---O-O---O------O-O-----O---O-------------O------------------+   
                                                                                
                                                                                
[*] bisect-good sample
[O] bisect-bad  sample

***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory


***************************************************************************************************
lkp-skl-2sp6: 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
  gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-03-19.cgz/300s/1T/lkp-skl-2sp6/lru-shm/vm-scalability/0x200005a

commit: 
  f568cf93ca ("vfs: Convert smackfs to use the new mount API")
  27eb9d500d ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")

f568cf93ca0923a0 27eb9d500d71d93e2b2f55c226b 
---------------- --------------------------- 
       fail:runs  %reproduction    fail:runs
           |             |             |    
          1:4          -25%            :4     kmsg.Firmware_Bug]:the_BIOS_has_corrupted_hw-PMU_resources(MSR#is#)
           :4           25%           1:4     kmsg.Firmware_Bug]:the_BIOS_has_corrupted_hw-PMU_resources(MSR#is#c5)
          3:4           17%           4:4     perf-profile.calltrace.cycles-pp.sync_regs.error_entry.do_access
          6:4           33%           7:4     perf-profile.calltrace.cycles-pp.error_entry.do_access
         %stddev     %change         %stddev
             \          |                \  
      0.04           -44.5%       0.02        vm-scalability.free_time
    493774            -6.9%     459676        vm-scalability.median
      0.10 ±  4%     -70.2%       0.03 ±  8%  vm-scalability.median_stddev
      0.10 ±  3%     -71.6%       0.03 ±  8%  vm-scalability.stddev
  35560067            -6.7%   33183685        vm-scalability.throughput
    265.62            +5.3%     279.70        vm-scalability.time.elapsed_time
    265.62            +5.3%     279.70        vm-scalability.time.elapsed_time.max
     53018 ±  4%     +12.9%      59872 ±  2%  vm-scalability.time.involuntary_context_switches
    915746           -50.0%     458060        vm-scalability.time.maximum_resident_set_size
 5.282e+08            +1.8%  5.378e+08        vm-scalability.time.minor_page_faults
      1913            +3.0%       1970        vm-scalability.time.percent_of_cpu_this_job_got
      3184            +7.2%       3413        vm-scalability.time.system_time
      1899           +10.5%       2099        vm-scalability.time.user_time
     33022          +102.9%      67014        vm-scalability.time.voluntary_context_switches
 2.371e+09            +1.6%  2.408e+09        vm-scalability.workload
     53130 ±  4%    +157.8%     136948 ± 40%  cpuidle.C1.usage
    207860           +39.5%     290026        cpuidle.POLL.time
    101579           +28.5%     130526        cpuidle.POLL.usage
     73.00            -2.1%      71.50        vmstat.cpu.id
  34506226           -48.0%   17935148        vmstat.memory.cache
  96542374           +17.2%  1.131e+08        vmstat.memory.free
      3293 ±  2%     +35.6%       4466        vmstat.system.cs
    851.33            +6.9%     910.25        turbostat.Avg_MHz
     53038 ±  4%    +155.5%     135497 ± 39%  turbostat.C1
     30.18           -22.3        7.89 ± 11%  turbostat.PKG_%
     48.33            +8.1%      52.25 ±  2%  turbostat.PkgTmp
    177.02            +6.2%     187.92        turbostat.PkgWatt
  34577155           -48.1%   17950176        meminfo.Cached
  33784819           -49.7%   16984540        meminfo.Committed_AS
  33341535           -49.9%   16711076        meminfo.Inactive
  33339708           -49.9%   16708826        meminfo.Inactive(anon)
      1826           +23.1%       2248        meminfo.Inactive(file)
    145341           -26.4%     106952        meminfo.KReclaimable
   8852817 ±  3%     -50.6%    4377013        meminfo.Mapped
  95780719           +17.4%  1.124e+08        meminfo.MemAvailable
  96325492           +17.3%   1.13e+08        meminfo.MemFree
  35378531           -47.1%   18707929        meminfo.Memused
     20756 ±  2%     -43.5%      11735        meminfo.PageTables
    145341           -26.4%     106952        meminfo.SReclaimable
  33354229           -49.9%   16727090        meminfo.Shmem
    274165           -13.2%     238008        meminfo.Slab
    257335           -51.1%     125813        meminfo.max_used_kB
     62431 ± 26%     -49.8%      31335 ±  6%  softirqs.CPU18.RCU
     24215 ±  3%     +12.7%      27285 ±  6%  softirqs.CPU18.SCHED
     24808           +18.1%      29295 ± 11%  softirqs.CPU23.RCU
     21472 ±  6%     +22.6%      26326 ± 17%  softirqs.CPU30.RCU
     20425 ±  9%     +30.1%      26584 ± 11%  softirqs.CPU32.RCU
     22603 ±  7%     +20.0%      27119 ± 13%  softirqs.CPU34.RCU
     24977 ±  4%     +20.7%      30135 ± 10%  softirqs.CPU39.RCU
     24481 ±  2%     +16.9%      28618 ±  9%  softirqs.CPU40.RCU
     24336 ± 11%     +13.0%      27510 ± 11%  softirqs.CPU41.RCU
     25706 ±  4%     +20.1%      30880 ±  8%  softirqs.CPU44.RCU
     24456 ±  4%     +21.1%      29606 ± 10%  softirqs.CPU48.RCU
     22435 ± 11%     +37.2%      30774 ± 10%  softirqs.CPU50.RCU
     23702 ±  5%     +28.3%      30419 ± 11%  softirqs.CPU51.RCU
     25067 ±  7%     +32.1%      33123 ±  4%  softirqs.CPU52.RCU
     22262 ±  4%     +17.7%      26209 ± 13%  softirqs.CPU58.RCU
     35952 ± 10%     -22.6%      27814 ± 10%  softirqs.CPU6.RCU
     25967 ±  4%      +8.4%      28142        softirqs.CPU7.SCHED
   4406424 ±  7%     -49.2%    2236674 ±  6%  numa-vmstat.node0.nr_file_pages
  11914548 ±  2%     +18.3%   14091804        numa-vmstat.node0.nr_free_pages
   4254001 ±  7%     -51.1%    2080200 ±  6%  numa-vmstat.node0.nr_inactive_anon
   1024415 ±  2%     -45.2%     560990        numa-vmstat.node0.nr_mapped
      2444 ±  9%     -38.0%       1515 ± 12%  numa-vmstat.node0.nr_page_table_pages
   4255480 ±  7%     -51.1%    2082984 ±  6%  numa-vmstat.node0.nr_shmem
     19425 ±  6%     -23.1%      14934 ±  2%  numa-vmstat.node0.nr_slab_reclaimable
   4253930 ±  7%     -51.1%    2080107 ±  6%  numa-vmstat.node0.nr_zone_inactive_anon
     34197 ± 13%     +10.5%      37790 ± 11%  numa-vmstat.node1.nr_anon_pages
     70.67 ± 24%    +170.3%     191.00 ± 30%  numa-vmstat.node1.nr_dirtied
   4217319 ±  7%     -46.8%    2244814 ±  6%  numa-vmstat.node1.nr_file_pages
  12188079 ±  2%     +16.2%   14163735        numa-vmstat.node1.nr_free_pages
   4060115 ±  8%     -48.5%    2090670 ±  7%  numa-vmstat.node1.nr_inactive_anon
   1048978 ±  3%     -48.0%     545102 ±  2%  numa-vmstat.node1.nr_mapped
      2558 ±  8%     -44.0%       1431 ± 14%  numa-vmstat.node1.nr_page_table_pages
   4062271 ±  8%     -48.5%    2092470 ±  7%  numa-vmstat.node1.nr_shmem
     16926 ±  7%     -30.2%      11811 ±  3%  numa-vmstat.node1.nr_slab_reclaimable
     63.33 ± 24%    +167.2%     169.25 ± 30%  numa-vmstat.node1.nr_written
   4060057 ±  8%     -48.5%    2090600 ±  7%  numa-vmstat.node1.nr_zone_inactive_anon
     17465           +28.1%      22378 ±  2%  slabinfo.anon_vma.active_objs
    379.00           +28.3%     486.25 ±  2%  slabinfo.anon_vma.active_slabs
     17465           +28.2%      22391 ±  2%  slabinfo.anon_vma.num_objs
    379.00           +28.3%     486.25 ±  2%  slabinfo.anon_vma.num_slabs
     34169           +19.5%      40848        slabinfo.anon_vma_chain.active_objs
    533.33           +19.7%     638.50        slabinfo.anon_vma_chain.active_slabs
     34169           +19.7%      40900        slabinfo.anon_vma_chain.num_objs
    533.33           +19.7%     638.50        slabinfo.anon_vma_chain.num_slabs
      3571            +7.4%       3835 ±  3%  slabinfo.pid.active_objs
      3571            +7.4%       3835 ±  3%  slabinfo.pid.num_objs
    150826           -43.4%      85301        slabinfo.radix_tree_node.active_objs
      2749           -42.7%       1574        slabinfo.radix_tree_node.active_slabs
    154006           -42.7%      88185        slabinfo.radix_tree_node.num_objs
      2749           -42.7%       1574        slabinfo.radix_tree_node.num_slabs
      2385 ±  4%     +17.6%       2805 ±  5%  slabinfo.sighand_cache.active_objs
      2388 ±  4%     +17.8%       2813 ±  5%  slabinfo.sighand_cache.num_objs
      3459 ±  2%      +9.2%       3778 ±  3%  slabinfo.signal_cache.active_objs
      3459 ±  2%      +9.2%       3778 ±  3%  slabinfo.signal_cache.num_objs
      1197 ±  4%     +19.7%       1433 ±  3%  slabinfo.skbuff_ext_cache.active_objs
      1204 ±  4%     +19.3%       1436 ±  3%  slabinfo.skbuff_ext_cache.num_objs
     33178 ± 11%     +65.8%      55010 ± 23%  sched_debug.cfs_rq:/.min_vruntime.stddev
     33153 ± 10%     +66.0%      55046 ± 23%  sched_debug.cfs_rq:/.spread0.stddev
    408.53 ±  6%     +29.1%     527.30        sched_debug.cfs_rq:/.util_est_enqueued.max
     82.45 ±  6%     +27.1%     104.81 ± 11%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
      9.78 ±  2%     +10.0%      10.76 ±  3%  sched_debug.cpu.cpu_load[0].avg
      9.62 ±  4%     +15.0%      11.07 ±  3%  sched_debug.cpu.cpu_load[1].avg
      9.52 ±  5%     +18.0%      11.23 ±  3%  sched_debug.cpu.cpu_load[2].avg
      9.55 ±  4%     +18.5%      11.32 ±  3%  sched_debug.cpu.cpu_load[3].avg
      9.72 ±  3%     +18.8%      11.54 ±  2%  sched_debug.cpu.cpu_load[4].avg
      8720 ± 11%     +73.6%      15136        sched_debug.cpu.curr->pid.max
      1411 ±  9%     +90.4%       2687 ±  9%  sched_debug.cpu.curr->pid.stddev
     11784 ±  3%     +13.5%      13371        sched_debug.cpu.load.avg
    112105 ± 12%     +19.5%     134002        sched_debug.cpu.nr_load_updates.max
      1962 ± 15%     +73.8%       3411 ± 17%  sched_debug.cpu.nr_load_updates.stddev
      0.23 ±  2%      +9.6%       0.26 ±  2%  sched_debug.cpu.nr_running.stddev
      5073 ± 12%     +58.9%       8061        sched_debug.cpu.nr_switches.avg
     22637 ± 15%     +94.2%      43953 ± 12%  sched_debug.cpu.nr_switches.max
      1515 ±  5%     +76.3%       2671 ± 17%  sched_debug.cpu.nr_switches.min
      4110 ± 17%     +74.3%       7163 ±  8%  sched_debug.cpu.nr_switches.stddev
    -16.67           -35.5%     -10.75        sched_debug.cpu.nr_uninterruptible.min
      5.82 ±  8%     -16.6%       4.85 ±  5%  sched_debug.cpu.nr_uninterruptible.stddev
  17663224 ±  7%     -49.3%    8955455 ±  6%  numa-meminfo.node0.FilePages
  17054254 ±  7%     -51.2%    8330481 ±  6%  numa-meminfo.node0.Inactive
  17053513 ±  7%     -51.2%    8329558 ±  6%  numa-meminfo.node0.Inactive(anon)
     77757 ±  6%     -23.2%      59747 ±  2%  numa-meminfo.node0.KReclaimable
   4351999 ±  2%     -48.2%    2256215 ±  2%  numa-meminfo.node0.Mapped
  47619833 ±  2%     +18.3%   56357983        numa-meminfo.node0.MemFree
  18057485 ±  7%     -48.4%    9319335 ±  5%  numa-meminfo.node0.MemUsed
     10156 ±  8%     -39.8%       6112 ± 10%  numa-meminfo.node0.PageTables
     77757 ±  6%     -23.2%      59747 ±  2%  numa-meminfo.node0.SReclaimable
  17059436 ±  7%     -51.1%    8340675 ±  6%  numa-meminfo.node0.Shmem
    136791 ± 13%     +10.5%     151136 ± 11%  numa-meminfo.node1.AnonPages
  16876074 ±  7%     -46.8%    8981202 ±  6%  numa-meminfo.node1.FilePages
  16248318 ±  8%     -48.5%    8365977 ±  7%  numa-meminfo.node1.Inactive
  16247232 ±  8%     -48.5%    8364650 ±  7%  numa-meminfo.node1.Inactive(anon)
     67560 ±  7%     -30.1%      47237 ±  3%  numa-meminfo.node1.KReclaimable
   4186453 ±  3%     -47.9%    2182438 ±  4%  numa-meminfo.node1.Mapped
  48745406 ±  2%     +16.2%   56652661        numa-meminfo.node1.MemFree
  17281296 ±  7%     -45.8%    9374041 ±  6%  numa-meminfo.node1.MemUsed
      9974 ±  6%     -42.9%       5693 ± 16%  numa-meminfo.node1.PageTables
     67560 ±  7%     -30.1%      47237 ±  3%  numa-meminfo.node1.SReclaimable
  16255869 ±  8%     -48.5%    8371819 ±  7%  numa-meminfo.node1.Shmem
    128302 ±  3%     -17.3%     106117 ±  7%  numa-meminfo.node1.Slab
     64056            +1.9%      65273        proc-vmstat.nr_active_anon
    284.00           +80.7%     513.25        proc-vmstat.nr_dirtied
   2391828           +17.3%    2805292        proc-vmstat.nr_dirty_background_threshold
   4790033           +17.3%    5618128        proc-vmstat.nr_dirty_threshold
   8615955           -47.9%    4485338        proc-vmstat.nr_file_pages
  24110149           +17.2%   28251496        proc-vmstat.nr_free_pages
   8306311           -49.7%    4174722        proc-vmstat.nr_inactive_anon
    456.00           +23.2%     562.00        proc-vmstat.nr_inactive_file
   2183921 ±  3%     -49.7%    1097757        proc-vmstat.nr_mapped
      5274 ±  3%     -44.3%       2939        proc-vmstat.nr_page_table_pages
   8309962           -49.7%    4179304        proc-vmstat.nr_shmem
     36356           -26.5%      26731        proc-vmstat.nr_slab_reclaimable
    266.67           +89.4%     505.00        proc-vmstat.nr_written
     64056            +1.9%      65273        proc-vmstat.nr_zone_active_anon
   8306311           -49.7%    4174722        proc-vmstat.nr_zone_inactive_anon
    456.00           +23.2%     562.00        proc-vmstat.nr_zone_inactive_file
      6493 ± 34%    +198.8%      19402 ± 43%  proc-vmstat.numa_hint_faults
      2898 ± 20%    +285.7%      11176 ± 58%  proc-vmstat.numa_hint_faults_local
 5.297e+08            +1.8%   5.39e+08        proc-vmstat.numa_hit
    993.33 ± 14%    +118.5%       2170 ± 35%  proc-vmstat.numa_huge_pte_updates
 5.296e+08            +1.8%   5.39e+08        proc-vmstat.numa_local
    539384 ± 14%    +113.4%    1150839 ± 34%  proc-vmstat.numa_pte_updates
      5669 ±  2%     +20.9%       6856 ±  5%  proc-vmstat.pgactivate
 5.308e+08            +1.8%  5.401e+08        proc-vmstat.pgalloc_normal
 5.289e+08            +1.8%  5.385e+08        proc-vmstat.pgfault
 5.306e+08            +1.6%  5.393e+08        proc-vmstat.pgfree
     50.92            -2.1       48.86 ±  3%  perf-profile.calltrace.cycles-pp.page_fault.do_access
     13.22            -1.7       11.55 ±  3%  perf-profile.calltrace.cycles-pp.do_rw_once
      1.61 ± 12%      -1.3        0.31 ±102%  perf-profile.calltrace.cycles-pp.evict.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.61 ± 12%      -1.3        0.31 ±102%  perf-profile.calltrace.cycles-pp.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
      4.41            -1.1        3.29 ± 11%  perf-profile.calltrace.cycles-pp.shmem_undo_range.shmem_truncate_range.shmem_evict_inode.evict.do_unlinkat
      4.42            -1.1        3.30 ± 11%  perf-profile.calltrace.cycles-pp.shmem_evict_inode.evict.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
      4.42            -1.1        3.30 ± 11%  perf-profile.calltrace.cycles-pp.shmem_truncate_range.shmem_evict_inode.evict.do_unlinkat.do_syscall_64
      1.69 ± 12%      -1.0        0.70 ± 14%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
      1.69 ± 12%      -1.0        0.70 ± 14%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
      2.14            -0.7        1.40 ±  8%  perf-profile.calltrace.cycles-pp.__pagevec_release.shmem_undo_range.shmem_truncate_range.shmem_evict_inode.evict
      2.12 ±  2%      -0.7        1.39 ±  8%  perf-profile.calltrace.cycles-pp.release_pages.__pagevec_release.shmem_undo_range.shmem_truncate_range.shmem_evict_inode
      7.59            -0.7        6.92 ±  3%  perf-profile.calltrace.cycles-pp.filemap_map_pages.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
      1.43            -0.5        0.91 ±  7%  perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.__pagevec_release.shmem_undo_range.shmem_truncate_range
      1.72 ±  2%      -0.4        1.33 ±  2%  perf-profile.calltrace.cycles-pp.shmem_add_to_page_cache.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
      3.27 ±  2%      -0.4        2.92 ±  3%  perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault.__do_fault
      0.76 ±  3%      -0.3        0.42 ± 57%  perf-profile.calltrace.cycles-pp.mem_cgroup_commit_charge.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
      3.55 ±  3%      -0.3        3.23 ±  3%  perf-profile.calltrace.cycles-pp.__lru_cache_add.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
      0.57 ±  4%      -0.3        0.25 ±100%  perf-profile.calltrace.cycles-pp.xas_find.filemap_map_pages.__handle_mm_fault.handle_mm_fault.__do_page_fault
      1.59 ±  5%      -0.3        1.32 ±  2%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp
      1.65 ±  5%      -0.3        1.39 ±  2%  perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault
      0.71            -0.1        0.65 ±  3%  perf-profile.calltrace.cycles-pp.security_vm_enough_memory_mm.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault.__do_fault
      0.58 ±  4%      +0.0        0.61 ±  4%  perf-profile.calltrace.cycles-pp.__list_del_entry_valid.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page
      0.72 ±  7%      +0.1        0.79 ±  7%  perf-profile.calltrace.cycles-pp.delete_from_page_cache.truncate_inode_page.shmem_undo_range.shmem_truncate_range.shmem_evict_inode
      8.50 ±  5%      +1.4        9.90 ± 14%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
      6.53 ± 12%      +1.7        8.19 ± 18%  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
  1.25e+10            -1.4%  1.233e+10        perf-stat.i.branch-instructions
  40278349            -7.9%   37105012        perf-stat.i.cache-misses
 2.002e+08 ±  3%      -9.7%  1.807e+08 ±  3%  perf-stat.i.cache-references
      3242           +37.4%       4454        perf-stat.i.context-switches
      1.09 ±  3%     +11.6%       1.21        perf-stat.i.cpi
 5.886e+10            +7.8%  6.346e+10        perf-stat.i.cpu-cycles
     82.67           +73.7%     143.61 ±  2%  perf-stat.i.cpu-migrations
    925.34           +34.2%       1241        perf-stat.i.cycles-between-cache-misses
      0.03 ± 10%      +0.0        0.03 ±  4%  perf-stat.i.dTLB-store-miss-rate%
   2048736            -2.3%    2002293        perf-stat.i.dTLB-store-misses
   2672712 ±  3%     -17.4%    2207575 ±  2%  perf-stat.i.iTLB-load-misses
      0.95 ±  4%     -10.5%       0.85        perf-stat.i.ipc
   1962276            -2.7%    1909124        perf-stat.i.minor-faults
   2409945            -4.8%    2294023 ±  3%  perf-stat.i.node-load-misses
     34.06 ±  2%      -5.6       28.42 ±  4%  perf-stat.i.node-store-miss-rate%
   1962504            -2.7%    1909687        perf-stat.i.page-faults
      4.49 ±  3%      -9.2%       4.07 ±  3%  perf-stat.overall.MPKI
      1.32            +8.4%       1.43        perf-stat.overall.cpi
      1466           +16.9%       1714        perf-stat.overall.cycles-between-cache-misses
     77.31 ±  2%      -5.1       72.25 ±  2%  perf-stat.overall.iTLB-load-miss-rate%
     16699 ±  4%     +20.2%      20073 ±  2%  perf-stat.overall.instructions-per-iTLB-miss
      0.76            -7.8%       0.70        perf-stat.overall.ipc
      5060            +2.4%       5183        perf-stat.overall.path-length
 1.264e+10            -2.0%  1.239e+10        perf-stat.ps.branch-instructions
  40566190            -8.4%   37177450        perf-stat.ps.cache-misses
 2.023e+08 ±  3%     -10.3%  1.815e+08 ±  3%  perf-stat.ps.cache-references
      3241           +37.3%       4450        perf-stat.ps.context-switches
 5.951e+10            +7.1%  6.373e+10        perf-stat.ps.cpu-cycles
     83.53           +73.6%     145.03 ±  2%  perf-stat.ps.cpu-migrations
  1.18e+10            -1.5%  1.162e+10        perf-stat.ps.dTLB-loads
   2073937            -2.9%    2014135        perf-stat.ps.dTLB-store-misses
   2704797 ±  3%     -17.9%    2220519 ±  2%  perf-stat.ps.iTLB-load-misses
   1987508            -3.3%    1921858        perf-stat.ps.minor-faults
   2405765            -4.9%    2287716 ±  3%  perf-stat.ps.node-load-misses
   1987508            -3.3%    1921858        perf-stat.ps.page-faults
   1.2e+13            +4.0%  1.248e+13        perf-stat.total.instructions
    380.00 ± 13%     -48.5%     195.75 ± 30%  interrupts.70:PCI-MSI.70260738-edge.eth3-TxRx-1
    207675            +7.7%     223582 ±  2%  interrupts.CAL:Function_call_interrupts
      1712 ±  4%     +28.6%       2202 ±  7%  interrupts.CPU1.NMI:Non-maskable_interrupts
      1712 ±  4%     +28.6%       2202 ±  7%  interrupts.CPU1.PMI:Performance_monitoring_interrupts
      1794 ±  8%     +25.2%       2246 ±  6%  interrupts.CPU10.NMI:Non-maskable_interrupts
      1794 ±  8%     +25.2%       2246 ±  6%  interrupts.CPU10.PMI:Performance_monitoring_interrupts
      1795 ±  4%     +19.1%       2137 ± 10%  interrupts.CPU11.NMI:Non-maskable_interrupts
      1795 ±  4%     +19.1%       2137 ± 10%  interrupts.CPU11.PMI:Performance_monitoring_interrupts
      1710 ±  4%     +46.6%       2507 ± 17%  interrupts.CPU13.NMI:Non-maskable_interrupts
      1710 ±  4%     +46.6%       2507 ± 17%  interrupts.CPU13.PMI:Performance_monitoring_interrupts
      1701 ±  3%     +28.3%       2182 ±  8%  interrupts.CPU14.NMI:Non-maskable_interrupts
      1701 ±  3%     +28.3%       2182 ±  8%  interrupts.CPU14.PMI:Performance_monitoring_interrupts
      1701 ±  5%     +38.3%       2352 ± 15%  interrupts.CPU15.NMI:Non-maskable_interrupts
      1701 ±  5%     +38.3%       2352 ± 15%  interrupts.CPU15.PMI:Performance_monitoring_interrupts
      1708 ±  3%     +27.0%       2168 ±  5%  interrupts.CPU16.NMI:Non-maskable_interrupts
      1708 ±  3%     +27.0%       2168 ±  5%  interrupts.CPU16.PMI:Performance_monitoring_interrupts
    212.00 ± 47%    +264.3%     772.25 ± 64%  interrupts.CPU18.RES:Rescheduling_interrupts
     69.00 ± 60%    +100.7%     138.50 ± 36%  interrupts.CPU18.TLB:TLB_shootdowns
      1826 ±  5%     +81.2%       3309 ± 31%  interrupts.CPU19.NMI:Non-maskable_interrupts
      1826 ±  5%     +81.2%       3309 ± 31%  interrupts.CPU19.PMI:Performance_monitoring_interrupts
      2895 ±  2%      +8.0%       3127 ±  4%  interrupts.CPU2.CAL:Function_call_interrupts
      1760 ±  9%     +33.9%       2356 ± 18%  interrupts.CPU20.NMI:Non-maskable_interrupts
      1760 ±  9%     +33.9%       2356 ± 18%  interrupts.CPU20.PMI:Performance_monitoring_interrupts
    299.67 ± 37%    +155.3%     765.00 ± 38%  interrupts.CPU20.RES:Rescheduling_interrupts
      1741 ±  9%     +62.9%       2836 ± 52%  interrupts.CPU21.NMI:Non-maskable_interrupts
      1741 ±  9%     +62.9%       2836 ± 52%  interrupts.CPU21.PMI:Performance_monitoring_interrupts
    184.67 ± 48%    +211.2%     574.75 ± 34%  interrupts.CPU21.RES:Rescheduling_interrupts
      2911            +7.7%       3135 ±  3%  interrupts.CPU22.CAL:Function_call_interrupts
      1749 ±  9%     +23.3%       2157 ± 16%  interrupts.CPU22.NMI:Non-maskable_interrupts
      1749 ±  9%     +23.3%       2157 ± 16%  interrupts.CPU22.PMI:Performance_monitoring_interrupts
    210.67 ± 27%    +285.1%     811.25 ± 58%  interrupts.CPU22.RES:Rescheduling_interrupts
     57.33 ± 46%    +209.2%     177.25 ± 24%  interrupts.CPU22.TLB:TLB_shootdowns
      1746 ± 10%     +19.3%       2083 ± 12%  interrupts.CPU23.NMI:Non-maskable_interrupts
      1746 ± 10%     +19.3%       2083 ± 12%  interrupts.CPU23.PMI:Performance_monitoring_interrupts
    285.67 ± 28%    +120.8%     630.75 ± 31%  interrupts.CPU23.RES:Rescheduling_interrupts
      2900            +8.2%       3137 ±  2%  interrupts.CPU24.CAL:Function_call_interrupts
      1738 ±  9%     +29.1%       2245 ± 13%  interrupts.CPU24.NMI:Non-maskable_interrupts
      1738 ±  9%     +29.1%       2245 ± 13%  interrupts.CPU24.PMI:Performance_monitoring_interrupts
    371.00 ± 14%     +90.9%     708.25 ± 31%  interrupts.CPU24.RES:Rescheduling_interrupts
    128.33 ± 30%    +123.4%     286.75 ± 78%  interrupts.CPU24.TLB:TLB_shootdowns
      2866            +9.6%       3140 ±  2%  interrupts.CPU25.CAL:Function_call_interrupts
      1746 ± 10%     +65.6%       2891 ± 27%  interrupts.CPU25.NMI:Non-maskable_interrupts
      1746 ± 10%     +65.6%       2891 ± 27%  interrupts.CPU25.PMI:Performance_monitoring_interrupts
    237.00 ± 13%    +204.1%     720.75 ± 56%  interrupts.CPU25.RES:Rescheduling_interrupts
     31.67 ± 15%    +435.3%     169.50 ± 49%  interrupts.CPU25.TLB:TLB_shootdowns
    275.67 ± 68%    +140.1%     662.00 ± 12%  interrupts.CPU26.RES:Rescheduling_interrupts
      2797 ±  5%     +13.0%       3159 ±  3%  interrupts.CPU27.CAL:Function_call_interrupts
    224.00 ± 85%    +134.6%     525.50 ± 16%  interrupts.CPU27.RES:Rescheduling_interrupts
     60.33 ± 46%    +222.4%     194.50 ± 56%  interrupts.CPU27.TLB:TLB_shootdowns
      2916            +9.2%       3185 ±  4%  interrupts.CPU28.CAL:Function_call_interrupts
      1773 ± 11%     +17.8%       2089 ± 11%  interrupts.CPU29.NMI:Non-maskable_interrupts
      1773 ± 11%     +17.8%       2089 ± 11%  interrupts.CPU29.PMI:Performance_monitoring_interrupts
    181.33 ± 44%    +564.0%       1204 ± 60%  interrupts.CPU29.RES:Rescheduling_interrupts
      1729 ±  5%     +26.1%       2179 ±  7%  interrupts.CPU3.NMI:Non-maskable_interrupts
      1729 ±  5%     +26.1%       2179 ±  7%  interrupts.CPU3.PMI:Performance_monitoring_interrupts
      1748 ± 10%     +20.6%       2109 ± 10%  interrupts.CPU30.NMI:Non-maskable_interrupts
      1748 ± 10%     +20.6%       2109 ± 10%  interrupts.CPU30.PMI:Performance_monitoring_interrupts
      2945            +6.2%       3127 ±  4%  interrupts.CPU31.CAL:Function_call_interrupts
      1745 ± 10%     +19.4%       2084 ± 10%  interrupts.CPU31.NMI:Non-maskable_interrupts
      1745 ± 10%     +19.4%       2084 ± 10%  interrupts.CPU31.PMI:Performance_monitoring_interrupts
    244.67 ± 30%    +199.4%     732.50 ± 47%  interrupts.CPU31.RES:Rescheduling_interrupts
    248.00 ± 39%    +129.0%     568.00 ± 40%  interrupts.CPU33.RES:Rescheduling_interrupts
    100.33 ± 16%    +106.8%     207.50 ± 48%  interrupts.CPU33.TLB:TLB_shootdowns
      2810 ±  3%     +12.2%       3154 ±  3%  interrupts.CPU34.CAL:Function_call_interrupts
      1762 ± 10%     +29.3%       2279 ± 14%  interrupts.CPU34.NMI:Non-maskable_interrupts
      1762 ± 10%     +29.3%       2279 ± 14%  interrupts.CPU34.PMI:Performance_monitoring_interrupts
    249.00 ± 53%    +271.4%     924.75 ± 29%  interrupts.CPU34.RES:Rescheduling_interrupts
      2901            +8.2%       3138 ±  4%  interrupts.CPU35.CAL:Function_call_interrupts
      1720 ±  4%     +25.7%       2163 ±  7%  interrupts.CPU39.NMI:Non-maskable_interrupts
      1720 ±  4%     +25.7%       2163 ±  7%  interrupts.CPU39.PMI:Performance_monitoring_interrupts
     95.67 ± 88%    +110.1%     201.00 ± 86%  interrupts.CPU39.TLB:TLB_shootdowns
      2836 ±  6%     +10.4%       3130        interrupts.CPU4.CAL:Function_call_interrupts
      1854 ± 11%     +18.4%       2195 ±  7%  interrupts.CPU4.NMI:Non-maskable_interrupts
      1854 ± 11%     +18.4%       2195 ±  7%  interrupts.CPU4.PMI:Performance_monitoring_interrupts
    138.00 ± 47%     +76.1%     243.00 ± 38%  interrupts.CPU40.RES:Rescheduling_interrupts
     44.00 ± 42%    +277.3%     166.00 ± 64%  interrupts.CPU40.TLB:TLB_shootdowns
      1707 ±  4%     +28.1%       2186 ±  8%  interrupts.CPU41.NMI:Non-maskable_interrupts
      1707 ±  4%     +28.1%       2186 ±  8%  interrupts.CPU41.PMI:Performance_monitoring_interrupts
     38.00 ± 52%    +186.8%     109.00 ± 69%  interrupts.CPU41.TLB:TLB_shootdowns
      1690 ±  4%     +30.0%       2196 ±  8%  interrupts.CPU42.NMI:Non-maskable_interrupts
      1690 ±  4%     +30.0%       2196 ±  8%  interrupts.CPU42.PMI:Performance_monitoring_interrupts
     71.67 ± 84%    +278.1%     271.00 ±109%  interrupts.CPU42.TLB:TLB_shootdowns
      1701 ±  3%     +26.9%       2159 ±  8%  interrupts.CPU43.NMI:Non-maskable_interrupts
      1701 ±  3%     +26.9%       2159 ±  8%  interrupts.CPU43.PMI:Performance_monitoring_interrupts
     66.33 ± 65%    +106.2%     136.75 ± 61%  interrupts.CPU43.TLB:TLB_shootdowns
      1716 ±  3%     +26.9%       2178 ±  8%  interrupts.CPU44.NMI:Non-maskable_interrupts
      1716 ±  3%     +26.9%       2178 ±  8%  interrupts.CPU44.PMI:Performance_monitoring_interrupts
     57.33 ± 97%     +76.2%     101.00 ± 58%  interrupts.CPU44.TLB:TLB_shootdowns
      1759 ±  2%     +26.4%       2224 ±  6%  interrupts.CPU45.NMI:Non-maskable_interrupts
      1759 ±  2%     +26.4%       2224 ±  6%  interrupts.CPU45.PMI:Performance_monitoring_interrupts
      1877 ± 13%     +18.3%       2222 ±  8%  interrupts.CPU46.NMI:Non-maskable_interrupts
      1877 ± 13%     +18.3%       2222 ±  8%  interrupts.CPU46.PMI:Performance_monitoring_interrupts
    220.33 ± 71%     +86.3%     410.50 ± 35%  interrupts.CPU47.RES:Rescheduling_interrupts
     49.33 ± 79%     +96.1%      96.75 ± 52%  interrupts.CPU47.TLB:TLB_shootdowns
     34.67 ± 29%    +149.5%      86.50 ± 50%  interrupts.CPU48.TLB:TLB_shootdowns
      1705 ±  4%     +66.4%       2838 ± 36%  interrupts.CPU49.NMI:Non-maskable_interrupts
      1705 ±  4%     +66.4%       2838 ± 36%  interrupts.CPU49.PMI:Performance_monitoring_interrupts
    331.67 ± 31%     -40.9%     196.00 ± 25%  interrupts.CPU49.RES:Rescheduling_interrupts
     41.67 ± 75%    +128.0%      95.00 ± 54%  interrupts.CPU49.TLB:TLB_shootdowns
      1727 ±  3%     +25.5%       2167 ±  8%  interrupts.CPU5.NMI:Non-maskable_interrupts
      1727 ±  3%     +25.5%       2167 ±  8%  interrupts.CPU5.PMI:Performance_monitoring_interrupts
      1705 ±  4%     +28.7%       2194 ± 11%  interrupts.CPU50.NMI:Non-maskable_interrupts
      1705 ±  4%     +28.7%       2194 ± 11%  interrupts.CPU50.PMI:Performance_monitoring_interrupts
     37.67 ± 40%    +145.6%      92.50 ± 28%  interrupts.CPU50.TLB:TLB_shootdowns
      1705 ±  5%     +29.7%       2211 ±  8%  interrupts.CPU51.NMI:Non-maskable_interrupts
      1705 ±  5%     +29.7%       2211 ±  8%  interrupts.CPU51.PMI:Performance_monitoring_interrupts
    223.00 ± 13%     +51.0%     336.75 ± 23%  interrupts.CPU51.RES:Rescheduling_interrupts
      1711 ±  4%     +28.0%       2191 ±  8%  interrupts.CPU52.NMI:Non-maskable_interrupts
      1711 ±  4%     +28.0%       2191 ±  8%  interrupts.CPU52.PMI:Performance_monitoring_interrupts
     76.33 ± 79%    +109.3%     159.75 ± 43%  interrupts.CPU53.TLB:TLB_shootdowns
    262.67 ±  9%    +139.8%     630.00 ± 19%  interrupts.CPU54.RES:Rescheduling_interrupts
     92.33 ± 52%     +32.4%     122.25 ± 44%  interrupts.CPU54.TLB:TLB_shootdowns
      2904 ±  2%      +7.2%       3113 ±  5%  interrupts.CPU55.CAL:Function_call_interrupts
      1770 ±  8%     +62.2%       2871 ± 29%  interrupts.CPU55.NMI:Non-maskable_interrupts
      1770 ±  8%     +62.2%       2871 ± 29%  interrupts.CPU55.PMI:Performance_monitoring_interrupts
      1761 ± 10%     +59.5%       2808 ± 42%  interrupts.CPU56.NMI:Non-maskable_interrupts
      1761 ± 10%     +59.5%       2808 ± 42%  interrupts.CPU56.PMI:Performance_monitoring_interrupts
      2645 ± 11%     +14.9%       3038 ±  5%  interrupts.CPU57.CAL:Function_call_interrupts
    257.67 ± 40%    +185.3%     735.25 ± 19%  interrupts.CPU58.RES:Rescheduling_interrupts
     46.67 ± 38%    +341.4%     206.00 ± 82%  interrupts.CPU58.TLB:TLB_shootdowns
      2927            +8.9%       3187 ±  4%  interrupts.CPU59.CAL:Function_call_interrupts
      1767 ±  9%     +17.1%       2069 ± 10%  interrupts.CPU59.NMI:Non-maskable_interrupts
      1767 ±  9%     +17.1%       2069 ± 10%  interrupts.CPU59.PMI:Performance_monitoring_interrupts
    146.00 ± 43%    +166.6%     389.25 ± 46%  interrupts.CPU59.RES:Rescheduling_interrupts
      1742 ±  9%     +43.9%       2506 ± 28%  interrupts.CPU60.NMI:Non-maskable_interrupts
      1742 ±  9%     +43.9%       2506 ± 28%  interrupts.CPU60.PMI:Performance_monitoring_interrupts
      2811 ±  3%     +12.0%       3150 ±  3%  interrupts.CPU61.CAL:Function_call_interrupts
    371.67 ± 20%     +43.1%     532.00 ± 28%  interrupts.CPU61.RES:Rescheduling_interrupts
      1781 ± 10%     +28.8%       2295 ± 18%  interrupts.CPU63.NMI:Non-maskable_interrupts
      1781 ± 10%     +28.8%       2295 ± 18%  interrupts.CPU63.PMI:Performance_monitoring_interrupts
    186.33 ± 44%    +176.5%     515.25 ± 42%  interrupts.CPU63.RES:Rescheduling_interrupts
     31.67 ± 22%    +500.0%     190.00 ± 58%  interrupts.CPU63.TLB:TLB_shootdowns
    375.33 ± 22%     +79.6%     674.00 ± 40%  interrupts.CPU64.RES:Rescheduling_interrupts
      1785 ± 12%     +17.4%       2096 ± 10%  interrupts.CPU65.NMI:Non-maskable_interrupts
      1785 ± 12%     +17.4%       2096 ± 10%  interrupts.CPU65.PMI:Performance_monitoring_interrupts
      1745 ± 11%     +19.7%       2089 ± 10%  interrupts.CPU66.NMI:Non-maskable_interrupts
      1745 ± 11%     +19.7%       2089 ± 10%  interrupts.CPU66.PMI:Performance_monitoring_interrupts
      1749 ±  9%     +19.5%       2090 ± 10%  interrupts.CPU67.NMI:Non-maskable_interrupts
      1749 ±  9%     +19.5%       2090 ± 10%  interrupts.CPU67.PMI:Performance_monitoring_interrupts
      1811 ±  8%     +17.7%       2132 ± 12%  interrupts.CPU68.NMI:Non-maskable_interrupts
      1811 ±  8%     +17.7%       2132 ± 12%  interrupts.CPU68.PMI:Performance_monitoring_interrupts
    179.33 ± 62%    +133.9%     419.50 ± 35%  interrupts.CPU68.RES:Rescheduling_interrupts
    380.00 ± 13%     -48.5%     195.75 ± 30%  interrupts.CPU69.70:PCI-MSI.70260738-edge.eth3-TxRx-1
      2682 ±  4%     +18.7%       3184 ±  5%  interrupts.CPU69.CAL:Function_call_interrupts
      1727 ±  4%     +26.5%       2184 ±  8%  interrupts.CPU7.NMI:Non-maskable_interrupts
      1727 ±  4%     +26.5%       2184 ±  8%  interrupts.CPU7.PMI:Performance_monitoring_interrupts
      2847 ±  4%     +11.1%       3162 ±  5%  interrupts.CPU70.CAL:Function_call_interrupts
      1777 ±  9%     +22.0%       2168 ± 11%  interrupts.CPU70.NMI:Non-maskable_interrupts
      1777 ±  9%     +22.0%       2168 ± 11%  interrupts.CPU70.PMI:Performance_monitoring_interrupts
    241.33 ± 32%    +137.4%     573.00 ± 25%  interrupts.CPU70.RES:Rescheduling_interrupts
      1780 ± 10%     +23.1%       2191 ± 11%  interrupts.CPU71.NMI:Non-maskable_interrupts
      1780 ± 10%     +23.1%       2191 ± 11%  interrupts.CPU71.PMI:Performance_monitoring_interrupts
      1704 ±  4%     +25.5%       2139 ±  8%  interrupts.CPU8.NMI:Non-maskable_interrupts
      1704 ±  4%     +25.5%       2139 ±  8%  interrupts.CPU8.PMI:Performance_monitoring_interrupts
      1774           +23.9%       2198 ±  8%  interrupts.CPU9.NMI:Non-maskable_interrupts
      1774           +23.9%       2198 ±  8%  interrupts.CPU9.PMI:Performance_monitoring_interrupts
    135003 ±  4%     +21.0%     163358 ±  7%  interrupts.NMI:Non-maskable_interrupts
    135003 ±  4%     +21.0%     163358 ±  7%  interrupts.PMI:Performance_monitoring_interrupts
     24412 ±  8%     +56.4%      38169 ± 13%  interrupts.RES:Rescheduling_interrupts
      6115 ± 16%     +92.6%      11777 ± 13%  interrupts.TLB:TLB_shootdowns
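For reference, the %change figures throughout these tables are plain relative deltas between the per-commit means (base commit on the left, tested commit on the right). A minimal sketch, using the vm-scalability.median values reported in the lkp-bdw-ep2 / lru-shm-rand section below as an example:

```python
def pct_change(base: float, new: float) -> float:
    """Percent change as printed in the comparison columns:
    (new - base) / base * 100, relative to the base commit's mean."""
    return (new - base) / base * 100.0

# vm-scalability.median: 54446 (f568cf93ca) -> 63942 (27eb9d500d)
print(f"{pct_change(54446, 63942):+.1f}%")  # +17.4%
```

The ±N% annotations next to some values are the relative standard deviation (%stddev) across runs; rows with large %stddev on either side are correspondingly less reliable indicators.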



***************************************************************************************************
lkp-bdw-ep2: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
  gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2018-04-03.cgz/300s/256G/lkp-bdw-ep2/lru-shm-rand/vm-scalability

commit: 
  f568cf93ca ("vfs: Convert smackfs to use the new mount API")
  27eb9d500d ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")

f568cf93ca0923a0 27eb9d500d71d93e2b2f55c226b 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      0.03           -51.6%       0.01        vm-scalability.free_time
     54446           +17.4%      63942        vm-scalability.median
      0.00 ± 29%    +206.0%       0.01 ± 12%  vm-scalability.median_stddev
   4788027           +17.3%    5614974        vm-scalability.throughput
    193.08            -8.8%     176.00        vm-scalability.time.elapsed_time
    193.08            -8.8%     176.00        vm-scalability.time.elapsed_time.max
    750512           -49.9%     375966        vm-scalability.time.maximum_resident_set_size
      5671            -6.6%       5298        vm-scalability.time.percent_of_cpu_this_job_got
    881.36            -6.1%     827.85        vm-scalability.time.system_time
     10068           -15.6%       8497        vm-scalability.time.user_time
     10321           +97.7%      20407        vm-scalability.time.voluntary_context_switches
     35.56 ±  2%      +3.8       39.39        mpstat.cpu.all.idle%
     34.81 ± 10%     -15.2%      29.51        boot-time.boot
     28.53 ± 13%     -18.9%      23.13        boot-time.dhcp
    366.97            +1.7%     373.37        pmeter.Average_Active_Power
     13197           +15.2%      15209        pmeter.performance_per_watt
 6.092e+08 ±138%    +537.9%  3.886e+09 ± 31%  cpuidle.C3.time
   2145092 ±138%    +325.2%    9121998 ± 20%  cpuidle.C3.usage
 5.276e+09 ± 14%     -59.5%  2.135e+09 ± 60%  cpuidle.C6.time
     36.00 ±  2%      +9.7%      39.50        vmstat.cpu.id
     58.00            -6.9%      54.00        vmstat.cpu.us
  55228783           -49.7%   27768287        vmstat.memory.cache
  75764283           +36.3%  1.033e+08        vmstat.memory.free
      1972           +43.0%       2820 ±  4%  vmstat.system.cs
      1823            -5.6%       1720        turbostat.Avg_MHz
   2144618 ±138%    +325.3%    9121862 ± 20%  turbostat.C3
      3.53 ±138%     +21.4       24.98 ± 32%  turbostat.C3%
     30.76 ± 15%     -17.2       13.59 ± 60%  turbostat.C6%
     12.28 ± 30%     +59.1%      19.54 ±  6%  turbostat.CPU%c1
      0.77 ±137%   +1422.1%      11.72 ± 44%  turbostat.CPU%c3
     21.38 ± 22%     -67.9%       6.86 ± 70%  turbostat.CPU%c6
     65.00 ±  3%     +10.8%      72.00        turbostat.CoreTmp
     10.91 ± 16%     -24.4%       8.24 ±  2%  turbostat.Pkg%pc2
      0.00 ±141%    +725.0%       0.03 ± 53%  turbostat.Pkg%pc3
      0.12 ± 61%     -89.9%       0.01 ±103%  turbostat.Pkg%pc6
    178.48            +2.8%     183.53        turbostat.PkgWatt
     22.65            -3.2%      21.92        turbostat.RAMWatt
  55238867           -50.0%   27601805        meminfo.Cached
  54257533           -51.0%   26559528        meminfo.Committed_AS
  54003945           -51.2%   26371074        meminfo.Inactive
  54002477           -51.2%   26369483        meminfo.Inactive(anon)
    202625           -33.0%     135693        meminfo.KReclaimable
  39890692           -53.3%   18642069        meminfo.Mapped
  75034355           +36.9%  1.027e+08        meminfo.MemAvailable
  75551000           +36.7%  1.033e+08        meminfo.MemFree
  56341522           -49.2%   28604497        meminfo.Memused
     88732           -48.9%      45342        meminfo.PageTables
    202625           -33.0%     135693        meminfo.SReclaimable
  54016672           -51.2%   26379748        meminfo.Shmem
    336103           -18.9%     272502        meminfo.Slab
    360670           -43.5%     203807        meminfo.max_used_kB
     15033 ±  3%     +12.6%      16924 ±  4%  slabinfo.anon_vma.active_objs
     15033 ±  3%     +12.6%      16924 ±  4%  slabinfo.anon_vma.num_objs
     29229 ±  2%      +8.9%      31821 ±  3%  slabinfo.anon_vma_chain.active_objs
     29229 ±  2%      +8.9%      31821 ±  3%  slabinfo.anon_vma_chain.num_objs
     12000 ±  4%      +9.4%      13124 ±  3%  slabinfo.cred_jar.active_objs
     12000 ±  4%      +9.4%      13124 ±  3%  slabinfo.cred_jar.num_objs
    662.00 ±  7%     -16.9%     550.00 ±  5%  slabinfo.kmem_cache_node.active_objs
    703.00 ±  7%     -15.9%     591.50 ±  4%  slabinfo.kmem_cache_node.num_objs
    238877           -47.9%     124367        slabinfo.radix_tree_node.active_objs
      4288           -47.7%       2244        slabinfo.radix_tree_node.active_slabs
    240200           -47.7%     125696        slabinfo.radix_tree_node.num_objs
      4288           -47.7%       2244        slabinfo.radix_tree_node.num_slabs
    648.67 ±  4%     +36.8%     887.50 ±  4%  slabinfo.skbuff_ext_cache.active_objs
    648.67 ±  4%     +36.8%     887.50 ±  4%  slabinfo.skbuff_ext_cache.num_objs
      2559 ±  3%     +20.1%       3074 ± 11%  slabinfo.sock_inode_cache.active_objs
      2559 ±  3%     +20.1%       3074 ± 11%  slabinfo.sock_inode_cache.num_objs
   6943392 ±  4%     -50.9%    3410526 ±  2%  numa-vmstat.node0.nr_file_pages
   9358435 ±  2%     +37.9%   12908555        numa-vmstat.node0.nr_free_pages
   6783608 ±  4%     -52.0%    3257220 ±  2%  numa-vmstat.node0.nr_inactive_anon
   4976414 ±  3%     -53.3%    2324942        numa-vmstat.node0.nr_mapped
     11297 ±  2%     -48.6%       5810 ±  3%  numa-vmstat.node0.nr_page_table_pages
   6786264 ±  4%     -52.0%    3258936 ±  2%  numa-vmstat.node0.nr_shmem
     27773           -33.2%      18562 ±  4%  numa-vmstat.node0.nr_slab_reclaimable
   6783598 ±  4%     -52.0%    3257206 ±  2%  numa-vmstat.node0.nr_zone_inactive_anon
   6798656 ±  5%     -48.7%    3489885 ±  3%  numa-vmstat.node1.nr_file_pages
   9597839 ±  3%     +34.6%   12913978        numa-vmstat.node1.nr_free_pages
   6649054 ±  5%     -49.8%    3334855 ±  3%  numa-vmstat.node1.nr_inactive_anon
     18.00 ± 68%    +263.9%      65.50 ± 50%  numa-vmstat.node1.nr_inactive_file
   4914540           -53.1%    2306813        numa-vmstat.node1.nr_mapped
     10635           -49.2%       5405 ±  5%  numa-vmstat.node1.nr_page_table_pages
   6649976 ±  5%     -49.8%    3335702 ±  3%  numa-vmstat.node1.nr_shmem
     22650 ±  2%     -32.3%      15336 ±  6%  numa-vmstat.node1.nr_slab_reclaimable
   6649042 ±  5%     -49.8%    3334845 ±  3%  numa-vmstat.node1.nr_zone_inactive_anon
     18.00 ± 68%    +263.9%      65.50 ± 50%  numa-vmstat.node1.nr_zone_inactive_file
  27806193 ±  4%     -51.0%   13620225 ±  2%  numa-meminfo.node0.FilePages
  27168468 ±  4%     -52.1%   13008289 ±  3%  numa-meminfo.node0.Inactive
  27167075 ±  4%     -52.1%   13006963 ±  3%  numa-meminfo.node0.Inactive(anon)
    111263           -33.3%      74266 ±  4%  numa-meminfo.node0.KReclaimable
  19943539 ±  3%     -53.1%    9349954        numa-meminfo.node0.Mapped
  37400591 ±  3%     +38.1%   51655755        numa-meminfo.node0.MemFree
  28466692 ±  4%     -50.1%   14211527 ±  2%  numa-meminfo.node0.MemUsed
     45430 ±  2%     -48.3%      23467 ±  4%  numa-meminfo.node0.PageTables
    111263           -33.3%      74266 ±  4%  numa-meminfo.node0.SReclaimable
  27177672 ±  4%     -52.1%   13013861 ±  3%  numa-meminfo.node0.Shmem
    185064           -18.1%     151580 ±  5%  numa-meminfo.node0.Slab
  27237549 ±  4%     -48.7%   13965128 ±  2%  numa-meminfo.node1.FilePages
  26639192 ±  4%     -49.9%   13345224 ±  3%  numa-meminfo.node1.Inactive
  26639119 ±  4%     -49.9%   13344959 ±  3%  numa-meminfo.node1.Inactive(anon)
     90704           -32.4%      61357 ±  6%  numa-meminfo.node1.KReclaimable
  19694581           -53.4%    9174424        numa-meminfo.node1.Mapped
  38348099 ±  3%     +34.7%   51650284        numa-meminfo.node1.MemFree
  27677139 ±  4%     -48.1%   14374955 ±  2%  numa-meminfo.node1.MemUsed
     42740           -49.4%      21608 ±  4%  numa-meminfo.node1.PageTables
     90704           -32.4%      61357 ±  6%  numa-meminfo.node1.SReclaimable
  26642833 ±  4%     -49.9%   13348390 ±  3%  numa-meminfo.node1.Shmem
    150389           -19.7%     120837 ±  8%  numa-meminfo.node1.Slab
     65408            -2.4%      63841        proc-vmstat.nr_active_anon
    122.33           +56.1%     191.00        proc-vmstat.nr_dirtied
   1872101           +36.9%    2563115        proc-vmstat.nr_dirty_background_threshold
   3748812           +36.9%    5132740        proc-vmstat.nr_dirty_threshold
  13795102           -50.0%    6898747        proc-vmstat.nr_file_pages
  18902921           +36.6%   25824194        proc-vmstat.nr_free_pages
  13485730           -51.1%    6590402        proc-vmstat.nr_inactive_anon
    366.00            +8.6%     397.50        proc-vmstat.nr_inactive_file
   9967759           -53.5%    4637655        proc-vmstat.nr_mapped
     22073           -49.2%      11216        proc-vmstat.nr_page_table_pages
  13489295           -51.1%    6592974        proc-vmstat.nr_shmem
     50522           -32.9%      33902        proc-vmstat.nr_slab_reclaimable
     33370            +2.5%      34199        proc-vmstat.nr_slab_unreclaimable
    118.67           +43.9%     170.75        proc-vmstat.nr_written
     65408            -2.4%      63841        proc-vmstat.nr_zone_active_anon
  13485730           -51.1%    6590402        proc-vmstat.nr_zone_inactive_anon
    366.00            +8.6%     397.50        proc-vmstat.nr_zone_inactive_file
      2724 ± 25%     +77.1%       4826 ± 28%  proc-vmstat.numa_hint_faults
      3308 ±  4%    +168.1%       8869 ± 47%  proc-vmstat.numa_pages_migrated
    275310 ±  9%     +65.5%     455701 ± 15%  proc-vmstat.numa_pte_updates
     11705 ±  4%     -16.1%       9825 ±  6%  proc-vmstat.pgactivate
      3308 ±  4%    +168.1%       8869 ± 47%  proc-vmstat.pgmigrate_success
     27.31 ± 13%    +392.9%     134.60 ±119%  sched_debug.cfs_rq:/.load_avg.avg
    331.58 ± 11%    +525.2%       2073 ±128%  sched_debug.cfs_rq:/.load_avg.max
     68.59 ± 17%    +592.4%     474.93 ±130%  sched_debug.cfs_rq:/.load_avg.stddev
   5489162           -54.4%    2500483 ± 56%  sched_debug.cfs_rq:/.min_vruntime.avg
   5569380           -54.1%    2554132 ± 56%  sched_debug.cfs_rq:/.min_vruntime.max
   5339180           -55.4%    2381744 ± 56%  sched_debug.cfs_rq:/.min_vruntime.min
      0.13 ± 32%     +40.7%       0.19 ±  7%  sched_debug.cfs_rq:/.nr_running.stddev
      6.73 ± 19%    +167.6%      18.01 ± 53%  sched_debug.cfs_rq:/.removed.load_avg.avg
     40.59 ±  9%    +108.1%      84.47 ± 30%  sched_debug.cfs_rq:/.removed.load_avg.stddev
    309.01 ± 19%    +169.0%     831.37 ± 53%  sched_debug.cfs_rq:/.removed.runnable_sum.avg
     11735           +77.4%      20824 ± 24%  sched_debug.cfs_rq:/.removed.runnable_sum.max
      1863 ±  9%    +109.2%       3898 ± 30%  sched_debug.cfs_rq:/.removed.runnable_sum.stddev
      2.70 ± 35%    +192.4%       7.89 ± 48%  sched_debug.cfs_rq:/.removed.util_avg.avg
    104.17 ± 17%    +118.0%     227.12 ± 25%  sched_debug.cfs_rq:/.removed.util_avg.max
     16.08 ± 25%    +136.5%      38.02 ± 27%  sched_debug.cfs_rq:/.removed.util_avg.stddev
     10.49 ± 11%     -32.0%       7.13 ± 10%  sched_debug.cfs_rq:/.runnable_load_avg.avg
    114.31 ± 11%     +14.2%     130.50 ±  6%  sched_debug.cfs_rq:/.util_avg.stddev
    557.17 ± 21%     +24.9%     695.94 ± 16%  sched_debug.cfs_rq:/.util_est_enqueued.max
    130909 ±  2%     -38.9%      79924 ± 32%  sched_debug.cpu.clock.avg
    130918 ±  2%     -38.9%      79933 ± 32%  sched_debug.cpu.clock.max
    130900 ±  2%     -38.9%      79917 ± 32%  sched_debug.cpu.clock.min
    130909 ±  2%     -38.9%      79924 ± 32%  sched_debug.cpu.clock_task.avg
    130918 ±  2%     -38.9%      79933 ± 32%  sched_debug.cpu.clock_task.max
    130900 ±  2%     -38.9%      79917 ± 32%  sched_debug.cpu.clock_task.min
      8.80 ±  4%     -14.7%       7.50 ±  5%  sched_debug.cpu.cpu_load[3].avg
      8.95 ±  4%     -13.9%       7.70 ±  2%  sched_debug.cpu.cpu_load[4].avg
      4083 ± 19%     -35.7%       2625 ± 22%  sched_debug.cpu.curr->pid.avg
      2171 ± 43%     -71.8%     612.12 ± 39%  sched_debug.cpu.curr->pid.min
     96790           -47.5%      50806 ± 50%  sched_debug.cpu.nr_load_updates.avg
    108785 ±  5%     -48.3%      56293 ± 46%  sched_debug.cpu.nr_load_updates.max
     93045           -50.6%      46010 ± 52%  sched_debug.cpu.nr_load_updates.min
      0.22 ± 15%     +22.6%       0.27 ±  8%  sched_debug.cpu.nr_running.stddev
     11.00 ± 26%     +81.8%      20.00 ± 35%  sched_debug.cpu.nr_uninterruptible.max
    130899 ±  2%     -38.9%      79917 ± 32%  sched_debug.cpu_clk
    126848 ±  3%     -40.2%      75884 ± 34%  sched_debug.ktime
    131813 ±  3%     -38.8%      80612 ± 32%  sched_debug.sched_clk
     20131 ±  7%     +18.3%      23816 ± 12%  softirqs.CPU14.RCU
     89596 ± 10%     -21.0%      70741 ±  9%  softirqs.CPU14.TIMER
     82675 ±  3%     -12.7%      72166 ±  9%  softirqs.CPU15.TIMER
     82022 ±  3%     -13.1%      71311 ±  7%  softirqs.CPU20.TIMER
     75561 ±  4%     -13.5%      65383 ±  4%  softirqs.CPU22.TIMER
     73531 ±  5%     -12.7%      64183 ±  4%  softirqs.CPU23.TIMER
     74796 ±  3%     -13.8%      64445 ±  6%  softirqs.CPU24.TIMER
     73640 ±  5%     -10.8%      65706 ±  3%  softirqs.CPU26.TIMER
     72580 ±  6%     -13.9%      62506 ±  3%  softirqs.CPU27.TIMER
     73608 ±  5%     -12.0%      64755 ±  3%  softirqs.CPU28.TIMER
     73680 ±  5%     -10.1%      66265 ±  5%  softirqs.CPU29.TIMER
     73672 ±  4%     -13.2%      63955 ±  3%  softirqs.CPU30.TIMER
     73132 ±  5%     -12.3%      64110 ±  4%  softirqs.CPU31.TIMER
     73080 ±  5%     -13.4%      63311 ±  4%  softirqs.CPU33.TIMER
     74422 ±  4%     -10.9%      66288        softirqs.CPU34.TIMER
     73272 ±  5%     -12.7%      63991 ±  2%  softirqs.CPU35.TIMER
     75598 ±  4%     -10.2%      67920        softirqs.CPU37.TIMER
     75732 ±  3%     -12.1%      66556 ±  2%  softirqs.CPU38.TIMER
     73562 ±  6%     -14.0%      63239 ±  5%  softirqs.CPU39.TIMER
     74804 ±  3%     -13.1%      64997 ±  2%  softirqs.CPU40.TIMER
     75926           -11.2%      67418 ±  2%  softirqs.CPU41.TIMER
     76357           -10.0%      68731 ±  3%  softirqs.CPU42.TIMER
     78177           -13.1%      67901        softirqs.CPU43.TIMER
     20261 ±  5%     +25.8%      25487 ± 22%  softirqs.CPU49.RCU
     21419 ±  3%     +11.1%      23806 ±  6%  softirqs.CPU58.RCU
     83494 ±  2%     -15.7%      70407 ±  8%  softirqs.CPU58.TIMER
     81880 ±  3%     -13.4%      70890 ±  8%  softirqs.CPU59.TIMER
     16398 ±  7%     +24.7%      20443 ± 14%  softirqs.CPU63.RCU
     81929 ±  3%     -13.3%      71052 ±  9%  softirqs.CPU64.TIMER
     73604 ±  5%     -13.4%      63720 ±  2%  softirqs.CPU66.TIMER
     74179 ±  4%     -13.0%      64501 ±  5%  softirqs.CPU68.TIMER
     73907 ±  5%     -11.8%      65152 ±  4%  softirqs.CPU70.TIMER
     72272 ±  6%     -14.5%      61817 ±  5%  softirqs.CPU71.TIMER
     73600 ±  5%     -12.4%      64453 ±  4%  softirqs.CPU72.TIMER
     73383 ±  4%     -12.2%      64466 ±  4%  softirqs.CPU74.TIMER
     16610 ±  9%     +22.2%      20291 ± 10%  softirqs.CPU75.RCU
     74127 ±  4%     -12.7%      64724 ±  4%  softirqs.CPU75.TIMER
     72876 ±  5%     -13.5%      63044 ±  3%  softirqs.CPU77.TIMER
     73741 ±  5%      -9.8%      66541        softirqs.CPU78.TIMER
     72336 ±  6%     -10.7%      64573        softirqs.CPU79.TIMER
     74984 ±  4%      -9.4%      67927        softirqs.CPU81.TIMER
     75643 ±  3%     -11.6%      66858 ±  2%  softirqs.CPU82.TIMER
     74269 ±  4%     -12.4%      65080 ±  3%  softirqs.CPU84.TIMER
     75571           -11.4%      66947 ±  2%  softirqs.CPU85.TIMER
     75810            -9.8%      68392 ±  2%  softirqs.CPU86.TIMER
   6674487           -10.7%    5963133 ±  3%  softirqs.TIMER
     48.25 ± 48%     -48.1        0.15 ±173%  perf-profile.calltrace.cycles-pp.apic_timer_interrupt
     45.86 ± 48%     -45.7        0.14 ±173%  perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
     36.81 ± 47%     -36.8        0.00        perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
     26.91 ± 48%     -26.9        0.00        perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
     19.21 ± 43%     -19.2        0.00        perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
     17.43 ± 43%     -17.4        0.00        perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
     17.20 ± 43%     -17.2        0.00        perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
     11.95 ± 48%     -12.0        0.00        perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
      9.48 ± 48%      -9.5        0.00        perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
      8.62 ± 41%      -8.1        0.54 ± 60%  perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
      8.17 ± 41%      -7.6        0.54 ± 60%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write._fini
      8.17 ± 41%      -7.6        0.54 ± 60%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write._fini
      8.17 ± 41%      -7.6        0.54 ± 60%  perf-profile.calltrace.cycles-pp.vprintk_emit.devkmsg_emit.devkmsg_write.new_sync_write.vfs_write
      8.17 ± 41%      -7.6        0.54 ± 60%  perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write._fini
      8.17 ± 41%      -7.6        0.54 ± 60%  perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
      8.17 ± 41%      -7.6        0.54 ± 60%  perf-profile.calltrace.cycles-pp.write._fini
      8.17 ± 41%      -7.6        0.54 ± 60%  perf-profile.calltrace.cycles-pp._fini
      8.17 ± 41%      -7.6        0.54 ± 60%  perf-profile.calltrace.cycles-pp.devkmsg_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
      8.17 ± 41%      -7.6        0.54 ± 60%  perf-profile.calltrace.cycles-pp.devkmsg_emit.devkmsg_write.new_sync_write.vfs_write.ksys_write
      7.95 ± 38%      -7.4        0.54 ± 61%  perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.new_sync_write
      6.82 ± 32%      -6.5        0.31 ±100%  perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
      6.82 ± 32%      -6.5        0.31 ±100%  perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
      6.50 ± 34%      -6.2        0.33 ±100%  perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write
      6.30 ± 35%      -6.0        0.32 ±100%  perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit
      4.33 ± 50%      -4.3        0.00        perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
      4.19 ± 50%      -4.2        0.00        perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
      4.10 ±115%      -3.8        0.26 ±100%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
      4.09 ±116%      -3.8        0.26 ±100%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.00            +1.4        1.41 ± 12%  perf-profile.calltrace.cycles-pp.nrand48_r
      0.00            +1.5        1.52 ± 34%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
      0.00            +1.5        1.54 ± 34%  perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page
      0.00            +1.5        1.54 ± 33%  perf-profile.calltrace.cycles-pp.clear_page_erms.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
      0.00            +2.0        2.00 ± 34%  perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page.shmem_alloc_and_acct_page
      0.00            +2.1        2.08 ± 34%  perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page.shmem_alloc_and_acct_page.shmem_getpage_gfp
      0.00            +2.1        2.12 ± 34%  perf-profile.calltrace.cycles-pp.alloc_pages_vma.shmem_alloc_page.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault
      0.00            +2.2        2.17 ± 34%  perf-profile.calltrace.cycles-pp.shmem_alloc_page.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault.__do_fault
      0.00            +2.5        2.47 ± 34%  perf-profile.calltrace.cycles-pp.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
      0.00            +2.9        2.94 ± 10%  perf-profile.calltrace.cycles-pp.do_rw_once
      0.00            +3.8        3.76 ± 33%  perf-profile.calltrace.cycles-pp.filemap_map_pages.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
      0.00            +5.7        5.74 ± 33%  perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
      0.00            +5.8        5.85 ± 33%  perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
      0.00            +5.9        5.87 ± 33%  perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
      0.18 ±141%      +9.8       10.02 ± 33%  perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
      0.00           +10.1       10.11 ± 33%  perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.do_access
      0.00           +10.4       10.41 ± 33%  perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault.do_access
      0.00           +10.5       10.49 ± 33%  perf-profile.calltrace.cycles-pp.do_page_fault.page_fault.do_access
      0.00           +11.6       11.55 ± 36%  perf-profile.calltrace.cycles-pp.page_fault.do_access
      0.00           +82.8       82.82 ±  5%  perf-profile.calltrace.cycles-pp.do_access
     21.45            -7.5%      19.84 ±  5%  perf-stat.i.MPKI
 8.758e+09           +10.8%    9.7e+09        perf-stat.i.branch-instructions
      0.80 ±  7%      -0.2        0.61 ± 25%  perf-stat.i.branch-miss-rate%
     52.90           -10.0       42.90        perf-stat.i.cache-miss-rate%
 5.787e+08           -11.6%  5.114e+08        perf-stat.i.cache-misses
 8.135e+08            +7.8%  8.773e+08        perf-stat.i.cache-references
      1924           +43.6%       2763 ±  4%  perf-stat.i.context-switches
      3.57           -14.1%       3.06        perf-stat.i.cpi
 1.602e+11            -5.6%  1.512e+11        perf-stat.i.cpu-cycles
     64.20 ±  5%     +39.9%      89.80        perf-stat.i.cpu-migrations
    537.18 ±  8%     +21.6%     653.45 ±  6%  perf-stat.i.cycles-between-cache-misses
      2.96            -0.5        2.48        perf-stat.i.dTLB-load-miss-rate%
 4.458e+08            -4.6%  4.254e+08        perf-stat.i.dTLB-load-misses
 1.098e+10           +10.7%  1.215e+10        perf-stat.i.dTLB-loads
    589025 ± 25%     +54.6%     910381 ±  7%  perf-stat.i.dTLB-store-misses
 4.284e+09           +12.1%  4.803e+09        perf-stat.i.dTLB-stores
   1852806 ±  4%     +13.3%    2099262 ±  3%  perf-stat.i.iTLB-load-misses
 3.852e+10           +11.1%  4.278e+10        perf-stat.i.instructions
    286409 ± 41%     +54.9%     443730 ±  3%  perf-stat.i.instructions-per-iTLB-miss
      0.33 ±  4%      +7.7%       0.35 ±  3%  perf-stat.i.ipc
    672221 ±  2%     +11.8%     751342        perf-stat.i.minor-faults
   6532149           -26.3%    4811515 ± 15%  perf-stat.i.node-load-misses
 5.586e+08           -12.0%  4.919e+08        perf-stat.i.node-loads
    672261 ±  2%     +11.8%     751452        perf-stat.i.page-faults
     21.10            -2.9%      20.50        perf-stat.overall.MPKI
     71.04           -12.7       58.30        perf-stat.overall.cache-miss-rate%
      4.15           -14.9%       3.53        perf-stat.overall.cpi
    277.06            +6.7%     295.70        perf-stat.overall.cycles-between-cache-misses
      3.89            -0.5        3.38        perf-stat.overall.dTLB-load-miss-rate%
      0.01 ± 25%      +0.0        0.02 ±  7%  perf-stat.overall.dTLB-store-miss-rate%
      0.24           +17.5%       0.28        perf-stat.overall.ipc
     12551            +1.1%      12696        perf-stat.overall.path-length
 8.727e+09           +10.9%  9.677e+09        perf-stat.ps.branch-instructions
 5.756e+08           -11.4%  5.101e+08        perf-stat.ps.cache-misses
 8.103e+08            +8.0%  8.751e+08        perf-stat.ps.cache-references
      1922           +43.4%       2757 ±  4%  perf-stat.ps.context-switches
 1.595e+11            -5.4%  1.508e+11        perf-stat.ps.cpu-cycles
     64.37 ±  5%     +39.6%      89.86        perf-stat.ps.cpu-migrations
 4.431e+08            -4.2%  4.243e+08        perf-stat.ps.dTLB-load-misses
 1.094e+10           +10.8%  1.213e+10        perf-stat.ps.dTLB-loads
    586226 ± 25%     +54.3%     904307 ±  7%  perf-stat.ps.dTLB-store-misses
 4.276e+09           +12.1%  4.792e+09        perf-stat.ps.dTLB-stores
   1875392 ±  4%     +12.0%    2099913 ±  3%  perf-stat.ps.iTLB-load-misses
  3.84e+10           +11.2%  4.269e+10        perf-stat.ps.instructions
    684408           +10.2%     754271        perf-stat.ps.minor-faults
   6489476           -26.3%    4781738 ± 15%  perf-stat.ps.node-load-misses
 5.554e+08           -11.7%  4.907e+08        perf-stat.ps.node-loads
    684409           +10.2%     754271        perf-stat.ps.page-faults
  7.45e+12            +1.1%  7.535e+12        perf-stat.total.instructions
    190.67            -9.9%     171.75        interrupts.168:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
    213.67 ± 29%     -56.7%      92.50 ±  3%  interrupts.39:IR-PCI-MSI.1572869-edge.eth0-TxRx-5
    251.33 ± 48%     -60.7%      98.75 ±  6%  interrupts.45:IR-PCI-MSI.1572875-edge.eth0-TxRx-11
    169.00 ± 26%     -38.6%     103.75 ± 14%  interrupts.46:IR-PCI-MSI.1572876-edge.eth0-TxRx-12
    107.00 ± 10%     -15.0%      91.00 ±  4%  interrupts.48:IR-PCI-MSI.1572878-edge.eth0-TxRx-14
    103.00 ±  6%     -15.5%      87.00        interrupts.50:IR-PCI-MSI.1572880-edge.eth0-TxRx-16
     99.33 ±  4%     -12.7%      86.75        interrupts.75:IR-PCI-MSI.1572903-edge.eth0-TxRx-39
     98.67 ±  6%     -12.1%      86.75        interrupts.79:IR-PCI-MSI.1572907-edge.eth0-TxRx-43
     98.33 ±  5%     -11.5%      87.00        interrupts.84:IR-PCI-MSI.1572912-edge.eth0-TxRx-48
     99.67 ±  2%     -12.7%      87.00        interrupts.87:IR-PCI-MSI.1572915-edge.eth0-TxRx-51
    328.00           -10.7%     292.75        interrupts.9:IR-IO-APIC.9-fasteoi.acpi
    193517            -6.8%     180367        interrupts.CAL:Function_call_interrupts
    328.00           -10.7%     292.75        interrupts.CPU1.9:IR-IO-APIC.9-fasteoi.acpi
      2232 ±  2%      -8.8%       2035 ±  2%  interrupts.CPU1.CAL:Function_call_interrupts
      7482 ±  7%     -49.3%       3796 ± 36%  interrupts.CPU10.NMI:Non-maskable_interrupts
      7482 ±  7%     -49.3%       3796 ± 36%  interrupts.CPU10.PMI:Performance_monitoring_interrupts
    251.33 ± 48%     -60.7%      98.75 ±  6%  interrupts.CPU11.45:IR-PCI-MSI.1572875-edge.eth0-TxRx-11
      7434 ±  8%     -32.4%       5028 ±  6%  interrupts.CPU11.NMI:Non-maskable_interrupts
      7434 ±  8%     -32.4%       5028 ±  6%  interrupts.CPU11.PMI:Performance_monitoring_interrupts
    169.00 ± 26%     -38.6%     103.75 ± 14%  interrupts.CPU12.46:IR-PCI-MSI.1572876-edge.eth0-TxRx-12
      7480 ±  8%     -33.2%       4994 ±  8%  interrupts.CPU12.NMI:Non-maskable_interrupts
      7480 ±  8%     -33.2%       4994 ±  8%  interrupts.CPU12.PMI:Performance_monitoring_interrupts
      7460 ±  7%     -33.2%       4986 ±  7%  interrupts.CPU13.NMI:Non-maskable_interrupts
      7460 ±  7%     -33.2%       4986 ±  7%  interrupts.CPU13.PMI:Performance_monitoring_interrupts
    904.00 ±105%     -86.8%     119.00 ± 79%  interrupts.CPU13.RES:Rescheduling_interrupts
    107.00 ± 10%     -15.0%      91.00 ±  4%  interrupts.CPU14.48:IR-PCI-MSI.1572878-edge.eth0-TxRx-14
      7545 ±  6%     -32.3%       5111 ±  8%  interrupts.CPU14.NMI:Non-maskable_interrupts
      7545 ±  6%     -32.3%       5111 ±  8%  interrupts.CPU14.PMI:Performance_monitoring_interrupts
      7556 ±  6%     -32.8%       5080 ±  7%  interrupts.CPU15.NMI:Non-maskable_interrupts
      7556 ±  6%     -32.8%       5080 ±  7%  interrupts.CPU15.PMI:Performance_monitoring_interrupts
    103.00 ±  6%     -15.5%      87.00        interrupts.CPU16.50:IR-PCI-MSI.1572880-edge.eth0-TxRx-16
      7494 ±  7%     -32.4%       5069 ±  8%  interrupts.CPU16.NMI:Non-maskable_interrupts
      7494 ±  7%     -32.4%       5069 ±  8%  interrupts.CPU16.PMI:Performance_monitoring_interrupts
      7477 ±  8%     -32.9%       5015 ±  8%  interrupts.CPU17.NMI:Non-maskable_interrupts
      7477 ±  8%     -32.9%       5015 ±  8%  interrupts.CPU17.PMI:Performance_monitoring_interrupts
      7416 ±  8%     -32.4%       5012 ±  7%  interrupts.CPU18.NMI:Non-maskable_interrupts
      7416 ±  8%     -32.4%       5012 ±  7%  interrupts.CPU18.PMI:Performance_monitoring_interrupts
      7489 ±  7%     -24.9%       5624 ± 19%  interrupts.CPU19.NMI:Non-maskable_interrupts
      7489 ±  7%     -24.9%       5624 ± 19%  interrupts.CPU19.PMI:Performance_monitoring_interrupts
      7531 ±  6%     -32.1%       5116 ±  7%  interrupts.CPU20.NMI:Non-maskable_interrupts
      7531 ±  6%     -32.1%       5116 ±  7%  interrupts.CPU20.PMI:Performance_monitoring_interrupts
    302.33 ±  9%     -50.1%     151.00 ± 39%  interrupts.CPU22.RES:Rescheduling_interrupts
      7565 ±  6%     -33.8%       5005 ±  6%  interrupts.CPU32.NMI:Non-maskable_interrupts
      7565 ±  6%     -33.8%       5005 ±  6%  interrupts.CPU32.PMI:Performance_monitoring_interrupts
      7494 ±  7%     -34.4%       4916 ±  7%  interrupts.CPU34.NMI:Non-maskable_interrupts
      7494 ±  7%     -34.4%       4916 ±  7%  interrupts.CPU34.PMI:Performance_monitoring_interrupts
      7562 ±  5%     -34.6%       4946 ±  7%  interrupts.CPU35.NMI:Non-maskable_interrupts
      7562 ±  5%     -34.6%       4946 ±  7%  interrupts.CPU35.PMI:Performance_monitoring_interrupts
      7485 ±  7%     -32.8%       5030 ±  7%  interrupts.CPU36.NMI:Non-maskable_interrupts
      7485 ±  7%     -32.8%       5030 ±  7%  interrupts.CPU36.PMI:Performance_monitoring_interrupts
      7532 ±  6%     -33.3%       5026 ±  7%  interrupts.CPU37.NMI:Non-maskable_interrupts
      7532 ±  6%     -33.3%       5026 ±  7%  interrupts.CPU37.PMI:Performance_monitoring_interrupts
    898.33 ±108%     -87.9%     108.25 ± 59%  interrupts.CPU37.RES:Rescheduling_interrupts
      7502 ±  7%     -33.7%       4977 ±  7%  interrupts.CPU38.NMI:Non-maskable_interrupts
      7502 ±  7%     -33.7%       4977 ±  7%  interrupts.CPU38.PMI:Performance_monitoring_interrupts
     99.33 ±  4%     -12.7%      86.75        interrupts.CPU39.75:IR-PCI-MSI.1572903-edge.eth0-TxRx-39
      7506 ±  6%     -41.6%       4382 ± 28%  interrupts.CPU39.NMI:Non-maskable_interrupts
      7506 ±  6%     -41.6%       4382 ± 28%  interrupts.CPU39.PMI:Performance_monitoring_interrupts
      7512 ±  7%     -30.3%       5232 ±  8%  interrupts.CPU40.NMI:Non-maskable_interrupts
      7512 ±  7%     -30.3%       5232 ±  8%  interrupts.CPU40.PMI:Performance_monitoring_interrupts
      7842           -44.1%       4384 ± 27%  interrupts.CPU41.NMI:Non-maskable_interrupts
      7842           -44.1%       4384 ± 27%  interrupts.CPU41.PMI:Performance_monitoring_interrupts
      7501 ±  6%     -41.7%       4375 ± 27%  interrupts.CPU42.NMI:Non-maskable_interrupts
      7501 ±  6%     -41.7%       4375 ± 27%  interrupts.CPU42.PMI:Performance_monitoring_interrupts
     98.67 ±  6%     -12.1%      86.75        interrupts.CPU43.79:IR-PCI-MSI.1572907-edge.eth0-TxRx-43
      7515 ±  6%     -41.5%       4399 ± 27%  interrupts.CPU43.NMI:Non-maskable_interrupts
      7515 ±  6%     -41.5%       4399 ± 27%  interrupts.CPU43.PMI:Performance_monitoring_interrupts
      7550 ±  5%     -40.5%       4492 ± 26%  interrupts.CPU44.NMI:Non-maskable_interrupts
      7550 ±  5%     -40.5%       4492 ± 26%  interrupts.CPU44.PMI:Performance_monitoring_interrupts
    182.33 ± 39%    +529.8%       1148 ±115%  interrupts.CPU44.RES:Rescheduling_interrupts
      2255 ±  2%      -8.6%       2060 ±  2%  interrupts.CPU45.CAL:Function_call_interrupts
     98.33 ±  5%     -11.5%      87.00        interrupts.CPU48.84:IR-PCI-MSI.1572912-edge.eth0-TxRx-48
    182.67 ± 68%    +121.0%     403.75 ± 61%  interrupts.CPU49.RES:Rescheduling_interrupts
    213.67 ± 29%     -56.7%      92.50 ±  3%  interrupts.CPU5.39:IR-PCI-MSI.1572869-edge.eth0-TxRx-5
    109.00 ± 14%     +62.2%     176.75 ± 36%  interrupts.CPU5.RES:Rescheduling_interrupts
     99.67 ±  2%     -12.7%      87.00        interrupts.CPU51.87:IR-PCI-MSI.1572915-edge.eth0-TxRx-51
     51.00 ± 51%     +63.7%      83.50 ± 27%  interrupts.CPU51.RES:Rescheduling_interrupts
      2241            -9.8%       2022 ±  3%  interrupts.CPU52.CAL:Function_call_interrupts
      2222           -10.5%       1988 ±  3%  interrupts.CPU54.CAL:Function_call_interrupts
      7457 ±  7%     -38.2%       4612 ± 27%  interrupts.CPU55.NMI:Non-maskable_interrupts
      7457 ±  7%     -38.2%       4612 ± 27%  interrupts.CPU55.PMI:Performance_monitoring_interrupts
      7522 ±  7%     -41.9%       4367 ± 25%  interrupts.CPU56.NMI:Non-maskable_interrupts
      7522 ±  7%     -41.9%       4367 ± 25%  interrupts.CPU56.PMI:Performance_monitoring_interrupts
      7514 ±  7%     -49.6%       3786 ± 35%  interrupts.CPU57.NMI:Non-maskable_interrupts
      7514 ±  7%     -49.6%       3786 ± 35%  interrupts.CPU57.PMI:Performance_monitoring_interrupts
     76.67 ± 95%    +761.8%     660.75 ±127%  interrupts.CPU57.RES:Rescheduling_interrupts
      7563 ±  6%     -40.8%       4480 ± 24%  interrupts.CPU58.NMI:Non-maskable_interrupts
      7563 ±  6%     -40.8%       4480 ± 24%  interrupts.CPU58.PMI:Performance_monitoring_interrupts
    205.00 ± 45%     -66.6%      68.50 ± 56%  interrupts.CPU58.RES:Rescheduling_interrupts
      7571 ±  6%     -32.6%       5105 ±  7%  interrupts.CPU59.NMI:Non-maskable_interrupts
      7571 ±  6%     -32.6%       5105 ±  7%  interrupts.CPU59.PMI:Performance_monitoring_interrupts
      7531 ±  6%     -41.1%       4432 ± 25%  interrupts.CPU6.NMI:Non-maskable_interrupts
      7531 ±  6%     -41.1%       4432 ± 25%  interrupts.CPU6.PMI:Performance_monitoring_interrupts
      7532 ±  6%     -32.6%       5075 ±  8%  interrupts.CPU60.NMI:Non-maskable_interrupts
      7532 ±  6%     -32.6%       5075 ±  8%  interrupts.CPU60.PMI:Performance_monitoring_interrupts
     26.00 ± 19%    +301.9%     104.50 ± 86%  interrupts.CPU60.RES:Rescheduling_interrupts
      7482 ±  7%     -33.2%       4999 ±  8%  interrupts.CPU61.NMI:Non-maskable_interrupts
      7482 ±  7%     -33.2%       4999 ±  8%  interrupts.CPU61.PMI:Performance_monitoring_interrupts
      7513 ±  7%     -33.5%       4998 ±  8%  interrupts.CPU62.NMI:Non-maskable_interrupts
      7513 ±  7%     -33.5%       4998 ±  8%  interrupts.CPU62.PMI:Performance_monitoring_interrupts
      7478 ±  7%     -27.5%       5423 ± 14%  interrupts.CPU63.NMI:Non-maskable_interrupts
      7478 ±  7%     -27.5%       5423 ± 14%  interrupts.CPU63.PMI:Performance_monitoring_interrupts
     79.33 ± 80%     +70.8%     135.50 ± 59%  interrupts.CPU63.RES:Rescheduling_interrupts
      7574 ±  6%     -32.1%       5141 ±  7%  interrupts.CPU64.NMI:Non-maskable_interrupts
      7574 ±  6%     -32.1%       5141 ±  7%  interrupts.CPU64.PMI:Performance_monitoring_interrupts
    190.67            -9.9%     171.75        interrupts.CPU65.168:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
      7546 ±  5%     -24.1%       5728 ± 15%  interrupts.CPU65.NMI:Non-maskable_interrupts
      7546 ±  5%     -24.1%       5728 ± 15%  interrupts.CPU65.PMI:Performance_monitoring_interrupts
      7435 ±  8%     -31.0%       5130 ±  6%  interrupts.CPU66.NMI:Non-maskable_interrupts
      7435 ±  8%     -31.0%       5130 ±  6%  interrupts.CPU66.PMI:Performance_monitoring_interrupts
     48.00 ± 40%    +322.4%     202.75 ± 76%  interrupts.CPU66.RES:Rescheduling_interrupts
      7473 ±  8%     -30.1%       5226 ± 15%  interrupts.CPU67.NMI:Non-maskable_interrupts
      7473 ±  8%     -30.1%       5226 ± 15%  interrupts.CPU67.PMI:Performance_monitoring_interrupts
      7502 ±  7%     -33.6%       4985 ±  7%  interrupts.CPU68.NMI:Non-maskable_interrupts
      7502 ±  7%     -33.6%       4985 ±  7%  interrupts.CPU68.PMI:Performance_monitoring_interrupts
      7415 ±  8%     -32.9%       4973 ±  9%  interrupts.CPU70.NMI:Non-maskable_interrupts
      7415 ±  8%     -32.9%       4973 ±  9%  interrupts.CPU70.PMI:Performance_monitoring_interrupts
    102.00 ± 64%    +195.8%     301.75 ± 50%  interrupts.CPU70.RES:Rescheduling_interrupts
      7463 ±  7%     -32.9%       5005 ±  9%  interrupts.CPU71.NMI:Non-maskable_interrupts
      7463 ±  7%     -32.9%       5005 ±  9%  interrupts.CPU71.PMI:Performance_monitoring_interrupts
      7502 ±  8%     -38.2%       4635 ± 20%  interrupts.CPU72.NMI:Non-maskable_interrupts
      7502 ±  8%     -38.2%       4635 ± 20%  interrupts.CPU72.PMI:Performance_monitoring_interrupts
     65.00 ± 58%    +181.9%     183.25 ± 28%  interrupts.CPU72.RES:Rescheduling_interrupts
      7426 ±  8%     -38.4%       4574 ± 30%  interrupts.CPU73.NMI:Non-maskable_interrupts
      7426 ±  8%     -38.4%       4574 ± 30%  interrupts.CPU73.PMI:Performance_monitoring_interrupts
      7444 ±  8%     -40.4%       4434 ± 28%  interrupts.CPU74.NMI:Non-maskable_interrupts
      7444 ±  8%     -40.4%       4434 ± 28%  interrupts.CPU74.PMI:Performance_monitoring_interrupts
      7454 ±  8%     -38.9%       4558 ± 24%  interrupts.CPU75.NMI:Non-maskable_interrupts
      7454 ±  8%     -38.9%       4558 ± 24%  interrupts.CPU75.PMI:Performance_monitoring_interrupts
      7446 ±  8%     -32.6%       5017 ±  7%  interrupts.CPU76.NMI:Non-maskable_interrupts
      7446 ±  8%     -32.6%       5017 ±  7%  interrupts.CPU76.PMI:Performance_monitoring_interrupts
      7465 ±  6%     -32.5%       5035 ± 10%  interrupts.CPU77.NMI:Non-maskable_interrupts
      7465 ±  6%     -32.5%       5035 ± 10%  interrupts.CPU77.PMI:Performance_monitoring_interrupts
    206.00 ± 59%     -64.7%      72.75 ± 26%  interrupts.CPU77.RES:Rescheduling_interrupts
      7424 ±  9%     -40.9%       4386 ± 28%  interrupts.CPU78.NMI:Non-maskable_interrupts
      7424 ±  9%     -40.9%       4386 ± 28%  interrupts.CPU78.PMI:Performance_monitoring_interrupts
     76.00 ± 78%    +204.3%     231.25 ± 53%  interrupts.CPU78.RES:Rescheduling_interrupts
      7717 ±  2%     -43.0%       4399 ± 29%  interrupts.CPU79.NMI:Non-maskable_interrupts
      7717 ±  2%     -43.0%       4399 ± 29%  interrupts.CPU79.PMI:Performance_monitoring_interrupts
      7454 ±  8%     -48.8%       3813 ± 36%  interrupts.CPU8.NMI:Non-maskable_interrupts
      7454 ±  8%     -48.8%       3813 ± 36%  interrupts.CPU8.PMI:Performance_monitoring_interrupts
    148.33 ± 52%    +132.1%     344.25 ± 65%  interrupts.CPU8.RES:Rescheduling_interrupts
      7484 ±  7%     -32.4%       5061 ±  6%  interrupts.CPU80.NMI:Non-maskable_interrupts
      7484 ±  7%     -32.4%       5061 ±  6%  interrupts.CPU80.PMI:Performance_monitoring_interrupts
    100.00 ± 76%    +164.2%     264.25 ± 67%  interrupts.CPU80.RES:Rescheduling_interrupts
      7523 ±  7%     -33.3%       5017 ±  8%  interrupts.CPU82.NMI:Non-maskable_interrupts
      7523 ±  7%     -33.3%       5017 ±  8%  interrupts.CPU82.PMI:Performance_monitoring_interrupts
      7446 ±  8%     -33.5%       4949 ±  7%  interrupts.CPU83.NMI:Non-maskable_interrupts
      7446 ±  8%     -33.5%       4949 ±  7%  interrupts.CPU83.PMI:Performance_monitoring_interrupts
      2219           -17.7%       1826 ±  8%  interrupts.CPU84.CAL:Function_call_interrupts
      7423 ±  8%     -25.3%       5545 ±  9%  interrupts.CPU84.NMI:Non-maskable_interrupts
      7423 ±  8%     -25.3%       5545 ±  9%  interrupts.CPU84.PMI:Performance_monitoring_interrupts
    161.00 ± 52%    +207.9%     495.75 ± 34%  interrupts.CPU84.RES:Rescheduling_interrupts
      7611 ±  5%     -34.6%       4981 ±  8%  interrupts.CPU85.NMI:Non-maskable_interrupts
      7611 ±  5%     -34.6%       4981 ±  8%  interrupts.CPU85.PMI:Performance_monitoring_interrupts
      7510 ±  7%     -33.9%       4964 ±  8%  interrupts.CPU86.NMI:Non-maskable_interrupts
      7510 ±  7%     -33.9%       4964 ±  8%  interrupts.CPU86.PMI:Performance_monitoring_interrupts
     56.67 ± 65%    +193.8%     166.50 ± 45%  interrupts.CPU87.RES:Rescheduling_interrupts
      7461 ±  7%     -41.1%       4392 ± 25%  interrupts.CPU9.NMI:Non-maskable_interrupts
      7461 ±  7%     -41.1%       4392 ± 25%  interrupts.CPU9.PMI:Performance_monitoring_interrupts
    615899 ±  8%     -30.9%     425828 ±  8%  interrupts.NMI:Non-maskable_interrupts
    615899 ±  8%     -30.9%     425828 ±  8%  interrupts.PMI:Performance_monitoring_interrupts
    224.00 ±  5%    +124.7%     503.25 ±  6%  interrupts.TLB:TLB_shootdowns
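As a cross-check on the derived perf-stat columns above, the overall cpi/ipc/cache-miss-rate%/MPKI values can be reproduced from the raw per-second counters quoted in the same report. This is a sketch, not LKP's actual code; the baseline (f568cf93ca) perf-stat.ps numbers are taken verbatim from the table, and the interpretation of each derived column is inferred from the arithmetic:

```python
# Sketch (not LKP's actual computation): reproduce the derived perf-stat
# columns from the baseline perf-stat.ps counters quoted in the report.
cycles       = 1.595e11   # perf-stat.ps.cpu-cycles
instructions = 3.84e10    # perf-stat.ps.instructions
cache_refs   = 8.103e8    # perf-stat.ps.cache-references
cache_misses = 5.756e8    # perf-stat.ps.cache-misses

cpi       = cycles / instructions              # perf-stat.overall.cpi
ipc       = instructions / cycles              # perf-stat.overall.ipc
miss_rate = cache_misses / cache_refs * 100    # perf-stat.overall.cache-miss-rate%
# Note: the MPKI column here appears to count cache *references* per
# kilo-instruction -- misses per kinsn would give ~15, not the reported ~21.
mpki      = cache_refs * 1e3 / instructions    # perf-stat.overall.MPKI

print(f"cpi={cpi:.2f} ipc={ipc:.2f} miss%={miss_rate:.2f} mpki={mpki:.2f}")
# -> cpi=4.15 ipc=0.24 miss%=71.04 mpki=21.10  (matching the table above)
```

The same arithmetic applied to the patched-kernel column gives cpi 3.53 and ipc 0.28, consistent with the -33.5% vm-scalability.median regression being cycle-bound rather than instruction-count-bound (total instructions barely move, +1.1%).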



***************************************************************************************************
lkp-skl-2sp6: 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
  gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-03-19.cgz/300s/16G/lkp-skl-2sp6/shm-pread-rand/vm-scalability/0x200005a

commit: 
  f568cf93ca ("vfs: Convert smackfs to use the new mount API")
  27eb9d500d ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")

f568cf93ca0923a0 27eb9d500d71d93e2b2f55c226b 
---------------- --------------------------- 
       fail:runs  %reproduction    fail:runs
           |             |             |    
           :4           25%           1:4     kmsg.Firmware_Bug]:the_BIOS_has_corrupted_hw-PMU_resources(MSR#is#c5)
         %stddev     %change         %stddev
             \          |                \  
     29.34           -50.3%      14.58        vm-scalability.free_time
     57369           -33.5%      38169        vm-scalability.median
   4128975           -33.3%    2753755        vm-scalability.throughput
    338.83            -5.6%     319.89        vm-scalability.time.elapsed_time
    338.83            -5.6%     319.89        vm-scalability.time.elapsed_time.max
  65853236           -50.0%   32927302        vm-scalability.time.maximum_resident_set_size
  1.18e+08           -50.0%   59015896        vm-scalability.time.minor_page_faults
      6995            +1.3%       7086        vm-scalability.time.percent_of_cpu_this_job_got
      4355           -51.8%       2097 ±  3%  vm-scalability.time.system_time
     19346            +6.3%      20571        vm-scalability.time.user_time
 1.239e+09           -33.2%  8.276e+08        vm-scalability.workload
      0.72           -13.0%       0.62 ±  8%  boot-time.smp_boot
    470049 ±  9%     -26.9%     343394 ± 25%  cpuidle.C1.time
   1115118 ± 28%     -36.4%     708982 ± 25%  cpuidle.C1E.usage
      3.07            -1.3        1.74 ± 14%  mpstat.cpu.all.idle%
     17.84            -8.7        9.12 ±  3%  mpstat.cpu.all.sys%
     79.09           +10.0       89.13        mpstat.cpu.all.usr%
     78.00           +12.8%      88.00        vmstat.cpu.us
  64111530           -47.9%   33375595        vmstat.memory.cache
  58088987           +60.4%   93196417        vmstat.memory.free
   9740248 ±  2%     -49.3%    4940013 ±  3%  numa-numastat.node0.local_node
   9744954 ±  2%     -49.3%    4943645 ±  3%  numa-numastat.node0.numa_hit
   9771507 ±  2%     -47.4%    5143109 ±  3%  numa-numastat.node1.local_node
   9780969 ±  2%     -47.3%    5153631 ±  3%  numa-numastat.node1.numa_hit
      2997            +1.2%       3033        turbostat.Avg_MHz
   1114565 ± 28%     -36.4%     709071 ± 25%  turbostat.C1E
      2.82 ±  6%     -36.7%       1.79 ± 13%  turbostat.CPU%c1
    235.19            -3.0%     228.14        turbostat.PkgWatt
     92.25           -10.5%      82.55        turbostat.RAMWatt
     43.84           +20.8%      52.98 ± 10%  sched_debug.cfs_rq:/.util_avg.stddev
   1100199 ±  8%      -9.1%    1000000        sched_debug.cpu.avg_idle.max
      4.47 ±  4%     +45.2%       6.49 ±  6%  sched_debug.cpu.clock.stddev
      4.47 ±  4%     +45.2%       6.49 ±  6%  sched_debug.cpu.clock_task.stddev
      2.93 ±  2%     +16.1%       3.40 ±  3%  sched_debug.cpu.cpu_load[0].stddev
     41.50 ±  6%      -9.2%      37.67 ±  8%  sched_debug.cpu.cpu_load[3].max
     76.58 ±  5%      -8.4%      70.17 ±  5%  sched_debug.cpu.cpu_load[4].max
      7.84 ±  5%      -8.6%       7.17 ±  6%  sched_debug.cpu.cpu_load[4].stddev
    538523 ±  2%      -7.2%     500000        sched_debug.cpu.max_idle_balance_cost.max
      5402 ± 49%    -100.0%       0.00        sched_debug.cpu.max_idle_balance_cost.stddev
    918.42 ± 11%     -21.8%     717.79 ± 13%  sched_debug.cpu.nr_switches.min
      6.31 ± 10%     -21.6%       4.95 ±  6%  sched_debug.cpu.nr_uninterruptible.stddev
      9450 ±  2%     -16.8%       7861 ±  4%  slabinfo.cred_jar.active_objs
      9450 ±  2%     -16.8%       7861 ±  4%  slabinfo.cred_jar.num_objs
    691.50 ±  3%     +25.8%     869.75 ±  4%  slabinfo.dmaengine-unmap-16.active_objs
    691.50 ±  3%     +25.8%     869.75 ±  4%  slabinfo.dmaengine-unmap-16.num_objs
      3409 ±  6%     -14.1%       2927 ±  7%  slabinfo.eventpoll_pwq.active_objs
      3409 ±  6%     -14.1%       2927 ±  7%  slabinfo.eventpoll_pwq.num_objs
    270793           -46.9%     143859        slabinfo.radix_tree_node.active_objs
      4897           -47.3%       2583        slabinfo.radix_tree_node.active_slabs
    274288           -47.3%     144682        slabinfo.radix_tree_node.num_objs
      4897           -47.3%       2583        slabinfo.radix_tree_node.num_slabs
      3430           -13.8%       2955 ± 10%  slabinfo.sock_inode_cache.active_objs
      3430           -13.8%       2955 ± 10%  slabinfo.sock_inode_cache.num_objs
   4061070           -68.8%    1267951        meminfo.Active
   4060810           -68.8%    1267700        meminfo.Active(anon)
  64101737           -48.1%   33279811        meminfo.Cached
  63977519           -49.3%   32444459        meminfo.Committed_AS
  59063468           -47.5%   31035002        meminfo.Inactive
  59062059           -47.5%   31033620        meminfo.Inactive(anon)
    217857           -34.3%     143104        meminfo.KReclaimable
  59006761           -47.5%   30978341        meminfo.Mapped
  57345075           +61.5%   92600531        meminfo.MemAvailable
  57853777           +61.0%   93146626        meminfo.MemFree
  73850245           -47.8%   38557396        meminfo.Memused
   8905110           -49.3%    4516171        meminfo.PageTables
    217857           -34.3%     143104        meminfo.SReclaimable
  62878730           -49.0%   32056754        meminfo.Shmem
    341526           -22.2%     265855        meminfo.Slab
    236622           -45.7%     128474        meminfo.max_used_kB
   1017564           -68.8%     317803        proc-vmstat.nr_active_anon
   1430333           +61.5%    2309494        proc-vmstat.nr_dirty_background_threshold
   2864165           +61.5%    4624641        proc-vmstat.nr_dirty_threshold
  16010805           -48.0%    8323862        proc-vmstat.nr_file_pages
  14478526           +60.8%   23283066        proc-vmstat.nr_free_pages
  14748269           -47.4%    7761184        proc-vmstat.nr_inactive_anon
  14734597           -47.4%    7747479        proc-vmstat.nr_mapped
   2226081           -49.3%    1128924        proc-vmstat.nr_page_table_pages
  15704787           -48.9%    8017833        proc-vmstat.nr_shmem
     54423           -34.2%      35794        proc-vmstat.nr_slab_reclaimable
   1017564           -68.8%     317803        proc-vmstat.nr_zone_active_anon
  14748269           -47.4%    7761184        proc-vmstat.nr_zone_inactive_anon
  19553150           -48.2%   10125086        proc-vmstat.numa_hit
  19538971           -48.3%   10110918        proc-vmstat.numa_local
  16473553           -50.0%    8241083        proc-vmstat.pgactivate
  19639546           -48.1%   10194611        proc-vmstat.pgalloc_normal
 1.189e+08           -49.7%   59831195        proc-vmstat.pgfault
  19263642           -47.7%   10066386        proc-vmstat.pgfree
   2040989           -68.7%     638086 ±  5%  numa-meminfo.node0.Active
   2040759           -68.7%     637895 ±  5%  numa-meminfo.node0.Active(anon)
  32255633           -49.2%   16382022 ±  2%  numa-meminfo.node0.FilePages
  29703056           -48.7%   15251546 ±  2%  numa-meminfo.node0.Inactive
  29702146           -48.7%   15250514 ±  2%  numa-meminfo.node0.Inactive(anon)
    129237 ± 27%     -47.4%      67973 ± 26%  numa-meminfo.node0.KReclaimable
  29669464           -48.7%   15233801 ±  2%  numa-meminfo.node0.Mapped
  28353196 ±  2%     +65.3%   46863502        numa-meminfo.node0.MemFree
  37324122 ±  2%     -49.6%   18813817 ±  4%  numa-meminfo.node0.MemUsed
   4634020 ±  8%     -55.8%    2049728 ± 16%  numa-meminfo.node0.PageTables
    129237 ± 27%     -47.4%      67973 ± 26%  numa-meminfo.node0.SReclaimable
  31643856           -50.2%   15768034 ±  2%  numa-meminfo.node0.Shmem
    198838 ± 19%     -31.5%     136214 ± 13%  numa-meminfo.node0.Slab
   2060932           -68.9%     641532 ±  5%  numa-meminfo.node1.Active
   2060902           -68.9%     641471 ±  5%  numa-meminfo.node1.Active(anon)
  31779623           -46.8%   16909618 ±  2%  numa-meminfo.node1.FilePages
  29252029           -46.0%   15782612 ±  2%  numa-meminfo.node1.Inactive
  29251531           -46.0%   15782260 ±  2%  numa-meminfo.node1.Inactive(anon)
  29228867           -46.1%   15743623 ±  2%  numa-meminfo.node1.Mapped
  29571324 ±  2%     +56.5%   46273312        numa-meminfo.node1.MemFree
  36455379 ±  2%     -45.8%   19753390 ±  4%  numa-meminfo.node1.MemUsed
   4267926 ±  8%     -42.2%    2465282 ± 13%  numa-meminfo.node1.PageTables
  31167329           -47.7%   16299492 ±  2%  numa-meminfo.node1.Shmem
    511423           -68.7%     160010 ±  5%  numa-vmstat.node0.nr_active_anon
   8071576           -49.3%    4095377 ±  2%  numa-vmstat.node0.nr_file_pages
   7080740 ±  2%     +65.5%   11715896        numa-vmstat.node0.nr_free_pages
   7431980           -48.7%    3811956 ±  2%  numa-vmstat.node0.nr_inactive_anon
   7423898           -48.7%    3807846 ±  2%  numa-vmstat.node0.nr_mapped
   1158358 ±  8%     -55.8%     512451 ± 16%  numa-vmstat.node0.nr_page_table_pages
   7918632           -50.2%    3941880 ±  2%  numa-vmstat.node0.nr_shmem
     32332 ± 27%     -47.4%      16993 ± 26%  numa-vmstat.node0.nr_slab_reclaimable
    511422           -68.7%     160010 ±  5%  numa-vmstat.node0.nr_zone_active_anon
   7431981           -48.7%    3811955 ±  2%  numa-vmstat.node0.nr_zone_inactive_anon
   9835567 ±  2%     -47.3%    5178639 ±  3%  numa-vmstat.node0.numa_hit
   9830733 ±  2%     -47.4%    5174906 ±  3%  numa-vmstat.node0.numa_local
    516488           -68.8%     160936 ±  5%  numa-vmstat.node1.nr_active_anon
   7952400           -46.8%    4227427 ±  2%  numa-vmstat.node1.nr_file_pages
   7385447 ±  2%     +56.6%   11567830        numa-vmstat.node1.nr_free_pages
   7319116           -46.1%    3945017 ±  2%  numa-vmstat.node1.nr_inactive_anon
   7313529           -46.2%    3935419 ±  2%  numa-vmstat.node1.nr_mapped
    244.00           -63.8%      88.25 ±100%  numa-vmstat.node1.nr_mlock
   1066849 ±  8%     -42.2%     616680 ± 13%  numa-vmstat.node1.nr_page_table_pages
   7799327           -47.8%    4074895 ±  2%  numa-vmstat.node1.nr_shmem
    516486           -68.8%     160936 ±  5%  numa-vmstat.node1.nr_zone_active_anon
   7319114           -46.1%    3945017 ±  2%  numa-vmstat.node1.nr_zone_inactive_anon
   9691224 ±  2%     -45.0%    5332953 ±  3%  numa-vmstat.node1.numa_hit
   9507279 ±  2%     -45.9%    5147811 ±  3%  numa-vmstat.node1.numa_local
     47.94            -6.1       41.86 ±  5%  perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
     79.61            -2.7       76.88        perf-profile.calltrace.cycles-pp.apic_timer_interrupt
     76.03            -2.1       73.88 ±  2%  perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
     67.16            -1.8       65.32 ±  2%  perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
      3.79 ±  3%      -1.8        1.97 ± 10%  perf-profile.calltrace.cycles-pp.account_user_time.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
      7.12 ±  7%      -1.3        5.83 ± 13%  perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
      3.05            -1.0        2.09 ± 25%  perf-profile.calltrace.cycles-pp.interrupt_entry
      1.36 ± 26%      -0.7        0.62 ± 58%  perf-profile.calltrace.cycles-pp.ktime_get.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
      1.50 ±  2%      -0.6        0.93 ± 13%  perf-profile.calltrace.cycles-pp.timerqueue_del.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
      1.71 ±  2%      -0.5        1.20 ± 12%  perf-profile.calltrace.cycles-pp.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
      3.35 ±  2%      -0.5        2.85 ±  7%  perf-profile.calltrace.cycles-pp.interrupt_entry.apic_timer_interrupt
      1.80 ±  5%      -0.4        1.40 ± 16%  perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
      1.06 ±  6%      -0.4        0.69 ± 15%  perf-profile.calltrace.cycles-pp.rcu_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
      1.10 ±  5%      -0.3        0.79 ± 33%  perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
      1.00 ± 14%      -0.2        0.76 ± 10%  perf-profile.calltrace.cycles-pp.native_apic_mem_write.smp_apic_timer_interrupt.apic_timer_interrupt
      1.20 ± 11%      +0.3        1.49 ±  9%  perf-profile.calltrace.cycles-pp.__update_load_avg_se.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
      0.64 ±  4%      +0.3        0.97 ± 20%  perf-profile.calltrace.cycles-pp.ret_from_fork
      0.64 ±  4%      +0.3        0.97 ± 20%  perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
      0.00            +0.7        0.70 ± 17%  perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
      0.00            +0.7        0.71 ± 22%  perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
      0.00            +0.7        0.71 ± 22%  perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
      0.70            +0.8        1.50 ± 24%  perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
      0.71            +0.8        1.56 ± 24%  perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.printk
      0.72            +0.9        1.61 ± 25%  perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.printk.irq_work_run_list
      4.32 ±  5%      +0.9        5.22 ±  9%  perf-profile.calltrace.cycles-pp.run_local_timers.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
      0.83            +1.0        1.81 ± 24%  perf-profile.calltrace.cycles-pp.printk.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
      0.83            +1.0        1.81 ± 24%  perf-profile.calltrace.cycles-pp.vprintk_emit.printk.irq_work_run_list.irq_work_run.smp_irq_work_interrupt
      0.83            +1.0        1.81 ± 24%  perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.printk.irq_work_run_list.irq_work_run
    263.00 ± 12%     -30.7%     182.25 ±  5%  interrupts.73:PCI-MSI.70260741-edge.eth3-TxRx-4
    254259            -7.7%     234765 ±  2%  interrupts.CAL:Function_call_interrupts
      7626            +7.3%       8183 ±  6%  interrupts.CPU0.NMI:Non-maskable_interrupts
      7626            +7.3%       8183 ±  6%  interrupts.CPU0.PMI:Performance_monitoring_interrupts
      7628            +7.4%       8191 ±  6%  interrupts.CPU13.NMI:Non-maskable_interrupts
      7628            +7.4%       8191 ±  6%  interrupts.CPU13.PMI:Performance_monitoring_interrupts
      7617            +7.6%       8192 ±  6%  interrupts.CPU15.NMI:Non-maskable_interrupts
      7617            +7.6%       8192 ±  6%  interrupts.CPU15.PMI:Performance_monitoring_interrupts
     55.00         +1010.0%     610.50 ± 89%  interrupts.CPU17.RES:Rescheduling_interrupts
    263.00 ± 12%     -30.7%     182.25 ±  5%  interrupts.CPU18.73:PCI-MSI.70260741-edge.eth3-TxRx-4
      3499 ±  2%      -7.3%       3242 ±  2%  interrupts.CPU18.CAL:Function_call_interrupts
      7616            +7.4%       8182 ±  6%  interrupts.CPU18.NMI:Non-maskable_interrupts
      7616            +7.4%       8182 ±  6%  interrupts.CPU18.PMI:Performance_monitoring_interrupts
    604.50 ±  4%     -67.9%     194.25 ± 49%  interrupts.CPU19.RES:Rescheduling_interrupts
      3563            -8.6%       3255        interrupts.CPU21.CAL:Function_call_interrupts
      7609            +7.6%       8187 ±  6%  interrupts.CPU22.NMI:Non-maskable_interrupts
      7609            +7.6%       8187 ±  6%  interrupts.CPU22.PMI:Performance_monitoring_interrupts
    474.00 ± 19%     -57.2%     202.75 ± 48%  interrupts.CPU25.RES:Rescheduling_interrupts
      1109 ± 62%     -79.2%     231.25 ± 55%  interrupts.CPU27.RES:Rescheduling_interrupts
     58.50 ± 23%    +211.1%     182.00 ± 71%  interrupts.CPU3.RES:Rescheduling_interrupts
    480.50 ± 17%     -58.2%     201.00 ± 53%  interrupts.CPU31.RES:Rescheduling_interrupts
    450.00 ±  5%     -68.0%     144.00 ± 29%  interrupts.CPU33.RES:Rescheduling_interrupts
    341.00 ± 21%     -45.2%     186.75 ± 66%  interrupts.CPU34.RES:Rescheduling_interrupts
    723.50 ± 77%     -90.8%      66.75 ± 56%  interrupts.CPU36.RES:Rescheduling_interrupts
     29.00 ± 41%    +239.7%      98.50 ± 33%  interrupts.CPU41.RES:Rescheduling_interrupts
     15.50 ±  9%    +390.3%      76.00 ± 35%  interrupts.CPU42.RES:Rescheduling_interrupts
      3695           -11.1%       3287 ±  2%  interrupts.CPU43.CAL:Function_call_interrupts
    251.00 ± 46%     -72.8%      68.25 ± 54%  interrupts.CPU44.RES:Rescheduling_interrupts
      3659            -9.9%       3297 ±  2%  interrupts.CPU48.CAL:Function_call_interrupts
    115.50 ± 18%     -71.2%      33.25 ±101%  interrupts.CPU55.RES:Rescheduling_interrupts
      1285 ± 10%     -70.8%     374.75 ±125%  interrupts.CPU59.RES:Rescheduling_interrupts
      3508           -27.2%       2555 ± 48%  interrupts.CPU6.CAL:Function_call_interrupts
      3638           -11.1%       3234 ±  3%  interrupts.CPU62.CAL:Function_call_interrupts
      3636            -9.8%       3279 ±  2%  interrupts.CPU63.CAL:Function_call_interrupts
    110.00 ± 39%     -55.5%      49.00 ± 91%  interrupts.CPU65.RES:Rescheduling_interrupts
    173.50 ± 50%     -62.0%      66.00 ± 34%  interrupts.CPU66.RES:Rescheduling_interrupts
      3607           -28.3%       2584 ± 47%  interrupts.CPU68.CAL:Function_call_interrupts
     49.00 ±  4%    +120.4%     108.00 ± 64%  interrupts.CPU68.RES:Rescheduling_interrupts
      3591           -10.1%       3228        interrupts.CPU71.CAL:Function_call_interrupts
     79.00 ±  8%    +388.3%     385.75 ± 42%  interrupts.CPU9.RES:Rescheduling_interrupts
     24.57            -5.2%      23.29        perf-stat.i.MPKI
 6.019e+09           -17.5%  4.965e+09        perf-stat.i.branch-instructions
      0.22 ±  3%      -0.1        0.15 ±  5%  perf-stat.i.branch-miss-rate%
   8375301           -31.9%    5702870 ±  6%  perf-stat.i.branch-misses
     74.12           +10.5       84.59        perf-stat.i.cache-miss-rate%
 5.203e+08           -15.8%   4.38e+08        perf-stat.i.cache-misses
  6.67e+08           -24.3%  5.052e+08        perf-stat.i.cache-references
      8.41           +20.4%      10.13        perf-stat.i.cpi
 2.149e+11            +1.3%  2.178e+11        perf-stat.i.cpu-cycles
    655.72            -6.9%     610.25        perf-stat.i.cycles-between-cache-misses
      2.96            +0.3        3.29        perf-stat.i.dTLB-load-miss-rate%
 2.507e+08           -13.9%  2.158e+08        perf-stat.i.dTLB-load-misses
 7.517e+09           -17.7%  6.186e+09        perf-stat.i.dTLB-loads
      0.03 ±  3%      -0.0        0.03 ± 11%  perf-stat.i.dTLB-store-miss-rate%
    881950 ±  3%     -27.7%     637506 ± 10%  perf-stat.i.dTLB-store-misses
 2.644e+09           -17.6%   2.18e+09        perf-stat.i.dTLB-stores
    748152           -41.7%     436041 ±  2%  perf-stat.i.iTLB-load-misses
     51192 ± 19%     -29.8%      35952 ± 19%  perf-stat.i.iTLB-loads
 2.617e+10           -17.8%  2.152e+10        perf-stat.i.instructions
    166690 ±  2%     -12.3%     146120 ±  3%  perf-stat.i.instructions-per-iTLB-miss
      0.14           -22.0%       0.11        perf-stat.i.ipc
    349429           -46.8%     186018        perf-stat.i.minor-faults
 2.037e+08 ±  2%     -14.3%  1.745e+08 ±  6%  perf-stat.i.node-load-misses
 3.048e+08           -15.7%   2.57e+08 ±  6%  perf-stat.i.node-loads
   1273439 ±  2%     -45.5%     694360 ±  4%  perf-stat.i.node-store-misses
    480055 ± 12%     -42.6%     275675 ±  8%  perf-stat.i.node-stores
    349431           -46.8%     186021        perf-stat.i.page-faults
     25.47            -7.9%      23.47        perf-stat.overall.MPKI
      0.14            -0.0        0.12 ±  7%  perf-stat.overall.branch-miss-rate%
     77.98            +8.7       86.69        perf-stat.overall.cache-miss-rate%
      8.22           +23.2%      10.12        perf-stat.overall.cpi
    413.74           +20.3%     497.63        perf-stat.overall.cycles-between-cache-misses
      3.22            +0.1        3.37        perf-stat.overall.dTLB-load-miss-rate%
      0.03 ±  3%      -0.0        0.03 ± 10%  perf-stat.overall.dTLB-store-miss-rate%
     34786           +41.4%      49190 ±  2%  perf-stat.overall.instructions-per-iTLB-miss
      0.12           -18.8%       0.10        perf-stat.overall.ipc
      7133           +16.5%       8310        perf-stat.overall.path-length
 5.996e+09           -17.5%  4.948e+09        perf-stat.ps.branch-instructions
   8402682           -31.9%    5720811 ±  6%  perf-stat.ps.branch-misses
 5.179e+08           -15.8%  4.363e+08        perf-stat.ps.cache-misses
 6.641e+08           -24.2%  5.033e+08        perf-stat.ps.cache-references
 2.143e+11            +1.3%  2.171e+11        perf-stat.ps.cpu-cycles
 2.495e+08           -13.8%   2.15e+08        perf-stat.ps.dTLB-load-misses
 7.488e+09           -17.7%  6.165e+09        perf-stat.ps.dTLB-loads
    880618 ±  3%     -27.8%     636134 ± 10%  perf-stat.ps.dTLB-store-misses
 2.634e+09           -17.5%  2.172e+09        perf-stat.ps.dTLB-stores
    749535           -41.8%     436237 ±  2%  perf-stat.ps.iTLB-load-misses
     51054 ± 19%     -29.7%      35915 ± 19%  perf-stat.ps.iTLB-loads
 2.607e+10           -17.8%  2.144e+10        perf-stat.ps.instructions
    350552           -46.8%     186443        perf-stat.ps.minor-faults
 2.027e+08 ±  2%     -14.2%  1.739e+08 ±  6%  perf-stat.ps.node-load-misses
 3.033e+08           -15.6%   2.56e+08 ±  6%  perf-stat.ps.node-loads
   1276424 ±  2%     -45.4%     696879 ±  4%  perf-stat.ps.node-store-misses
    483087 ± 13%     -42.6%     277396 ±  9%  perf-stat.ps.node-stores
    350552           -46.8%     186443        perf-stat.ps.page-faults
  8.84e+12           -22.2%  6.878e+12        perf-stat.total.instructions
    134477 ±  4%     -12.2%     118031 ±  4%  softirqs.CPU0.TIMER
    129516 ±  3%     -10.3%     116158 ±  5%  softirqs.CPU1.TIMER
    137061 ±  7%     -11.5%     121271 ±  2%  softirqs.CPU10.TIMER
    134708 ±  8%     -12.2%     118210        softirqs.CPU13.TIMER
    138931 ±  8%     -14.0%     119458 ±  2%  softirqs.CPU14.TIMER
     28768           +10.5%      31792 ±  5%  softirqs.CPU15.RCU
    136131 ±  9%     -12.7%     118857 ±  2%  softirqs.CPU15.TIMER
    132792 ±  3%      -7.7%     122562        softirqs.CPU16.TIMER
    143921 ±  4%     -16.2%     120596 ±  4%  softirqs.CPU18.TIMER
    145168 ±  3%     -17.5%     119736 ±  5%  softirqs.CPU19.TIMER
    153122 ± 16%     -23.8%     116679 ±  3%  softirqs.CPU2.TIMER
    141372           -16.9%     117480 ±  4%  softirqs.CPU20.TIMER
    144335 ±  3%     -17.9%     118496 ±  3%  softirqs.CPU21.TIMER
    165007 ± 11%     -27.5%     119554 ±  4%  softirqs.CPU22.TIMER
    142200 ±  2%     -17.0%     118044 ±  5%  softirqs.CPU23.TIMER
    141063 ±  5%     -15.9%     118601 ±  4%  softirqs.CPU24.TIMER
    138963 ±  4%     -16.5%     116070 ±  6%  softirqs.CPU25.TIMER
    141346 ±  4%     -15.8%     118964 ±  5%  softirqs.CPU27.TIMER
    140996 ±  5%     -16.9%     117149 ±  5%  softirqs.CPU28.TIMER
    141709 ±  5%     -17.6%     116804 ±  4%  softirqs.CPU29.TIMER
    139584 ±  4%     -15.8%     117486 ±  4%  softirqs.CPU30.TIMER
    138678 ±  4%     -14.9%     117960 ±  6%  softirqs.CPU31.TIMER
    139778 ±  5%     -16.6%     116581 ±  4%  softirqs.CPU32.TIMER
    138835 ±  4%     -16.3%     116168 ±  5%  softirqs.CPU33.TIMER
    139832 ±  2%     -14.3%     119885 ±  4%  softirqs.CPU34.TIMER
    143088 ±  3%     -14.1%     122870 ±  4%  softirqs.CPU35.TIMER
    145947 ±  5%     -11.3%     129393        softirqs.CPU38.TIMER
    131003 ±  2%     -11.2%     116301 ±  3%  softirqs.CPU4.TIMER
    130674 ±  4%      -9.8%     117891 ±  2%  softirqs.CPU42.TIMER
    131033 ±  5%     -10.0%     117896 ±  3%  softirqs.CPU43.TIMER
    131812 ±  6%      -8.3%     120870 ±  6%  softirqs.CPU44.TIMER
    133415 ±  6%     -11.1%     118651 ±  3%  softirqs.CPU45.TIMER
    138074 ±  5%     -10.5%     123620        softirqs.CPU46.TIMER
    133232 ±  5%      -6.5%     124597 ±  2%  softirqs.CPU47.TIMER
    134867 ±  7%     -10.9%     120099        softirqs.CPU49.TIMER
    133501 ±  3%     -11.0%     118863 ±  2%  softirqs.CPU5.TIMER
    139285 ±  9%     -11.0%     124014 ±  2%  softirqs.CPU50.TIMER
    138112 ±  7%     -12.4%     120918        softirqs.CPU51.TIMER
    136733 ±  6%     -10.7%     122034 ±  2%  softirqs.CPU52.TIMER
    139399 ±  6%      -7.8%     128577 ±  4%  softirqs.CPU53.TIMER
    144006 ±  3%     -16.2%     120664 ±  4%  softirqs.CPU54.TIMER
    144335 ±  2%     -15.9%     121369 ±  5%  softirqs.CPU55.TIMER
    141569           -13.3%     122777 ±  5%  softirqs.CPU56.TIMER
    142905           -16.0%     120105 ±  3%  softirqs.CPU57.TIMER
    149917 ±  2%     -19.6%     120597 ±  4%  softirqs.CPU58.TIMER
    145062           -16.0%     121875 ±  6%  softirqs.CPU59.TIMER
    129322 ±  3%     -10.2%     116093 ±  4%  softirqs.CPU6.TIMER
    139051 ±  5%     -16.6%     116013 ±  5%  softirqs.CPU60.TIMER
    136928 ±  5%     -16.7%     114040 ±  6%  softirqs.CPU61.TIMER
    139478 ±  4%     -17.0%     115821 ±  5%  softirqs.CPU63.TIMER
    140188 ±  5%     -17.4%     115808 ±  6%  softirqs.CPU64.TIMER
    139850 ±  5%     -18.4%     114185 ±  5%  softirqs.CPU65.TIMER
    139338 ±  5%     -17.6%     114794 ±  4%  softirqs.CPU66.TIMER
    139162 ±  6%     -16.5%     116236 ±  6%  softirqs.CPU67.TIMER
    138693 ±  4%     -17.7%     114160 ±  5%  softirqs.CPU68.TIMER
    137368 ±  5%     -16.1%     115277 ±  5%  softirqs.CPU69.TIMER
    129982 ±  3%     -12.3%     113979 ±  4%  softirqs.CPU7.TIMER
    136521 ±  3%     -15.0%     116043 ±  6%  softirqs.CPU70.TIMER
    139566 ±  3%     -15.0%     118588 ±  6%  softirqs.CPU71.TIMER
    137034 ±  8%      -9.9%     123532 ±  2%  softirqs.CPU8.TIMER
    139637 ±  6%     -12.2%     122590        softirqs.CPU9.TIMER
   9979650 ±  4%     -13.1%    8675751 ±  3%  softirqs.TIMER
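In the tables above, the left column holds the parent commit's value (f568cf93ca) and the right column the tested commit's (27eb9d500d); the %change column is the relative difference between them. As a minimal sketch (the numbers are taken from the vm-scalability.median row of this report; the helper name is just illustrative):

```python
def pct_change(base: float, new: float) -> float:
    """Relative change from base to new, in percent, as in the %change column."""
    return (new - base) / base * 100

# vm-scalability.median: 57369 (parent) -> 38169 (tested commit)
print(round(pct_change(57369, 38169), 1))  # -33.5, matching the report
```

The same arithmetic reproduces every %change entry in the tables; absolute-delta rows (e.g. mpstat percentages, perf-profile cycles-pp) instead show the plain difference of the two columns.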



***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
  gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2018-04-03.cgz/300s/1T/lkp-skl-fpga01/lru-shm/vm-scalability

commit: 
  f568cf93ca ("vfs: Convert smackfs to use the new mount API")
  27eb9d500d ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")

f568cf93ca0923a0 27eb9d500d71d93e2b2f55c226b 
---------------- --------------------------- 
       fail:runs  %reproduction    fail:runs
           |             |             |    
          4:4           -1%           4:4     perf-profile.calltrace.cycles-pp.sync_regs.error_entry.do_access
         %stddev     %change         %stddev
             \          |                \  
      0.03           -51.0%       0.02        vm-scalability.free_time
    576244            +5.9%     610398        vm-scalability.median
      0.00 ± 10%     +52.3%       0.01 ±  3%  vm-scalability.median_stddev
  59786024            +6.2%   63492278        vm-scalability.throughput
    243.93            +2.5%     250.02        vm-scalability.time.elapsed_time
    243.93            +2.5%     250.02        vm-scalability.time.elapsed_time.max
     43465 ±  4%      +7.9%      46892 ±  4%  vm-scalability.time.involuntary_context_switches
    946724           -50.0%     473636        vm-scalability.time.maximum_resident_set_size
 5.177e+08            +2.6%  5.313e+08        vm-scalability.time.minor_page_faults
      1756            -5.1%       1666        vm-scalability.time.percent_of_cpu_this_job_got
      3116            -3.8%       2998        vm-scalability.time.system_time
     31133          +104.1%      63552        vm-scalability.time.voluntary_context_switches
 2.324e+09            +2.4%  2.379e+09        vm-scalability.workload
      0.00 ±110%      +0.0        0.00 ±139%  mpstat.cpu.all.iowait%
    499.00            -1.6%     491.00        turbostat.Avg_MHz
    913171 ± 41%    +828.6%    8479745 ± 90%  turbostat.C6
      3.09 ± 44%     +20.9       23.95 ±103%  turbostat.C6%
  50080150           -48.8%   25663962        vmstat.memory.cache
 1.458e+08           +16.7%  1.702e+08        vmstat.memory.free
      3501           +32.4%       4637        vmstat.system.cs
 7.929e+08 ± 44%    +687.9%  6.248e+09 ±103%  cpuidle.C6.time
    926892 ± 40%    +816.6%    8496018 ± 90%  cpuidle.C6.usage
    190135 ±  3%    +152.6%     480192 ± 88%  cpuidle.POLL.time
     84219 ±  2%     +34.4%     113180 ±  9%  cpuidle.POLL.usage
     40696 ± 15%     -26.5%      29917 ± 13%  softirqs.CPU4.RCU
     44474 ± 29%     -35.9%      28525        softirqs.CPU42.RCU
     30262 ±  9%     +19.5%      36161 ± 15%  softirqs.CPU63.RCU
     29764 ±  6%     +27.6%      37992 ± 13%  softirqs.CPU65.RCU
  50166780           -48.9%   25654809        meminfo.Cached
  49347199           -50.1%   24617888        meminfo.Committed_AS
  48930891           -50.1%   24417186        meminfo.Inactive
  48929311           -50.1%   24415179        meminfo.Inactive(anon)
      1579           +27.0%       2006        meminfo.Inactive(file)
    188217           -30.3%     131272        meminfo.KReclaimable
   7916315           -53.0%    3720091 ±  2%  meminfo.Mapped
 1.447e+08           +17.0%  1.693e+08        meminfo.MemAvailable
 1.455e+08           +16.9%  1.701e+08        meminfo.MemFree
  51232587           -48.0%   26665161        meminfo.Memused
     19466           -39.0%      11869 ±  2%  meminfo.PageTables
    188217           -30.3%     131272        meminfo.SReclaimable
  48944461           -50.1%   24432065        meminfo.Shmem
    348076           -14.6%     297129        meminfo.Slab
    413517           -50.2%     206120        meminfo.max_used_kB
   6226812 ± 15%     -48.7%    3192024 ± 15%  numa-vmstat.node0.nr_file_pages
  18066381 ±  5%     +16.8%   21103373 ±  2%  numa-vmstat.node0.nr_free_pages
   6074426 ± 15%     -50.0%    3038706 ± 16%  numa-vmstat.node0.nr_inactive_anon
    306.25 ± 10%     -34.4%     201.00 ± 47%  numa-vmstat.node0.nr_inactive_file
    987437 ±  3%     -53.6%     458349        numa-vmstat.node0.nr_mapped
      2482 ± 10%     -40.2%       1484 ± 18%  numa-vmstat.node0.nr_page_table_pages
   6076001 ± 15%     -50.0%    3040939 ± 16%  numa-vmstat.node0.nr_shmem
     24272 ±  5%     -34.8%      15832 ±  9%  numa-vmstat.node0.nr_slab_reclaimable
   6074321 ± 15%     -50.0%    3038614 ± 16%  numa-vmstat.node0.nr_zone_inactive_anon
    306.25 ± 10%     -34.4%     201.00 ± 47%  numa-vmstat.node0.nr_zone_inactive_file
   6314888 ± 14%     -48.9%    3229891 ± 15%  numa-vmstat.node1.nr_file_pages
  18310535 ±  4%     +16.9%   21407188 ±  2%  numa-vmstat.node1.nr_free_pages
   6157626 ± 14%     -50.1%    3073024 ± 16%  numa-vmstat.node1.nr_inactive_anon
     88.00 ± 34%    +240.3%     299.50 ± 31%  numa-vmstat.node1.nr_inactive_file
    989014 ±  3%     -51.0%     484627        numa-vmstat.node1.nr_mapped
      2428 ±  9%     -39.7%       1463 ± 14%  numa-vmstat.node1.nr_page_table_pages
   6159861 ± 14%     -50.1%    3075031 ± 16%  numa-vmstat.node1.nr_shmem
     22813 ±  4%     -25.5%      17007 ±  8%  numa-vmstat.node1.nr_slab_reclaimable
   6157554 ± 14%     -50.1%    3072951 ± 16%  numa-vmstat.node1.nr_zone_inactive_anon
     88.00 ± 34%    +240.3%     299.50 ± 31%  numa-vmstat.node1.nr_zone_inactive_file
    271.00           +81.2%     491.00        proc-vmstat.nr_dirtied
   3610614           +17.0%    4223136        proc-vmstat.nr_dirty_background_threshold
   7230238           +17.0%    8456814        proc-vmstat.nr_dirty_threshold
  12540351           -48.8%    6420021        proc-vmstat.nr_file_pages
  36378024           +16.9%   42512287        proc-vmstat.nr_free_pages
  12230694           -50.0%    6109850        proc-vmstat.nr_inactive_anon
    394.75           +27.0%     501.25        proc-vmstat.nr_inactive_file
     16101            +1.3%      16303        proc-vmstat.nr_kernel_stack
   1996785 ±  3%     -52.8%     941936        proc-vmstat.nr_mapped
      4896 ±  3%     -40.0%       2936        proc-vmstat.nr_page_table_pages
  12234511           -50.0%    6114074        proc-vmstat.nr_shmem
     47085           -30.2%      32861        proc-vmstat.nr_slab_reclaimable
     39964            +3.8%      41464        proc-vmstat.nr_slab_unreclaimable
    257.25 ±  3%     +88.2%     484.25        proc-vmstat.nr_written
  12230693           -50.0%    6109850        proc-vmstat.nr_zone_inactive_anon
    394.75           +27.0%     501.25        proc-vmstat.nr_zone_inactive_file
     14321 ± 18%     -51.9%       6891 ± 58%  proc-vmstat.numa_hint_faults
 5.192e+08            +2.6%  5.325e+08        proc-vmstat.numa_hit
 5.192e+08            +2.6%  5.325e+08        proc-vmstat.numa_local
 5.203e+08            +2.6%  5.336e+08        proc-vmstat.pgalloc_normal
 5.183e+08            +2.6%   5.32e+08        proc-vmstat.pgfault
 5.195e+08            +2.6%  5.329e+08        proc-vmstat.pgfree
  24881812 ± 15%     -48.7%   12770223 ± 15%  numa-meminfo.node0.FilePages
  24273499 ± 16%     -49.9%   12157797 ± 16%  numa-meminfo.node0.Inactive
  24272273 ± 16%     -49.9%   12156990 ± 16%  numa-meminfo.node0.Inactive(anon)
      1225 ±  9%     -34.2%     806.25 ± 47%  numa-meminfo.node0.Inactive(file)
     97123 ±  5%     -34.7%      63384 ±  9%  numa-meminfo.node0.KReclaimable
   4011265 ±  3%     -54.1%    1841181        numa-meminfo.node0.Mapped
  72290696 ±  5%     +16.8%   84410927 ±  2%  numa-meminfo.node0.MemFree
  25393898 ± 15%     -47.7%   13273667 ± 14%  numa-meminfo.node0.MemUsed
      9880 ± 11%     -40.2%       5904 ± 17%  numa-meminfo.node0.PageTables
     97123 ±  5%     -34.7%      63384 ±  9%  numa-meminfo.node0.SReclaimable
  24278558 ± 16%     -49.9%   12165871 ± 16%  numa-meminfo.node0.Shmem
    189220           -22.1%     147434 ±  9%  numa-meminfo.node0.Slab
  25264418 ± 13%     -48.9%   12911196 ± 15%  numa-meminfo.node1.FilePages
  24635696 ± 14%     -50.1%   12284951 ± 16%  numa-meminfo.node1.Inactive
  24635342 ± 14%     -50.1%   12283751 ± 16%  numa-meminfo.node1.Inactive(anon)
    353.50 ± 34%    +239.4%       1199 ± 31%  numa-meminfo.node1.Inactive(file)
     91245 ±  4%     -25.5%      68019 ±  8%  numa-meminfo.node1.KReclaimable
   3913740 ±  4%     -49.7%    1969074        numa-meminfo.node1.Mapped
  73237280 ±  4%     +16.9%   85636740 ±  2%  numa-meminfo.node1.MemFree
  25816602 ± 13%     -48.0%   13417142 ± 14%  numa-meminfo.node1.MemUsed
      9593 ±  9%     -38.8%       5870 ± 17%  numa-meminfo.node1.PageTables
     91245 ±  4%     -25.5%      68019 ±  8%  numa-meminfo.node1.SReclaimable
  24644294 ± 14%     -50.1%   12291753 ± 16%  numa-meminfo.node1.Shmem
     21626           +28.6%      27808 ±  3%  slabinfo.anon_vma.active_objs
    469.75           +28.6%     604.00 ±  3%  slabinfo.anon_vma.active_slabs
     21626           +28.6%      27808 ±  3%  slabinfo.anon_vma.num_objs
    469.75           +28.6%     604.00 ±  3%  slabinfo.anon_vma.num_slabs
     40574 ±  2%     +22.0%      49501        slabinfo.anon_vma_chain.active_objs
    634.00 ±  2%     +22.0%     773.50        slabinfo.anon_vma_chain.active_slabs
     40594 ±  2%     +22.1%      49547        slabinfo.anon_vma_chain.num_objs
    634.00 ±  2%     +22.0%     773.50        slabinfo.anon_vma_chain.num_slabs
      1597 ± 12%     +16.0%       1852 ±  3%  slabinfo.avc_xperms_data.active_objs
      1597 ± 12%     +16.0%       1852 ±  3%  slabinfo.avc_xperms_data.num_objs
     13724 ±  2%     +15.7%      15879 ±  6%  slabinfo.cred_jar.active_objs
     13724 ±  2%     +15.7%      15879 ±  6%  slabinfo.cred_jar.num_objs
    214679           -45.6%     116688        slabinfo.radix_tree_node.active_objs
      3911           -44.8%       2158        slabinfo.radix_tree_node.active_slabs
    219065           -44.8%     120925        slabinfo.radix_tree_node.num_objs
      3911           -44.8%       2158        slabinfo.radix_tree_node.num_slabs
      3190           +12.4%       3586        slabinfo.sighand_cache.active_objs
      3191           +12.7%       3595        slabinfo.sighand_cache.num_objs
      1032 ±  7%     +44.0%       1486 ±  6%  slabinfo.skbuff_ext_cache.active_objs
      1051 ±  5%     +43.4%       1507 ±  5%  slabinfo.skbuff_ext_cache.num_objs
     40306 ±  2%     +14.4%      46096 ±  2%  slabinfo.vm_area_struct.active_objs
      1007 ±  2%     +14.5%       1153 ±  2%  slabinfo.vm_area_struct.active_slabs
     40314 ±  2%     +14.5%      46143 ±  2%  slabinfo.vm_area_struct.num_objs
      1007 ±  2%     +14.5%       1153 ±  2%  slabinfo.vm_area_struct.num_slabs
    323228 ± 55%     -50.5%     159938 ± 22%  sched_debug.cfs_rq:/.load.max
   2195909           -10.8%    1958320        sched_debug.cfs_rq:/.min_vruntime.avg
   2099869           -13.5%    1815790        sched_debug.cfs_rq:/.min_vruntime.min
     36020 ±  4%     +34.7%      48534 ±  9%  sched_debug.cfs_rq:/.min_vruntime.stddev
    229.45 ± 12%     -32.9%     153.90 ± 23%  sched_debug.cfs_rq:/.runnable_load_avg.max
    321968 ± 56%     -51.0%     157788 ± 23%  sched_debug.cfs_rq:/.runnable_weight.max
     66477 ± 17%    +110.2%     139739 ± 23%  sched_debug.cfs_rq:/.spread0.avg
    134057 ± 20%    +119.7%     294519 ± 16%  sched_debug.cfs_rq:/.spread0.max
    -29537           -91.4%      -2550        sched_debug.cfs_rq:/.spread0.min
     35997 ±  4%     +34.7%      48487 ±  9%  sched_debug.cfs_rq:/.spread0.stddev
    175.30 ± 14%     -63.9%      63.35 ±100%  sched_debug.cfs_rq:/.util_avg.min
    192.91 ±  7%     +39.8%     269.72 ± 12%  sched_debug.cfs_rq:/.util_avg.stddev
     80.48 ± 13%     +51.5%     121.95 ± 14%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
    138814 ±  7%     +25.3%     173956 ±  3%  sched_debug.cpu.avg_idle.stddev
    274.40 ± 16%     -27.3%     199.50 ± 21%  sched_debug.cpu.cpu_load[2].max
    315.45 ± 12%     -35.1%     204.70 ± 28%  sched_debug.cpu.cpu_load[3].max
    373.85 ±  8%     -34.7%     243.95 ± 25%  sched_debug.cpu.cpu_load[4].max
     10724           +50.7%      16157        sched_debug.cpu.curr->pid.max
      1467          +115.4%       3160 ± 29%  sched_debug.cpu.curr->pid.stddev
    323228 ± 55%     -50.5%     159938 ± 22%  sched_debug.cpu.load.max
      2099 ± 11%     +54.3%       3239 ± 22%  sched_debug.cpu.nr_load_updates.stddev
      0.21 ±  4%     +23.1%       0.26 ± 13%  sched_debug.cpu.nr_running.stddev
      4564           +31.8%       6015        sched_debug.cpu.nr_switches.avg
     43664 ± 27%     +43.3%      62587 ± 13%  sched_debug.cpu.nr_switches.max
      1176 ±  8%     +50.8%       1773 ±  7%  sched_debug.cpu.nr_switches.min
      5768 ± 13%     +26.4%       7290 ±  8%  sched_debug.cpu.nr_switches.stddev
      1018 ± 32%    +172.8%       2778 ± 40%  interrupts.CPU0.RES:Rescheduling_interrupts
     24.00 ± 51%    +266.7%      88.00 ± 63%  interrupts.CPU100.TLB:TLB_shootdowns
     82.00 ± 41%    +251.2%     288.00 ± 71%  interrupts.CPU14.RES:Rescheduling_interrupts
     99.75 ± 69%    +111.8%     211.25 ± 21%  interrupts.CPU17.RES:Rescheduling_interrupts
    134.50 ± 54%    +650.0%       1008 ±122%  interrupts.CPU18.RES:Rescheduling_interrupts
      1730 ±  8%     -31.8%       1180 ± 34%  interrupts.CPU2.NMI:Non-maskable_interrupts
      1730 ±  8%     -31.8%       1180 ± 34%  interrupts.CPU2.PMI:Performance_monitoring_interrupts
     66.00 ±  6%   +2245.1%       1547 ±147%  interrupts.CPU21.RES:Rescheduling_interrupts
     90.75 ± 38%    +239.9%     308.50 ± 49%  interrupts.CPU22.RES:Rescheduling_interrupts
     36.50 ± 49%    +114.4%      78.25 ± 62%  interrupts.CPU36.TLB:TLB_shootdowns
    103.25 ± 48%    +148.7%     256.75 ± 27%  interrupts.CPU37.RES:Rescheduling_interrupts
     99.00 ± 87%    +243.2%     339.75 ± 46%  interrupts.CPU5.RES:Rescheduling_interrupts
     99.50 ± 61%    +163.1%     261.75 ± 34%  interrupts.CPU52.RES:Rescheduling_interrupts
     99.25 ±117%    +256.9%     354.25 ± 60%  interrupts.CPU54.RES:Rescheduling_interrupts
     70.75 ± 49%    +501.4%     425.50 ± 48%  interrupts.CPU56.RES:Rescheduling_interrupts
     21.00 ± 44%    +290.5%      82.00 ± 64%  interrupts.CPU64.TLB:TLB_shootdowns
     26.50 ± 52%    +284.9%     102.00 ± 75%  interrupts.CPU67.TLB:TLB_shootdowns
     78.75 ± 86%    +182.9%     222.75 ± 41%  interrupts.CPU69.RES:Rescheduling_interrupts
     39.50 ± 27%    +372.2%     186.50 ± 68%  interrupts.CPU72.RES:Rescheduling_interrupts
    108.50 ± 60%    +108.5%     226.25 ± 39%  interrupts.CPU73.RES:Rescheduling_interrupts
     38.25 ± 76%    +259.5%     137.50 ± 68%  interrupts.CPU73.TLB:TLB_shootdowns
     37.50 ± 59%    +116.7%      81.25 ± 72%  interrupts.CPU85.TLB:TLB_shootdowns
     22.50 ± 38%    +354.4%     102.25 ± 70%  interrupts.CPU87.TLB:TLB_shootdowns
     21.75 ± 49%    +308.0%      88.75 ± 55%  interrupts.CPU89.TLB:TLB_shootdowns
     25.00 ± 67%    +156.0%      64.00 ± 46%  interrupts.CPU94.TLB:TLB_shootdowns
     32.75 ± 85%    +141.2%      79.00 ± 47%  interrupts.CPU96.TLB:TLB_shootdowns
    146.25 ± 31%     +58.1%     231.25 ± 17%  interrupts.CPU99.RES:Rescheduling_interrupts
     21099 ± 11%     +91.6%      40429 ± 14%  interrupts.RES:Rescheduling_interrupts
      6445 ± 13%     +48.4%       9563 ±  9%  interrupts.TLB:TLB_shootdowns
     68.61            -4.3       64.28        perf-profile.calltrace.cycles-pp.do_access
     58.14            -4.0       54.12        perf-profile.calltrace.cycles-pp.page_fault.do_access
     52.63            -4.0       48.62        perf-profile.calltrace.cycles-pp.do_page_fault.page_fault.do_access
     52.30            -4.0       48.30        perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault.do_access
     51.01            -4.0       47.02        perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.do_access
     50.43            -4.0       46.46        perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
     40.02            -3.8       36.23        perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
     40.76            -3.8       36.98        perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
     40.69            -3.8       36.91        perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
     10.70 ±  3%      -2.0        8.71 ±  3%  perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault.__do_fault
     10.96 ±  3%      -2.0        8.99 ±  4%  perf-profile.calltrace.cycles-pp.__lru_cache_add.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
      8.90 ±  4%      -1.9        6.98 ±  4%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp
      8.96 ±  4%      -1.9        7.04 ±  4%  perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault
     14.87 ±  4%      -1.6       13.27        perf-profile.calltrace.cycles-pp.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
     13.23 ±  5%      -1.5       11.78 ±  2%  perf-profile.calltrace.cycles-pp.shmem_alloc_page.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault.__do_fault
     12.38 ±  5%      -1.4       10.94 ±  2%  perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page.shmem_alloc_and_acct_page
     12.74 ±  5%      -1.4       11.30 ±  2%  perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page.shmem_alloc_and_acct_page.shmem_getpage_gfp
     12.92 ±  5%      -1.4       11.48 ±  2%  perf-profile.calltrace.cycles-pp.alloc_pages_vma.shmem_alloc_page.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault
      9.64 ±  6%      -1.4        8.21 ±  3%  perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page
      9.49 ±  7%      -1.4        8.06 ±  3%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
     15.61            -0.6       14.96        perf-profile.calltrace.cycles-pp.do_rw_once
      4.94            -0.3        4.66        perf-profile.calltrace.cycles-pp.clear_page_erms.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
      0.57 ±  3%      -0.2        0.39 ± 57%  perf-profile.calltrace.cycles-pp.mem_cgroup_commit_charge.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
      0.74 ±  3%      -0.1        0.68 ±  4%  perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
      1.20            -0.1        1.14 ±  2%  perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
      0.80 ±  2%      -0.0        0.76 ±  3%  perf-profile.calltrace.cycles-pp.security_vm_enough_memory_mm.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault.__do_fault
      7.84 ± 15%      +3.4       11.26 ± 14%  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
     10.37 ±  9%      +3.6       13.92 ± 11%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
     11.77 ±  9%      +3.6       15.32 ±  9%  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
     11.77 ±  9%      +3.6       15.32 ±  9%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
     11.77 ±  9%      +3.6       15.32 ±  9%  perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
     11.87 ±  9%      +3.6       15.48 ±  9%  perf-profile.calltrace.cycles-pp.secondary_startup_64
      3.36           +21.1%       4.07 ± 20%  perf-stat.i.MPKI
 1.346e+10            +2.2%  1.375e+10        perf-stat.i.branch-instructions
  29022778            +7.7%   31267478 ±  5%  perf-stat.i.branch-misses
     47.72 ±  2%     -15.7       32.07 ± 14%  perf-stat.i.cache-miss-rate%
  43211868            -1.8%   42419129        perf-stat.i.cache-misses
      3376           +37.1%       4629        perf-stat.i.context-switches
      1.03            +2.9%       1.06 ±  3%  perf-stat.i.cpi
     72.97           +88.4%     137.51        perf-stat.i.cpu-migrations
    799.98           +21.1%     968.56 ±  2%  perf-stat.i.cycles-between-cache-misses
      0.03 ±  7%      +0.0        0.04 ± 20%  perf-stat.i.dTLB-load-miss-rate%
 1.267e+10            +2.1%  1.294e+10        perf-stat.i.dTLB-loads
      0.02            +0.0        0.03 ±  9%  perf-stat.i.dTLB-store-miss-rate%
   2158638            +2.8%    2218331        perf-stat.i.dTLB-store-misses
 3.621e+09            +4.1%  3.771e+09        perf-stat.i.dTLB-stores
     23.59            +8.0       31.57 ± 11%  perf-stat.i.iTLB-load-miss-rate%
   2744828            -9.9%    2473836 ±  3%  perf-stat.i.iTLB-load-misses
 4.811e+10            +2.9%  4.953e+10        perf-stat.i.instructions
    180646 ± 14%     -39.9%     108545 ± 35%  perf-stat.i.instructions-per-iTLB-miss
   2095485            +1.2%    2120039        perf-stat.i.minor-faults
     52.46            -2.5       49.94        perf-stat.i.node-load-miss-rate%
   2458601            +2.4%    2516686        perf-stat.i.node-loads
     43.60 ±  2%      -5.8       37.80 ±  3%  perf-stat.i.node-store-miss-rate%
   7662333            +1.0%    7738567        perf-stat.i.node-stores
   2095716            +1.2%    2120269        perf-stat.i.page-faults
      0.21            +0.0        0.23 ±  5%  perf-stat.overall.branch-miss-rate%
     18.95            -0.4       18.52 ±  2%  perf-stat.overall.cache-miss-rate%
      1.04            -3.5%       1.01        perf-stat.overall.cpi
     17502           +14.4%      20023 ±  3%  perf-stat.overall.instructions-per-iTLB-miss
      0.96            +3.6%       0.99        perf-stat.overall.ipc
     52.01            -1.6       50.40        perf-stat.overall.node-load-miss-rate%
      5108            +2.1%       5215        perf-stat.overall.path-length
 1.359e+10            +1.3%  1.377e+10        perf-stat.ps.branch-instructions
  29202158            +7.0%   31254091 ±  5%  perf-stat.ps.branch-misses
  43522733            -2.5%   42432124        perf-stat.ps.cache-misses
      3385           +36.4%       4618        perf-stat.ps.context-switches
 5.071e+10            -1.5%  4.994e+10        perf-stat.ps.cpu-cycles
     73.95           +86.7%     138.03        perf-stat.ps.cpu-migrations
 1.279e+10            +1.3%  1.296e+10        perf-stat.ps.dTLB-loads
 3.652e+09            +3.3%  3.774e+09        perf-stat.ps.dTLB-stores
   2776127           -10.7%    2478891 ±  3%  perf-stat.ps.iTLB-load-misses
 4.857e+10            +2.1%  4.959e+10        perf-stat.ps.instructions
   2451998            +2.3%    2507985        perf-stat.ps.node-loads
 1.187e+13            +4.5%  1.241e+13        perf-stat.total.instructions
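
The perf-stat.overall.ipc and .cpi figures above are simply ratios of the per-second counters also reported here. A minimal Python sketch re-deriving them from the perf-stat.ps values in the table (values copied verbatim; "parent" is the left column, "commit" the right):

```python
# Sanity-check perf-stat.overall.ipc / .cpi against the per-second
# cycle and instruction counters reported in the table above.
cycles = {"parent": 5.071e10, "commit": 4.994e10}  # perf-stat.ps.cpu-cycles
instrs = {"parent": 4.857e10, "commit": 4.959e10}  # perf-stat.ps.instructions

for name in ("parent", "commit"):
    ipc = instrs[name] / cycles[name]  # instructions per cycle
    cpi = cycles[name] / instrs[name]  # cycles per instruction
    print(f"{name}: ipc={ipc:.2f} cpi={cpi:.2f}")
# -> parent: ipc=0.96 cpi=1.04
# -> commit: ipc=0.99 cpi=1.01
```

These reproduce the 0.96 -> 0.99 ipc and 1.04 -> 1.01 cpi rows: the commit retires slightly more instructions per cycle even though vm-scalability.median regresses, consistent with the extra cycles going to idle and rescheduling rather than stalls.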

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Rong Chen


View attachment "config-5.1.0-rc2-00041-g27eb9d5" of type "text/plain" (193057 bytes)

View attachment "job-script" of type "text/plain" (7552 bytes)

View attachment "job.yaml" of type "text/plain" (5010 bytes)

View attachment "reproduce" of type "text/plain" (913 bytes)
