Message-ID: <202301161451.44ad1124-yujie.liu@intel.com>
Date:   Mon, 16 Jan 2023 14:40:04 +0800
From:   kernel test robot <yujie.liu@...el.com>
To:     David Howells <dhowells@...hat.com>
CC:     <oe-lkp@...ts.linux.dev>, <lkp@...el.com>,
        Al Viro <viro@...iv.linux.org.uk>,
        <linux-block@...r.kernel.org>,
        <v9fs-developer@...ts.sourceforge.net>,
        <linux-fsdevel@...r.kernel.org>, <ceph-devel@...r.kernel.org>,
        <nvdimm@...ts.linux.dev>, <linux-ext4@...r.kernel.org>,
        <linux-f2fs-devel@...ts.sourceforge.net>,
        <linux-xfs@...r.kernel.org>,
        <jfs-discussion@...ts.sourceforge.net>,
        <linux-nfs@...r.kernel.org>, <linux-nilfs@...r.kernel.org>,
        <ntfs3@...ts.linux.dev>, <ocfs2-devel@....oracle.com>,
        <devel@...ts.orangefs.org>, <reiserfs-devel@...r.kernel.org>,
        <ying.huang@...el.com>, <feng.tang@...el.com>,
        <zhengjun.xing@...ux.intel.com>, <fengwei.yin@...el.com>
Subject: [dhowells-fs:iov-extract] [iov_iter] 64ea9d6c5f: fio.read_iops
 126.2% improvement

Greetings,

FYI, we noticed a 126.2% improvement in fio.read_iops due to commit:

commit: 64ea9d6c5f473c29c5de97abaa697856db90fef7 ("iov_iter: Use IOCB/IOMAP_WRITE if available rather than iterator direction")
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git iov-extract

in testcase: fio-basic
on test machine: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory
with the following parameters:

	disk: 2pmem
	fs: ext4
	mount_option: dax
	runtime: 200s
	nr_task: 50%
	time_based: tb
	rw: randread
	bs: 2M
	ioengine: libaio
	test_size: 200G
	cpufreq_governor: performance

test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
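
The commit subject describes moving the read-vs-write decision from the
direction recorded in the iov_iter to flags carried by the operation
itself. The following is a minimal userspace sketch of that idea, not
the actual kernel patch: IOCB_WRITE and IOMAP_WRITE are redefined
locally as stand-ins for the kernel flag bits of the same names, and
struct io_op is invented purely for illustration.

#include <stdbool.h>
#include <stdio.h>

/* Local stand-ins for the kernel flag bits of the same names. */
#define IOCB_WRITE	(1u << 0)
#define IOMAP_WRITE	(1u << 1)

/* Invented for this sketch; not a kernel type. */
struct io_op {
	unsigned int iocb_flags;	/* set when issued via a kiocb      */
	unsigned int iomap_flags;	/* set when issued via iomap        */
	bool iter_is_write;		/* direction stored in the iov_iter */
};

/* Before (conceptually): trust the iterator's recorded direction. */
static bool is_write_before(const struct io_op *op)
{
	return op->iter_is_write;
}

/* After (conceptually): prefer IOCB_WRITE/IOMAP_WRITE when the
 * operation carries them; fall back to the iterator otherwise. */
static bool is_write_after(const struct io_op *op)
{
	if (op->iocb_flags & IOCB_WRITE)
		return true;
	if (op->iomap_flags & IOMAP_WRITE)
		return true;
	return op->iter_is_write;
}

int main(void)
{
	struct io_op write_op = { .iocb_flags = IOCB_WRITE };
	struct io_op read_op  = { 0 };

	printf("write op: before=%d after=%d\n",
	       is_write_before(&write_op), is_write_after(&write_op));
	printf("read op:  before=%d after=%d\n",
	       is_write_before(&read_op), is_write_after(&read_op));
	return 0;
}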

In addition, the commit has a significant impact on the following tests:

+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | fio-basic: fio.write_iops 1942.1% improvement                                                  |
| test machine     | 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory  |
| test parameters  | bs=2M                                                                                          |
|                  | cpufreq_governor=performance                                                                   |
|                  | disk=2pmem                                                                                     |
|                  | fs=ext4                                                                                        |
|                  | ioengine=sync                                                                                  |
|                  | mount_option=dax                                                                               |
|                  | nr_task=50%                                                                                    |
|                  | runtime=200s                                                                                   |
|                  | rw=rw                                                                                          |
|                  | test_size=200G                                                                                 |
|                  | time_based=tb                                                                                  |
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | reaim: reaim.jobs_per_min 89.7% improvement                                                    |
| test machine     | 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory |
| test parameters  | cpufreq_governor=performance                                                                   |
|                  | disk=1HDD                                                                                      |
|                  | fs=ext4                                                                                        |
|                  | nr_task=100                                                                                    |
|                  | runtime=300s                                                                                   |
|                  | test=disk                                                                                      |
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | fio-basic: fio.read_iops 127.0% improvement                                                    |
| test machine     | 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory  |
| test parameters  | bs=2M                                                                                          |
|                  | cpufreq_governor=performance                                                                   |
|                  | disk=2pmem                                                                                     |
|                  | fs=ext4                                                                                        |
|                  | ioengine=libaio                                                                                |
|                  | mount_option=dax                                                                               |
|                  | nr_task=50%                                                                                    |
|                  | runtime=200s                                                                                   |
|                  | rw=read                                                                                        |
|                  | test_size=200G                                                                                 |
|                  | time_based=tb                                                                                  |
+------------------+------------------------------------------------------------------------------------------------+


Details are as follows:

=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/mount_option/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based:
  2M/gcc-11/performance/2pmem/ext4/libaio/x86_64-rhel-8.3/dax/50%/debian-11.1-x86_64-20220510.cgz/200s/randread/lkp-csl-2sp7/200G/fio-basic/tb

commit: 
  e6eadc0324 ("iov_iter: Use the direction in the iterator functions")
  64ea9d6c5f ("iov_iter: Use IOCB/IOMAP_WRITE if available rather than iterator direction")

e6eadc0324e475e3 64ea9d6c5f473c29c5de97abaa6 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
     28.42 ± 12%     +67.0       95.37        fio.latency_100ms%
      0.01            +0.2        0.23 ± 58%  fio.latency_10ms%
     70.69 ±  5%     -69.8        0.88 ± 18%  fio.latency_250ms%
      0.55 ± 45%      -0.5        0.08 ± 30%  fio.latency_500ms%
      0.16 ± 73%      +2.8        2.98 ±  7%  fio.latency_50ms%
      0.01            -0.0        0.00        fio.latency_50us%
     25388          +126.2%      57424        fio.read_bw_MBps
 1.407e+08           -63.3%   51642368        fio.read_clat_90%_us
 1.428e+08           -63.5%   52166656        fio.read_clat_95%_us
 1.957e+08 ±  3%     -54.7%   88604672 ± 17%  fio.read_clat_99%_us
 1.171e+08           -55.8%   51798447        fio.read_clat_mean_us
  27892624 ±  5%     -51.6%   13508778 ±  7%  fio.read_clat_stddev
     12694          +126.2%      28712        fio.read_iops
   3753253           -55.6%    1665798        fio.read_slat_mean_us
    904821 ±  5%     -50.3%     450059 ±  7%  fio.read_slat_stddev
     83484            -2.3%      81606        fio.time.minor_page_faults
     82.60 ±  4%     -46.1%      44.53 ± 19%  fio.time.user_time
   2538874          +126.2%    5742524        fio.workload
    118966 ±  6%      -5.6%     112317 ±  4%  numa-meminfo.node1.Slab
      0.60            -0.1        0.50        mpstat.cpu.all.irq%
      0.51 ±  4%      -0.2        0.32 ± 15%  mpstat.cpu.all.usr%
    335.01 ±  6%     -15.6%     282.74        uptime.boot
     21727 ±  9%     -22.8%      16764        uptime.idle
    365.40 ± 67%    +341.9%       1614 ± 14%  proc-vmstat.nr_written
     36539 ±  4%     +23.6%      45144 ±  6%  proc-vmstat.numa_hint_faults
    391222            +2.1%     399514        proc-vmstat.numa_hit
    390931            +1.8%     397825        proc-vmstat.numa_local
      1492 ± 65%    +334.6%       6484 ± 14%  proc-vmstat.pgpgout
     45650 ± 12%     -34.4%      29932 ± 16%  turbostat.C1
      0.15          -100.0%       0.00        turbostat.IPC
     60.80            +3.3%      62.80        turbostat.PkgTmp
    250.11           +11.2%     278.05        turbostat.PkgWatt
     58.09            -6.3%      54.45        turbostat.RAMWatt
   1115526 ±  6%     +62.8%    1816010 ± 25%  sched_debug.cpu.avg_idle.max
    155834 ± 12%     +39.7%     217674 ± 15%  sched_debug.cpu.avg_idle.stddev
    222305 ±  9%     -23.2%     170765        sched_debug.cpu.clock.avg
    222357 ±  9%     -23.1%     170937        sched_debug.cpu.clock.max
    222253 ±  9%     -23.2%     170594        sched_debug.cpu.clock.min
     29.89 ± 14%    +238.5%     101.16 ±  3%  sched_debug.cpu.clock.stddev
    221210 ±  9%     -23.3%     169766        sched_debug.cpu.clock_task.avg
    222168 ±  9%     -23.1%     170743        sched_debug.cpu.clock_task.max
    200093 ± 10%     -25.6%     148915        sched_debug.cpu.clock_task.min
    539522 ±  4%     +22.8%     662703 ±  8%  sched_debug.cpu.max_idle_balance_cost.max
      4764 ± 64%    +456.7%      26527 ± 34%  sched_debug.cpu.max_idle_balance_cost.stddev
      0.00 ±  6%    +163.8%       0.00 ±  3%  sched_debug.cpu.next_balance.stddev
    222251 ±  9%     -23.2%     170591        sched_debug.cpu_clk
    221620 ±  9%     -23.3%     169962        sched_debug.ktime
    222977 ±  9%     -23.2%     171325        sched_debug.sched_clk
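
As a quick sanity check on how the %change column is derived (assuming
the usual (after - before) / before definition), the fio.read_iops row
above reproduces exactly:

#include <stdio.h>

int main(void)
{
	/* fio.read_iops from the table above: base commit vs. patched */
	double before = 12694.0;
	double after  = 28712.0;

	/* %change = (after - before) / before * 100 */
	printf("%+.1f%%\n", (after - before) / before * 100.0);
	/* prints +126.2%, matching the reported improvement */
	return 0;
}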


***************************************************************************************************
lkp-csl-2sp7: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/mount_option/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based:
  2M/gcc-11/performance/2pmem/ext4/sync/x86_64-rhel-8.3/dax/50%/debian-11.1-x86_64-20220510.cgz/200s/rw/lkp-csl-2sp7/200G/fio-basic/tb

commit: 
  e6eadc0324 ("iov_iter: Use the direction in the iterator functions")
  64ea9d6c5f ("iov_iter: Use IOCB/IOMAP_WRITE if available rather than iterator direction")

e6eadc0324e475e3 64ea9d6c5f473c29c5de97abaa6 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      0.12 ± 43%      -0.1        0.02 ± 19%  fio.latency_1000us%
      9.55 ± 41%      -9.5        0.01 ±100%  fio.latency_10ms%
      0.03 ± 60%     +48.4       48.45 ± 39%  fio.latency_250us%
     20.78 ± 24%     -20.7        0.04 ± 27%  fio.latency_2ms%
     69.23 ±  3%     -69.2        0.02 ± 25%  fio.latency_4ms%
      0.13 ± 42%      -0.1        0.05 ± 22%  fio.latency_750us%
     15167 ±  4%   +1942.6%     309815 ± 12%  fio.read_bw_MBps
   4286054 ±  6%     -96.5%     151296 ± 28%  fio.read_clat_90%_us
   4623564 ±  7%     -96.5%     160597 ± 28%  fio.read_clat_95%_us
   5498470 ± 11%     -96.7%     181162 ± 28%  fio.read_clat_99%_us
   3298308 ±  4%     -96.5%     114311 ± 13%  fio.read_clat_mean_us
    767056 ± 18%     -93.3%      51658 ± 26%  fio.read_clat_stddev
      7583 ±  4%   +1942.6%     154907 ± 12%  fio.read_iops
      4754            -1.5%       4684        fio.time.percent_of_cpu_this_job_got
      8493           -20.7%       6736        fio.time.system_time
      1052          +153.3%       2665 ±  3%  fio.time.user_time
     22073           +12.6%      24848 ±  2%  fio.time.voluntary_context_switches
   3032891 ±  4%   +1942.3%   61941623 ± 12%  fio.workload
     15160 ±  4%   +1942.1%     309597 ± 12%  fio.write_bw_MBps
   3411148 ±  6%     -95.7%     147029 ± 27%  fio.write_clat_90%_us
   3725721 ±  7%     -95.8%     155733 ± 27%  fio.write_clat_95%_us
   4502323 ± 12%     -96.1%     176384 ± 27%  fio.write_clat_99%_us
   2377925 ±  4%     -95.3%     112602 ± 12%  fio.write_clat_mean_us
    772542 ± 17%     -93.3%      51635 ± 24%  fio.write_clat_stddev
      7580 ±  4%   +1942.1%     154798 ± 12%  fio.write_iops
     30.59            -1.2%      30.22        boot-time.dhcp
     44.29           -17.6%      36.48        iostat.cpu.system
      5.46          +149.4%      13.63 ±  3%  iostat.cpu.user
    226944 ± 10%     +17.0%     265432 ±  7%  numa-numastat.node0.local_node
    229493 ± 11%     +16.6%     267683 ±  6%  numa-numastat.node0.numa_hit
    327.94 ±  6%     -13.6%     283.46        uptime.boot
     20979 ±  9%     -20.4%      16696        uptime.idle
      5.00          +156.7%      12.83 ±  2%  vmstat.cpu.us
    107.20 ± 14%     -70.9%      31.17 ± 43%  vmstat.io.bo
      1947 ±  8%     +27.4%       2481        vmstat.system.cs
     42803 ± 51%     +99.7%      85467 ±  9%  meminfo.Active
     21377 ±103%    +199.6%      64049 ± 13%  meminfo.Active(anon)
   1028752 ± 12%     +14.7%    1180441 ±  2%  meminfo.DirectMap4k
     37174 ± 63%    +120.7%      82042 ± 10%  meminfo.Shmem
      0.69 ± 79%      +1.4        2.08 ±  2%  mpstat.cpu.all.irq%
      0.22 ±  5%      -0.1        0.09 ±  7%  mpstat.cpu.all.soft%
     43.81            -9.2       34.66        mpstat.cpu.all.sys%
      5.52            +8.2       13.76 ±  3%  mpstat.cpu.all.usr%
     31755 ± 93%    +138.5%      75738 ± 21%  numa-meminfo.node1.Active
     19642 ±112%    +214.5%      61783 ± 13%  numa-meminfo.node1.Active(anon)
   1297089 ±103%    +127.4%    2950212 ±  2%  numa-meminfo.node1.AnonHugePages
      8554 ±  7%      +9.0%       9320 ±  2%  numa-meminfo.node1.KernelStack
      8937 ± 48%     +41.8%      12674 ±  3%  numa-meminfo.node1.Mapped
     29411 ± 86%    +158.7%      76095 ± 11%  numa-meminfo.node1.Shmem
    229509 ± 11%     +16.7%     267844 ±  6%  numa-vmstat.node0.numa_hit
    226960 ± 10%     +17.0%     265593 ±  7%  numa-vmstat.node0.numa_local
      4884 ±112%    +216.0%      15436 ± 13%  numa-vmstat.node1.nr_active_anon
    634.40 ±103%    +127.1%       1440 ±  2%  numa-vmstat.node1.nr_anon_transparent_hugepages
      2248 ± 48%     +41.6%       3184 ±  3%  numa-vmstat.node1.nr_mapped
      7338 ± 86%    +159.2%      19022 ± 11%  numa-vmstat.node1.nr_shmem
      4884 ±112%    +216.0%      15436 ± 13%  numa-vmstat.node1.nr_zone_active_anon
      2800            -1.6%       2755        turbostat.Bzy_MHz
      6.80 ±123%     +22.7       29.51 ± 59%  turbostat.C1E%
      0.09 ± 46%     -80.1%       0.02 ± 20%  turbostat.IPC
      4217 ± 51%  +14321.1%     608138 ±205%  turbostat.POLL
     63.20 ±  2%      +4.2%      65.83 ±  2%  turbostat.PkgTmp
    259.23           +13.7%     294.79        turbostat.PkgWatt
     53.95            -3.8%      51.88 ±  2%  turbostat.RAMWatt
      5349 ±103%    +199.1%      16000 ± 13%  proc-vmstat.nr_active_anon
    837004            +1.1%     846381        proc-vmstat.nr_anon_pages
    101.00 ± 32%     -93.9%       6.17 ± 23%  proc-vmstat.nr_dirtied
    710747            +1.5%     721433        proc-vmstat.nr_file_pages
    840884            +1.2%     850985        proc-vmstat.nr_inactive_anon
     17652            +0.9%      17816        proc-vmstat.nr_kernel_stack
      3469            +2.0%       3538        proc-vmstat.nr_page_table_pages
      9308 ± 63%    +120.3%      20503 ± 10%  proc-vmstat.nr_shmem
     48010            +1.1%      48555        proc-vmstat.nr_slab_unreclaimable
      5349 ±103%    +199.1%      16000 ± 13%  proc-vmstat.nr_zone_active_anon
    840884            +1.2%     850985        proc-vmstat.nr_zone_inactive_anon
    445330 ± 16%     +23.0%     547820        proc-vmstat.numa_hit
    432726 ± 13%     +19.4%     516623        proc-vmstat.numa_local
     12623 ±120%    +147.2%      31201        proc-vmstat.numa_other
      7857 ±112%    +223.3%      25404 ± 11%  proc-vmstat.pgactivate
   1252650 ±  6%     +10.1%    1378998        proc-vmstat.pgalloc_normal
    611086 ±  7%      +9.5%     668925        proc-vmstat.pgfault
   1232126 ±  5%      +7.7%    1326857        proc-vmstat.pgfree
    353.00 ±  6%     +33.8%     472.19 ±  3%  sched_debug.cfs_rq:/.util_est_enqueued.avg
    354.05 ±  6%     +11.0%     392.89        sched_debug.cfs_rq:/.util_est_enqueued.stddev
   1227820 ±  7%     -13.1%    1066553 ±  3%  sched_debug.cpu.avg_idle.max
    495552 ± 18%     -48.0%     257439 ± 22%  sched_debug.cpu.avg_idle.min
    117660 ± 21%     +48.8%     175072 ±  6%  sched_debug.cpu.avg_idle.stddev
    215198 ±  9%     -20.7%     170684        sched_debug.cpu.clock.avg
    215249 ±  9%     -20.7%     170705        sched_debug.cpu.clock.max
    215144 ±  9%     -20.7%     170664        sched_debug.cpu.clock.min
     30.06 ±  6%     -60.5%      11.87 ± 13%  sched_debug.cpu.clock.stddev
    214020 ±  9%     -21.3%     168388        sched_debug.cpu.clock_task.avg
    214662 ±  9%     -21.3%     168991        sched_debug.cpu.clock_task.max
    192328 ±  9%     -20.3%     153377        sched_debug.cpu.clock_task.min
      2250 ± 22%     -30.0%       1575        sched_debug.cpu.clock_task.stddev
    602998 ±  8%     -10.3%     541131 ±  3%  sched_debug.cpu.max_idle_balance_cost.max
      0.00 ± 23%     -38.2%       0.00 ± 26%  sched_debug.cpu.next_balance.stddev
      4572 ±  2%      +9.7%       5016        sched_debug.cpu.nr_switches.avg
      3459 ±  7%     +18.9%       4113 ±  9%  sched_debug.cpu.nr_switches.stddev
    215142 ±  9%     -20.7%     170664        sched_debug.cpu_clk
    214467 ±  9%     -20.7%     170038        sched_debug.ktime
    215909 ±  9%     -20.6%     171391        sched_debug.sched_clk
     19.99 ±122%   +1288.4%     277.52        perf-stat.i.MPKI
      0.07 ±122%      +0.5        0.55 ±  6%  perf-stat.i.branch-miss-rate%
   1789007 ±122%    +256.9%    6385596 ±  3%  perf-stat.i.branch-misses
  3.76e+08 ±122%    +489.2%  2.215e+09 ± 12%  perf-stat.i.cache-references
    793.00 ±122%    +192.3%       2317        perf-stat.i.context-switches
      2.93 ±122%    +476.2%      16.86 ± 11%  perf-stat.i.cpi
    126.67 ±122%    +875.4%       1235 ± 11%  perf-stat.i.cycles-between-cache-misses
      0.00 ±126%      +0.0        0.01 ± 61%  perf-stat.i.dTLB-load-miss-rate%
      0.00 ±125%      +0.0        0.00 ± 21%  perf-stat.i.dTLB-store-miss-rate%
     15094 ±122%    +359.9%      69419 ± 12%  perf-stat.i.dTLB-store-misses
     14.86 ±122%     +29.2       44.09 ±  8%  perf-stat.i.iTLB-load-miss-rate%
    432930 ±122%    +556.0%    2839854 ±  6%  perf-stat.i.iTLB-load-misses
    732747 ±122%    +400.2%    3665217 ± 17%  perf-stat.i.iTLB-loads
      0.24 ±124%      +0.9        1.09 ± 18%  perf-stat.i.node-store-miss-rate%
    108219 ±122%    +433.1%     576866 ±  9%  perf-stat.i.node-store-misses
  20252133 ±122%    +171.3%   54941059 ± 14%  perf-stat.i.node-stores
     20.03 ±122%   +1278.5%     276.12        perf-stat.overall.MPKI
      0.07 ±122%      +0.5        0.59 ±  7%  perf-stat.overall.branch-miss-rate%
      2.93 ±122%    +473.4%      16.79 ± 11%  perf-stat.overall.cpi
    125.84 ±122%    +876.8%       1229 ± 11%  perf-stat.overall.cycles-between-cache-misses
      0.00 ±122%      +0.0        0.01 ± 61%  perf-stat.overall.dTLB-load-miss-rate%
      0.00 ±122%      +0.0        0.00 ± 22%  perf-stat.overall.dTLB-store-miss-rate%
     14.86 ±122%     +29.2       44.07 ±  8%  perf-stat.overall.iTLB-load-miss-rate%
      0.21 ±123%      +0.9        1.07 ± 19%  perf-stat.overall.node-store-miss-rate%
   1775193 ±122%    +259.5%    6382224 ±  4%  perf-stat.ps.branch-misses
 3.743e+08 ±122%    +488.6%  2.203e+09 ± 12%  perf-stat.ps.cache-references
    787.70 ±122%    +192.3%       2302        perf-stat.ps.context-switches
     14975 ±122%    +361.3%      69085 ± 12%  perf-stat.ps.dTLB-store-misses
    430771 ±122%    +555.7%    2824622 ±  6%  perf-stat.ps.iTLB-load-misses
    729029 ±122%    +400.0%    3644932 ± 17%  perf-stat.ps.iTLB-loads
    976.71 ±122%    +150.6%       2447        perf-stat.ps.minor-faults
    108206 ±122%    +430.4%     573881 ±  9%  perf-stat.ps.node-store-misses
  20157154 ±122%    +171.1%   54649318 ± 14%  perf-stat.ps.node-stores
    976.87 ±122%    +150.6%       2448        perf-stat.ps.page-faults
      0.00            +4.2        4.24 ± 22%  perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.clear_user_erms.iov_iter_zero.dax_iomap_rw.ext4_dax_write_iter
      0.00            +4.6        4.63 ± 23%  perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.clear_user_erms.iov_iter_zero.dax_iomap_rw.ext4_file_read_iter
      2.83 ±122%     +15.5       18.36 ± 11%  perf-profile.calltrace.cycles-pp.get_io_u
     11.19 ±123%     +20.4       31.59 ± 20%  perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
     11.20 ±122%     +20.5       31.70 ± 20%  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
     11.19 ±122%     +20.5       31.70 ± 20%  perf-profile.calltrace.cycles-pp.mwait_idle_with_hints.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
     11.30 ±123%     +20.8       32.14 ± 20%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
      0.00           +21.2       21.23 ±  8%  perf-profile.calltrace.cycles-pp.clear_user_erms.iov_iter_zero.dax_iomap_rw.ext4_dax_write_iter.vfs_write
      0.00           +22.0       21.95 ± 13%  perf-profile.calltrace.cycles-pp.clear_user_erms.iov_iter_zero.dax_iomap_rw.ext4_file_read_iter.vfs_read
      0.00           +23.3       23.26 ±  7%  perf-profile.calltrace.cycles-pp.iov_iter_zero.dax_iomap_rw.ext4_dax_write_iter.vfs_write.ksys_write
      0.00           +24.2       24.16 ± 11%  perf-profile.calltrace.cycles-pp.iov_iter_zero.dax_iomap_rw.ext4_file_read_iter.vfs_read.ksys_read
      0.02 ±125%      +0.1        0.08 ± 19%  perf-profile.children.cycles-pp.thread_main
      0.00            +0.1        0.06 ± 11%  perf-profile.children.cycles-pp.jbd2_transaction_committed
      0.00            +0.1        0.06 ± 11%  perf-profile.children.cycles-pp.ext4_set_iomap
      0.03 ±123%      +0.1        0.09 ± 18%  perf-profile.children.cycles-pp.ext4_fill_raw_inode
      0.00            +0.1        0.07 ± 18%  perf-profile.children.cycles-pp.start_this_handle
      0.03 ±124%      +0.1        0.10 ± 16%  perf-profile.children.cycles-pp.jbd2__journal_start
      0.04 ±122%      +0.1        0.12 ± 10%  perf-profile.children.cycles-pp.ext4_map_blocks
      0.02 ±122%      +0.1        0.10 ± 39%  perf-profile.children.cycles-pp.__libc_lseek64
      0.03 ±122%      +0.1        0.12 ± 18%  perf-profile.children.cycles-pp.ext4_do_update_inode
      0.03 ±122%      +0.1        0.12 ± 15%  perf-profile.children.cycles-pp.ext4_mark_iloc_dirty
      0.00            +0.1        0.11 ± 11%  perf-profile.children.cycles-pp.ext4_es_lookup_extent
      0.08 ±122%      +0.1        0.20 ± 13%  perf-profile.children.cycles-pp.generic_update_time
      0.05 ±125%      +0.1        0.17 ± 13%  perf-profile.children.cycles-pp.task_tick_fair
      0.00            +0.1        0.13 ± 11%  perf-profile.children.cycles-pp._raw_read_lock
      0.07 ±122%      +0.1        0.20 ± 18%  perf-profile.children.cycles-pp.__ext4_mark_inode_dirty
      0.09 ±122%      +0.1        0.23 ± 15%  perf-profile.children.cycles-pp.file_modified_flags
      0.06 ±122%      +0.2        0.22 ± 21%  perf-profile.children.cycles-pp.touch_atime
      0.05 ±122%      +0.2        0.20 ± 11%  perf-profile.children.cycles-pp.ext4_iomap_begin
      0.07 ±124%      +0.2        0.24 ± 15%  perf-profile.children.cycles-pp.scheduler_tick
      0.06 ±122%      +0.2        0.23 ± 12%  perf-profile.children.cycles-pp.iomap_iter
      0.11 ±123%      +0.2        0.33 ± 17%  perf-profile.children.cycles-pp.ext4_dirty_inode
      0.09 ±124%      +0.3        0.35 ± 17%  perf-profile.children.cycles-pp.update_process_times
      0.13 ±123%      +0.3        0.39 ± 16%  perf-profile.children.cycles-pp.__mark_inode_dirty
      0.09 ±124%      +0.3        0.36 ± 16%  perf-profile.children.cycles-pp.tick_sched_handle
      0.11 ±125%      +0.3        0.38 ± 14%  perf-profile.children.cycles-pp.tick_sched_timer
      0.13 ±124%      +0.3        0.47 ± 13%  perf-profile.children.cycles-pp.__hrtimer_run_queues
      0.18 ±124%      +0.4        0.57 ± 14%  perf-profile.children.cycles-pp.hrtimer_interrupt
      0.18 ±124%      +0.4        0.58 ± 14%  perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
      0.20 ±123%      +0.5        0.66 ± 14%  perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
      0.18 ±128%      +0.5        0.70 ± 33%  perf-profile.children.cycles-pp.start_kernel
      0.18 ±128%      +0.5        0.70 ± 33%  perf-profile.children.cycles-pp.arch_call_rest_init
      0.18 ±128%      +0.5        0.70 ± 33%  perf-profile.children.cycles-pp.rest_init
      0.29 ±123%      +5.0        5.33 ± 19%  perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
      2.84 ±122%     +15.5       18.39 ± 11%  perf-profile.children.cycles-pp.get_io_u
     11.26 ±123%     +20.6       31.84 ± 20%  perf-profile.children.cycles-pp.intel_idle
     11.37 ±123%     +20.9       32.29 ± 20%  perf-profile.children.cycles-pp.cpuidle_enter
     11.37 ±123%     +20.9       32.29 ± 20%  perf-profile.children.cycles-pp.cpuidle_enter_state
     11.27 ±123%     +21.0       32.27 ± 20%  perf-profile.children.cycles-pp.mwait_idle_with_hints
      0.00           +47.4       47.40 ±  9%  perf-profile.children.cycles-pp.clear_user_erms
      0.00           +47.4       47.43 ±  9%  perf-profile.children.cycles-pp.iov_iter_zero
      0.02 ±123%      +0.1        0.08 ± 18%  perf-profile.self.cycles-pp.thread_main
      0.00            +0.1        0.07 ± 14%  perf-profile.self.cycles-pp.ext4_es_lookup_extent
      0.00            +0.1        0.13 ± 11%  perf-profile.self.cycles-pp._raw_read_lock
      2.81 ±122%     +15.4       18.18 ± 11%  perf-profile.self.cycles-pp.get_io_u
     11.27 ±123%     +21.0       32.27 ± 20%  perf-profile.self.cycles-pp.mwait_idle_with_hints
      0.00           +46.9       46.91 ±  9%  perf-profile.self.cycles-pp.clear_user_erms



***************************************************************************************************
lkp-icl-2sp6: 128 threads 2 sockets Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase:
  gcc-11/performance/1HDD/ext4/x86_64-rhel-8.3/100/debian-11.1-x86_64-20220510.cgz/300s/lkp-icl-2sp6/disk/reaim

commit: 
  e6eadc0324 ("iov_iter: Use the direction in the iterator functions")
  64ea9d6c5f ("iov_iter: Use IOCB/IOMAP_WRITE if available rather than iterator direction")

e6eadc0324e475e3 64ea9d6c5f473c29c5de97abaa6 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      4946           +89.7%       9382        reaim.jobs_per_min
     49.46           +89.7%      93.83        reaim.jobs_per_min_child
      5025           +89.9%       9541        reaim.max_jobs_per_min
    121.35           -47.3%      63.97        reaim.parent_time
      0.97 ±  7%     -43.2%       0.55 ±  3%  reaim.std_dev_time
    370.80           -10.7%     331.12        reaim.time.elapsed_time
    370.80           -10.7%     331.12        reaim.time.elapsed_time.max
    120024          +150.0%     300020        reaim.time.file_system_inputs
   1338521           +59.1%    2129881        reaim.time.file_system_outputs
     20309 ±  4%     +37.4%      27914 ±  5%  reaim.time.involuntary_context_switches
      3514 ±  5%     +60.7%       5646 ±  5%  reaim.time.major_page_faults
   4134743           +66.6%    6889989        reaim.time.minor_page_faults
     22.50 ±  3%     +86.7%      42.00 ±  2%  reaim.time.percent_of_cpu_this_job_got
     63.71 ±  2%     +66.5%     106.10        reaim.time.system_time
     21.05           +66.4%      35.02        reaim.time.user_time
    676434           +65.9%    1122483        reaim.time.voluntary_context_switches
     30000           +66.7%      50000        reaim.workload
     47783            -9.3%      43362        uptime.idle
  4.77e+10           -10.9%  4.248e+10        cpuidle..time
   4928985           +48.8%    7335403        cpuidle..usage
     15054           +42.6%      21463        meminfo.Active
     10952           +60.1%      17533        meminfo.Active(file)
     11536           +57.1%      18123        meminfo.Buffers
      0.05            +0.0        0.08        mpstat.cpu.all.irq%
      0.03            +0.0        0.05        mpstat.cpu.all.soft%
      0.17 ±  2%      +0.1        0.31 ±  2%  mpstat.cpu.all.sys%
      0.03            +0.0        0.05        mpstat.cpu.all.usr%
   1590929 ±  2%     +58.9%    2527853 ±  2%  numa-numastat.node0.local_node
   1590987 ±  2%     +58.9%    2527899 ±  2%  numa-numastat.node0.numa_hit
   1804019           +46.9%    2649463 ±  2%  numa-numastat.node1.local_node
   1804160           +46.9%    2649614 ±  2%  numa-numastat.node1.numa_hit
    160.17          +179.6%     447.83        vmstat.io.bi
      3103 ± 36%     +76.0%       5462 ± 22%  vmstat.io.bo
     11573           +57.1%      18187        vmstat.memory.buff
      6442           +57.9%      10170        vmstat.system.cs
     11226           +66.2%      18653        vmstat.system.in
    165299 ± 23%     -36.4%     105103 ± 16%  numa-meminfo.node0.AnonPages
    173075 ± 22%     -33.0%     115970 ± 18%  numa-meminfo.node0.AnonPages.max
    170839 ± 22%     -35.3%     110555 ± 15%  numa-meminfo.node0.Inactive
    169315 ± 22%     -35.4%     109428 ± 15%  numa-meminfo.node0.Inactive(anon)
     32962 ± 69%    +161.0%      86037 ± 22%  numa-meminfo.node1.AnonHugePages
     82027 ± 46%     +74.4%     143051 ± 12%  numa-meminfo.node1.AnonPages
     89346 ± 42%     +69.9%     151775 ±  9%  numa-meminfo.node1.AnonPages.max
     84538 ± 44%     +71.9%     145338 ± 12%  numa-meminfo.node1.Inactive
     83881 ± 44%     +71.9%     144230 ± 12%  numa-meminfo.node1.Inactive(anon)
      2950 ± 21%     -43.6%       1664 ± 37%  sched_debug.cfs_rq:/.load.avg
     21451 ± 19%     -45.4%      11702 ± 43%  sched_debug.cfs_rq:/.load.stddev
     35609 ± 10%     +56.3%      55645 ± 18%  sched_debug.cfs_rq:/.min_vruntime.avg
     74742 ±  8%     +44.8%     108213 ± 16%  sched_debug.cfs_rq:/.min_vruntime.max
     21412 ± 11%     +60.8%      34431 ± 12%  sched_debug.cfs_rq:/.min_vruntime.min
     83.47 ± 11%     +35.5%     113.07 ± 11%  sched_debug.cfs_rq:/.runnable_avg.avg
     83.37 ± 11%     +35.5%     112.95 ± 11%  sched_debug.cfs_rq:/.util_avg.avg
    371.30 ± 11%     +82.0%     675.86 ± 29%  sched_debug.cpu.curr->pid.avg
     35562 ±  7%     +44.9%      51542 ±  9%  sched_debug.cpu.curr->pid.max
      3407 ±  8%     +57.5%       5365 ± 15%  sched_debug.cpu.curr->pid.stddev
      0.00 ± 22%     -35.8%       0.00 ± 24%  sched_debug.cpu.next_balance.stddev
     10308 ±  6%     +30.1%      13413 ±  8%  sched_debug.cpu.nr_switches.avg
      5078 ± 16%     +49.2%       7576 ± 15%  sched_debug.cpu.nr_switches.min
      6.83 ±  5%     +65.9%      11.33 ±  4%  turbostat.Avg_MHz
      0.43            +0.3        0.73        turbostat.Busy%
     68252 ±  8%     +10.5%      75386 ±  5%  turbostat.C1
   1148569           +48.6%    1706615        turbostat.C1E
      2.05            +1.1        3.14        turbostat.C1E%
   3552007           +52.2%    5405428        turbostat.C6
      4.54           +54.5%       7.02        turbostat.CPU%c1
     38.50            +8.7%      41.83 ±  3%  turbostat.CoreTmp
   4155915           +49.2%    6202633        turbostat.IRQ
     21.47 ±  2%     -10.6%      19.20 ±  2%  turbostat.Pkg%pc2
     38.80           -27.9%      27.97 ±  2%  turbostat.Pkg%pc6
     38.83 ±  2%      +6.9%      41.50 ±  2%  turbostat.PkgTmp
    129.14           +13.3%     146.29        turbostat.PkgWatt
     65.52            -1.5%      64.53        turbostat.RAMWatt
     41326 ± 23%     -36.4%      26278 ± 16%  numa-vmstat.node0.nr_anon_pages
    115634 ± 12%     +57.0%     181517 ± 13%  numa-vmstat.node0.nr_dirtied
     42326 ± 22%     -35.4%      27357 ± 15%  numa-vmstat.node0.nr_inactive_anon
    103.00 ± 10%     -61.5%      39.67 ± 39%  numa-vmstat.node0.nr_mlock
     81722 ± 13%     +57.3%     128519 ± 13%  numa-vmstat.node0.nr_written
     42326 ± 22%     -35.4%      27357 ± 15%  numa-vmstat.node0.nr_zone_inactive_anon
   1591186 ±  2%     +58.9%    2527861 ±  2%  numa-vmstat.node0.numa_hit
   1591128 ±  2%     +58.9%    2527816 ±  2%  numa-vmstat.node0.numa_local
     20505 ± 46%     +74.4%      35764 ± 12%  numa-vmstat.node1.nr_anon_pages
     53024 ± 27%     +87.7%      99527 ± 24%  numa-vmstat.node1.nr_dirtied
     20966 ± 44%     +72.0%      36055 ± 12%  numa-vmstat.node1.nr_inactive_anon
     36742 ± 30%     +87.0%      68703 ± 25%  numa-vmstat.node1.nr_written
     20966 ± 44%     +72.0%      36055 ± 12%  numa-vmstat.node1.nr_zone_inactive_anon
   1804367           +46.9%    2649736 ±  2%  numa-vmstat.node1.numa_hit
   1804226           +46.9%    2649586 ±  2%  numa-vmstat.node1.numa_local
      2738           +60.1%       4384        proc-vmstat.nr_active_file
    168659           +66.6%     281044        proc-vmstat.nr_dirtied
    108.17 ±  5%     -55.2%      48.50 ± 10%  proc-vmstat.nr_mlock
      2518            -4.1%       2416        proc-vmstat.nr_shmem
     31530            +1.3%      31939        proc-vmstat.nr_slab_reclaimable
    118464           +66.5%     197222        proc-vmstat.nr_written
      2738           +60.1%       4384        proc-vmstat.nr_zone_active_file
   3398069           +52.4%    5179028        proc-vmstat.numa_hit
   3396603           +52.5%    5178832        proc-vmstat.numa_local
     19001           +63.8%      31122        proc-vmstat.pgactivate
   3531992           +51.9%    5363392        proc-vmstat.pgalloc_normal
   5059472           +52.9%    7736238        proc-vmstat.pgfault
   3507816           +52.1%    5334638        proc-vmstat.pgfree
     60412          +149.0%     150412        proc-vmstat.pgpgin
   1167224 ± 37%     +57.1%    1833427 ± 23%  proc-vmstat.pgpgout
     48082            -7.2%      44612        proc-vmstat.pgreuse
   2824960            -9.5%    2556672        proc-vmstat.unevictable_pgs_scanned



***************************************************************************************************
lkp-csl-2sp7: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/mount_option/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based:
  2M/gcc-11/performance/2pmem/ext4/libaio/x86_64-rhel-8.3/dax/50%/debian-11.1-x86_64-20220510.cgz/200s/read/lkp-csl-2sp7/200G/fio-basic/tb

commit: 
  e6eadc0324 ("iov_iter: Use the direction in the iterator functions")
  64ea9d6c5f ("iov_iter: Use IOCB/IOMAP_WRITE if available rather than iterator direction")

e6eadc0324e475e3 64ea9d6c5f473c29c5de97abaa6 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
     27.54 ±  9%     +69.2       96.71        fio.latency_100ms%
      0.01            -0.0        0.00        fio.latency_100us%
      0.08            +0.1        0.17        fio.latency_10ms%
      0.01            +0.0        0.02        fio.latency_10us%
      0.13 ±  5%      +0.2        0.29        fio.latency_20ms%
      0.01            +0.0        0.03        fio.latency_20us%
     71.47 ±  3%     -71.4        0.10        fio.latency_250ms%
      0.00 ±141%      +0.0        0.05        fio.latency_2ms%
      0.03            +0.0        0.05        fio.latency_4ms%
      0.25 ± 45%      -0.2        0.04        fio.latency_500ms%
      0.43 ±  2%      +2.1        2.55        fio.latency_50ms%
      0.04 ±  2%      -0.0        0.01        fio.latency_50us%
      0.02 ± 38%      -0.0        0.01        fio.latency_750ms%
     25280          +127.0%      57386        fio.read_bw_MBps
 1.402e+08           -62.8%   52166656        fio.read_clat_90%_us
 1.409e+08           -63.0%   52166656        fio.read_clat_95%_us
 1.744e+08 ±  2%     -68.0%   55836672        fio.read_clat_99%_us
 1.168e+08           -55.9%   51450892        fio.read_clat_mean_us
  25859755 ±  3%     -66.6%    8643794        fio.read_clat_stddev
     12640          +127.0%      28693        fio.read_iops
   3768596           -55.8%    1667016        fio.read_slat_mean_us
    822091 ±  3%     -64.9%     288388        fio.read_slat_stddev
     14587           -10.2%      13097        fio.time.involuntary_context_switches
     76.34 ± 12%     -56.3%      33.35        fio.time.user_time
     20547            +3.1%      21174        fio.time.voluntary_context_switches
   2528054          +127.0%    5738728        fio.workload
 9.493e+09           +40.3%  1.332e+10        cpuidle..time
  19669558           +41.3%   27786024        cpuidle..usage
     49.49           +15.9%      57.36        iostat.cpu.idle
     50.01           -15.3%      42.37        iostat.cpu.system
     49.01            +8.0       57.04        mpstat.cpu.all.idle%
      2.43            +0.4        2.83        mpstat.cpu.all.irq%
     47.86            -8.2       39.69        mpstat.cpu.all.sys%
      0.50 ± 10%      -0.2        0.27        mpstat.cpu.all.usr%
     49.00           +16.3%      57.00        vmstat.cpu.id
    103.33 ± 17%     -93.2%       7.00        vmstat.io.bo
     47.00           -17.0%      39.00        vmstat.procs.r
    195642            -1.3%     193102        vmstat.system.in
    207006 ± 12%     +36.0%     281618        numa-numastat.node0.local_node
    221613 ± 12%     +41.1%     312658        numa-numastat.node0.numa_hit
     14607 ± 86%    +112.5%      31040        numa-numastat.node0.other_node
    311816 ± 10%     -17.1%     258424        numa-numastat.node1.numa_hit
     16596 ± 76%     -99.1%     150.00        numa-numastat.node1.other_node
     60852 ±  6%     -21.6%      47683        meminfo.Active
     39736 ±  9%     -33.2%      26556        meminfo.Active(anon)
   3113621           -17.3%    2576069        meminfo.AnonHugePages
   3386543           -16.2%    2836667        meminfo.AnonPages
   5471585           -16.5%    4567705        meminfo.Committed_AS
  10203136 ±  8%     -32.7%    6868992        meminfo.DirectMap2M
   3406759           -16.3%    2852721        meminfo.Inactive
   3403984           -16.3%    2850189        meminfo.Inactive(anon)
     14220           -13.8%      12254        meminfo.PageTables
     56806 ±  7%     -28.8%      40419        meminfo.Shmem
      1446           -15.7%       1219        turbostat.Avg_MHz
     51.78            -7.3       44.44        turbostat.Busy%
      2799            -1.8%       2749        turbostat.Bzy_MHz
     41071 ± 10%     -20.7%      32578        turbostat.C1
  17513765 ±  8%     -19.0%   14194816        turbostat.C1E
     39.55 ± 15%     -27.3       12.28        turbostat.C1E%
   2087050 ± 70%    +547.6%   13516594        turbostat.C6
      9.07 ± 69%     +35.1       44.17        turbostat.C6%
      0.20 ±  4%   +1898.3%       3.93        turbostat.CPU%c6
     61.33           -23.4%      47.00        turbostat.CoreTmp
      0.04          -100.0%       0.00        turbostat.IPC
  39925293           +19.3%   47644339        turbostat.IRQ
      7201 ± 11%    +239.4%      24439        turbostat.POLL
     61.33           -21.7%      48.00        turbostat.PkgTmp
    252.93            -3.3%     244.69        turbostat.PkgWatt
     58.61           -12.0%      51.56        turbostat.RAMWatt
      9967 ±  9%     -33.4%       6639        proc-vmstat.nr_active_anon
    845290           -16.1%     709177        proc-vmstat.nr_anon_pages
      1517           -17.2%       1257        proc-vmstat.nr_anon_transparent_hugepages
    849647           -16.1%     712557        proc-vmstat.nr_inactive_anon
    691.67            -8.6%     632.00        proc-vmstat.nr_inactive_file
     11249            -8.7%      10266        proc-vmstat.nr_mapped
      3542           -13.5%       3063        proc-vmstat.nr_page_table_pages
     14239 ±  7%     -29.0%      10105        proc-vmstat.nr_shmem
     49481            -1.3%      48815        proc-vmstat.nr_slab_unreclaimable
      9967 ±  9%     -33.4%       6639        proc-vmstat.nr_zone_active_anon
    849647           -16.1%     712557        proc-vmstat.nr_zone_inactive_anon
    691.67            -8.6%     632.00        proc-vmstat.nr_zone_inactive_file
     27135 ±  5%     -21.3%      21342        proc-vmstat.numa_hint_faults_local
    535623            +7.1%     573676        proc-vmstat.numa_hit
     23491           -18.3%      19198        proc-vmstat.numa_huge_pte_updates
    504420            +7.5%     542486        proc-vmstat.numa_local
  12075054           -18.3%    9860674        proc-vmstat.numa_pte_updates
     16357 ±  6%     -39.0%       9979        proc-vmstat.pgactivate
   1743457            +2.5%    1786342        proc-vmstat.pgalloc_normal
    698136            +9.1%     761457        proc-vmstat.pgfault
   1705291            +2.8%    1753621        proc-vmstat.pgfree
     39558 ±  3%     +15.0%      45480        proc-vmstat.pgreuse
   1560064           +19.5%    1864704        proc-vmstat.unevictable_pgs_scanned
      8738 ±100%    +164.0%      23070        numa-meminfo.node0.Active
      1409 ± 14%     +42.4%       2007        numa-meminfo.node0.Active(anon)
      7328 ±118%    +187.4%      21063        numa-meminfo.node0.Active(file)
   1581062 ±  2%     -19.3%    1275629        numa-meminfo.node0.AnonHugePages
   1725552 ±  2%     -17.5%    1423635        numa-meminfo.node0.AnonPages
   2287942 ± 27%     +42.8%    3267712        numa-meminfo.node0.AnonPages.max
   1733797           -17.1%    1437793        numa-meminfo.node0.Inactive
   1732845           -17.2%    1435355        numa-meminfo.node0.Inactive(anon)
     79100           -16.5%      66071        numa-meminfo.node0.KReclaimable
     79100           -16.5%      66071        numa-meminfo.node0.SReclaimable
    111641 ±  3%     -12.7%      97495        numa-meminfo.node0.SUnreclaim
    190741           -14.2%     163567        numa-meminfo.node0.Slab
     52238 ± 21%     -52.9%      24620        numa-meminfo.node1.Active
     38449 ± 10%     -36.1%      24556        numa-meminfo.node1.Active(anon)
     13787 ± 63%     -99.5%      64.00        numa-meminfo.node1.Active(file)
   1529392           -15.3%    1295492        numa-meminfo.node1.AnonHugePages
   1657609 ±  2%     -15.1%    1408075        numa-meminfo.node1.AnonPages
   2677658 ± 22%     -33.8%    1772936        numa-meminfo.node1.AnonPages.max
   1669831           -15.6%    1409979        numa-meminfo.node1.Inactive
   1668008           -15.5%    1409881        numa-meminfo.node1.Inactive(anon)
      1823 ± 66%     -94.7%      97.00        numa-meminfo.node1.Inactive(file)
     31911 ±  4%     +40.1%      44709        numa-meminfo.node1.KReclaimable
      8625 ±  4%      -8.7%       7872        numa-meminfo.node1.KernelStack
      9770 ± 53%     -85.5%       1416        numa-meminfo.node1.Mapped
      8619 ± 50%     -78.6%       1848        numa-meminfo.node1.PageTables
     31911 ±  4%     +40.1%      44709        numa-meminfo.node1.SReclaimable
     86312 ±  3%     +13.3%      97764        numa-meminfo.node1.SUnreclaim
     48216 ± 14%     -45.0%      26528        numa-meminfo.node1.Shmem
    118223           +20.5%     142473        numa-meminfo.node1.Slab
    106246 ± 69%     +83.5%     194925        numa-meminfo.node1.Unevictable
    352.33 ± 14%     +42.5%     502.00        numa-vmstat.node0.nr_active_anon
      1831 ±118%    +187.5%       5265        numa-vmstat.node0.nr_active_file
    431431           -17.5%     355907        numa-vmstat.node0.nr_anon_pages
    772.00 ±  2%     -19.4%     622.00        numa-vmstat.node0.nr_anon_transparent_hugepages
    433200           -17.2%     358837        numa-vmstat.node0.nr_inactive_anon
     19774           -16.5%      16517        numa-vmstat.node0.nr_slab_reclaimable
     27907 ±  3%     -12.7%      24373        numa-vmstat.node0.nr_slab_unreclaimable
     68.33 ± 86%    +324.4%     290.00        numa-vmstat.node0.nr_written
    352.00 ± 14%     +42.6%     502.00        numa-vmstat.node0.nr_zone_active_anon
      1831 ±118%    +187.5%       5265        numa-vmstat.node0.nr_zone_active_file
    433200           -17.2%     358837        numa-vmstat.node0.nr_zone_inactive_anon
    221313 ± 12%     +41.4%     312932        numa-vmstat.node0.numa_hit
    206706 ± 12%     +36.4%     281892        numa-vmstat.node0.numa_local
     14607 ± 86%    +112.5%      31040        numa-vmstat.node0.numa_other
      9602 ± 10%     -36.1%       6140        numa-vmstat.node1.nr_active_anon
      3446 ± 63%     -99.5%      16.00        numa-vmstat.node1.nr_active_file
    415090 ±  2%     -15.2%     352032        numa-vmstat.node1.nr_anon_pages
    747.67           -15.5%     632.00        numa-vmstat.node1.nr_anon_transparent_hugepages
    417671 ±  2%     -15.6%     352483        numa-vmstat.node1.nr_inactive_anon
    455.00 ± 66%     -94.7%      24.00        numa-vmstat.node1.nr_inactive_file
      8624 ±  4%      -8.6%       7879        numa-vmstat.node1.nr_kernel_stack
      2476 ± 52%     -85.6%     356.00        numa-vmstat.node1.nr_mapped
      2150 ± 49%     -78.5%     463.00        numa-vmstat.node1.nr_page_table_pages
     12065 ± 14%     -45.0%       6634        numa-vmstat.node1.nr_shmem
      7974 ±  4%     +40.2%      11177        numa-vmstat.node1.nr_slab_reclaimable
     21576 ±  3%     +13.3%      24441        numa-vmstat.node1.nr_slab_unreclaimable
     26561 ± 69%     +83.5%      48731        numa-vmstat.node1.nr_unevictable
    232.67 ± 81%     -96.6%       8.00        numa-vmstat.node1.nr_written
      9602 ± 10%     -36.1%       6140        numa-vmstat.node1.nr_zone_active_anon
      3446 ± 63%     -99.5%      16.00        numa-vmstat.node1.nr_zone_active_file
    417671 ±  2%     -15.6%     352483        numa-vmstat.node1.nr_zone_inactive_anon
    455.00 ± 66%     -94.7%      24.00        numa-vmstat.node1.nr_zone_inactive_file
     26561 ± 69%     +83.5%      48731        numa-vmstat.node1.nr_zone_unevictable
    311712 ± 10%     -17.0%     258614        numa-vmstat.node1.numa_hit
     16596 ± 76%     -99.1%     150.00        numa-vmstat.node1.numa_other
      0.49 ±  9%     -18.2%       0.40        sched_debug.cfs_rq:/.h_nr_running.avg
    465919 ± 10%     -18.9%     378087        sched_debug.cfs_rq:/.load.avg
    365287 ±  3%      -8.6%     333715        sched_debug.cfs_rq:/.load.stddev
    481.31 ±  7%     -20.5%     382.74        sched_debug.cfs_rq:/.load_avg.avg
      1050           -10.5%     940.20        sched_debug.cfs_rq:/.load_avg.max
    428.86           -12.5%     375.44        sched_debug.cfs_rq:/.load_avg.stddev
     76204 ± 13%     +31.9%     100504        sched_debug.cfs_rq:/.min_vruntime.avg
     39740 ±  9%     +18.1%      46947        sched_debug.cfs_rq:/.min_vruntime.min
     17319 ± 21%     +69.5%      29363        sched_debug.cfs_rq:/.min_vruntime.stddev
      0.49 ±  9%     -17.7%       0.40        sched_debug.cfs_rq:/.nr_running.avg
    284.44 ± 14%     -28.0%     204.80        sched_debug.cfs_rq:/.removed.load_avg.max
    144.64 ± 13%     -28.4%     103.60        sched_debug.cfs_rq:/.removed.runnable_avg.max
    144.64 ± 13%     -28.4%     103.60        sched_debug.cfs_rq:/.removed.util_avg.max
    546.97 ±  6%     -21.6%     428.59        sched_debug.cfs_rq:/.runnable_avg.avg
      1155 ±  6%     -18.5%     942.20        sched_debug.cfs_rq:/.runnable_avg.max
    447.05           -13.9%     385.08        sched_debug.cfs_rq:/.runnable_avg.stddev
      5587 ±372%    +580.9%      38039        sched_debug.cfs_rq:/.spread0.avg
    -30962           -49.9%     -15523        sched_debug.cfs_rq:/.spread0.min
     17327 ± 21%     +69.5%      29364        sched_debug.cfs_rq:/.spread0.stddev
    545.86 ±  6%     -21.5%     428.25        sched_debug.cfs_rq:/.util_avg.avg
      1119 ±  4%     -15.8%     941.80        sched_debug.cfs_rq:/.util_avg.max
    445.99           -13.7%     384.75        sched_debug.cfs_rq:/.util_avg.stddev
    191.65 ±  7%     -24.1%     145.38        sched_debug.cfs_rq:/.util_est_enqueued.avg
    763.44 ± 11%     -46.3%     409.80        sched_debug.cfs_rq:/.util_est_enqueued.max
    184.14           -24.0%     140.00        sched_debug.cfs_rq:/.util_est_enqueued.stddev
    936335           +11.3%    1042304        sched_debug.cpu.avg_idle.avg
   1113428           +98.0%    2204219        sched_debug.cpu.avg_idle.max
    484973           -32.9%     325215        sched_debug.cpu.avg_idle.min
    105940          +168.2%     284137        sched_debug.cpu.avg_idle.stddev
     25.77 ± 10%    +218.7%      82.14        sched_debug.cpu.clock.stddev
      2206 ±  4%     -16.4%       1845        sched_debug.cpu.curr->pid.avg
      7105 ±  5%     +15.1%       8178        sched_debug.cpu.curr->pid.max
      2394 ±  3%      -9.5%       2167        sched_debug.cpu.curr->pid.stddev
    553458 ±  2%     +19.5%     661645        sched_debug.cpu.max_idle_balance_cost.max
      8200 ± 43%    +392.0%      40345        sched_debug.cpu.max_idle_balance_cost.stddev
      0.00 ±  8%    +168.0%       0.00        sched_debug.cpu.next_balance.stddev
      0.41 ±  4%     -17.7%       0.33        sched_debug.cpu.nr_running.avg
      0.45 ±  2%     -16.3%       0.38        sched_debug.cpu.nr_running.stddev
      4341 ±  6%     +17.3%       5092        sched_debug.cpu.nr_switches.avg
      1292           -10.8%       1152        sched_debug.cpu.nr_switches.min
      2633 ± 11%     +32.1%       3479        sched_debug.cpu.nr_switches.stddev
     42.19            +5.5%      44.52        perf-stat.i.MPKI
 3.476e+09           -94.8%  1.817e+08        perf-stat.i.branch-instructions
      0.13 ±  2%      +3.3        3.40        perf-stat.i.branch-miss-rate%
   4260734           +45.4%    6193178        perf-stat.i.branch-misses
     94.70           -55.7       38.96        perf-stat.i.cache-miss-rate%
 8.289e+08           -98.6%   11388051        perf-stat.i.cache-misses
 8.747e+08           -95.7%   37441183        perf-stat.i.cache-references
      6.62         +1894.1%     132.02        perf-stat.i.cpi
 1.366e+11           -16.9%  1.136e+11        perf-stat.i.cpu-cycles
    124.03            -8.7%     113.22        perf-stat.i.cpu-migrations
    167.03         +5270.0%       8969        perf-stat.i.cycles-between-cache-misses
      0.00 ±  6%      +0.3        0.26        perf-stat.i.dTLB-load-miss-rate%
    145663 ±  5%    +300.9%     584004        perf-stat.i.dTLB-load-misses
  3.55e+09           -92.9%  2.537e+08        perf-stat.i.dTLB-loads
      0.00 ±  2%      +0.0        0.05        perf-stat.i.dTLB-store-miss-rate%
     40129 ±  2%     +61.9%      64958        perf-stat.i.dTLB-store-misses
 3.429e+09           -96.0%  1.355e+08        perf-stat.i.dTLB-stores
     43.30           +17.4       60.74        perf-stat.i.iTLB-load-miss-rate%
   1204294 ±  4%    +175.2%    3314508        perf-stat.i.iTLB-load-misses
   1575387           +24.0%    1953164        perf-stat.i.iTLB-loads
  2.07e+10           -95.5%  9.308e+08        perf-stat.i.instructions
     17218 ±  4%     -97.8%     375.10        perf-stat.i.instructions-per-iTLB-miss
      0.15           -81.2%       0.03        perf-stat.i.ipc
      1.42           -16.9%       1.18        perf-stat.i.metric.GHz
    938.97 ±  2%     -46.1%     505.78        perf-stat.i.metric.K/sec
    119.86           -95.1%       5.91        perf-stat.i.metric.M/sec
      2609 ±  2%      -8.4%       2390        perf-stat.i.minor-faults
     50.80            +5.2       56.00        perf-stat.i.node-load-miss-rate%
  44268862           -99.3%     289365        perf-stat.i.node-load-misses
  42927271 ±  3%     -99.3%     307807        perf-stat.i.node-loads
      0.60 ±  9%     +10.4       11.00        perf-stat.i.node-store-miss-rate%
    791120 ± 13%     -91.1%      70479        perf-stat.i.node-store-misses
  1.79e+08           -95.2%    8530049        perf-stat.i.node-stores
      2610            -8.4%       2390        perf-stat.i.page-faults
     42.24            -7.3%      39.16        perf-stat.overall.MPKI
      0.12            +3.2        3.32        perf-stat.overall.branch-miss-rate%
     94.76           -63.5       31.27        perf-stat.overall.cache-miss-rate%
      6.60         +1750.9%     122.19        perf-stat.overall.cpi
    164.91         +5950.5%       9977        perf-stat.overall.cycles-between-cache-misses
      0.00 ±  5%      +0.2        0.22        perf-stat.overall.dTLB-load-miss-rate%
      0.00 ±  2%      +0.0        0.05        perf-stat.overall.dTLB-store-miss-rate%
     43.32           +19.7       63.02        perf-stat.overall.iTLB-load-miss-rate%
     17223 ±  4%     -98.4%     281.41        perf-stat.overall.instructions-per-iTLB-miss
      0.15           -94.6%       0.01        perf-stat.overall.ipc
     50.78            -1.1       49.66        perf-stat.overall.node-load-miss-rate%
      0.45 ± 13%      +0.4        0.82        perf-stat.overall.node-store-miss-rate%
   1646075           -97.6%      39879        perf-stat.overall.path-length
 3.458e+09           -94.7%  1.832e+08        perf-stat.ps.branch-instructions
   4239989           +43.4%    6081614        perf-stat.ps.branch-misses
 8.245e+08           -98.6%   11479341        perf-stat.ps.cache-misses
   8.7e+08           -95.8%   36710005        perf-stat.ps.cache-references
  1.36e+11           -15.8%  1.145e+11        perf-stat.ps.cpu-cycles
    122.93            -8.3%     112.76        perf-stat.ps.cpu-migrations
    144896 ±  5%    +282.8%     554706        perf-stat.ps.dTLB-load-misses
 3.532e+09           -92.8%  2.552e+08        perf-stat.ps.dTLB-loads
     39845 ±  2%     +57.7%      62850        perf-stat.ps.dTLB-store-misses
  3.41e+09           -96.0%  1.359e+08        perf-stat.ps.dTLB-stores
   1197745 ±  4%    +178.1%    3330985        perf-stat.ps.iTLB-load-misses
   1566364           +24.8%    1954999        perf-stat.ps.iTLB-loads
 2.059e+10           -95.4%  9.374e+08        perf-stat.ps.instructions
      2594 ±  2%      -7.1%       2408        perf-stat.ps.minor-faults
  44022802           -99.3%     308451        perf-stat.ps.node-load-misses
  42696925 ±  3%     -99.3%     312663        perf-stat.ps.node-loads
    807658 ± 14%     -91.1%      71548        perf-stat.ps.node-store-misses
  1.78e+08           -95.2%    8602052        perf-stat.ps.node-stores
      2594 ±  2%      -7.1%       2408        perf-stat.ps.page-faults
 4.161e+12           -94.5%  2.289e+11        perf-stat.total.instructions
     66.74 ±  9%     -66.7        0.00        perf-profile.calltrace.cycles-pp.dax_iomap_iter.dax_iomap_rw.ext4_file_read_iter.aio_read.io_submit_one
     66.68 ±  9%     -66.7        0.00        perf-profile.calltrace.cycles-pp._copy_mc_to_iter.dax_iomap_iter.dax_iomap_rw.ext4_file_read_iter.aio_read
     66.65 ±  9%     -66.6        0.00        perf-profile.calltrace.cycles-pp.copyout_mc._copy_mc_to_iter.dax_iomap_iter.dax_iomap_rw.ext4_file_read_iter
     66.65 ±  9%     -66.6        0.00        perf-profile.calltrace.cycles-pp.copy_mc_to_user.copyout_mc._copy_mc_to_iter.dax_iomap_iter.dax_iomap_rw
     66.44 ±  9%     -66.4        0.00        perf-profile.calltrace.cycles-pp.copy_mc_fragile.copy_mc_to_user.copyout_mc._copy_mc_to_iter.dax_iomap_iter
     67.34 ±  9%     -11.5       55.88        perf-profile.calltrace.cycles-pp.syscall
     67.32 ±  9%     -11.4       55.87        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.syscall
     67.31 ±  9%     -11.4       55.87        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
     67.18 ±  9%     -11.3       55.84        perf-profile.calltrace.cycles-pp.__x64_sys_io_submit.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
     67.17 ±  9%     -11.3       55.84        perf-profile.calltrace.cycles-pp.io_submit_one.__x64_sys_io_submit.do_syscall_64.entry_SYSCALL_64_after_hwframe.syscall
     67.02 ±  9%     -11.2       55.82        perf-profile.calltrace.cycles-pp.aio_read.io_submit_one.__x64_sys_io_submit.do_syscall_64.entry_SYSCALL_64_after_hwframe
     66.94 ±  9%     -11.1       55.80        perf-profile.calltrace.cycles-pp.ext4_file_read_iter.aio_read.io_submit_one.__x64_sys_io_submit.do_syscall_64
     66.88 ±  9%     -11.1       55.77        perf-profile.calltrace.cycles-pp.dax_iomap_rw.ext4_file_read_iter.aio_read.io_submit_one.__x64_sys_io_submit
     31.73 ± 20%     +11.0       42.75        perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
     31.39 ± 21%     +11.3       42.72        perf-profile.calltrace.cycles-pp.mwait_idle_with_hints.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
     31.39 ± 21%     +11.3       42.72        perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
     31.75 ± 20%     +11.5       43.23        perf-profile.calltrace.cycles-pp.secondary_startup_64_no_verify
     31.12 ± 20%     +11.6       42.76        perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     31.11 ± 20%     +11.6       42.75        perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
     31.13 ± 20%     +11.6       42.78        perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     31.13 ± 20%     +11.6       42.78        perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64_no_verify
     31.13 ± 20%     +11.6       42.78        perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64_no_verify
      0.00           +15.9       15.86        perf-profile.calltrace.cycles-pp.clear_user_erms.iov_iter_zero.dax_iomap_rw.ext4_file_read_iter.aio_read
      0.00           +55.7       55.73        perf-profile.calltrace.cycles-pp.iov_iter_zero.dax_iomap_rw.ext4_file_read_iter.aio_read.io_submit_one
      0.00           +80.1       80.09        perf-profile.calltrace.cycles-pp.asm_sysvec_apic_timer_interrupt.clear_user_erms.iov_iter_zero.dax_iomap_rw.ext4_file_read_iter
     66.74 ±  9%     -66.7        0.00        perf-profile.children.cycles-pp.dax_iomap_iter
     66.68 ±  9%     -66.7        0.00        perf-profile.children.cycles-pp._copy_mc_to_iter
     66.66 ±  9%     -66.7        0.00        perf-profile.children.cycles-pp.copyout_mc
     66.65 ±  9%     -66.6        0.00        perf-profile.children.cycles-pp.copy_mc_fragile
     66.65 ±  9%     -66.6        0.00        perf-profile.children.cycles-pp.copy_mc_to_user
     67.35 ±  9%     -11.5       55.88        perf-profile.children.cycles-pp.syscall
     67.18 ±  9%     -11.3       55.84        perf-profile.children.cycles-pp.__x64_sys_io_submit
     67.17 ±  9%     -11.3       55.84        perf-profile.children.cycles-pp.io_submit_one
     67.56 ±  9%     -11.2       56.35        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     67.55 ±  9%     -11.2       56.35        perf-profile.children.cycles-pp.do_syscall_64
     67.02 ±  9%     -11.2       55.82        perf-profile.children.cycles-pp.aio_read
     66.94 ±  9%     -11.1       55.80        perf-profile.children.cycles-pp.ext4_file_read_iter
     66.88 ±  9%     -11.1       55.77        perf-profile.children.cycles-pp.dax_iomap_rw
      0.83 ± 10%      -0.4        0.47        perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
      0.75 ±  8%      -0.3        0.44        perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
      0.73 ±  7%      -0.3        0.43        perf-profile.children.cycles-pp.hrtimer_interrupt
      0.60 ±  2%      -0.2        0.36        perf-profile.children.cycles-pp.__hrtimer_run_queues
      0.43 ± 17%      -0.2        0.24        perf-profile.children.cycles-pp.tick_sched_timer
      0.40 ± 15%      -0.2        0.21        perf-profile.children.cycles-pp.update_process_times
      0.40 ± 16%      -0.2        0.22        perf-profile.children.cycles-pp.tick_sched_handle
      0.29 ± 13%      -0.2        0.14        perf-profile.children.cycles-pp.scheduler_tick
      0.61 ± 23%      -0.2        0.46        perf-profile.children.cycles-pp.start_kernel
      0.61 ± 23%      -0.2        0.46        perf-profile.children.cycles-pp.arch_call_rest_init
      0.61 ± 23%      -0.2        0.46        perf-profile.children.cycles-pp.rest_init
      0.22 ± 11%      -0.1        0.11        perf-profile.children.cycles-pp.task_tick_fair
      0.02 ±141%      +0.0        0.06        perf-profile.children.cycles-pp.ksys_write
      0.02 ±141%      +0.0        0.06        perf-profile.children.cycles-pp.vfs_write
      0.02 ±141%      +0.0        0.06        perf-profile.children.cycles-pp.__intel_pmu_enable_all
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.kernel_clone
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.read
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.exec_binprm
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.search_binary_handler
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.load_elf_binary
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.record__pushfn
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.writen
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.__libc_write
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.generic_file_write_iter
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.irq_work_run_list
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.asm_sysvec_irq_work
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.sysvec_irq_work
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.__sysvec_irq_work
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.irq_work_single
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.irq_work_run
      0.00            +0.1        0.06        perf-profile.children.cycles-pp._printk
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.vprintk_emit
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.console_unlock
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.console_flush_all
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.console_emit_next_record
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.ksys_read
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.memcpy_erms
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.serial8250_console_write
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.vfs_read
      0.00            +0.1        0.06        perf-profile.children.cycles-pp.wait_for_lsr
      0.04 ± 71%      +0.1        0.10        perf-profile.children.cycles-pp.worker_thread
      0.02 ±141%      +0.1        0.08        perf-profile.children.cycles-pp.drm_fb_helper_damage_work
      0.02 ±141%      +0.1        0.08        perf-profile.children.cycles-pp.drm_fbdev_fb_dirty
      0.02 ±141%      +0.1        0.09        perf-profile.children.cycles-pp.process_one_work
      0.00            +0.1        0.07        perf-profile.children.cycles-pp.bprm_execve
      0.00            +0.1        0.07        perf-profile.children.cycles-pp.__handle_mm_fault
      0.00            +0.1        0.07        perf-profile.children.cycles-pp.io_serial_in
      0.07 ± 74%      +0.1        0.14        perf-profile.children.cycles-pp.kthread
      0.00            +0.1        0.08        perf-profile.children.cycles-pp.handle_mm_fault
      0.07 ± 74%      +0.1        0.15        perf-profile.children.cycles-pp.ret_from_fork
      0.00            +0.1        0.09        perf-profile.children.cycles-pp.asm_exc_page_fault
      0.00            +0.1        0.09        perf-profile.children.cycles-pp.exc_page_fault
      0.00            +0.1        0.09        perf-profile.children.cycles-pp.do_user_addr_fault
      0.02 ±141%      +0.1        0.11        perf-profile.children.cycles-pp.execve
      0.02 ±141%      +0.1        0.11        perf-profile.children.cycles-pp.__x64_sys_execve
      0.02 ±141%      +0.1        0.11        perf-profile.children.cycles-pp.do_execveat_common
     31.74 ± 20%     +11.5       43.21        perf-profile.children.cycles-pp.cpuidle_idle_call
     31.75 ± 20%     +11.5       43.23        perf-profile.children.cycles-pp.secondary_startup_64_no_verify
     31.75 ± 20%     +11.5       43.23        perf-profile.children.cycles-pp.cpu_startup_entry
     31.75 ± 20%     +11.5       43.23        perf-profile.children.cycles-pp.do_idle
     31.73 ± 20%     +11.5       43.21        perf-profile.children.cycles-pp.cpuidle_enter
     31.73 ± 20%     +11.5       43.21        perf-profile.children.cycles-pp.cpuidle_enter_state
     31.70 ± 20%     +11.5       43.19        perf-profile.children.cycles-pp.mwait_idle_with_hints
     31.55 ± 21%     +11.6       43.17        perf-profile.children.cycles-pp.intel_idle
     31.13 ± 20%     +11.6       42.78        perf-profile.children.cycles-pp.start_secondary
      1.20 ± 27%     +39.1       40.32        perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
      0.00           +55.7       55.71        perf-profile.children.cycles-pp.clear_user_erms
      0.00           +55.7       55.73        perf-profile.children.cycles-pp.iov_iter_zero
     65.79 ±  9%     -65.8        0.00        perf-profile.self.cycles-pp.copy_mc_fragile
      0.02 ±141%      +0.0        0.06        perf-profile.self.cycles-pp.__intel_pmu_enable_all
      0.00            +0.1        0.05        perf-profile.self.cycles-pp.io_serial_in
     31.70 ± 20%     +11.5       43.19        perf-profile.self.cycles-pp.mwait_idle_with_hints
      0.00           +55.2       55.20        perf-profile.self.cycles-pp.clear_user_erms
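
For context, the hottest call traces above correspond to AIO reads submitted
via io_submit(2). A minimal libaio reader that exercises the same
io_submit_one -> aio_read -> ext4_file_read_iter path might look like the
sketch below (the file path, buffer size, and queue depth are illustrative
assumptions, not values taken from the attached job file; build with -laio):

	/* Minimal libaio read sketch (illustrative only).
	 * Drives io_submit(2) -> io_submit_one() -> aio_read(), as seen
	 * in the perf-profile call traces above.
	 */
	#include <libaio.h>
	#include <fcntl.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(void)
	{
		io_context_t ctx = 0;
		struct iocb cb, *cbs[1] = { &cb };
		struct io_event ev;
		void *buf;
		int fd;

		fd = open("/fs/pmem0/testfile", O_RDONLY); /* hypothetical path */
		if (fd < 0 || io_setup(1, &ctx) < 0)
			return 1;
		if (posix_memalign(&buf, 4096, 1 << 20))   /* 1 MiB, size illustrative */
			return 1;

		io_prep_pread(&cb, fd, buf, 1 << 20, 0);   /* read 1 MiB at offset 0 */
		if (io_submit(ctx, 1, cbs) != 1)           /* enters io_submit_one() */
			return 1;
		if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) /* wait for completion */
			return 1;

		io_destroy(ctx);
		close(fd);
		free(buf);
		return 0;
	}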



To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        sudo bin/lkp install job.yaml           # job file is attached in this email
        bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
        sudo bin/lkp run generated-yaml-file

        # If you come across any failure that blocks the test,
        # please remove the ~/.lkp and /lkp directories to run from a clean state.
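
A comparable profile can be collected with plain perf while the job runs
(the record duration and options here are illustrative, not the harness's
own invocation):

        sudo perf record -a -g -- sleep 30   # system-wide, with call graphs
        sudo perf report --children          # roughly the children.cycles-pp rows
        sudo perf report --no-children       # roughly the self.cycles-pp rows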


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

View attachment "config-6.2.0-rc2-00206-g64ea9d6c5f47" of type "text/plain" (166849 bytes)

View attachment "job-script" of type "text/plain" (8747 bytes)

View attachment "job.yaml" of type "text/plain" (5993 bytes)

View attachment "reproduce" of type "text/plain" (934 bytes)
