Message-ID: <20180703072239.GK15716@yexl-desktop>
Date:   Tue, 3 Jul 2018 15:22:39 +0800
From:   kernel test robot <xiaolong.ye@...el.com>
To:     AKASHI Takahiro <takahiro.akashi@...aro.org>
Cc:     Stephen Rothwell <sfr@...b.auug.org.au>,
        Baoquan He <bhe@...hat.com>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        Vivek Goyal <vgoyal@...hat.com>,
        Dave Young <dyoung@...hat.com>,
        Philipp Rudo <prudo@...ux.vnet.ibm.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [lkp-robot] [kernel/kexec_file.c]  b3ab44957a:
 will-it-scale.per_thread_ops -3.4% regression


Greetings,

FYI, we noticed a -3.4% regression of will-it-scale.per_thread_ops due to the commit below (a rough sketch of what the commit adds follows the reproduction steps):


commit: b3ab44957abf947983082240f1d46a91217debce ("kernel/kexec_file.c: add walk_system_ram_res_rev()")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master

in testcase: will-it-scale
on test machine: 88-thread Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64GB of memory
with the following parameters:

	nr_task: 50%
	mode: thread
	test: pread1
	cpufreq_governor: performance

test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both process-based and thread-based variants of each test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
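
For readers unfamiliar with the workload, below is a rough user-space
approximation of what will-it-scale's pread1 worker does (the real source
lives in the repository above; the temp-file path and iteration count here
are illustrative): each worker repeatedly pread()s a single byte from
offset 0 of a small temporary file, so syscall entry/exit and the
shmem/page-cache lookup path dominate the profile.

        /* Rough approximation of will-it-scale's pread1 testcase
         * (illustrative only, not the harness's actual source). */
        #define _XOPEN_SOURCE 700
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
                char path[] = "/tmp/willitscale.XXXXXX"; /* hypothetical name */
                char buf[1];
                long i;
                int fd = mkstemp(path);

                if (fd < 0 || write(fd, "x", 1) != 1) {
                        perror("setup");
                        return 1;
                }
                unlink(path); /* file stays alive through the open fd */

                /* The real harness runs this loop in n threads/processes for
                 * a fixed time and reports iterations as per_thread_ops. */
                for (i = 0; i < 10000000; i++) {
                        if (pread(fd, buf, 1, 0) != 1) {
                                perror("pread");
                                return 1;
                        }
                }
                printf("%ld preads done\n", i);
                close(fd);
                return 0;
        }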



Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
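
For context on what the offending commit introduces: walk_system_ram_res()
walks the kernel's "System RAM" resources from lowest to highest address,
and the commit adds a reverse-order counterpart so kexec_file can search
for a load region from the top of RAM downwards. Below is a minimal
user-space sketch of the reverse-walk idea, assuming a flat array in place
of the kernel's resource tree; it illustrates the callback-driven top-down
walk, not the actual patch:

        /* Sketch of the reverse-walk idea; the resource table is made up. */
        #include <stdio.h>

        struct resource {
                unsigned long long start, end;
        };

        static struct resource system_ram[] = {  /* hypothetical layout */
                { 0x0000000000001000ULL, 0x000000000009ffffULL },
                { 0x0000000000100000ULL, 0x00000000bfffffffULL },
                { 0x0000000100000000ULL, 0x000000103fffffffULL },
        };

        /* Visit regions from highest to lowest; stop when func returns
         * nonzero, mirroring how the kernel's walk_system_ram_res()
         * drives a callback over each matching region. */
        static int walk_system_ram_res_rev(void *arg,
                                           int (*func)(struct resource *, void *))
        {
                int i, ret;

                for (i = sizeof(system_ram) / sizeof(system_ram[0]) - 1;
                     i >= 0; i--) {
                        ret = func(&system_ram[i], arg);
                        if (ret)
                                return ret;
                }
                return 0;
        }

        static int print_region(struct resource *res, void *arg)
        {
                printf("System RAM: %#llx-%#llx\n", res->start, res->end);
                return 0; /* keep walking */
        }

        int main(void)
        {
                return walk_system_ram_res_rev(NULL, print_region);
        }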

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
  gcc-7/performance/x86_64-rhel-7.2/thread/50%/debian-x86_64-2016-08-31.cgz/lkp-bdw-ep3d/pread1/will-it-scale

commit: 
  eb375ff7d4 ("fs/proc/vmcore.c: hide vmcoredd_mmap_dumps() for nommu builds")
  b3ab44957a ("kernel/kexec_file.c: add walk_system_ram_res_rev()")

eb375ff7d4369684 b3ab44957abf947983082240f1 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    682779            -3.4%     659754        will-it-scale.per_thread_ops
    301.12            -0.0%     301.11        will-it-scale.time.elapsed_time
    301.12            -0.0%     301.11        will-it-scale.time.elapsed_time.max
    571763            +0.2%     573080        will-it-scale.time.involuntary_context_switches
      9858            +0.6%       9920        will-it-scale.time.maximum_resident_set_size
      9218 ±  5%      -1.1%       9119 ±  6%  will-it-scale.time.minor_page_faults
      4096            +0.0%       4096        will-it-scale.time.page_size
      2394            -0.0%       2394        will-it-scale.time.percent_of_cpu_this_job_got
      6042            +0.4%       6064        will-it-scale.time.system_time
      1169            -1.9%       1146        will-it-scale.time.user_time
  30042296            -3.4%   29029223        will-it-scale.workload
     90508 ±  3%      -0.4%      90105 ±  4%  interrupts.CAL:Function_call_interrupts
     36.38 ± 10%      -5.4%      34.41 ±  3%  boot-time.boot
     24.59 ± 13%      -7.7%      22.70        boot-time.dhcp
    828.27 ± 10%      -5.4%     783.17 ±  2%  boot-time.idle
     26.35 ± 15%      -7.2%      24.46 ±  4%  boot-time.kernel_boot
      0.63            +0.0        0.63        mpstat.cpu.idle%
      0.00 ± 29%      +0.0        0.00 ± 39%  mpstat.cpu.soft%
     83.25            +0.3       83.56        mpstat.cpu.sys%
     16.12            -0.3       15.81        mpstat.cpu.usr%
     15162 ±  9%     +14.7%      17393 ±  3%  softirqs.NET_RX
    588099 ±  4%      -0.0%     587991 ±  3%  softirqs.RCU
    136751 ±  2%      -1.3%     134908        softirqs.SCHED
   3461000 ± 10%      +9.9%    3803766        softirqs.TIMER
   1300661            -0.4%    1295970        vmstat.memory.cache
  64248171            +0.0%   64253996        vmstat.memory.free
     44.00            +0.0%      44.00        vmstat.procs.r
      5640 ±  6%      +0.9%       5692 ±  9%  vmstat.system.cs
     25594            -0.3%      25524        vmstat.system.in
      0.00          -100.0%       0.00        numa-numastat.node0.interleave_hit
     23682 ± 51%     +48.7%      35218 ± 31%  numa-numastat.node0.local_node
     28389 ± 42%     +36.5%      38749 ± 30%  numa-numastat.node0.numa_hit
      4708           -25.0%       3531 ± 36%  numa-numastat.node0.other_node
      0.00          -100.0%       0.00        numa-numastat.node1.interleave_hit
    560644            -1.9%     549933        numa-numastat.node1.local_node
    560647            -1.7%     551122        numa-numastat.node1.numa_hit
      4.00 ±115%  +29631.2%       1189 ±108%  numa-numastat.node1.other_node
    248561 ±  8%      -7.9%     228859 ± 17%  cpuidle.C1.time
      6157 ± 10%     -10.2%       5528 ± 13%  cpuidle.C1.usage
     49658 ± 17%     -11.3%      44041 ± 15%  cpuidle.C1E.time
    565.50 ±  6%     -10.2%     508.00 ± 17%  cpuidle.C1E.usage
    229326 ±  3%      -1.2%     226688 ±  9%  cpuidle.C3.time
    803.00 ±  3%      -0.2%     801.25 ±  8%  cpuidle.C3.usage
  28230821            +0.1%   28246630        cpuidle.C6.time
     31693 ±  3%      +1.9%      32302        cpuidle.C6.usage
    903.25 ± 29%      +2.4%     924.50 ± 25%  cpuidle.POLL.time
     41.50 ± 23%     +31.3%      54.50 ±  9%  cpuidle.POLL.usage
      3080            +0.0%       3081        turbostat.Avg_MHz
     99.61            +0.0       99.61        turbostat.Busy%
      3100            +0.0%       3100        turbostat.Bzy_MHz
      3834 ± 15%     -14.8%       3268 ± 19%  turbostat.C1
    379.00 ± 21%     -10.9%     337.75 ± 16%  turbostat.C1E
    626.00 ±  2%      +2.9%     644.00 ± 10%  turbostat.C3
     31027 ±  3%      +2.0%      31636 ±  2%  turbostat.C6
      0.38 ±  2%      -0.0        0.38 ±  2%  turbostat.C6%
      0.09            +2.8%       0.09 ±  4%  turbostat.CPU%c1
      0.30 ±  2%      -1.7%       0.29 ±  2%  turbostat.CPU%c6
     69.50            +0.0%      69.50 ±  2%  turbostat.CoreTmp
   7775056            -0.3%    7752685        turbostat.IRQ
     74.00 ±  2%      -0.3%      73.75 ±  2%  turbostat.PkgTmp
    153.45            -0.2%     153.12        turbostat.PkgWatt
      7.19            +0.0%       7.19        turbostat.RAMWatt
      2195            +0.0%       2195        turbostat.TSC_MHz
    149421            +0.2%     149660        slabinfo.Acpi-Operand.active_objs
    149436            +0.1%     149660        slabinfo.Acpi-Operand.num_objs
      7470 ±  3%      +5.7%       7895 ±  3%  slabinfo.anon_vma.active_objs
      7470 ±  3%      +5.7%       7895 ±  3%  slabinfo.anon_vma.num_objs
      3380 ±  6%      +7.4%       3632 ±  9%  slabinfo.cred_jar.active_objs
      3380 ±  6%      +7.4%       3632 ±  9%  slabinfo.cred_jar.num_objs
      1214 ± 29%      +5.6%       1282 ± 31%  slabinfo.dmaengine-unmap-16.active_objs
      1214 ± 29%      +5.6%       1282 ± 31%  slabinfo.dmaengine-unmap-16.num_objs
      6704 ±  7%      +3.0%       6908 ±  4%  slabinfo.filp.active_objs
      7063 ±  5%      +4.2%       7363 ±  3%  slabinfo.filp.num_objs
      1862 ±  4%      -1.3%       1837 ±  5%  slabinfo.kmalloc-1024.active_objs
     11712 ±  2%      -2.2%      11456        slabinfo.kmalloc-16.active_objs
     11712 ±  2%      -2.2%      11456        slabinfo.kmalloc-16.num_objs
     29375 ±  2%      +1.5%      29825 ±  2%  slabinfo.kmalloc-32.active_objs
     29375 ±  2%      +1.6%      29841 ±  2%  slabinfo.kmalloc-32.num_objs
     35475            +0.7%      35714        slabinfo.kmalloc-64.active_objs
     14083 ±  3%      +3.5%      14578        slabinfo.kmalloc-8.active_objs
     14336 ±  2%      +1.8%      14592        slabinfo.kmalloc-8.num_objs
     17073 ±  7%      +5.4%      17995 ±  6%  slabinfo.pid.active_objs
     17096 ±  7%      +5.4%      18026 ±  6%  slabinfo.pid.num_objs
      1504 ±  8%      -5.9%       1416 ±  3%  slabinfo.skbuff_head_cache.num_objs
    660.25 ±  5%      +8.1%     714.00 ± 12%  slabinfo.task_group.active_objs
    660.25 ±  5%      +8.1%     714.00 ± 12%  slabinfo.task_group.num_objs
     96898 ±  3%      -1.1%      95832 ±  4%  meminfo.Active
     96894 ±  3%      -1.1%      95828 ±  4%  meminfo.Active(anon)
     49798            +1.4%      50488        meminfo.AnonHugePages
     80654            -0.2%      80531        meminfo.AnonPages
   1253425            -0.4%    1248650        meminfo.Cached
    203816            +0.0%     203816        meminfo.CmaFree
    204800            +0.0%     204800        meminfo.CmaTotal
  32933252            +0.0%   32933252        meminfo.CommitLimit
    515408 ±  2%      +3.2%     531677 ±  2%  meminfo.Committed_AS
  63176704            +0.0%   63176704        meminfo.DirectMap1G
   5824000 ±  7%      -0.1%    5815808 ±  7%  meminfo.DirectMap2M
    108476 ±  6%      +7.6%     116668 ±  5%  meminfo.DirectMap4k
      2048            +0.0%       2048        meminfo.Hugepagesize
      9888            -0.2%       9869        meminfo.Inactive
      9749            -0.2%       9729        meminfo.Inactive(anon)
      6714            +0.1%       6721        meminfo.KernelStack
     25376            +0.1%      25396        meminfo.Mapped
  63914004            +0.0%   63919889        meminfo.MemAvailable
  64248100            +0.0%   64253952        meminfo.MemFree
  65866508            +0.0%   65866508        meminfo.MemTotal
    851.50 ±100%     -50.2%     423.75 ±173%  meminfo.Mlocked
      3914            +0.2%       3920        meminfo.PageTables
     47199            +0.2%      47282        meminfo.SReclaimable
     60418            +0.8%      60894        meminfo.SUnreclaim
     25767 ± 12%      -4.0%      24737 ± 15%  meminfo.Shmem
    107619            +0.5%     108177        meminfo.Slab
   1227649            -0.3%    1223837        meminfo.Unevictable
 3.436e+10            +0.0%  3.436e+10        meminfo.VmallocTotal
     24231 ±  3%      -1.1%      23965 ±  4%  proc-vmstat.nr_active_anon
     20174            -0.2%      20142        proc-vmstat.nr_anon_pages
   1594921            +0.0%    1595066        proc-vmstat.nr_dirty_background_threshold
   3193742            +0.0%    3194034        proc-vmstat.nr_dirty_threshold
    313356            -0.4%     312164        proc-vmstat.nr_file_pages
     50954            +0.0%      50954        proc-vmstat.nr_free_cma
  16062025            +0.0%   16063488        proc-vmstat.nr_free_pages
      2435            -0.3%       2429        proc-vmstat.nr_inactive_anon
      6715            +0.1%       6722        proc-vmstat.nr_kernel_stack
      6516            +0.1%       6524        proc-vmstat.nr_mapped
    212.50 ±100%     -50.2%     105.75 ±173%  proc-vmstat.nr_mlock
    977.75            +0.2%     979.75        proc-vmstat.nr_page_table_pages
      6441 ± 12%      -4.0%       6186 ± 15%  proc-vmstat.nr_shmem
     11799            +0.2%      11820        proc-vmstat.nr_slab_reclaimable
     15104            +0.8%      15223        proc-vmstat.nr_slab_unreclaimable
    306911            -0.3%     305958        proc-vmstat.nr_unevictable
     24231 ±  3%      -1.1%      23965 ±  4%  proc-vmstat.nr_zone_active_anon
      2435            -0.3%       2429        proc-vmstat.nr_zone_inactive_anon
    306911            -0.3%     305958        proc-vmstat.nr_zone_unevictable
      1963 ± 21%      -6.7%       1832 ± 25%  proc-vmstat.numa_hint_faults
      1902 ± 22%      -6.8%       1773 ± 26%  proc-vmstat.numa_hint_faults_local
    613652            +0.1%     614137        proc-vmstat.numa_hit
    608937            +0.1%     609415        proc-vmstat.numa_local
      4714            +0.2%       4722        proc-vmstat.numa_other
      2017 ± 20%      -6.8%       1880 ± 24%  proc-vmstat.numa_pte_updates
      5114 ± 22%      -7.4%       4738 ± 29%  proc-vmstat.pgactivate
    614558            +0.2%     616065        proc-vmstat.pgalloc_normal
    749565            +0.1%     749943        proc-vmstat.pgfault
    603735            +0.4%     606404        proc-vmstat.pgfree
 2.916e+12            -3.4%  2.818e+12        perf-stat.branch-instructions
      1.25            +0.0        1.26        perf-stat.branch-miss-rate%
 3.653e+10            -3.0%  3.542e+10        perf-stat.branch-misses
      0.43 ±  9%      -0.1        0.38 ±  9%  perf-stat.cache-miss-rate%
  43087790 ± 15%     -15.2%   36533875 ±  6%  perf-stat.cache-misses
 1.001e+10 ± 13%      -3.6%  9.652e+09 ±  4%  perf-stat.cache-references
   1708228 ±  6%      +0.9%    1724116 ±  9%  perf-stat.context-switches
      1.48            +3.5%       1.53        perf-stat.cpi
 2.225e+13            -0.0%  2.225e+13        perf-stat.cpu-cycles
     34524 ±  4%      +3.0%      35573 ±  4%  perf-stat.cpu-migrations
      0.00 ±  4%      +0.0        0.00 ±  2%  perf-stat.dTLB-load-miss-rate%
  34282378 ±  3%      -0.3%   34173390 ±  2%  perf-stat.dTLB-load-misses
 5.637e+12            -3.4%  5.447e+12        perf-stat.dTLB-loads
      0.00 ±  8%      -0.0        0.00        perf-stat.dTLB-store-miss-rate%
   8134912 ±  8%     -37.6%    5073760        perf-stat.dTLB-store-misses
 4.068e+12            -3.4%  3.931e+12        perf-stat.dTLB-stores
     63.64 ±  2%      +3.3       66.89        perf-stat.iTLB-load-miss-rate%
 1.297e+10 ±  2%      +8.7%  1.409e+10 ±  2%  perf-stat.iTLB-load-misses
 7.412e+09 ±  5%      -5.8%  6.983e+09 ±  5%  perf-stat.iTLB-loads
 1.503e+13            -3.4%  1.452e+13        perf-stat.instructions
      1159 ±  2%     -11.1%       1031 ±  2%  perf-stat.instructions-per-iTLB-miss
      0.68            -3.4%       0.65        perf-stat.ipc
    729973            +0.0%     730221        perf-stat.minor-faults
     96.62            +0.9       97.53        perf-stat.node-load-miss-rate%
  10657666 ± 10%      -9.7%    9627177 ±  6%  perf-stat.node-load-misses
    375031 ± 23%     -35.0%     243953 ± 19%  perf-stat.node-loads
     42.92 ± 33%      +0.1       43.06 ± 33%  perf-stat.node-store-miss-rate%
   3584932 ± 33%      -5.2%    3397064 ± 38%  perf-stat.node-store-misses
   4787149 ± 26%      -9.3%    4344213 ± 19%  perf-stat.node-stores
    729978            +0.0%     730227        perf-stat.page-faults
    500171            +0.0%     500239        perf-stat.path-length
     62237 ± 17%     -61.1%      24187 ± 54%  numa-meminfo.node0.Active
     62235 ± 17%     -61.1%      24186 ± 54%  numa-meminfo.node0.Active(anon)
     44827 ± 18%     -82.9%       7680 ±173%  numa-meminfo.node0.AnonHugePages
     62016 ± 17%     -65.6%      21346 ± 63%  numa-meminfo.node0.AnonPages
    604560 ±  3%      -0.4%     602002 ±  4%  numa-meminfo.node0.FilePages
      5077 ± 77%     -50.0%       2540 ±136%  numa-meminfo.node0.Inactive
      5010 ± 79%     -49.5%       2528 ±136%  numa-meminfo.node0.Inactive(anon)
      3419 ±  9%      +0.3%       3430 ±  7%  numa-meminfo.node0.KernelStack
     12652 ± 17%      -8.6%      11559 ± 18%  numa-meminfo.node0.Mapped
  32052630            +0.1%   32094566        numa-meminfo.node0.MemFree
  32864896            +0.0%   32864896        numa-meminfo.node0.MemTotal
    812264 ±  3%      -5.2%     770328 ±  3%  numa-meminfo.node0.MemUsed
      1946 ± 16%      -5.1%       1846 ± 12%  numa-meminfo.node0.PageTables
     19496 ±  5%     +17.7%      22938 ±  6%  numa-meminfo.node0.SReclaimable
     32438 ±  4%      -5.8%      30547 ±  4%  numa-meminfo.node0.SUnreclaim
      5387 ± 73%      -6.8%       5019 ± 81%  numa-meminfo.node0.Shmem
     51935 ±  2%      +3.0%      53486 ±  5%  numa-meminfo.node0.Slab
    599171 ±  4%      -0.4%     597036 ±  4%  numa-meminfo.node0.Unevictable
     34689 ± 39%    +106.6%      71654 ± 14%  numa-meminfo.node1.Active
     34687 ± 39%    +106.6%      71651 ± 14%  numa-meminfo.node1.Active(anon)
      4959 ±173%    +763.0%      42800 ± 30%  numa-meminfo.node1.AnonHugePages
     18649 ± 59%    +217.4%      59201 ± 22%  numa-meminfo.node1.AnonPages
    648874 ±  3%      -0.3%     646673 ±  4%  numa-meminfo.node1.FilePages
      4819 ± 81%     +52.8%       7362 ± 46%  numa-meminfo.node1.Inactive
      4747 ± 84%     +52.4%       7235 ± 47%  numa-meminfo.node1.Inactive(anon)
      3293 ±  9%      -0.1%       3290 ±  8%  numa-meminfo.node1.KernelStack
     12723 ± 18%      +9.2%      13889 ± 13%  numa-meminfo.node1.Mapped
  32195451            -0.1%   32159310        numa-meminfo.node1.MemFree
  33001612            +0.0%   33001612        numa-meminfo.node1.MemTotal
    806159 ±  3%      +4.5%     842300 ±  3%  numa-meminfo.node1.MemUsed
      1968 ± 16%      +5.4%       2074 ± 10%  numa-meminfo.node1.PageTables
     27702 ±  3%     -12.1%      24344 ±  6%  numa-meminfo.node1.SReclaimable
     27980 ±  4%      +8.5%      30346 ±  5%  numa-meminfo.node1.SUnreclaim
     20389 ± 33%      -3.2%      19742 ± 30%  numa-meminfo.node1.Shmem
     55683 ±  2%      -1.8%      54690 ±  5%  numa-meminfo.node1.Slab
    628477 ±  4%      -0.3%     626799 ±  4%  numa-meminfo.node1.Unevictable
     10005 ±  7%      +1.0%      10104 ± 14%  numa-vmstat.node0
     15558 ± 17%     -61.1%       6048 ± 54%  numa-vmstat.node0.nr_active_anon
     15503 ± 17%     -65.6%       5336 ± 63%  numa-vmstat.node0.nr_anon_pages
    151139 ±  3%      -0.4%     150500 ±  4%  numa-vmstat.node0.nr_file_pages
   8013153            +0.1%    8023648        numa-vmstat.node0.nr_free_pages
      1252 ± 79%     -49.8%     629.25 ±137%  numa-vmstat.node0.nr_inactive_anon
      3419 ±  9%      +0.3%       3430 ±  7%  numa-vmstat.node0.nr_kernel_stack
      3203 ± 15%      -8.6%       2927 ± 17%  numa-vmstat.node0.nr_mapped
    106.25 ±102%     -42.4%      61.25 ±173%  numa-vmstat.node0.nr_mlock
    486.00 ± 16%      -5.0%     461.50 ± 12%  numa-vmstat.node0.nr_page_table_pages
      1346 ± 73%      -6.8%       1254 ± 81%  numa-vmstat.node0.nr_shmem
      4873 ±  5%     +17.7%       5734 ±  6%  numa-vmstat.node0.nr_slab_reclaimable
      8109 ±  4%      -5.8%       7636 ±  4%  numa-vmstat.node0.nr_slab_unreclaimable
    149792 ±  4%      -0.4%     149259 ±  4%  numa-vmstat.node0.nr_unevictable
     15558 ± 17%     -61.1%       6048 ± 54%  numa-vmstat.node0.nr_zone_active_anon
      1252 ± 79%     -49.8%     629.25 ±137%  numa-vmstat.node0.nr_zone_inactive_anon
    149792 ±  4%      -0.4%     149259 ±  4%  numa-vmstat.node0.nr_zone_unevictable
    299240 ±  6%      -4.1%     287069 ±  8%  numa-vmstat.node0.numa_hit
    167067            -0.4%     166359        numa-vmstat.node0.numa_interleave
    294457 ±  7%      -3.7%     283478 ±  8%  numa-vmstat.node0.numa_local
      4782           -24.9%       3590 ± 36%  numa-vmstat.node0.numa_other
      7061 ± 11%      -1.9%       6924 ± 22%  numa-vmstat.node1
      8687 ± 39%    +106.5%      17941 ± 14%  numa-vmstat.node1.nr_active_anon
      4675 ± 59%    +217.0%      14820 ± 22%  numa-vmstat.node1.nr_anon_pages
    162209 ±  3%      -0.3%     161667 ±  4%  numa-vmstat.node1.nr_file_pages
     50954            +0.0%      50954        numa-vmstat.node1.nr_free_cma
   8048870            -0.1%    8039831        numa-vmstat.node1.nr_free_pages
      1179 ± 84%     +52.6%       1800 ± 47%  numa-vmstat.node1.nr_inactive_anon
      3294 ±  9%      -0.1%       3292 ±  8%  numa-vmstat.node1.nr_kernel_stack
      3307 ± 16%      +8.9%       3601 ± 12%  numa-vmstat.node1.nr_mapped
    106.00 ±102%     -58.3%      44.25 ±173%  numa-vmstat.node1.nr_mlock
    492.00 ± 17%      +5.4%     518.50 ± 10%  numa-vmstat.node1.nr_page_table_pages
      5088 ± 33%      -3.0%       4935 ± 30%  numa-vmstat.node1.nr_shmem
      6924 ±  3%     -12.1%       6085 ±  6%  numa-vmstat.node1.nr_slab_reclaimable
      6994 ±  4%      +8.5%       7586 ±  5%  numa-vmstat.node1.nr_slab_unreclaimable
    157119 ±  4%      -0.3%     156699 ±  4%  numa-vmstat.node1.nr_unevictable
      8687 ± 39%    +106.5%      17941 ± 14%  numa-vmstat.node1.nr_zone_active_anon
      1179 ± 84%     +52.6%       1800 ± 47%  numa-vmstat.node1.nr_zone_inactive_anon
    157119 ±  4%      -0.3%     156699 ±  4%  numa-vmstat.node1.nr_zone_unevictable
    561343 ±  3%      +1.9%     571867 ±  4%  numa-vmstat.node1.numa_hit
    166468            -0.1%     166256        numa-vmstat.node1.numa_interleave
    394163 ±  4%      +2.4%     403699 ±  6%  numa-vmstat.node1.numa_local
    167179            +0.6%     168167        numa-vmstat.node1.numa_other
     23.09 ± 65%    -100.0%       0.00        sched_debug.cfs_rq:/.MIN_vruntime.avg
    554.08 ± 65%    -100.0%       0.00        sched_debug.cfs_rq:/.MIN_vruntime.max
      0.00            +0.0%       0.00        sched_debug.cfs_rq:/.MIN_vruntime.min
    110.72 ± 65%    -100.0%       0.00        sched_debug.cfs_rq:/.MIN_vruntime.stddev
    149263            +0.0%     149296        sched_debug.cfs_rq:/.exec_clock.avg
    149435            +0.0%     149495        sched_debug.cfs_rq:/.exec_clock.max
    149160            +0.0%     149203        sched_debug.cfs_rq:/.exec_clock.min
     59.55 ± 11%      +1.2%      60.28 ± 16%  sched_debug.cfs_rq:/.exec_clock.stddev
     41494 ±  8%      -0.6%      41224 ±  6%  sched_debug.cfs_rq:/.load.avg
     99113 ± 74%      +9.4%     108430 ± 66%  sched_debug.cfs_rq:/.load.max
     19603 ±  2%      -0.2%      19565 ±  4%  sched_debug.cfs_rq:/.load.min
     17562 ± 79%     +11.4%      19556 ± 71%  sched_debug.cfs_rq:/.load.stddev
     59.82 ±  5%      -0.7%      59.40 ±  4%  sched_debug.cfs_rq:/.load_avg.avg
    210.46 ±  7%      +4.4%     219.62 ±  8%  sched_debug.cfs_rq:/.load_avg.max
     20.88 ±  7%      +0.0%      20.88 ±  6%  sched_debug.cfs_rq:/.load_avg.min
     50.00 ±  9%      +2.5%      51.27 ± 11%  sched_debug.cfs_rq:/.load_avg.stddev
     23.11 ± 65%    -100.0%       0.00        sched_debug.cfs_rq:/.max_vruntime.avg
    554.58 ± 65%    -100.0%       0.00        sched_debug.cfs_rq:/.max_vruntime.max
      0.00            +0.0%       0.00        sched_debug.cfs_rq:/.max_vruntime.min
    110.82 ± 65%    -100.0%       0.00        sched_debug.cfs_rq:/.max_vruntime.stddev
   3846045            +0.1%    3848426        sched_debug.cfs_rq:/.min_vruntime.avg
   5171034            +2.6%    5305804        sched_debug.cfs_rq:/.min_vruntime.max
   3359147            +0.1%    3363335        sched_debug.cfs_rq:/.min_vruntime.min
    683180            +2.0%     696678        sched_debug.cfs_rq:/.min_vruntime.stddev
      0.92            -1.1%       0.91        sched_debug.cfs_rq:/.nr_running.avg
      1.04 ±  6%      -4.0%       1.00        sched_debug.cfs_rq:/.nr_running.max
      0.83            +0.0%       0.83        sched_debug.cfs_rq:/.nr_running.min
      0.09 ±  6%      -4.1%       0.08        sched_debug.cfs_rq:/.nr_running.stddev
      1.48 ±  5%      +2.1%       1.51        sched_debug.cfs_rq:/.nr_spread_over.avg
     10.21 ±  7%      -7.3%       9.46 ±  7%  sched_debug.cfs_rq:/.nr_spread_over.max
      0.83            +0.0%       0.83        sched_debug.cfs_rq:/.nr_spread_over.min
      1.94 ±  7%      -5.3%       1.84 ±  8%  sched_debug.cfs_rq:/.nr_spread_over.stddev
      3.44 ± 99%     -48.3%       1.78 ±173%  sched_debug.cfs_rq:/.removed.load_avg.avg
     82.54 ± 99%     -48.3%      42.67 ±173%  sched_debug.cfs_rq:/.removed.load_avg.max
     16.49 ± 99%     -48.3%       8.53 ±173%  sched_debug.cfs_rq:/.removed.load_avg.stddev
    159.38 ± 99%     -48.9%      81.50 ±173%  sched_debug.cfs_rq:/.removed.runnable_sum.avg
      3825 ± 99%     -48.9%       1955 ±173%  sched_debug.cfs_rq:/.removed.runnable_sum.max
    764.35 ± 99%     -48.9%     390.85 ±173%  sched_debug.cfs_rq:/.removed.runnable_sum.stddev
      0.47 ±158%     -20.7%       0.37 ±173%  sched_debug.cfs_rq:/.removed.util_avg.avg
     11.25 ±158%     -20.7%       8.92 ±173%  sched_debug.cfs_rq:/.removed.util_avg.max
      2.25 ±158%     -20.7%       1.78 ±173%  sched_debug.cfs_rq:/.removed.util_avg.stddev
     37.41            +0.8%      37.72        sched_debug.cfs_rq:/.runnable_load_avg.avg
     53.42 ±  5%     +21.7%      65.00 ± 12%  sched_debug.cfs_rq:/.runnable_load_avg.max
     18.21            +1.4%      18.46        sched_debug.cfs_rq:/.runnable_load_avg.min
      8.65 ±  6%     +27.0%      10.98 ± 16%  sched_debug.cfs_rq:/.runnable_load_avg.stddev
     39819 ±  8%      -0.9%      39476 ±  7%  sched_debug.cfs_rq:/.runnable_weight.avg
     90006 ± 85%     +10.9%      99774 ± 77%  sched_debug.cfs_rq:/.runnable_weight.max
     19061            -0.2%      19031        sched_debug.cfs_rq:/.runnable_weight.min
     16392 ± 88%      +8.6%      17796 ± 84%  sched_debug.cfs_rq:/.runnable_weight.stddev
      0.02 ±173%    -100.0%       0.00        sched_debug.cfs_rq:/.spread.avg
      0.50 ±173%    -100.0%       0.00        sched_debug.cfs_rq:/.spread.max
      0.10 ±173%    -100.0%       0.00        sched_debug.cfs_rq:/.spread.stddev
    467775 ±  2%      +0.8%     471743        sched_debug.cfs_rq:/.spread0.avg
   1792733 ±  4%      +7.6%    1929070 ±  2%  sched_debug.cfs_rq:/.spread0.max
    -19100           -30.4%     -13297        sched_debug.cfs_rq:/.spread0.min
    683158            +2.0%     696653        sched_debug.cfs_rq:/.spread0.stddev
    984.38            -0.3%     981.20        sched_debug.cfs_rq:/.util_avg.avg
      1317 ±  2%      +8.2%       1426 ±  3%  sched_debug.cfs_rq:/.util_avg.max
    696.79 ±  5%     -12.5%     609.67 ±  3%  sched_debug.cfs_rq:/.util_avg.min
    131.17 ±  9%     +41.2%     185.24 ±  8%  sched_debug.cfs_rq:/.util_avg.stddev
    541.36 ± 52%      +5.2%     569.38 ± 26%  sched_debug.cfs_rq:/.util_est_enqueued.avg
      1142 ± 32%     +15.4%       1318 ± 12%  sched_debug.cfs_rq:/.util_est_enqueued.max
     65.08 ±170%     -12.9%      56.67 ±170%  sched_debug.cfs_rq:/.util_est_enqueued.min
    289.32 ± 23%     +20.7%     349.08 ± 14%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
    858242 ±  2%      -0.6%     852891 ±  2%  sched_debug.cpu.avg_idle.avg
   1087080 ± 13%      -8.8%     991460        sched_debug.cpu.avg_idle.max
    442181 ± 44%      +7.5%     475384 ± 43%  sched_debug.cpu.avg_idle.min
    151322 ± 21%     -11.4%     134022 ± 33%  sched_debug.cpu.avg_idle.stddev
    196896            -0.0%     196878        sched_debug.cpu.clock.avg
    196901            -0.0%     196883        sched_debug.cpu.clock.max
    196890            -0.0%     196872        sched_debug.cpu.clock.min
      3.11 ± 28%     +10.8%       3.44 ± 14%  sched_debug.cpu.clock.stddev
    196896            -0.0%     196878        sched_debug.cpu.clock_task.avg
    196901            -0.0%     196883        sched_debug.cpu.clock_task.max
    196890            -0.0%     196872        sched_debug.cpu.clock_task.min
      3.11 ± 28%     +10.8%       3.44 ± 14%  sched_debug.cpu.clock_task.stddev
     37.64            +2.1%      38.41        sched_debug.cpu.cpu_load[0].avg
     57.96 ±  5%     +20.1%      69.62 ± 11%  sched_debug.cpu.cpu_load[0].max
     18.21            +0.0%      18.21        sched_debug.cpu.cpu_load[0].min
      9.35 ±  3%     +27.5%      11.92 ± 15%  sched_debug.cpu.cpu_load[0].stddev
     38.38            +1.9%      39.11 ±  2%  sched_debug.cpu.cpu_load[1].avg
     60.46 ± 12%     +24.4%      75.21 ± 13%  sched_debug.cpu.cpu_load[1].max
     18.33            -0.2%      18.29        sched_debug.cpu.cpu_load[1].min
      9.63 ± 13%     +31.6%      12.67 ± 13%  sched_debug.cpu.cpu_load[1].stddev
     39.10            +0.7%      39.36 ±  2%  sched_debug.cpu.cpu_load[2].avg
     63.83 ± 14%      +8.9%      69.54 ± 10%  sched_debug.cpu.cpu_load[2].max
     18.83 ±  3%      -0.2%      18.79 ±  2%  sched_debug.cpu.cpu_load[2].min
      9.81 ± 17%     +17.9%      11.57 ± 11%  sched_debug.cpu.cpu_load[2].stddev
     39.52            -0.3%      39.41        sched_debug.cpu.cpu_load[3].avg
     61.58 ± 10%      +4.3%      64.21 ±  4%  sched_debug.cpu.cpu_load[3].max
     19.67 ±  5%      -2.5%      19.17 ±  4%  sched_debug.cpu.cpu_load[3].min
      9.21 ± 15%     +12.4%      10.35 ±  7%  sched_debug.cpu.cpu_load[3].stddev
     39.47            -0.8%      39.14        sched_debug.cpu.cpu_load[4].avg
     57.79 ±  5%      +3.1%      59.58 ±  3%  sched_debug.cpu.cpu_load[4].max
     20.12 ±  3%      -2.9%      19.54 ±  5%  sched_debug.cpu.cpu_load[4].min
      8.28 ± 11%     +10.5%       9.15 ±  8%  sched_debug.cpu.cpu_load[4].stddev
      1353            -0.2%       1350        sched_debug.cpu.curr->pid.avg
      4842            -0.1%       4839        sched_debug.cpu.curr->pid.max
      1054 ±  3%      +4.2%       1098        sched_debug.cpu.curr->pid.min
    791.78            -0.2%     789.97        sched_debug.cpu.curr->pid.stddev
     43461 ±  7%      -9.3%      39400        sched_debug.cpu.load.avg
    142424 ± 57%     -53.2%      66622 ± 13%  sched_debug.cpu.load.max
     19604 ±  2%      -0.2%      19565 ±  4%  sched_debug.cpu.load.min
     25791 ± 61%     -54.9%      11630 ± 12%  sched_debug.cpu.load.stddev
    500392            -0.1%     500000        sched_debug.cpu.max_idle_balance_cost.avg
    509412 ±  3%      -1.8%     500000        sched_debug.cpu.max_idle_balance_cost.max
    500000            +0.0%     500000        sched_debug.cpu.max_idle_balance_cost.min
      1880 ±173%    -100.0%       0.00        sched_debug.cpu.max_idle_balance_cost.stddev
      4294            -0.0%       4294        sched_debug.cpu.next_balance.avg
      4294            -0.0%       4294        sched_debug.cpu.next_balance.max
      4294            -0.0%       4294        sched_debug.cpu.next_balance.min
      0.00 ±  4%      +1.4%       0.00 ±  2%  sched_debug.cpu.next_balance.stddev
    164593            -0.2%     164285        sched_debug.cpu.nr_load_updates.avg
    170650            -1.2%     168593        sched_debug.cpu.nr_load_updates.max
    162423            -0.2%     162133        sched_debug.cpu.nr_load_updates.min
      1617 ± 38%     -27.4%       1174 ± 10%  sched_debug.cpu.nr_load_updates.stddev
      1.68            -1.5%       1.66        sched_debug.cpu.nr_running.avg
      2.25 ±  6%     +13.0%       2.54 ± 16%  sched_debug.cpu.nr_running.max
      0.83            +0.0%       0.83        sched_debug.cpu.nr_running.min
      0.38 ±  4%     +14.0%       0.43 ± 14%  sched_debug.cpu.nr_running.stddev
     37976 ±  6%      +1.4%      38521 ±  7%  sched_debug.cpu.nr_switches.avg
    153577 ±  9%      +9.1%     167490 ± 25%  sched_debug.cpu.nr_switches.max
     13031            +0.2%      13057        sched_debug.cpu.nr_switches.min
     36971 ±  7%      +3.2%      38157 ± 22%  sched_debug.cpu.nr_switches.stddev
      0.00 ±100%    +300.0%       0.01        sched_debug.cpu.nr_uninterruptible.avg
      8.96 ±  8%      +6.0%       9.50 ± 28%  sched_debug.cpu.nr_uninterruptible.max
     -6.79           +46.6%      -9.96        sched_debug.cpu.nr_uninterruptible.min
      4.00 ±  4%     +19.0%       4.76 ± 12%  sched_debug.cpu.nr_uninterruptible.stddev
     38229 ±  6%      +1.9%      38941 ±  8%  sched_debug.cpu.sched_count.avg
    152429 ± 10%      +8.0%     164557 ± 24%  sched_debug.cpu.sched_count.max
     12291            +0.1%      12309        sched_debug.cpu.sched_count.min
     38689 ± 11%      +0.9%      39027 ± 21%  sched_debug.cpu.sched_count.stddev
    117.05 ±  8%      +1.2%     118.40 ±  4%  sched_debug.cpu.sched_goidle.avg
      1356 ± 11%     -13.4%       1174 ± 15%  sched_debug.cpu.sched_goidle.max
     10.83 ± 16%     -17.3%       8.96 ± 16%  sched_debug.cpu.sched_goidle.min
    272.20 ± 11%     -10.2%     244.42 ± 12%  sched_debug.cpu.sched_goidle.stddev
     12900 ±  9%      +2.3%      13193 ± 11%  sched_debug.cpu.ttwu_count.avg
     70199 ±  8%     +10.5%      77560 ± 26%  sched_debug.cpu.ttwu_count.max
    545.58 ± 10%      -6.2%     511.92 ± 15%  sched_debug.cpu.ttwu_count.min
     18765 ±  5%      +2.6%      19255 ± 22%  sched_debug.cpu.ttwu_count.stddev
     12003 ± 10%      +2.3%      12285 ± 12%  sched_debug.cpu.ttwu_local.avg
     68106 ±  9%     +10.7%      75417 ± 27%  sched_debug.cpu.ttwu_local.max
    407.67 ± 12%      -2.4%     397.88 ± 17%  sched_debug.cpu.ttwu_local.min
     18217 ±  6%      +2.8%      18731 ± 22%  sched_debug.cpu.ttwu_local.stddev
    196889            -0.0%     196871        sched_debug.cpu_clk
    996147            +0.0%     996147        sched_debug.dl_rq:.dl_bw->bw.avg
    996147            +0.0%     996147        sched_debug.dl_rq:.dl_bw->bw.max
    996147            +0.0%     996147        sched_debug.dl_rq:.dl_bw->bw.min
 4.295e+09            -0.0%  4.295e+09        sched_debug.jiffies
    196889            -0.0%     196871        sched_debug.ktime
      0.00 ±100%     -50.0%       0.00 ±173%  sched_debug.rt_rq:/.rt_nr_migratory.avg
      0.08 ± 99%     -50.0%       0.04 ±173%  sched_debug.rt_rq:/.rt_nr_migratory.max
      0.02 ±100%     -50.0%       0.01 ±173%  sched_debug.rt_rq:/.rt_nr_migratory.stddev
      0.00 ±100%     -50.0%       0.00 ±173%  sched_debug.rt_rq:/.rt_nr_running.avg
      0.08 ± 99%     -50.0%       0.04 ±173%  sched_debug.rt_rq:/.rt_nr_running.max
      0.02 ±100%     -50.0%       0.01 ±173%  sched_debug.rt_rq:/.rt_nr_running.stddev
    950.00            +0.0%     950.00        sched_debug.rt_rq:/.rt_runtime.avg
    950.00            +0.0%     950.00        sched_debug.rt_rq:/.rt_runtime.max
    950.00            +0.0%     950.00        sched_debug.rt_rq:/.rt_runtime.min
      0.07 ±  6%      +6.8%       0.08 ± 11%  sched_debug.rt_rq:/.rt_time.avg
      1.70 ±  6%      +4.9%       1.79 ± 10%  sched_debug.rt_rq:/.rt_time.max
      0.00 ± 63%     +31.3%       0.00 ± 57%  sched_debug.rt_rq:/.rt_time.min
      0.34 ±  6%      +4.9%       0.36 ± 10%  sched_debug.rt_rq:/.rt_time.stddev
    197335            -0.0%     197289        sched_debug.sched_clk
      1.00            +0.0%       1.00        sched_debug.sched_clock_stable()
   4118331            +0.0%    4118331        sched_debug.sysctl_sched.sysctl_sched_features
     24.00            +0.0%      24.00        sched_debug.sysctl_sched.sysctl_sched_latency
      3.00            +0.0%       3.00        sched_debug.sysctl_sched.sysctl_sched_min_granularity
      1.00            +0.0%       1.00        sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
      4.00            +0.0%       4.00        sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
     14.99            -0.6       14.36        perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.shmem_file_read_iter.__vfs_read.vfs_read
     14.59            -0.6       13.98        perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.shmem_file_read_iter.__vfs_read
     20.35            -0.6       19.75        perf-profile.calltrace.cycles-pp.copy_page_to_iter.shmem_file_read_iter.__vfs_read.vfs_read.ksys_pread64
      0.54 ±  2%      -0.5        0.00        perf-profile.calltrace.cycles-pp.selinux_file_permission
     63.83            -0.4       63.41        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
     62.51            -0.4       62.12        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
     39.57            -0.4       39.20        perf-profile.calltrace.cycles-pp.shmem_file_read_iter.__vfs_read.vfs_read.ksys_pread64.do_syscall_64
     11.08            -0.3       10.75        perf-profile.calltrace.cycles-pp.__entry_trampoline_start
     59.21            -0.3       58.91        perf-profile.calltrace.cycles-pp.ksys_pread64.do_syscall_64.entry_SYSCALL_64_after_hwframe
     12.79            -0.3       12.52        perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
     42.25            -0.2       42.03        perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.ksys_pread64.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.43 ±  6%      -0.2        3.28 ±  2%  perf-profile.calltrace.cycles-pp.__fget_light.ksys_pread64.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.15 ±173%      -0.1        0.00        perf-profile.calltrace.cycles-pp.ktime_get_coarse_real_ts64.current_time.atime_needs_update.touch_atime.shmem_file_read_iter
      2.57 ±  6%      -0.1        2.43 ±  2%  perf-profile.calltrace.cycles-pp.__fget.__fget_light.ksys_pread64.do_syscall_64.entry_SYSCALL_64_after_hwframe
      3.98 ±  2%      -0.1        3.88        perf-profile.calltrace.cycles-pp.touch_atime.shmem_file_read_iter.__vfs_read.vfs_read.ksys_pread64
      1.61 ±  4%      -0.1        1.51        perf-profile.calltrace.cycles-pp.current_time.atime_needs_update.touch_atime.shmem_file_read_iter.__vfs_read
      3.07 ±  2%      -0.1        2.97        perf-profile.calltrace.cycles-pp.atime_needs_update.touch_atime.shmem_file_read_iter.__vfs_read.vfs_read
      1.01 ±  2%      -0.1        0.92 ±  5%  perf-profile.calltrace.cycles-pp.fput.ksys_pread64.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.74 ±  6%      -0.1        0.66 ±  2%  perf-profile.calltrace.cycles-pp.ksys_pread64
      0.92            -0.1        0.85        perf-profile.calltrace.cycles-pp.__fsnotify_parent
      1.10 ±  2%      -0.1        1.04        perf-profile.calltrace.cycles-pp.unlock_page.shmem_file_read_iter.__vfs_read.vfs_read.ksys_pread64
      0.59            -0.1        0.54 ±  3%  perf-profile.calltrace.cycles-pp.find_lock_entry
      1.15            -0.0        1.12        perf-profile.calltrace.cycles-pp.__x64_sys_pread64.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.97 ±  2%      -0.0        0.95        perf-profile.calltrace.cycles-pp.page_mapping.find_lock_entry.shmem_getpage_gfp.shmem_file_read_iter.__vfs_read
      0.93            -0.0        0.91        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_stage2
      0.53 ±  2%      -0.0        0.52 ±  2%  perf-profile.calltrace.cycles-pp.__fsnotify_parent.security_file_permission.vfs_read.ksys_pread64.do_syscall_64
      0.78 ±  2%      -0.0        0.78 ±  3%  perf-profile.calltrace.cycles-pp.___might_sleep.copy_page_to_iter.shmem_file_read_iter.__vfs_read.vfs_read
     54.02            -0.0       54.02        perf-profile.calltrace.cycles-pp.vfs_read.ksys_pread64.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.60            +0.0        0.60        perf-profile.calltrace.cycles-pp.shmem_getpage_gfp
      0.84            +0.0        0.88 ±  2%  perf-profile.calltrace.cycles-pp.copy_page_to_iter
      0.64 ±  5%      +0.1        0.70 ±  3%  perf-profile.calltrace.cycles-pp.___might_sleep.find_lock_entry.shmem_getpage_gfp.shmem_file_read_iter.__vfs_read
      0.68 ±  4%      +0.1        0.74 ±  2%  perf-profile.calltrace.cycles-pp.___might_sleep.__might_fault.copy_page_to_iter.shmem_file_read_iter.__vfs_read
      0.59 ±  5%      +0.1        0.66 ±  2%  perf-profile.calltrace.cycles-pp.__indirect_thunk_start
      1.55 ±  2%      +0.1        1.62        perf-profile.calltrace.cycles-pp.__might_fault.copy_page_to_iter.shmem_file_read_iter.__vfs_read.vfs_read
      5.99            +0.1        6.09        perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_pread64.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.57 ±  2%      +0.1        1.69        perf-profile.calltrace.cycles-pp.shmem_file_read_iter
      1.50            +0.1        1.62 ±  3%  perf-profile.calltrace.cycles-pp.__inode_security_revalidate.selinux_file_permission.security_file_permission.vfs_read.ksys_pread64
      0.13 ±173%      +0.1        0.26 ±100%  perf-profile.calltrace.cycles-pp.vfs_read
      4.57            +0.1        4.72        perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_read.ksys_pread64.do_syscall_64
      0.62 ±  2%      +0.2        0.83        perf-profile.calltrace.cycles-pp.mark_page_accessed.shmem_file_read_iter.__vfs_read.vfs_read.ksys_pread64
      3.42 ±  2%      +0.3        3.71        perf-profile.calltrace.cycles-pp.fsnotify.vfs_read.ksys_pread64.do_syscall_64.entry_SYSCALL_64_after_hwframe
      9.01            +0.4        9.38        perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_file_read_iter.__vfs_read.vfs_read.ksys_pread64
      3.21            +0.4        3.63        perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_file_read_iter.__vfs_read
      0.00            +0.4        0.43 ± 57%  perf-profile.calltrace.cycles-pp.___might_sleep.__inode_security_revalidate.selinux_file_permission.security_file_permission.vfs_read
      6.75            +0.5        7.21        perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_file_read_iter.__vfs_read.vfs_read
      0.00            +0.5        0.54 ±  3%  perf-profile.calltrace.cycles-pp.__vfs_read
      0.00            +0.6        0.57 ±  3%  perf-profile.calltrace.cycles-pp.___might_sleep
      0.00            +0.7        0.71 ±  2%  perf-profile.calltrace.cycles-pp.mark_page_accessed
      0.00            +0.8        0.76        perf-profile.calltrace.cycles-pp.xas_start.xas_load.find_get_entry.find_lock_entry.shmem_getpage_gfp
      0.00            +0.8        0.85 ±  2%  perf-profile.calltrace.cycles-pp.xas_load.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_file_read_iter
     14.94            -0.6       14.30        perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
     15.07            -0.6       14.45        perf-profile.children.cycles-pp.copyout
     21.20            -0.6       20.64        perf-profile.children.cycles-pp.copy_page_to_iter
     63.92            -0.4       63.50        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     62.75            -0.4       62.36        perf-profile.children.cycles-pp.do_syscall_64
     59.95            -0.4       59.57        perf-profile.children.cycles-pp.ksys_pread64
     11.09            -0.3       10.76        perf-profile.children.cycles-pp.__entry_trampoline_start
     12.79            -0.3       12.52        perf-profile.children.cycles-pp.syscall_return_via_sysret
     41.15            -0.3       40.90        perf-profile.children.cycles-pp.shmem_file_read_iter
      3.53 ±  6%      -0.2        3.37 ±  2%  perf-profile.children.cycles-pp.__fget_light
      2.64 ±  6%      -0.1        2.49 ±  2%  perf-profile.children.cycles-pp.__fget
     42.72            -0.1       42.59        perf-profile.children.cycles-pp.__vfs_read
      1.76 ±  4%      -0.1        1.64 ±  2%  perf-profile.children.cycles-pp.current_time
      4.10 ±  2%      -0.1        3.99        perf-profile.children.cycles-pp.touch_atime
      1.85 ±  2%      -0.1        1.76        perf-profile.children.cycles-pp.__fsnotify_parent
      3.19 ±  2%      -0.1        3.10        perf-profile.children.cycles-pp.atime_needs_update
      1.12            -0.1        1.04 ±  4%  perf-profile.children.cycles-pp.fput
      0.47 ± 18%      -0.1        0.40 ±  7%  perf-profile.children.cycles-pp.ktime_get_coarse_real_ts64
      0.29 ±  6%      -0.1        0.23 ±  3%  perf-profile.children.cycles-pp.apic_timer_interrupt
      0.29 ±  6%      -0.1        0.23 ±  3%  perf-profile.children.cycles-pp.smp_apic_timer_interrupt
      0.24 ±  7%      -0.1        0.18 ±  4%  perf-profile.children.cycles-pp.hrtimer_interrupt
      1.10 ±  2%      -0.1        1.05        perf-profile.children.cycles-pp.unlock_page
      0.20 ±  6%      -0.0        0.16 ±  5%  perf-profile.children.cycles-pp.__hrtimer_run_queues
      0.06 ± 64%      -0.0        0.02 ±173%  perf-profile.children.cycles-pp.console_unlock
      0.16 ± 13%      -0.0        0.12 ±  5%  perf-profile.children.cycles-pp.tick_sched_timer
      0.05 ± 64%      -0.0        0.01 ±173%  perf-profile.children.cycles-pp.serial8250_console_write
      0.05 ± 64%      -0.0        0.01 ±173%  perf-profile.children.cycles-pp.wait_for_xmitr
      0.05 ± 64%      -0.0        0.01 ±173%  perf-profile.children.cycles-pp.uart_console_write
      0.05 ± 64%      -0.0        0.01 ±173%  perf-profile.children.cycles-pp.serial8250_console_putchar
      1.15            -0.0        1.12        perf-profile.children.cycles-pp.__x64_sys_pread64
      0.14 ± 13%      -0.0        0.11 ±  4%  perf-profile.children.cycles-pp.update_process_times
      1.03            -0.0        1.00 ±  3%  perf-profile.children.cycles-pp._cond_resched
      0.14 ± 13%      -0.0        0.11        perf-profile.children.cycles-pp.tick_sched_handle
      0.98 ±  2%      -0.0        0.95        perf-profile.children.cycles-pp.page_mapping
      0.93            -0.0        0.91        perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
      0.03 ±100%      -0.0        0.00        perf-profile.children.cycles-pp.ktime_get
      0.03 ±100%      -0.0        0.00        perf-profile.children.cycles-pp.__vfs_write
      0.02 ±173%      -0.0        0.00        perf-profile.children.cycles-pp.irq_work_run_list
      0.32 ±  5%      -0.0        0.30 ±  4%  perf-profile.children.cycles-pp.rw_verify_area
      0.03 ±100%      -0.0        0.01 ±173%  perf-profile.children.cycles-pp.vfs_write
      0.01 ±173%      -0.0        0.00        perf-profile.children.cycles-pp.update_load_avg
      0.01 ±173%      -0.0        0.00        perf-profile.children.cycles-pp.io_serial_in
      0.01 ±173%      -0.0        0.00        perf-profile.children.cycles-pp.irq_exit
      0.08 ± 15%      -0.0        0.07 ±  5%  perf-profile.children.cycles-pp.task_tick_fair
      0.05 ± 61%      -0.0        0.04 ± 59%  perf-profile.children.cycles-pp.write
      0.04 ±106%      -0.0        0.04 ±107%  perf-profile.children.cycles-pp.ret_from_fork
      0.04 ±106%      -0.0        0.04 ±107%  perf-profile.children.cycles-pp.kthread
      0.04 ±103%      -0.0        0.03 ±105%  perf-profile.children.cycles-pp.worker_thread
      0.04 ±103%      -0.0        0.03 ±105%  perf-profile.children.cycles-pp.process_one_work
      0.09 ±  8%      -0.0        0.09 ±  4%  perf-profile.children.cycles-pp.scheduler_tick
      0.28 ± 11%      +0.0        0.28        perf-profile.children.cycles-pp.iov_iter_init
      0.08 ±  8%      +0.0        0.08 ±  8%  perf-profile.children.cycles-pp.avc_policy_seqno
      0.03 ±100%      +0.0        0.03 ±100%  perf-profile.children.cycles-pp.ksys_write
      0.06 ± 16%      +0.0        0.06 ±  6%  perf-profile.children.cycles-pp.__fdget
      0.00            +0.0        0.01 ±173%  perf-profile.children.cycles-pp.fb_flashcursor
      0.00            +0.0        0.01 ±173%  perf-profile.children.cycles-pp.bit_cursor
      0.00            +0.0        0.01 ±173%  perf-profile.children.cycles-pp.soft_cursor
      0.00            +0.0        0.01 ±173%  perf-profile.children.cycles-pp.mga_dirty_update
      0.00            +0.0        0.01 ±173%  perf-profile.children.cycles-pp.memcpy_erms
      0.46 ± 11%      +0.0        0.47 ±  5%  perf-profile.children.cycles-pp.timespec64_trunc
      1.69            +0.1        1.74        perf-profile.children.cycles-pp.__might_fault
     54.49            +0.1       54.54        perf-profile.children.cycles-pp.vfs_read
      0.79 ±  3%      +0.1        0.84 ±  2%  perf-profile.children.cycles-pp.__indirect_thunk_start
      0.61 ±  2%      +0.1        0.66 ±  2%  perf-profile.children.cycles-pp.rcu_all_qs
      1.92            +0.1        1.98        perf-profile.children.cycles-pp.__might_sleep
      5.11            +0.1        5.18        perf-profile.children.cycles-pp.selinux_file_permission
      6.39            +0.1        6.47        perf-profile.children.cycles-pp.security_file_permission
      1.84            +0.1        1.97 ±  3%  perf-profile.children.cycles-pp.__inode_security_revalidate
      3.66            +0.4        4.03        perf-profile.children.cycles-pp.fsnotify
      9.61            +0.4        9.99        perf-profile.children.cycles-pp.shmem_getpage_gfp
      2.96 ±  3%      +0.4        3.35 ±  2%  perf-profile.children.cycles-pp.___might_sleep
      7.34            +0.4        7.75        perf-profile.children.cycles-pp.find_lock_entry
      3.38            +0.4        3.79        perf-profile.children.cycles-pp.find_get_entry
      0.37            +0.5        0.85        perf-profile.children.cycles-pp.xas_start
      0.57 ±  2%      +0.5        1.09        perf-profile.children.cycles-pp.xas_load
      0.72 ±  2%      +0.8        1.54        perf-profile.children.cycles-pp.mark_page_accessed
     14.88            -0.6       14.27        perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
     11.09            -0.3       10.76        perf-profile.self.cycles-pp.__entry_trampoline_start
     12.79            -0.3       12.52        perf-profile.self.cycles-pp.syscall_return_via_sysret
      2.63 ±  6%      -0.1        2.48 ±  2%  perf-profile.self.cycles-pp.__fget
      1.31 ±  3%      -0.1        1.18 ±  3%  perf-profile.self.cycles-pp.ksys_pread64
      2.08            -0.1        1.99        perf-profile.self.cycles-pp.vfs_read
      1.83 ±  2%      -0.1        1.75        perf-profile.self.cycles-pp.__fsnotify_parent
      2.85            -0.1        2.77        perf-profile.self.cycles-pp.find_get_entry
      2.85            -0.1        2.77        perf-profile.self.cycles-pp.shmem_getpage_gfp
      0.47 ± 18%      -0.1        0.40 ±  7%  perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
      6.02            -0.1        5.94        perf-profile.self.cycles-pp.shmem_file_read_iter
      1.11            -0.1        1.04 ±  4%  perf-profile.self.cycles-pp.fput
      2.23 ±  2%      -0.1        2.17        perf-profile.self.cycles-pp.do_syscall_64
      1.29 ±  3%      -0.1        1.23        perf-profile.self.cycles-pp.security_file_permission
      1.09 ±  2%      -0.1        1.04        perf-profile.self.cycles-pp.unlock_page
      1.85            -0.1        1.80        perf-profile.self.cycles-pp.find_lock_entry
      3.54            -0.0        3.50        perf-profile.self.cycles-pp.selinux_file_permission
      0.78            -0.0        0.74 ±  3%  perf-profile.self.cycles-pp._cond_resched
      1.32            -0.0        1.28 ±  2%  perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
      1.15            -0.0        1.11        perf-profile.self.cycles-pp.__x64_sys_pread64
      0.93 ±  5%      -0.0        0.90 ±  5%  perf-profile.self.cycles-pp.current_time
      0.98 ±  2%      -0.0        0.95        perf-profile.self.cycles-pp.page_mapping
      0.93            -0.0        0.91        perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
      0.96 ±  7%      -0.0        0.93 ±  4%  perf-profile.self.cycles-pp.__fget_light
      0.58 ±  3%      -0.0        0.56 ±  3%  perf-profile.self.cycles-pp.__might_fault
      0.32 ±  5%      -0.0        0.30 ±  4%  perf-profile.self.cycles-pp.rw_verify_area
      0.01 ±173%      -0.0        0.00        perf-profile.self.cycles-pp.io_serial_in
      0.01 ±173%      -0.0        0.00        perf-profile.self.cycles-pp.ktime_get
      0.48 ±  3%      -0.0        0.47 ±  4%  perf-profile.self.cycles-pp.copyout
      0.28 ± 11%      -0.0        0.28        perf-profile.self.cycles-pp.iov_iter_init
      1.00 ±  4%      -0.0        1.00 ±  3%  perf-profile.self.cycles-pp.touch_atime
      0.08 ±  8%      +0.0        0.08 ±  8%  perf-profile.self.cycles-pp.avc_policy_seqno
      0.06 ± 16%      +0.0        0.06 ±  6%  perf-profile.self.cycles-pp.__fdget
      0.00            +0.0        0.01 ±173%  perf-profile.self.cycles-pp.memcpy_erms
      0.45 ± 11%      +0.0        0.47 ±  4%  perf-profile.self.cycles-pp.timespec64_trunc
      1.55            +0.0        1.58        perf-profile.self.cycles-pp.atime_needs_update
      0.78            +0.0        0.82 ±  3%  perf-profile.self.cycles-pp.__inode_security_revalidate
      2.77            +0.0        2.81        perf-profile.self.cycles-pp.copy_page_to_iter
      0.61 ±  2%      +0.0        0.66 ±  3%  perf-profile.self.cycles-pp.rcu_all_qs
      0.79 ±  3%      +0.1        0.84        perf-profile.self.cycles-pp.__indirect_thunk_start
      1.92            +0.1        1.98        perf-profile.self.cycles-pp.__might_sleep
      0.27            +0.1        0.33 ±  3%  perf-profile.self.cycles-pp.xas_load
      2.85 ±  2%      +0.2        3.09        perf-profile.self.cycles-pp.__vfs_read
      2.96 ±  3%      +0.4        3.33 ±  2%  perf-profile.self.cycles-pp.___might_sleep
      3.64            +0.4        4.02        perf-profile.self.cycles-pp.fsnotify
      0.37            +0.5        0.84 ±  2%  perf-profile.self.cycles-pp.xas_start
      0.72            +0.8        1.53        perf-profile.self.cycles-pp.mark_page_accessed


                                                                                
                            will-it-scale.per_thread_ops                        
                                                                                
  690000 +-+----------------------------------------------------------------+   
         |                                                                  |   
  685000 +-+              .++. +.+.+                           +.++. .++   .|   
         |           .++.+    +     :                   .++. .+     +   + + |   
  680000 +-++.   +. +               +. +.   +. +.      +    +            +  |   
         | +  + +  +                  +  +. : +  + .+.+                     |   
  675000 +-+   +                           +      +                         |   
         |                                                                  |   
  670000 +-+                                                                |   
         |                                                                  |   
  665000 +-+                                                                |   
         |                                                                  |   
  660000 O-OO OO O OO OO O O  OO                                            |   
         |                  O                                               |   
  655000 +-+----------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                                will-it-scale.workload                          
                                                                                
  3.02e+07 +-+--------------------------------------------------------------+   
           |            .++.+  +.++ :                          + +  +.++   .|   
     3e+07 +-+        ++             :                   +.+. +         + + |   
           |. +.+  + +               +. +.+  +. +.+    .+    +           +  |   
  2.98e+07 +-+   :+ +                  +   + : +   :.++                     |   
           |     +                          +      +                        |   
  2.96e+07 +-+                                                              |   
           |                                                                |   
  2.94e+07 +-+                                                              |   
           |                                                                |   
  2.92e+07 +-+                                                              |   
           |                   O O                                          |   
   2.9e+07 O-OO OO OO OO OO O O                                             |   
           |                                                                |   
  2.88e+07 +-+--------------------------------------------------------------+   
                                                                                
                                                                                
[*] bisect-good sample
[O] bisect-bad  sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong

View attachment "config-4.18.0-rc2-03440-gb3ab449" of type "text/plain" (166254 bytes)

View attachment "job-script" of type "text/plain" (6845 bytes)

View attachment "job.yaml" of type "text/plain" (4517 bytes)

View attachment "reproduce" of type "text/plain" (299 bytes)
