Message-ID: <20200927004119.GR28663@shao2-debian>
Date:   Sun, 27 Sep 2020 08:41:19 +0800
From:   kernel test robot <rong.a.chen@...el.com>
To:     Peter Xu <peterx@...hat.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Jason Gunthorpe <jgg@...pe.ca>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jan Kara <jack@...e.cz>, Michal Hocko <mhocko@...e.com>,
        Kirill Tkhai <ktkhai@...tuozzo.com>,
        Kirill Shutemov <kirill@...temov.name>,
        Hugh Dickins <hughd@...gle.com>, Peter Xu <peterx@...hat.com>,
        Christoph Hellwig <hch@....de>,
        Andrea Arcangeli <aarcange@...hat.com>,
        John Hubbard <jhubbard@...dia.com>,
        Oleg Nesterov <oleg@...hat.com>,
        Leon Romanovsky <leonro@...dia.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Jann Horn <jannh@...gle.com>, 0day robot <lkp@...el.com>,
        lkp@...ts.01.org, ying.huang@...el.com, feng.tang@...el.com,
        zhengjun.xing@...el.com
Subject: [mm] 698ac7610f: will-it-scale.per_thread_ops 8.2% improvement

Greetings,

FYI, we noticed an 8.2% improvement in will-it-scale.per_thread_ops due to commit:


commit: 698ac7610f7928ddfa44a0736e89d776579d8b82 ("[PATCH 1/5] mm: Introduce mm_struct.has_pinned")
url: https://github.com/0day-ci/linux/commits/Peter-Xu/mm-Break-COW-for-pinned-pages-during-fork/20200922-052211
base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git bcf876870b95592b52519ed4aafcf9d95999bc9c
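
For context, this commit is the first patch of the "mm: Break COW for pinned pages during fork" series: it marks an mm that has ever pinned pages so that fork() can later copy such pages eagerly instead of sharing them copy-on-write. The fragment below is only a rough, self-contained illustration of that idea using assumed names (mm_struct_sketch, mark_mm_has_pinned, fork_must_copy_page); it is not the kernel patch itself, whose field layout, memory ordering and call sites differ.

/*
 * Illustrative-only sketch of the "has_pinned" idea; names and types
 * here are assumptions, not the actual kernel patch.
 */
#include <stdatomic.h>
#include <stdbool.h>

struct mm_struct_sketch {
	/* set once this mm has ever had a page pinned (e.g. via FOLL_PIN) */
	atomic_int has_pinned;
};

/* called from a pinning path (assumed entry point) */
static void mark_mm_has_pinned(struct mm_struct_sketch *mm)
{
	if (!atomic_load_explicit(&mm->has_pinned, memory_order_relaxed))
		atomic_store_explicit(&mm->has_pinned, 1, memory_order_relaxed);
}

/*
 * During fork(), a page that may be pinned in the parent has to be
 * copied eagerly instead of shared copy-on-write, so the pinned
 * mapping keeps pointing at the same physical page.
 */
static bool fork_must_copy_page(struct mm_struct_sketch *parent_mm,
				bool page_maybe_dma_pinned)
{
	return atomic_load_explicit(&parent_mm->has_pinned,
				    memory_order_relaxed) &&
	       page_maybe_dma_pinned;
}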

in testcase: will-it-scale
on test machine: 96-thread Intel(R) Xeon(R) CPU @ 2.30GHz with 128G memory
with the following parameters:

	nr_task: 100%
	mode: thread
	test: mmap2
	cpufreq_governor: performance
	ucode: 0x4002f01

test-description: Will It Scale takes a testcase and runs it with 1 through n parallel copies to see whether the testcase scales. It builds both a process-based and a thread-based variant of each test in order to show any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
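
The mmap2 testcase stresses the mmap/munmap path, consistent with the perf profile below, which is dominated by rwsem/osq_lock time under down_write_killable. A minimal user-space sketch of that kind of per-thread loop is shown here; the 128MB mapping size, the fixed 5-second measurement window and the ops accounting are assumptions for illustration and do not reproduce the actual will-it-scale harness.

/*
 * Minimal sketch of an mmap/munmap scalability loop in the spirit of
 * will-it-scale's mmap2 testcase.  Sizes and timing are assumptions.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE (128UL * 1024 * 1024)	/* assumed per-iteration mapping size */

static atomic_ullong total_ops;
static atomic_int stop;

static void *worker(void *arg)
{
	(void)arg;
	while (!atomic_load(&stop)) {
		/* each iteration takes and drops the mmap write lock twice */
		void *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			break;
		}
		munmap(p, MAP_SIZE);
		atomic_fetch_add(&total_ops, 1);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	int nr_threads = argc > 1 ? atoi(argv[1]) : 4;
	pthread_t *tids = calloc(nr_threads, sizeof(*tids));

	for (int i = 0; i < nr_threads; i++)
		pthread_create(&tids[i], NULL, worker, NULL);

	sleep(5);		/* assumed measurement interval */
	atomic_store(&stop, 1);

	for (int i = 0; i < nr_threads; i++)
		pthread_join(tids[i], NULL);

	printf("per_thread_ops: %llu\n",
	       (unsigned long long)atomic_load(&total_ops) / nr_threads);
	free(tids);
	return 0;
}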





Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
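
The parameters listed above are carried in the attached job.yaml. As a rough orientation only (the attached file is authoritative, and the exact key layout of LKP job files is assumed here), the relevant fragment looks approximately like:

testcase: will-it-scale
nr_task: 100%
mode: thread
test: mmap2
cpufreq_governor: performance
ucode: "0x4002f01"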

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-8.3/thread/100%/debian-10.4-x86_64-20200603.cgz/lkp-csl-2sp4/mmap2/will-it-scale/0x4002f01

commit: 
  v5.8
  698ac7610f ("mm: Introduce mm_struct.has_pinned")

            v5.8 698ac7610f7928ddfa44a0736e8 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      2003            +8.2%       2168        will-it-scale.per_thread_ops
    192350            +8.3%     208245        will-it-scale.workload
      2643 ± 33%     -36.3%       1683        meminfo.Active(file)
      3.88 ±  2%      -0.6        3.25 ±  2%  mpstat.cpu.all.idle%
      0.00 ±  3%      -0.0        0.00 ±  8%  mpstat.cpu.all.iowait%
    307629 ±  3%     +10.5%     340075 ±  3%  numa-numastat.node0.local_node
     15503 ± 60%     +60.2%      24839 ± 25%  numa-numastat.node1.other_node
    161670           -58.7%      66739 ±  4%  vmstat.system.cs
    209406            -3.9%     201176        vmstat.system.in
    364.00 ± 23%     -53.8%     168.00        slabinfo.pid_namespace.active_objs
    364.00 ± 23%     -53.8%     168.00        slabinfo.pid_namespace.num_objs
    985.50 ± 11%     +14.6%       1129 ±  8%  slabinfo.task_group.active_objs
    985.50 ± 11%     +14.6%       1129 ±  8%  slabinfo.task_group.num_objs
    660.25 ± 33%     -36.3%     420.25        proc-vmstat.nr_active_file
    302055            +2.4%     309211        proc-vmstat.nr_file_pages
    281010            +2.6%     288385        proc-vmstat.nr_unevictable
    660.25 ± 33%     -36.3%     420.25        proc-vmstat.nr_zone_active_file
    281010            +2.6%     288385        proc-vmstat.nr_zone_unevictable
  20640832 ± 16%     +40.0%   28888446 ±  6%  cpuidle.C1.time
   1743036 ±  6%     -54.5%     792376 ±  4%  cpuidle.C1.usage
 5.048e+08 ± 54%     -98.3%    8642335 ±  2%  cpuidle.C6.time
    706531 ± 51%     -94.9%      36224        cpuidle.C6.usage
  38313880           -56.5%   16666274 ±  5%  cpuidle.POLL.time
  18289550           -59.1%    7488947 ±  5%  cpuidle.POLL.usage
    302.94 ±  5%     -32.9%     203.13 ±  6%  sched_debug.cfs_rq:/.exec_clock.stddev
     31707 ±  6%     -43.6%      17867 ±  6%  sched_debug.cfs_rq:/.min_vruntime.stddev
      0.77 ± 22%     +41.1%       1.09 ±  4%  sched_debug.cfs_rq:/.nr_spread_over.avg
    163292 ± 15%     -75.8%      39543 ± 24%  sched_debug.cfs_rq:/.spread0.avg
    220569 ± 13%     -65.9%      75287 ± 16%  sched_debug.cfs_rq:/.spread0.max
     -1903         +1952.7%     -39073        sched_debug.cfs_rq:/.spread0.min
     31680 ±  6%     -43.7%      17850 ±  6%  sched_debug.cfs_rq:/.spread0.stddev
    698820 ±  2%     -28.4%     500526 ±  3%  sched_debug.cpu.avg_idle.avg
   1100275 ±  3%      -7.2%    1020875 ±  3%  sched_debug.cpu.avg_idle.max
    250869           -58.4%     104239 ±  4%  sched_debug.cpu.nr_switches.avg
    766741 ± 25%     -64.8%     269919 ± 10%  sched_debug.cpu.nr_switches.max
    111347 ± 16%     -50.8%      54786 ± 11%  sched_debug.cpu.nr_switches.min
    108077 ± 11%     -67.3%      35316 ±  8%  sched_debug.cpu.nr_switches.stddev
    262769           -59.1%     107346 ±  4%  sched_debug.cpu.sched_count.avg
    800567 ± 25%     -65.9%     272755 ± 10%  sched_debug.cpu.sched_count.max
    115870 ± 15%     -51.6%      56034 ± 11%  sched_debug.cpu.sched_count.min
    112678 ± 11%     -67.8%      36309 ±  9%  sched_debug.cpu.sched_count.stddev
    122760           -59.2%      50082 ±  4%  sched_debug.cpu.sched_goidle.avg
    372289 ± 25%     -65.9%     126854 ± 10%  sched_debug.cpu.sched_goidle.max
     53911 ± 15%     -51.7%      26040 ± 11%  sched_debug.cpu.sched_goidle.min
     52986 ± 12%     -67.7%      17092 ±  9%  sched_debug.cpu.sched_goidle.stddev
    138914           -59.1%      56816 ±  4%  sched_debug.cpu.ttwu_count.avg
    168089           -57.1%      72151 ±  4%  sched_debug.cpu.ttwu_count.max
    123046           -61.1%      47837 ±  4%  sched_debug.cpu.ttwu_count.min
     11828 ± 15%     -39.8%       7114 ±  4%  sched_debug.cpu.ttwu_count.stddev
      3050 ±  6%     -34.3%       2004 ±  6%  sched_debug.cpu.ttwu_local.max
    383.97 ±  6%     -30.1%     268.54 ± 12%  sched_debug.cpu.ttwu_local.stddev
     51.21            -0.4       50.79        perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
     51.24            -0.4       50.82        perf-profile.calltrace.cycles-pp.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.84 ±  8%      -0.1        0.76 ±  4%  perf-profile.calltrace.cycles-pp.secondary_startup_64
      0.83 ±  8%      -0.1        0.74 ±  4%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
      0.83 ±  8%      -0.1        0.74 ±  4%  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
      0.83 ±  8%      -0.1        0.74 ±  4%  perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
     47.27            +0.5       47.79        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
     47.27            +0.5       47.79        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
     47.26            +0.5       47.78        perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
     47.25            +0.5       47.78        perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
     47.28            +0.5       47.81        perf-profile.calltrace.cycles-pp.__munmap
     46.75            +0.5       47.30        perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write_killable.__vm_munmap.__x64_sys_munmap.do_syscall_64
     46.24            +0.5       46.79        perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.__vm_munmap
     46.77            +0.6       47.33        perf-profile.calltrace.cycles-pp.down_write_killable.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
     46.70            +0.6       47.26        perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.__vm_munmap.__x64_sys_munmap
      0.84 ±  8%      -0.1        0.76 ±  4%  perf-profile.children.cycles-pp.secondary_startup_64
      0.84 ±  8%      -0.1        0.76 ±  4%  perf-profile.children.cycles-pp.cpu_startup_entry
      0.84 ±  8%      -0.1        0.76 ±  4%  perf-profile.children.cycles-pp.do_idle
      0.83 ±  8%      -0.1        0.74 ±  4%  perf-profile.children.cycles-pp.start_secondary
      0.11 ±  4%      -0.1        0.04 ± 57%  perf-profile.children.cycles-pp.__sched_text_start
      0.14 ±  3%      -0.1        0.08 ±  6%  perf-profile.children.cycles-pp.rwsem_wake
      0.10 ±  4%      -0.0        0.06 ±  9%  perf-profile.children.cycles-pp.wake_up_q
      0.10 ±  4%      -0.0        0.05        perf-profile.children.cycles-pp.try_to_wake_up
      0.07 ±  7%      -0.0        0.05        perf-profile.children.cycles-pp._raw_spin_lock_irqsave
      0.24 ±  2%      +0.0        0.26        perf-profile.children.cycles-pp.mmap_region
      0.42 ±  2%      +0.0        0.45        perf-profile.children.cycles-pp.do_mmap
      0.67            +0.0        0.71 ±  2%  perf-profile.children.cycles-pp.rwsem_spin_on_owner
     97.96            +0.1       98.09        perf-profile.children.cycles-pp.rwsem_down_write_slowpath
     98.02            +0.1       98.14        perf-profile.children.cycles-pp.down_write_killable
     97.86            +0.2       98.02        perf-profile.children.cycles-pp.rwsem_optimistic_spin
     96.90            +0.2       97.07        perf-profile.children.cycles-pp.osq_lock
     47.26            +0.5       47.78        perf-profile.children.cycles-pp.__x64_sys_munmap
     47.28            +0.5       47.81        perf-profile.children.cycles-pp.__munmap
     47.25            +0.5       47.78        perf-profile.children.cycles-pp.__vm_munmap
      0.24            -0.0        0.19 ±  4%  perf-profile.self.cycles-pp.rwsem_optimistic_spin
      0.06            -0.0        0.03 ±100%  perf-profile.self.cycles-pp._raw_spin_lock_irqsave
      0.09            +0.0        0.10        perf-profile.self.cycles-pp.find_vma
      0.65            +0.0        0.70 ±  2%  perf-profile.self.cycles-pp.rwsem_spin_on_owner
     96.31            +0.2       96.53        perf-profile.self.cycles-pp.osq_lock
      0.66           -21.0%       0.52 ±  2%  perf-stat.i.MPKI
      0.11            +0.0        0.11        perf-stat.i.branch-miss-rate%
  14774053            +4.3%   15410076        perf-stat.i.branch-misses
     41.17            +0.9       42.09        perf-stat.i.cache-miss-rate%
  19999033           -18.7%   16264313 ±  3%  perf-stat.i.cache-misses
  48696879           -20.4%   38779964 ±  2%  perf-stat.i.cache-references
    162553           -58.9%      66790 ±  4%  perf-stat.i.context-switches
    157.36            -3.9%     151.20        perf-stat.i.cpu-migrations
     14271           +23.9%      17676 ±  3%  perf-stat.i.cycles-between-cache-misses
      0.00 ±  8%      +0.0        0.00 ±  4%  perf-stat.i.dTLB-load-miss-rate%
    155048 ±  5%     +46.3%     226826 ±  4%  perf-stat.i.dTLB-load-misses
      0.00 ±  2%      +0.0        0.01        perf-stat.i.dTLB-store-miss-rate%
     16761           +19.2%      19972 ±  2%  perf-stat.i.dTLB-store-misses
 4.118e+08           -15.7%   3.47e+08        perf-stat.i.dTLB-stores
     93.99            +1.4       95.42        perf-stat.i.iTLB-load-miss-rate%
   4103535 ±  3%      -6.7%    3827881        perf-stat.i.iTLB-load-misses
    263239 ±  4%     -28.6%     188037 ±  2%  perf-stat.i.iTLB-loads
     18498 ±  3%      +7.5%      19885        perf-stat.i.instructions-per-iTLB-miss
      0.31          +226.8%       1.01 ±  3%  perf-stat.i.metric.K/sec
     88.51            +1.8       90.30        perf-stat.i.node-load-miss-rate%
   8104644           -13.5%    7009101        perf-stat.i.node-load-misses
   1047577           -28.3%     751122        perf-stat.i.node-loads
   3521008           -13.1%    3058055        perf-stat.i.node-store-misses
      0.64           -20.6%       0.51 ±  2%  perf-stat.overall.MPKI
      0.10            +0.0        0.10        perf-stat.overall.branch-miss-rate%
     41.07            +0.9       41.92        perf-stat.overall.cache-miss-rate%
     14273           +23.7%      17662 ±  3%  perf-stat.overall.cycles-between-cache-misses
      0.00 ±  5%      +0.0        0.00 ±  5%  perf-stat.overall.dTLB-load-miss-rate%
      0.00 ±  2%      +0.0        0.01 ±  2%  perf-stat.overall.dTLB-store-miss-rate%
     93.95            +1.3       95.29        perf-stat.overall.iTLB-load-miss-rate%
     18507 ±  3%      +7.5%      19890        perf-stat.overall.instructions-per-iTLB-miss
     88.55            +1.8       90.32        perf-stat.overall.node-load-miss-rate%
 1.191e+08            -7.0%  1.107e+08        perf-stat.overall.path-length
  14739033            +4.2%   15364552        perf-stat.ps.branch-misses
  19930054           -18.7%   16209879 ±  3%  perf-stat.ps.cache-misses
  48535919           -20.3%   38662093 ±  2%  perf-stat.ps.cache-references
    161841           -58.9%      66477 ±  4%  perf-stat.ps.context-switches
    157.06            -3.9%     150.89        perf-stat.ps.cpu-migrations
    154906 ±  5%     +46.1%     226269 ±  5%  perf-stat.ps.dTLB-load-misses
     16730           +19.1%      19927 ±  2%  perf-stat.ps.dTLB-store-misses
 4.104e+08           -15.7%  3.459e+08        perf-stat.ps.dTLB-stores
   4089203 ±  3%      -6.7%    3814320        perf-stat.ps.iTLB-load-misses
    263134 ±  4%     -28.4%     188441 ±  2%  perf-stat.ps.iTLB-loads
   8076488           -13.5%    6984923        perf-stat.ps.node-load-misses
   1044002           -28.3%     748425        perf-stat.ps.node-loads
   3509068           -13.1%    3048190        perf-stat.ps.node-store-misses
      2270 ±170%     -98.9%      24.50 ±166%  interrupts.36:PCI-MSI.31981569-edge.i40e-eth0-TxRx-0
   3730764 ±  2%     -62.0%    1416300 ±  4%  interrupts.CAL:Function_call_interrupts
      2270 ±170%     -98.9%      24.25 ±168%  interrupts.CPU0.36:PCI-MSI.31981569-edge.i40e-eth0-TxRx-0
    111116 ± 29%     -67.4%      36227 ±  8%  interrupts.CPU0.CAL:Function_call_interrupts
     11091 ± 38%     -62.2%       4195 ± 12%  interrupts.CPU0.RES:Rescheduling_interrupts
     48001 ± 26%     -53.7%      22228 ± 14%  interrupts.CPU1.CAL:Function_call_interrupts
      4189 ± 29%     -43.2%       2378 ±  5%  interrupts.CPU1.RES:Rescheduling_interrupts
     34024 ± 32%     -44.7%      18804 ±  4%  interrupts.CPU10.CAL:Function_call_interrupts
     26171 ± 19%     -26.1%      19334 ±  3%  interrupts.CPU11.CAL:Function_call_interrupts
      2370 ± 13%     -20.8%       1876 ± 18%  interrupts.CPU11.RES:Rescheduling_interrupts
     30568 ± 23%     -45.9%      16537 ±  4%  interrupts.CPU12.CAL:Function_call_interrupts
      3094 ± 27%     -46.9%       1643 ±  8%  interrupts.CPU12.RES:Rescheduling_interrupts
    514.50 ± 10%     +35.6%     697.75 ± 16%  interrupts.CPU12.TLB:TLB_shootdowns
     29819 ± 32%     -41.7%      17393        interrupts.CPU13.CAL:Function_call_interrupts
     35364 ± 38%     -49.6%      17819 ±  9%  interrupts.CPU14.CAL:Function_call_interrupts
    694.75 ± 11%     +23.2%     856.00 ±  9%  interrupts.CPU14.TLB:TLB_shootdowns
     38361 ± 48%     -49.8%      19273 ±  9%  interrupts.CPU16.CAL:Function_call_interrupts
     30069 ± 23%     -31.6%      20559 ±  3%  interrupts.CPU17.CAL:Function_call_interrupts
    809.75 ±  7%     -19.5%     651.75 ± 13%  interrupts.CPU17.TLB:TLB_shootdowns
     28245 ± 29%     -33.1%      18894 ±  6%  interrupts.CPU18.CAL:Function_call_interrupts
     33560 ± 23%     -48.2%      17369 ±  6%  interrupts.CPU19.CAL:Function_call_interrupts
      2863 ± 19%     -40.3%       1709 ± 13%  interrupts.CPU19.RES:Rescheduling_interrupts
     47118 ± 32%     -55.7%      20868 ±  3%  interrupts.CPU2.CAL:Function_call_interrupts
      3897 ± 38%     -48.9%       1991 ±  7%  interrupts.CPU2.RES:Rescheduling_interrupts
     34735 ± 29%     -41.7%      20246 ±  9%  interrupts.CPU20.CAL:Function_call_interrupts
     37232 ± 23%     -46.6%      19883 ± 12%  interrupts.CPU21.CAL:Function_call_interrupts
     32345 ± 16%     -38.6%      19845 ±  6%  interrupts.CPU22.CAL:Function_call_interrupts
     34083 ± 22%     -43.4%      19301 ±  6%  interrupts.CPU23.CAL:Function_call_interrupts
     61308 ± 16%     -76.3%      14529 ± 13%  interrupts.CPU24.CAL:Function_call_interrupts
      6610 ± 26%     -76.3%       1568 ± 11%  interrupts.CPU24.RES:Rescheduling_interrupts
     51384 ± 32%     -75.0%      12848 ± 12%  interrupts.CPU25.CAL:Function_call_interrupts
      4643 ± 39%     -70.6%       1366 ±  9%  interrupts.CPU25.RES:Rescheduling_interrupts
     48788 ± 25%     -71.7%      13826 ± 17%  interrupts.CPU26.CAL:Function_call_interrupts
      4076 ± 32%     -70.5%       1203 ± 18%  interrupts.CPU26.RES:Rescheduling_interrupts
     45702 ± 14%     -70.7%      13369 ± 12%  interrupts.CPU27.CAL:Function_call_interrupts
      3614 ± 21%     -67.3%       1180 ± 16%  interrupts.CPU27.RES:Rescheduling_interrupts
     51216 ± 14%     -71.6%      14546 ± 15%  interrupts.CPU28.CAL:Function_call_interrupts
      4395 ± 24%     -67.9%       1410 ± 21%  interrupts.CPU28.RES:Rescheduling_interrupts
    614.25 ± 18%     +33.3%     818.75 ± 17%  interrupts.CPU28.TLB:TLB_shootdowns
     44945 ± 23%     -66.5%      15059 ± 14%  interrupts.CPU29.CAL:Function_call_interrupts
      3994 ± 34%     -68.2%       1271 ± 10%  interrupts.CPU29.RES:Rescheduling_interrupts
     39154 ± 24%     -41.6%      22857 ±  6%  interrupts.CPU3.CAL:Function_call_interrupts
     45674 ± 11%     -68.3%      14470 ±  8%  interrupts.CPU30.CAL:Function_call_interrupts
      4097 ± 23%     -68.8%       1278 ± 10%  interrupts.CPU30.RES:Rescheduling_interrupts
     51890 ± 13%     -72.6%      14227 ± 16%  interrupts.CPU31.CAL:Function_call_interrupts
      4557 ± 26%     -71.4%       1305 ± 21%  interrupts.CPU31.RES:Rescheduling_interrupts
     41324 ± 23%     -76.0%       9933 ± 11%  interrupts.CPU32.CAL:Function_call_interrupts
      3284 ± 33%     -73.4%     873.75 ± 15%  interrupts.CPU32.RES:Rescheduling_interrupts
     39758 ± 31%     -74.5%      10120 ± 17%  interrupts.CPU33.CAL:Function_call_interrupts
      3373 ± 42%     -74.2%     869.00 ± 15%  interrupts.CPU33.RES:Rescheduling_interrupts
    513.00 ± 27%     +46.0%     748.75 ± 16%  interrupts.CPU33.TLB:TLB_shootdowns
     40015 ± 14%     -72.8%      10885 ±  8%  interrupts.CPU34.CAL:Function_call_interrupts
      3402 ± 25%     -68.2%       1080 ± 13%  interrupts.CPU34.RES:Rescheduling_interrupts
    635.25 ± 22%     +49.3%     948.25 ± 13%  interrupts.CPU34.TLB:TLB_shootdowns
     45251 ± 19%     -75.2%      11204 ± 17%  interrupts.CPU35.CAL:Function_call_interrupts
      3731 ± 31%     -73.4%     992.50 ± 20%  interrupts.CPU35.RES:Rescheduling_interrupts
     43390 ± 11%     -78.3%       9434 ± 15%  interrupts.CPU36.CAL:Function_call_interrupts
      3536 ± 23%     -77.3%     803.75 ± 14%  interrupts.CPU36.RES:Rescheduling_interrupts
     39820 ± 11%     -75.9%       9613 ± 10%  interrupts.CPU37.CAL:Function_call_interrupts
      2987 ± 21%     -70.8%     873.25 ±  9%  interrupts.CPU37.RES:Rescheduling_interrupts
     42969 ± 32%     -76.6%      10068 ± 17%  interrupts.CPU38.CAL:Function_call_interrupts
      3202 ± 36%     -74.4%     818.50 ± 27%  interrupts.CPU38.RES:Rescheduling_interrupts
     35571 ± 16%     -72.4%       9822 ±  9%  interrupts.CPU39.CAL:Function_call_interrupts
      2986 ± 24%     -73.9%     778.75 ± 15%  interrupts.CPU39.RES:Rescheduling_interrupts
     45001 ± 21%     -48.2%      23317 ±  7%  interrupts.CPU4.CAL:Function_call_interrupts
      3689 ± 24%     -43.6%       2080 ±  2%  interrupts.CPU4.RES:Rescheduling_interrupts
     39302 ± 21%     -73.4%      10463 ±  8%  interrupts.CPU40.CAL:Function_call_interrupts
      2968 ± 32%     -71.4%     848.50 ± 18%  interrupts.CPU40.RES:Rescheduling_interrupts
     40826 ± 19%     -75.3%      10070 ± 10%  interrupts.CPU41.CAL:Function_call_interrupts
      3321 ± 30%     -70.9%     967.25 ± 10%  interrupts.CPU41.RES:Rescheduling_interrupts
    700.25 ± 18%     +26.2%     883.75 ± 17%  interrupts.CPU41.TLB:TLB_shootdowns
     35368 ± 11%     -73.7%       9308 ± 15%  interrupts.CPU42.CAL:Function_call_interrupts
      2839 ± 12%     -70.3%     844.50 ± 13%  interrupts.CPU42.RES:Rescheduling_interrupts
     45459 ± 25%     -78.7%       9687 ± 11%  interrupts.CPU43.CAL:Function_call_interrupts
      3703 ± 29%     -74.1%     959.50 ± 16%  interrupts.CPU43.RES:Rescheduling_interrupts
     41495 ± 15%     -77.1%       9522 ± 12%  interrupts.CPU44.CAL:Function_call_interrupts
      3153 ± 28%     -75.0%     789.75 ± 15%  interrupts.CPU44.RES:Rescheduling_interrupts
     38501 ± 26%     -72.5%      10601 ± 14%  interrupts.CPU45.CAL:Function_call_interrupts
      3024 ± 38%     -73.8%     791.00 ± 19%  interrupts.CPU45.RES:Rescheduling_interrupts
     39083 ± 35%     -73.6%      10323 ± 18%  interrupts.CPU46.CAL:Function_call_interrupts
      3173 ± 37%     -73.9%     829.75 ± 24%  interrupts.CPU46.RES:Rescheduling_interrupts
     44486 ± 20%     -75.3%      10968 ± 15%  interrupts.CPU47.CAL:Function_call_interrupts
      3773 ± 34%     -76.4%     890.25 ± 24%  interrupts.CPU47.RES:Rescheduling_interrupts
     34967 ± 42%     -51.0%      17117 ± 10%  interrupts.CPU48.CAL:Function_call_interrupts
     31969 ± 38%     -51.7%      15432 ± 12%  interrupts.CPU49.CAL:Function_call_interrupts
     33786 ± 12%     -29.2%      23918 ±  5%  interrupts.CPU5.CAL:Function_call_interrupts
      3014 ± 16%     -33.2%       2012 ±  9%  interrupts.CPU5.RES:Rescheduling_interrupts
     30514 ± 29%     -46.4%      16343 ±  6%  interrupts.CPU51.CAL:Function_call_interrupts
     34448 ± 26%     -48.7%      17686 ±  4%  interrupts.CPU52.CAL:Function_call_interrupts
      2811 ± 34%     -42.0%       1631 ±  4%  interrupts.CPU52.RES:Rescheduling_interrupts
     30848 ± 33%     -47.9%      16059 ±  3%  interrupts.CPU54.CAL:Function_call_interrupts
     31017 ± 22%     -52.7%      14676 ±  7%  interrupts.CPU55.CAL:Function_call_interrupts
      2501 ± 41%     -41.8%       1455 ± 10%  interrupts.CPU55.RES:Rescheduling_interrupts
     28249 ± 23%     -46.9%      14997 ± 10%  interrupts.CPU56.CAL:Function_call_interrupts
      2113 ± 18%     -36.0%       1352 ± 15%  interrupts.CPU56.RES:Rescheduling_interrupts
     27658 ± 20%     -49.3%      14034 ±  3%  interrupts.CPU57.CAL:Function_call_interrupts
     26559 ± 34%     -40.6%      15778 ± 11%  interrupts.CPU58.CAL:Function_call_interrupts
     27984 ± 27%     -39.9%      16815 ± 12%  interrupts.CPU59.CAL:Function_call_interrupts
     35098 ± 33%     -37.5%      21921        interrupts.CPU6.CAL:Function_call_interrupts
      3073 ± 37%     -40.6%       1825 ±  8%  interrupts.CPU6.RES:Rescheduling_interrupts
     29248 ± 38%     -48.2%      15149 ±  6%  interrupts.CPU60.CAL:Function_call_interrupts
     30880 ± 33%     -52.3%      14722 ± 10%  interrupts.CPU61.CAL:Function_call_interrupts
     31218 ± 43%     -51.5%      15152 ±  3%  interrupts.CPU62.CAL:Function_call_interrupts
     29210 ± 40%     -46.5%      15627 ±  6%  interrupts.CPU63.CAL:Function_call_interrupts
     26813 ± 15%     -39.0%      16343 ± 13%  interrupts.CPU64.CAL:Function_call_interrupts
     24791 ± 14%     -32.6%      16704 ± 10%  interrupts.CPU67.CAL:Function_call_interrupts
     29638 ± 33%     -42.9%      16914 ±  9%  interrupts.CPU68.CAL:Function_call_interrupts
     36247 ± 33%     -48.5%      18670 ±  8%  interrupts.CPU69.CAL:Function_call_interrupts
     30379 ± 24%     -30.6%      21096 ±  5%  interrupts.CPU7.CAL:Function_call_interrupts
      3027 ± 25%     -32.0%       2059 ±  3%  interrupts.CPU7.RES:Rescheduling_interrupts
     31064 ± 25%     -42.8%      17774        interrupts.CPU70.CAL:Function_call_interrupts
     52949 ± 14%     -77.0%      12198 ± 10%  interrupts.CPU72.CAL:Function_call_interrupts
      4057 ± 23%     -75.7%     985.00 ±  8%  interrupts.CPU72.RES:Rescheduling_interrupts
     42694 ± 22%     -73.6%      11281 ± 16%  interrupts.CPU73.CAL:Function_call_interrupts
      3318 ± 39%     -74.3%     851.75 ± 16%  interrupts.CPU73.RES:Rescheduling_interrupts
     49143 ± 24%     -76.1%      11756 ± 12%  interrupts.CPU74.CAL:Function_call_interrupts
      3606 ± 31%     -73.8%     946.00 ± 12%  interrupts.CPU74.RES:Rescheduling_interrupts
     50587 ± 24%     -72.5%      13930 ± 20%  interrupts.CPU75.CAL:Function_call_interrupts
      3655 ± 36%     -71.1%       1056 ± 12%  interrupts.CPU75.RES:Rescheduling_interrupts
     57791 ± 21%     -78.4%      12488 ± 11%  interrupts.CPU76.CAL:Function_call_interrupts
      5109 ± 37%     -79.7%       1037 ± 20%  interrupts.CPU76.RES:Rescheduling_interrupts
     52455 ± 26%     -75.4%      12922 ±  5%  interrupts.CPU77.CAL:Function_call_interrupts
      3997 ± 37%     -73.9%       1043 ± 14%  interrupts.CPU77.RES:Rescheduling_interrupts
     49188 ± 21%     -74.5%      12521 ± 10%  interrupts.CPU78.CAL:Function_call_interrupts
      3867 ± 42%     -74.5%     986.25 ± 18%  interrupts.CPU78.RES:Rescheduling_interrupts
     45517 ± 19%     -72.6%      12484 ± 19%  interrupts.CPU79.CAL:Function_call_interrupts
      3369 ± 34%     -71.9%     946.25 ± 20%  interrupts.CPU79.RES:Rescheduling_interrupts
     30702 ± 22%     -39.9%      18462 ±  9%  interrupts.CPU8.CAL:Function_call_interrupts
      2580 ± 28%     -39.9%       1550 ± 10%  interrupts.CPU8.RES:Rescheduling_interrupts
     35561 ± 30%     -69.8%      10728 ± 12%  interrupts.CPU80.CAL:Function_call_interrupts
      2675 ± 44%     -70.6%     787.50 ±  7%  interrupts.CPU80.RES:Rescheduling_interrupts
     38762 ± 33%     -73.3%      10349 ± 18%  interrupts.CPU81.CAL:Function_call_interrupts
      2892 ± 48%     -70.5%     853.25 ± 20%  interrupts.CPU81.RES:Rescheduling_interrupts
     46500 ± 39%     -80.2%       9203 ±  6%  interrupts.CPU82.CAL:Function_call_interrupts
      3726 ± 41%     -83.3%     622.75 ± 12%  interrupts.CPU82.RES:Rescheduling_interrupts
     42125 ± 25%     -76.0%      10103 ±  7%  interrupts.CPU83.CAL:Function_call_interrupts
      3275 ± 40%     -75.4%     804.50 ±  6%  interrupts.CPU83.RES:Rescheduling_interrupts
     37359 ± 28%     -74.7%       9436 ±  7%  interrupts.CPU84.CAL:Function_call_interrupts
      2762 ± 45%     -71.1%     797.50 ± 17%  interrupts.CPU84.RES:Rescheduling_interrupts
     38900 ± 13%     -76.2%       9272 ±  8%  interrupts.CPU85.CAL:Function_call_interrupts
      2704 ± 27%     -77.0%     622.25 ± 10%  interrupts.CPU85.RES:Rescheduling_interrupts
     40662 ± 24%     -77.2%       9274 ± 14%  interrupts.CPU86.CAL:Function_call_interrupts
      3139 ± 39%     -79.5%     643.00 ± 28%  interrupts.CPU86.RES:Rescheduling_interrupts
     33538 ± 23%     -71.7%       9484 ± 14%  interrupts.CPU87.CAL:Function_call_interrupts
      2406 ± 40%     -73.5%     638.25 ± 21%  interrupts.CPU87.RES:Rescheduling_interrupts
     36240 ± 26%     -73.8%       9499 ± 10%  interrupts.CPU88.CAL:Function_call_interrupts
      2450 ± 39%     -70.5%     721.75 ±  5%  interrupts.CPU88.RES:Rescheduling_interrupts
     41267 ± 29%     -77.1%       9463 ± 11%  interrupts.CPU89.CAL:Function_call_interrupts
      3286 ± 34%     -73.2%     879.50 ± 17%  interrupts.CPU89.RES:Rescheduling_interrupts
     36038 ± 18%     -50.6%      17796 ±  3%  interrupts.CPU9.CAL:Function_call_interrupts
      3140 ± 28%     -48.3%       1622 ±  9%  interrupts.CPU9.RES:Rescheduling_interrupts
     38534 ± 25%     -77.5%       8675 ±  9%  interrupts.CPU90.CAL:Function_call_interrupts
      3008 ± 27%     -79.1%     629.50 ± 16%  interrupts.CPU90.RES:Rescheduling_interrupts
     38422 ± 29%     -77.2%       8741 ± 14%  interrupts.CPU91.CAL:Function_call_interrupts
      3095 ± 51%     -78.7%     658.75 ± 14%  interrupts.CPU91.RES:Rescheduling_interrupts
     38120 ± 45%     -73.6%      10059 ± 10%  interrupts.CPU92.CAL:Function_call_interrupts
      2711 ± 61%     -73.3%     722.75 ± 10%  interrupts.CPU92.RES:Rescheduling_interrupts
     37155 ± 19%     -74.1%       9628 ± 12%  interrupts.CPU93.CAL:Function_call_interrupts
      2724 ± 32%     -70.4%     806.00 ± 32%  interrupts.CPU93.RES:Rescheduling_interrupts
     43458 ± 15%     -77.1%       9936 ± 10%  interrupts.CPU94.CAL:Function_call_interrupts
      2832 ± 25%     -76.8%     655.75 ± 18%  interrupts.CPU94.RES:Rescheduling_interrupts
     54226 ± 22%     -76.8%      12596 ± 17%  interrupts.CPU95.CAL:Function_call_interrupts
      4437 ± 37%     -80.6%     860.50 ± 11%  interrupts.CPU95.RES:Rescheduling_interrupts
    302853 ±  2%     -58.2%     126676 ±  2%  interrupts.RES:Rescheduling_interrupts


                                                                                
                           will-it-scale.per_thread_ops                         
                                                                                
  2300 +--------------------------------------------------------------------+   
       |                                                                    |   
  2250 |-+  O    O O                   O  O                                 |   
  2200 |-+                 O O       O                                 O    |   
       |                        O           O      O  O      O  O O  O      |   
  2150 |-O    O         O         O           O         O                   |   
       |              O                          O         O              O |   
  2100 |-+    +                                                             |   
       |     : +        +              +..      .+.                         |   
  2050 |.+   :  + .+.. + :             :  +   +.   +..  +                   |   
  2000 |-+..+    +    +   :           :    + +         +                    |   
       |                  :           :     +         +                     |   
  1950 |-+                 +.+..  +..+                                      |   
       |                         +                                          |   
  1900 +--------------------------------------------------------------------+   
                                                                                
                                                                                
[*] bisect-good sample
[O] bisect-bad  sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Rong Chen


View attachment "config-5.8.0-00001-g698ac7610f7928" of type "text/plain" (169434 bytes)

View attachment "job-script" of type "text/plain" (7989 bytes)

View attachment "job.yaml" of type "text/plain" (5431 bytes)

View attachment "reproduce" of type "text/plain" (336 bytes)
