Date:   Thu, 2 Jul 2020 17:11:58 +0800
From:   kernel test robot <rong.a.chen@...el.com>
To:     Muchun Song <songmuchun@...edance.com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Roman Gushchin <guro@...com>, Michal Hocko <mhocko@...e.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Qian Cai <cai@....pw>,
        Andrew Morton <akpm@...ux-foundation.org>,
        LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org
Subject: [mm/memcontrol.c] 3a98990ae2: will-it-scale.per_process_ops 17.6%
 improvement

Greetings,

FYI, we noticed a 17.6% improvement in will-it-scale.per_process_ops due to commit:


commit: 3a98990ae2150277ed34d3b248c60e68bf2244b2 ("mm/memcontrol.c: add missed css_put()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
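
The commit title suggests a cgroup-subsystem-state reference leak: presumably some path acquired a css reference without a matching css_put(), so the refcount never dropped back and the object could not be released. A minimal illustrative sketch of the general get/put pattern (hypothetical names, not the actual mm/memcontrol.c code):

```python
# Illustrative sketch of the get/put refcount pattern behind a "missed put"
# bug; hypothetical names, NOT the actual kernel code.

class RefCounted:
    def __init__(self):
        self.refs = 1          # creator holds the initial reference
        self.released = False

    def get(self):
        self.refs += 1

    def put(self):
        self.refs -= 1
        if self.refs == 0:
            self.released = True   # object can finally be freed

def buggy_user(obj):
    obj.get()
    # ... use obj ...
    # BUG: no obj.put() here -- the reference leaks

def fixed_user(obj):
    obj.get()
    try:
        pass  # ... use obj ...
    finally:
        obj.put()  # the "missed put" a fix like this commit adds

leaky, fixed = RefCounted(), RefCounted()
buggy_user(leaky); leaky.put()   # even after the creator drops its ref...
fixed_user(fixed); fixed.put()
print(leaky.released, fixed.released)  # ...the leaky object is never released
```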

in testcase: will-it-scale
on test machine: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
with following parameters:

	nr_task: 50%
	mode: process
	test: page_fault3
	cpufreq_governor: performance
	ucode: 0x5002f01

test-description: Will It Scale takes a testcase and runs it from 1 through n parallel copies to see whether the testcase scales. It builds both a process-based and a threads-based variant of each test so any differences between the two can be observed.
test-url: https://github.com/antonblanchard/will-it-scale
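
The page_fault3 workload, broadly, writes to every page of a shared file-backed mapping in a loop, so each pass takes one write fault per page. A rough single-iteration sketch of that access pattern (a Python approximation; the real testcase is C in the will-it-scale repository):

```python
# Rough sketch of a page_fault3-style access pattern: write one byte to
# every page of a shared file-backed mapping. Approximation only.
import mmap
import tempfile

PAGE = mmap.PAGESIZE
NPAGES = 64                      # the real test uses a much larger region

def touch_pages_once():
    """Write to each page of a fresh shared mapping; return pages touched."""
    with tempfile.TemporaryFile() as f:
        f.truncate(NPAGES * PAGE)
        with mmap.mmap(f.fileno(), NPAGES * PAGE,
                       access=mmap.ACCESS_WRITE) as m:
            for off in range(0, NPAGES * PAGE, PAGE):
                m[off] = 1       # each write can fault in a page
    return NPAGES

print(touch_pages_once())        # one "iteration"; the benchmark loops this
```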





Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
  gcc-9/performance/x86_64-rhel-7.6/process/50%/debian-x86_64-20191114.cgz/lkp-csl-2ap3/page_fault3/will-it-scale/0x5002f01

commit: 
  cd324edce5 ("mm: memcontrol: handle div0 crash race condition in memory.low")
  3a98990ae2 ("mm/memcontrol.c: add missed css_put()")

cd324edce598ebdd 3a98990ae2150277ed34d3b248c 
---------------- --------------------------- 
       fail:runs  %reproduction    fail:runs
           |             |             |    
         22:4           88%          26:4     perf-profile.calltrace.cycles-pp.sync_regs.error_entry
         25:4          100%          30:4     perf-profile.calltrace.cycles-pp.error_entry
          0:4            1%           0:4     perf-profile.children.cycles-pp.error_return
         27:4          107%          31:4     perf-profile.children.cycles-pp.error_entry
          3:4           14%           3:4     perf-profile.self.cycles-pp.error_entry
         %stddev     %change         %stddev
             \          |                \  
    816090           +17.6%     959398        will-it-scale.per_process_ops
  78344698           +17.6%   92102246        will-it-scale.workload
   1386196 ±  4%     +11.5%    1545940 ±  4%  meminfo.DirectMap4k
     10.35 ±  4%      -2.6        7.73 ±  2%  mpstat.cpu.all.usr%
    583.84            +9.7%     640.64 ±  6%  sched_debug.cfs_rq:/.util_avg.avg
     14192 ±  6%      -5.2%      13456 ±  3%  slabinfo.skbuff_head_cache.active_objs
      9.50 ±  9%     -26.3%       7.00        vmstat.cpu.us
      2239            +2.9%       2303        vmstat.system.cs
     83616 ± 59%     -66.1%      28322 ±173%  numa-meminfo.node2.AnonHugePages
    146070 ± 33%     -62.7%      54505 ±112%  numa-meminfo.node2.AnonPages
      7688 ±  8%     -12.3%       6742 ±  3%  numa-meminfo.node2.PageTables
      6070 ± 18%     -79.7%       1232 ±168%  proc-vmstat.numa_hint_faults_local
  50826584           +16.1%   58997489        proc-vmstat.numa_hit
  50733426           +16.1%   58904352        proc-vmstat.numa_local
  50995327           +16.1%   59188836        proc-vmstat.pgalloc_normal
 2.356e+10           +17.5%  2.769e+10        proc-vmstat.pgfault
  48137715           +16.2%   55946671        proc-vmstat.pgfree
  12465628           +17.8%   14686105        numa-numastat.node0.local_node
  12496560           +17.8%   14717057        numa-numastat.node0.numa_hit
  12774189           +15.8%   14794996        numa-numastat.node1.local_node
  12797480           +15.8%   14818293        numa-numastat.node1.numa_hit
  12839343           +15.1%   14784197        numa-numastat.node2.local_node
  12862682           +15.2%   14815225        numa-numastat.node2.numa_hit
  12715592           +15.8%   14728262        numa-numastat.node3.local_node
  12731168           +15.7%   14736092        numa-numastat.node3.numa_hit
   7054845           +15.4%    8144671        numa-vmstat.node0.numa_hit
   6935425           +15.7%    8025335        numa-vmstat.node0.numa_local
   7156041           +14.8%    8214026        numa-vmstat.node1.numa_hit
   7116597           +14.9%    8174605        numa-vmstat.node1.numa_local
     36535 ± 33%     -62.7%      13635 ±112%  numa-vmstat.node2.nr_anon_pages
      1923 ±  8%     -12.4%       1684 ±  3%  numa-vmstat.node2.nr_page_table_pages
   7217832           +13.1%    8162065        numa-vmstat.node2.numa_hit
   7090042           +13.2%    8026683        numa-vmstat.node2.numa_local
   7142761           +14.6%    8184149        numa-vmstat.node3.numa_hit
   7022610           +14.9%    8071610        numa-vmstat.node3.numa_local
    488.25 ± 14%     +64.7%     804.00 ± 41%  interrupts.CPU0.CAL:Function_call_interrupts
      7269 ± 34%     -40.5%       4329 ± 58%  interrupts.CPU1.NMI:Non-maskable_interrupts
      7269 ± 34%     -40.5%       4329 ± 58%  interrupts.CPU1.PMI:Performance_monitoring_interrupts
      2143 ± 33%    +152.5%       5412 ± 39%  interrupts.CPU102.NMI:Non-maskable_interrupts
      2143 ± 33%    +152.5%       5412 ± 39%  interrupts.CPU102.PMI:Performance_monitoring_interrupts
    122.50 ± 77%     -96.5%       4.25 ±115%  interrupts.CPU138.RES:Rescheduling_interrupts
     64.75 ±108%     -91.9%       5.25 ± 51%  interrupts.CPU140.RES:Rescheduling_interrupts
      8729           -58.9%       3584 ± 34%  interrupts.CPU151.NMI:Non-maskable_interrupts
      8729           -58.9%       3584 ± 34%  interrupts.CPU151.PMI:Performance_monitoring_interrupts
      3236 ± 19%    +101.0%       6503 ± 37%  interrupts.CPU186.NMI:Non-maskable_interrupts
      3236 ± 19%    +101.0%       6503 ± 37%  interrupts.CPU186.PMI:Performance_monitoring_interrupts
      2857          +171.6%       7761 ± 15%  interrupts.CPU188.NMI:Non-maskable_interrupts
      2857          +171.6%       7761 ± 15%  interrupts.CPU188.PMI:Performance_monitoring_interrupts
    122.75 ± 76%     -66.8%      40.75 ±154%  interrupts.CPU20.RES:Rescheduling_interrupts
      8729           -39.8%       5257 ± 47%  interrupts.CPU3.NMI:Non-maskable_interrupts
      8729           -39.8%       5257 ± 47%  interrupts.CPU3.PMI:Performance_monitoring_interrupts
    800.50           +25.6%       1005 ± 19%  interrupts.CPU38.CAL:Function_call_interrupts
      2866          +166.3%       7633 ± 24%  interrupts.CPU55.NMI:Non-maskable_interrupts
      2866          +166.3%       7633 ± 24%  interrupts.CPU55.PMI:Performance_monitoring_interrupts
    110.25 ± 95%     -63.5%      40.25 ±156%  interrupts.CPU58.RES:Rescheduling_interrupts
      7637 ± 24%     -38.5%       4699 ± 51%  interrupts.CPU6.NMI:Non-maskable_interrupts
      7637 ± 24%     -38.5%       4699 ± 51%  interrupts.CPU6.PMI:Performance_monitoring_interrupts
    804.75         +1360.1%      11750 ±159%  interrupts.CPU7.CAL:Function_call_interrupts
    801.50           +62.1%       1299 ± 60%  interrupts.CPU75.CAL:Function_call_interrupts
      2877 ± 36%    +126.1%       6507 ± 37%  interrupts.CPU84.NMI:Non-maskable_interrupts
      2877 ± 36%    +126.1%       6507 ± 37%  interrupts.CPU84.PMI:Performance_monitoring_interrupts
      2891 ± 35%    +113.5%       6172 ± 42%  interrupts.CPU88.NMI:Non-maskable_interrupts
      2891 ± 35%    +113.5%       6172 ± 42%  interrupts.CPU88.PMI:Performance_monitoring_interrupts
    157.25 ± 56%     -74.1%      40.75 ±153%  interrupts.CPU90.RES:Rescheduling_interrupts
      6542 ± 33%     -47.4%       3441 ± 16%  interrupts.CPU92.NMI:Non-maskable_interrupts
      6542 ± 33%     -47.4%       3441 ± 16%  interrupts.CPU92.PMI:Performance_monitoring_interrupts
    177.00 ± 40%     -96.9%       5.50 ± 37%  interrupts.CPU93.RES:Rescheduling_interrupts
     16478 ±  5%     -10.7%      14710 ±  2%  softirqs.CPU1.RCU
     15434 ±  6%     -10.7%      13788 ±  3%  softirqs.CPU107.RCU
     15727 ±  8%      -9.4%      14256 ±  9%  softirqs.CPU112.RCU
     15720 ±  7%     -11.4%      13927 ±  8%  softirqs.CPU114.RCU
     16890 ±  3%     -15.1%      14340 ± 10%  softirqs.CPU127.RCU
     16036 ±  3%     -15.1%      13607 ±  8%  softirqs.CPU138.RCU
     17390 ± 73%    +119.5%      38170 ± 11%  softirqs.CPU138.SCHED
     11134 ± 45%     +44.5%      16084 ± 16%  softirqs.CPU139.SCHED
    104957 ± 10%     -10.0%      94436 ±  4%  softirqs.CPU149.TIMER
      5856 ± 26%    +315.3%      24324 ± 47%  softirqs.CPU150.SCHED
     96752 ±  4%     +15.3%     111562 ± 12%  softirqs.CPU150.TIMER
     16447 ±  4%     -15.7%      13869 ±  9%  softirqs.CPU151.RCU
     10162 ± 92%    +182.7%      28733 ± 28%  softirqs.CPU151.SCHED
     16631 ±  4%      -8.9%      15143 ±  5%  softirqs.CPU161.RCU
     15476 ±  7%     -14.2%      13277 ±  5%  softirqs.CPU170.RCU
     15192 ±  6%     -13.5%      13143 ±  6%  softirqs.CPU173.RCU
     15535 ±  6%     -13.8%      13389 ±  6%  softirqs.CPU175.RCU
     15214 ±  4%     -13.1%      13221 ±  7%  softirqs.CPU176.RCU
     36899 ± 12%     -59.7%      14881 ± 76%  softirqs.CPU189.SCHED
    119647 ±  8%     -16.1%     100430 ±  9%  softirqs.CPU189.TIMER
     15761 ±  7%      -8.7%      14396 ±  4%  softirqs.CPU22.RCU
     16909 ±  8%      -9.9%      15237 ±  8%  softirqs.CPU25.RCU
     16714 ±  3%     -13.5%      14456 ±  4%  softirqs.CPU3.RCU
     12479 ± 33%     +98.0%      24709 ± 41%  softirqs.CPU3.SCHED
     16843 ±  7%     -12.3%      14765 ±  6%  softirqs.CPU30.RCU
    109803 ±  6%     -15.1%      93182 ±  2%  softirqs.CPU42.TIMER
     16129 ±  4%     -13.3%      13978 ±  9%  softirqs.CPU49.RCU
     21022 ± 61%     +52.4%      32031 ± 28%  softirqs.CPU5.SCHED
     15860 ±  6%     -14.0%      13636 ±  9%  softirqs.CPU53.RCU
     37063 ±  6%     -46.3%      19910 ± 55%  softirqs.CPU54.SCHED
     16373 ±  4%      -9.5%      14820 ±  5%  softirqs.CPU6.RCU
     16710 ±  8%     -10.8%      14904 ±  2%  softirqs.CPU66.RCU
     16491 ±  7%      -9.5%      14932 ±  8%  softirqs.CPU68.RCU
     17296 ±  3%     -14.0%      14875 ±  8%  softirqs.CPU72.RCU
     15282 ±  2%      -8.7%      13953 ±  6%  softirqs.CPU75.RCU
     16402           -10.0%      14769 ±  5%  softirqs.CPU76.RCU
     16116 ±  5%     -12.8%      14055 ±  3%  softirqs.CPU78.RCU
     14928 ±  5%      -9.7%      13473 ±  7%  softirqs.CPU81.RCU
     15305 ±  5%      -9.7%      13824 ±  8%  softirqs.CPU85.RCU
     15870 ±  6%     -10.8%      14152 ±  8%  softirqs.CPU91.RCU
     16314 ±  4%     -12.2%      14319 ±  7%  softirqs.CPU92.RCU
     16131           -16.0%      13548 ± 11%  softirqs.CPU93.RCU
      5823 ± 59%    +403.2%      29303 ± 39%  softirqs.CPU93.SCHED
     96719 ±  6%     +21.3%     117354 ± 10%  softirqs.CPU93.TIMER
     28006 ± 30%     -35.0%      18196 ± 35%  softirqs.CPU97.SCHED
      2.21           -15.6%       1.87        perf-stat.i.MPKI
   4.2e+10           +17.4%   4.93e+10        perf-stat.i.branch-instructions
 1.052e+08           +17.2%  1.233e+08        perf-stat.i.branch-misses
     42.41            +1.0       43.37        perf-stat.i.cache-miss-rate%
 1.929e+08            +1.4%  1.955e+08        perf-stat.i.cache-misses
      2211 ±  2%      +2.9%       2275        perf-stat.i.context-switches
      1.45           -14.8%       1.23        perf-stat.i.cpi
      1548            -1.3%       1527        perf-stat.i.cycles-between-cache-misses
   2740421 ±  3%     +16.7%    3197740 ±  4%  perf-stat.i.dTLB-load-misses
 5.784e+10           +17.4%  6.791e+10        perf-stat.i.dTLB-loads
 2.339e+09           +17.9%  2.759e+09        perf-stat.i.dTLB-store-misses
 2.937e+10           +17.4%  3.448e+10        perf-stat.i.dTLB-stores
 2.403e+08           +16.1%  2.791e+08        perf-stat.i.iTLB-load-misses
   2707233            -3.0%    2626312 ±  2%  perf-stat.i.iTLB-loads
 2.067e+11           +17.4%  2.427e+11        perf-stat.i.instructions
      0.69           +17.4%       0.81        perf-stat.i.ipc
      0.01           +11.6%       0.01 ± 13%  perf-stat.i.metric.K/sec
    688.78           +17.3%     808.24        perf-stat.i.metric.M/sec
  77964830           +17.5%   91630478        perf-stat.i.minor-faults
     16.06            -9.2        6.87 ±  7%  perf-stat.i.node-load-miss-rate%
   2444096           -56.3%    1068500 ±  8%  perf-stat.i.node-load-misses
  13040793           +15.9%   15113454        perf-stat.i.node-loads
   5876763           +17.7%    6915066        perf-stat.i.node-store-misses
  78577487           +17.4%   92247555        perf-stat.i.node-stores
  77964830           +17.5%   91630478        perf-stat.i.page-faults
      2.20           -15.6%       1.85        perf-stat.overall.MPKI
     42.50            +1.0       43.47        perf-stat.overall.cache-miss-rate%
      1.44           -14.8%       1.23        perf-stat.overall.cpi
      1545            -1.4%       1524        perf-stat.overall.cycles-between-cache-misses
      0.69           +17.4%       0.81        perf-stat.overall.ipc
     15.78            -9.2        6.60 ±  8%  perf-stat.overall.node-load-miss-rate%
 4.186e+10           +17.4%  4.914e+10        perf-stat.ps.branch-instructions
 1.048e+08           +17.2%  1.228e+08        perf-stat.ps.branch-misses
 1.924e+08            +1.4%   1.95e+08        perf-stat.ps.cache-misses
      2186            +2.9%       2250        perf-stat.ps.context-switches
   2725763 ±  3%     +16.9%    3187700 ±  4%  perf-stat.ps.dTLB-load-misses
 5.764e+10           +17.4%  6.769e+10        perf-stat.ps.dTLB-loads
 2.331e+09           +17.9%   2.75e+09        perf-stat.ps.dTLB-store-misses
 2.927e+10           +17.4%  3.437e+10        perf-stat.ps.dTLB-stores
 2.395e+08           +16.1%  2.781e+08        perf-stat.ps.iTLB-load-misses
   2696794            -3.0%    2616040 ±  2%  perf-stat.ps.iTLB-loads
 2.061e+11           +17.4%  2.419e+11        perf-stat.ps.instructions
  77707624           +17.5%   91329166        perf-stat.ps.minor-faults
   2435827           -56.3%    1064249 ±  8%  perf-stat.ps.node-load-misses
  12997344           +15.9%   15069284        perf-stat.ps.node-loads
   5856974           +17.7%    6892404        perf-stat.ps.node-store-misses
  78316534           +17.4%   91943807        perf-stat.ps.node-stores
  77707624           +17.5%   91329166        perf-stat.ps.page-faults
 6.224e+13           +17.4%  7.306e+13        perf-stat.total.instructions
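
The derived perf-stat metrics are simple ratios of the raw counters: IPC is instructions per cycle, CPI its reciprocal, and the headline percentage follows from the raw per_process_ops counts. A quick check of that arithmetic (values copied from the tables above, rounded as printed there):

```python
# Sanity-check the derived numbers using values from the tables above.
before_ops, after_ops = 816090, 959398        # will-it-scale.per_process_ops
change = (after_ops - before_ops) / before_ops * 100
print(f"{change:.1f}%")                       # matches the 17.6% in the subject

ipc_before, ipc_after = 0.69, 0.81            # perf-stat.overall.ipc
# CPI is just the reciprocal of IPC:
print(f"{1/ipc_before:.2f} -> {1/ipc_after:.2f}")  # matches .cpi 1.44/1.45 -> 1.23
```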
      7.59            -4.3        3.27 ± 12%  perf-profile.calltrace.cycles-pp.__count_memcg_events.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
     30.10            -3.6       26.53 ± 11%  perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
     35.03            -2.8       32.28 ± 10%  perf-profile.calltrace.cycles-pp.do_user_addr_fault.exc_page_fault.asm_exc_page_fault
      5.96            -1.8        4.20 ± 11%  perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.do_fault.__handle_mm_fault
      2.66            -1.6        1.09 ± 10%  perf-profile.calltrace.cycles-pp.lock_page_memcg.page_add_file_rmap.alloc_set_pte.finish_fault.do_fault
      7.71            -1.5        6.18 ± 11%  perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault
      8.04            -1.5        6.55 ± 11%  perf-profile.calltrace.cycles-pp.finish_fault.do_fault.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
      3.62            -0.3        3.30 ± 11%  perf-profile.calltrace.cycles-pp.page_remove_rmap.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
      1.22            -0.2        1.01 ±  9%  perf-profile.calltrace.cycles-pp.__mod_memcg_state.__mod_lruvec_state.page_remove_rmap.unmap_page_range.unmap_vmas
      1.28            -0.2        1.10 ± 11%  perf-profile.calltrace.cycles-pp.__mod_memcg_state.__mod_lruvec_state.page_add_file_rmap.alloc_set_pte.finish_fault
      1.89            -0.1        1.75 ± 10%  perf-profile.calltrace.cycles-pp.__mod_lruvec_state.page_remove_rmap.unmap_page_range.unmap_vmas.unmap_region
     31.64            +5.6       37.29 ± 17%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
     31.64            +5.6       37.29 ± 17%  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
     31.64            +5.6       37.29 ± 17%  perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
     31.63            +5.7       37.29 ± 17%  perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
     31.63            +5.7       37.29 ± 17%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
     31.99            +5.7       37.70 ± 17%  perf-profile.calltrace.cycles-pp.secondary_startup_64
     31.62            +5.8       37.41 ± 18%  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
     15.54            -4.7       10.87 ± 10%  perf-profile.children.cycles-pp.native_irq_return_iret
      7.59            -4.3        3.27 ± 12%  perf-profile.children.cycles-pp.__count_memcg_events
     30.28            -3.5       26.75 ± 10%  perf-profile.children.cycles-pp.handle_mm_fault
     35.11            -2.7       32.36 ± 10%  perf-profile.children.cycles-pp.do_user_addr_fault
      3.36            -1.9        1.50 ± 10%  perf-profile.children.cycles-pp.lock_page_memcg
      6.01            -1.7        4.27 ± 11%  perf-profile.children.cycles-pp.page_add_file_rmap
      7.89            -1.5        6.39 ± 11%  perf-profile.children.cycles-pp.alloc_set_pte
      8.08            -1.5        6.60 ± 11%  perf-profile.children.cycles-pp.finish_fault
      2.52            -0.4        2.14 ± 10%  perf-profile.children.cycles-pp.__mod_memcg_state
      3.69            -0.3        3.39 ± 11%  perf-profile.children.cycles-pp.page_remove_rmap
      0.31            -0.1        0.21 ± 10%  perf-profile.children.cycles-pp.__unlock_page_memcg
      0.05 ±  8%      +0.0        0.07 ±  6%  perf-profile.children.cycles-pp.native_set_pte_at
      0.36            +0.1        0.44 ±  9%  perf-profile.children.cycles-pp.__set_page_dirty_no_writeback
     31.64            +5.6       37.29 ± 17%  perf-profile.children.cycles-pp.start_secondary
     31.99            +5.7       37.70 ± 17%  perf-profile.children.cycles-pp.secondary_startup_64
     31.99            +5.7       37.70 ± 17%  perf-profile.children.cycles-pp.cpu_startup_entry
     31.99            +5.7       37.70 ± 17%  perf-profile.children.cycles-pp.do_idle
     31.98            +5.7       37.69 ± 17%  perf-profile.children.cycles-pp.intel_idle
     31.99            +5.7       37.70 ± 17%  perf-profile.children.cycles-pp.cpuidle_enter
     31.99            +5.7       37.70 ± 17%  perf-profile.children.cycles-pp.cpuidle_enter_state
     15.54            -4.7       10.87 ± 10%  perf-profile.self.cycles-pp.native_irq_return_iret
      7.56            -4.3        3.25 ± 11%  perf-profile.self.cycles-pp.__count_memcg_events
      3.31            -1.9        1.45 ± 11%  perf-profile.self.cycles-pp.lock_page_memcg
      2.47            -0.4        2.08 ± 10%  perf-profile.self.cycles-pp.__mod_memcg_state
      1.31            -0.1        1.19 ± 12%  perf-profile.self.cycles-pp.page_add_file_rmap
      0.31            -0.1        0.20 ± 10%  perf-profile.self.cycles-pp.__unlock_page_memcg
      0.07 ±  5%      +0.0        0.10 ± 35%  perf-profile.self.cycles-pp.mem_cgroup_from_task
      0.30            +0.1        0.38 ±  8%  perf-profile.self.cycles-pp.__set_page_dirty_no_writeback
     31.98            +5.7       37.69 ± 17%  perf-profile.self.cycles-pp.intel_idle


                                                                                
                            will-it-scale.per_process_ops                       
                                                                                
  980000 +------------------------------------------------------------------+   
         |                                                                  |   
  960000 |-O O  O O O O  O O O O  O O O O  O O O O                          |   
  940000 |-+                                                                |   
         |                                                                  |   
  920000 |-+                                                                |   
  900000 |-+                                                                |   
         |                                                                  |   
  880000 |-+                                                                |   
  860000 |-+                                                                |   
         |                                                                  |   
  840000 |-+                                                                |   
  820000 |-+            .+.+.                     .+..        .+.           |   
         |.+.+..+.+.+.+.     +.+..+.+.+.+..+.+.+.+    +.+.+.+.   +.+.+..+.+.|   
  800000 +------------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                                will-it-scale.workload                          
                                                                                
  9.4e+07 +-----------------------------------------------------------------+   
          |                                                                 |   
  9.2e+07 |-O O  O O O O O  O O O O O  O O O O O  O                         |   
    9e+07 |-+                                                               |   
          |                                                                 |   
  8.8e+07 |-+                                                               |   
          |                                                                 |   
  8.6e+07 |-+                                                               |   
          |                                                                 |   
  8.4e+07 |-+                                                               |   
  8.2e+07 |-+                                                               |   
          |                                                                 |   
    8e+07 |-+                                                               |   
          |             .+..+.                     .+. .+.    .+.           |   
  7.8e+07 +-----------------------------------------------------------------+   
                                                                                
                                                                                
[*] bisect-good sample
[O] bisect-bad  sample



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Rong Chen


View attachment "config-5.8.0-rc2-00124-g3a98990ae2150" of type "text/plain" (206161 bytes)

View attachment "job-script" of type "text/plain" (7527 bytes)

View attachment "job.yaml" of type "text/plain" (5097 bytes)

View attachment "reproduce" of type "text/plain" (343 bytes)
