Date:	Mon, 1 Aug 2016 09:55:04 +0800
From:	kernel test robot <xiaolong.ye@...el.com>
To:	William Roberts <william.c.roberts@...el.com>
Cc:	0day robot <fengguang.wu@...el.com>,
	LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [lkp] eadd46295d: hackbench.throughput -9.7% regression


FYI, we noticed a -9.7% regression of hackbench.throughput due to commit:

commit eadd46295d4e47aef2fc91e806282f61d4bfe2a2 ("Introduce mmap randomization")
https://github.com/0day-ci/linux william-c-roberts-intel-com/Introduce-mmap-randomization/20160727-023413

in testcase: hackbench
on test machine: 32-thread Sandy Bridge-EP with 32G memory
with the following parameters:

	nr_threads: 1600%
	mode: process
	ipc: pipe
	cpufreq_governor: performance
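
For context, hackbench in process/pipe mode forks groups of sender and
receiver tasks that exchange small messages over pipes, which is why the
profile below spends most of its time in the pipe read/write and wakeup
paths. The toy C program below is only a sketch of that sender/receiver
pattern (a single writer/reader pair with an arbitrary message size and
count), not hackbench itself:

/*
 * Minimal sketch of the pipe sender/receiver pattern that hackbench's
 * process/pipe mode stresses: a writer process streams fixed-size
 * messages into a pipe and a forked reader drains them, so the run
 * spends its time in pipe_write(), pipe_read() and the wakeups
 * between the two tasks.
 */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define MSG_SIZE  100		/* small fixed message size */
#define MSG_COUNT 100000	/* arbitrary number of messages */

int main(void)
{
	int fds[2];
	char buf[MSG_SIZE];

	if (pipe(fds) < 0) {
		perror("pipe");
		return 1;
	}

	pid_t pid = fork();
	if (pid < 0) {
		perror("fork");
		return 1;
	}

	if (pid == 0) {			/* reader child */
		close(fds[1]);
		while (read(fds[0], buf, sizeof(buf)) > 0)
			;		/* just drain the pipe */
		close(fds[0]);
		_exit(0);
	}

	/* writer parent */
	close(fds[0]);
	memset(buf, 0, sizeof(buf));
	for (int i = 0; i < MSG_COUNT; i++)
		if (write(fds[1], buf, sizeof(buf)) != sizeof(buf)) {
			perror("write");
			break;
		}
	close(fds[1]);			/* EOF for the reader */
	waitpid(pid, NULL, 0);
	return 0;
}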


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
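
The commit under test ("Introduce mmap randomization") randomizes where
new mmaps are placed. One plausible reading of the numbers below (the
vm_area_struct and anon_vma slabs growing 15-24%, PageTables roughly
doubling, and the dTLB miss rates jumping) is that anonymous mappings
which used to land adjacently, and therefore merge into a single VMA,
now stay separate. The small userspace program below only illustrates
that merging behaviour; it is not the patch under test, and the
addresses and merge outcome will vary from system to system:

/*
 * Conceptual illustration only: two back-to-back anonymous mmaps.
 * On a stock kernel they are usually placed adjacently and merged
 * into one VMA; if placement is randomized they keep separate
 * vm_area_struct/anon_vma objects and their own page-table pages.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

static void *anon_map(size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}
	return p;
}

int main(void)
{
	size_t len = 64 * 1024;		/* 64 KiB per mapping */
	char *a = anon_map(len);
	char *b = anon_map(len);

	printf("first  mapping at %p\n", (void *)a);
	printf("second mapping at %p\n", (void *)b);
	printf("placed adjacently (mergeable into one VMA): %s\n",
	       (b == a + len || a == b + len) ? "yes" : "no");

	/* Dump this process's VMA list: adjacent anonymous mappings
	 * normally show up as a single merged entry, scattered ones
	 * as two. */
	char cmd[64];
	snprintf(cmd, sizeof(cmd), "cat /proc/%d/maps", (int)getpid());
	return system(cmd) == 0 ? 0 : 1;
}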

=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
  gcc-6/performance/pipe/x86_64-rhel/process/1600%/debian-x86_64-2015-02-07.cgz/lkp-snb01/hackbench

commit: 
  v4.7
  eadd46295d ("Introduce mmap randomization")

            v4.7 eadd46295d4e47aef2fc91e806 
---------------- -------------------------- 
       fail:runs  %reproduction    fail:runs
           |             |             |    
         %stddev     %change         %stddev
             \          |                \  
    235060 ±  1%      -9.7%     212347 ±  0%  hackbench.throughput
 3.989e+08 ±  1%     +12.0%  4.467e+08 ±  2%  hackbench.time.involuntary_context_switches
  26892935 ±  1%     -11.6%   23769774 ±  1%  hackbench.time.minor_page_faults
      2911 ±  0%      -1.6%       2864 ±  0%  hackbench.time.percent_of_cpu_this_job_got
      1415 ±  0%      -7.1%       1315 ±  1%  hackbench.time.user_time
  1.59e+09 ±  0%      -3.5%  1.534e+09 ±  1%  hackbench.time.voluntary_context_switches
    522230 ±  0%     +99.2%    1040525 ±  0%  meminfo.PageTables
    527517 ±  3%     -16.5%     440493 ±  3%  softirqs.SCHED
     67535 ±  7%     +99.2%     134556 ± 10%  numa-vmstat.node0.nr_page_table_pages
     61527 ±  8%    +100.6%     123407 ±  9%  numa-vmstat.node1.nr_page_table_pages
      6.28 ±  1%     +16.9%       7.34 ±  2%  turbostat.CPU%c1
    171.44 ±  0%      -1.7%     168.54 ±  0%  turbostat.CorWatt
    198.59 ±  0%      -1.4%     195.72 ±  0%  turbostat.PkgWatt
      1.40 ±200%    +507.1%       8.50 ± 64%  vmstat.procs.b
      1791 ±  2%     -10.5%       1603 ±  1%  vmstat.procs.r
    155718 ±  3%     -17.2%     128979 ±  6%  vmstat.system.in
   1624167 ±  6%     +41.9%    2305196 ± 12%  numa-meminfo.node0.MemUsed
    272208 ±  6%     +99.2%     542347 ± 10%  numa-meminfo.node0.PageTables
   1443799 ±  8%     +37.7%    1988433 ± 12%  numa-meminfo.node1.MemUsed
    246214 ±  8%    +100.8%     494381 ± 10%  numa-meminfo.node1.PageTables
    129902 ±  0%    +100.5%     260449 ±  0%  proc-vmstat.nr_page_table_pages
  52961320 ±  1%     +16.4%   61651759 ±  1%  proc-vmstat.numa_hit
  52961316 ±  1%     +16.4%   61651755 ±  1%  proc-vmstat.numa_local
      5674 ±  3%     -15.0%       4822 ±  6%  proc-vmstat.numa_pte_updates
  52794094 ±  0%     +13.6%   59962331 ±  1%  proc-vmstat.pgalloc_normal
  28064259 ±  1%     -11.4%   24875700 ±  1%  proc-vmstat.pgfault
  58082481 ±  1%     +14.0%   66223801 ±  1%  proc-vmstat.pgfree
 1.044e+08 ±  1%     -22.3%   81133470 ±  6%  cpuidle.C1-SNB.time
  19769566 ±  2%     -20.9%   15634083 ±  6%  cpuidle.C1-SNB.usage
  40316906 ±  3%     +18.4%   47742318 ±  2%  cpuidle.C1E-SNB.time
    461352 ±  2%     +24.7%     575405 ±  2%  cpuidle.C1E-SNB.usage
   8978132 ±  3%     +92.1%   17248854 ±  6%  cpuidle.C3-SNB.time
     42921 ±  2%    +157.3%     110419 ±  5%  cpuidle.C3-SNB.usage
  1.13e+09 ±  1%     +19.1%  1.346e+09 ±  1%  cpuidle.C7-SNB.time
   1191984 ±  1%     +18.6%    1414163 ±  2%  cpuidle.C7-SNB.usage
 4.157e+08 ±  5%     +22.0%  5.073e+08 ±  6%  cpuidle.POLL.time
    333689 ±  2%     -19.7%     268091 ±  6%  cpuidle.POLL.usage
    252963 ±  0%     +24.4%     314715 ±  0%  slabinfo.anon_vma.active_objs
      5053 ±  0%     +24.1%       6273 ±  0%  slabinfo.anon_vma.active_slabs
    257762 ±  0%     +24.1%     319960 ±  0%  slabinfo.anon_vma.num_objs
      5053 ±  0%     +24.1%       6273 ±  0%  slabinfo.anon_vma.num_slabs
    516841 ±  0%     +23.8%     640090 ±  0%  slabinfo.anon_vma_chain.active_objs
      9773 ±  0%     +24.3%      12152 ±  0%  slabinfo.anon_vma_chain.active_slabs
    625521 ±  0%     +24.3%     777796 ±  0%  slabinfo.anon_vma_chain.num_objs
      9773 ±  0%     +24.3%      12152 ±  0%  slabinfo.anon_vma_chain.num_slabs
      3023 ±  7%     -13.4%       2617 ±  5%  slabinfo.kmalloc-2048.active_objs
      3093 ±  7%     -12.6%       2704 ±  5%  slabinfo.kmalloc-2048.num_objs
    384082 ±  0%     +15.6%     444060 ±  0%  slabinfo.vm_area_struct.active_objs
      8845 ±  0%     +15.5%      10213 ±  0%  slabinfo.vm_area_struct.active_slabs
    389216 ±  0%     +15.5%     449404 ±  0%  slabinfo.vm_area_struct.num_objs
      8845 ±  0%     +15.5%      10213 ±  0%  slabinfo.vm_area_struct.num_slabs
 7.579e+12 ±  0%      -9.0%  6.896e+12 ±  1%  perf-stat.branch-instructions
      0.23 ± 81%     +71.9%       0.40 ±  0%  perf-stat.branch-miss-rate
 2.993e+10 ±  1%      -7.2%  2.779e+10 ±  1%  perf-stat.branch-misses
 2.075e+10 ±  0%      -6.9%  1.932e+10 ±  2%  perf-stat.cache-misses
 2.076e+11 ±  0%      -3.9%  1.996e+11 ±  2%  perf-stat.cache-references
  47671486 ±  2%     -17.4%   39397933 ±  5%  perf-stat.cpu-migrations
      0.71 ± 83%    +297.9%       2.81 ±  8%  perf-stat.dTLB-load-miss-rate
 1.537e+11 ± 13%    +109.4%   3.22e+11 ±  6%  perf-stat.dTLB-load-misses
 1.268e+13 ±  0%      -9.4%  1.148e+13 ±  1%  perf-stat.dTLB-loads
      0.26 ± 82%    +243.6%       0.88 ±  6%  perf-stat.dTLB-store-miss-rate
 3.519e+10 ±  7%     +86.0%  6.546e+10 ±  5%  perf-stat.dTLB-store-misses
 8.278e+12 ±  0%      -9.7%  7.475e+12 ±  1%  perf-stat.dTLB-stores
  28076071 ±  1%     -11.4%   24882928 ±  1%  perf-stat.minor-faults
     33.81 ± 81%     +75.2%      59.24 ±  2%  perf-stat.node-load-miss-rate
 7.841e+09 ±  2%      -8.0%  7.216e+09 ±  0%  perf-stat.node-load-misses
 1.398e+10 ±  3%     -12.8%  1.219e+10 ±  2%  perf-stat.node-loads
 1.862e+09 ±  1%      -5.9%  1.753e+09 ±  2%  perf-stat.node-store-misses
  28076036 ±  1%     -11.4%   24883005 ±  1%  perf-stat.page-faults
      4.79 ± 11%     +42.4%       6.82 ± 11%  sched_debug.cfs_rq:/.load_avg.min
   3425468 ± 29%    +122.4%    7618598 ± 33%  sched_debug.cfs_rq:/.min_vruntime.stddev
      2.39 ± 34%     +42.0%       3.39 ± 19%  sched_debug.cfs_rq:/.runnable_load_avg.min
   7846188 ± 15%     +53.3%   12024520 ± 25%  sched_debug.cfs_rq:/.spread0.max
   3427472 ± 29%    +122.4%    7622936 ± 33%  sched_debug.cfs_rq:/.spread0.stddev
     26.63 ±  6%     +11.7%      29.74 ±  4%  sched_debug.cpu.cpu_load[1].avg
      3.38 ± 24%     +37.8%       4.66 ± 13%  sched_debug.cpu.cpu_load[1].min
     26.36 ±  7%     +10.9%      29.22 ±  3%  sched_debug.cpu.cpu_load[2].avg
      3.62 ± 18%     +32.0%       4.77 ± 14%  sched_debug.cpu.cpu_load[2].min
     26.20 ±  7%      +9.9%      28.79 ±  4%  sched_debug.cpu.cpu_load[3].avg
      3.71 ± 17%     +35.2%       5.02 ± 10%  sched_debug.cpu.cpu_load[3].min
     26.04 ±  7%      +9.0%      28.39 ±  4%  sched_debug.cpu.cpu_load[4].avg
      3.87 ± 15%     +29.1%       5.00 ±  9%  sched_debug.cpu.cpu_load[4].min
    618.52 ± 35%     +64.0%       1014 ± 28%  sched_debug.cpu.load.min
      1690 ±  9%     +11.2%       1879 ±  4%  sched_debug.cpu.nr_load_updates.stddev
     42.20 ±  9%     -20.8%      33.44 ± 15%  sched_debug.cpu.nr_running.avg
   1893966 ± 29%     +88.1%    3562142 ± 30%  sched_debug.cpu.nr_switches.stddev
    137.38 ± 34%     +76.2%     242.00 ± 26%  sched_debug.cpu.nr_uninterruptible.max
   -226.63 ±-62%    +159.2%    -587.52 ±-23%  sched_debug.cpu.nr_uninterruptible.min
     82.30 ± 39%    +112.4%     174.81 ± 18%  sched_debug.cpu.nr_uninterruptible.stddev
      1.96 ±200%    +405.1%       9.90 ±  0%  sched_debug.rt_rq:/.rt_runtime.stddev
      0.12 ±200%    +845.6%       1.17 ±  7%  perf-profile.cycles.__fget_light.sys_read.entry_SYSCALL_64_fastpath
      0.23 ±123%    +457.7%       1.30 ±  7%  perf-profile.cycles.__fget_light.sys_write.entry_SYSCALL_64_fastpath
      1.09 ± 16%     +22.3%       1.34 ±  7%  perf-profile.cycles.__inode_security_revalidate.selinux_file_permission.security_file_permission.rw_verify_area.vfs_write
     29.47 ± 34%    -100.0%       0.00 ± -1%  perf-profile.cycles.__read_nocancel
      0.76 ± 21%     +94.0%       1.48 ± 11%  perf-profile.cycles.__schedule.schedule.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
      2.29 ± 11%     +24.8%       2.86 ±  4%  perf-profile.cycles.__schedule.schedule.pipe_wait.pipe_write.__vfs_write
      0.46 ± 89%    +232.4%       1.52 ±  5%  perf-profile.cycles.__switch_to
      7.99 ± 76%    +228.6%      26.27 ±  0%  perf-profile.cycles.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
     16.43 ± 34%    -100.0%       0.00 ± -1%  perf-profile.cycles.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath.__read_nocancel
     10.74 ± 74%    +219.8%      34.34 ±  2%  perf-profile.cycles.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
     20.33 ± 36%    -100.0%       0.00 ± -1%  perf-profile.cycles.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath.__write_nocancel
      7.59 ± 14%     +28.7%       9.77 ± 14%  perf-profile.cycles.__wake_up_common.__wake_up_sync_key.pipe_write.__vfs_write.vfs_write
     10.41 ± 10%     +22.3%      12.73 ± 11%  perf-profile.cycles.__wake_up_sync_key.pipe_write.__vfs_write.vfs_write.sys_write
     32.88 ± 35%    -100.0%       0.00 ± -1%  perf-profile.cycles.__write_nocancel
      3.43 ± 10%     +30.3%       4.46 ±  7%  perf-profile.cycles.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
      7.18 ± 15%     +29.9%       9.33 ± 14%  perf-profile.cycles.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_write.__vfs_write
      4.77 ± 18%    -100.0%       0.00 ± -1%  perf-profile.cycles.call_cpuidle.cpu_startup_entry.start_secondary
      5.06 ± 16%     -93.9%       0.31 ±100%  perf-profile.cycles.cpu_startup_entry.start_secondary
      4.76 ± 18%    -100.0%       0.00 ± -1%  perf-profile.cycles.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
      4.72 ± 18%    -100.0%       0.00 ± -1%  perf-profile.cycles.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
      0.72 ± 20%     +63.1%       1.17 ±  3%  perf-profile.cycles.deactivate_task.__schedule.schedule.pipe_wait.pipe_write
      1.63 ± 10%     +19.2%       1.94 ±  3%  perf-profile.cycles.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_read
      7.09 ± 15%     +29.9%       9.21 ± 14%  perf-profile.cycles.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_write
      1.61 ± 17%     +51.5%       2.45 ±  8%  perf-profile.cycles.dequeue_entity.dequeue_task_fair.deactivate_task.__schedule.schedule
      2.27 ± 16%     +38.3%       3.15 ±  8%  perf-profile.cycles.dequeue_task_fair.deactivate_task.__schedule.schedule.pipe_wait
      2.16 ± 14%     +49.9%       3.24 ±  7%  perf-profile.cycles.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
      2.76 ±  9%     +41.8%       3.92 ±  7%  perf-profile.cycles.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
      0.57 ± 87%    +255.6%       2.02 ±  7%  perf-profile.cycles.entry_SYSCALL_64
      0.56 ± 85%    +272.8%       2.10 ±  7%  perf-profile.cycles.entry_SYSCALL_64_after_swapgs
     28.04 ± 74%    +227.8%      91.93 ±  0%  perf-profile.cycles.entry_SYSCALL_64_fastpath
     26.07 ± 34%    -100.0%       0.00 ± -1%  perf-profile.cycles.entry_SYSCALL_64_fastpath.__read_nocancel
     29.86 ± 35%    -100.0%       0.00 ± -1%  perf-profile.cycles.entry_SYSCALL_64_fastpath.__write_nocancel
      0.51 ± 88%    +221.4%       1.63 ± 11%  perf-profile.cycles.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
      0.60 ± 57%    +103.9%       1.22 ± 23%  perf-profile.cycles.idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.default_wake_function
      1.75 ±  6%     -26.9%       1.28 ±  3%  perf-profile.cycles.mutex_unlock.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
      0.63 ± 54%     +97.8%       1.25 ±  9%  perf-profile.cycles.pick_next_task_fair.__schedule.schedule.pipe_wait.pipe_read
      2.98 ± 13%     +27.0%       3.78 ±  7%  perf-profile.cycles.pipe_wait.pipe_write.__vfs_write.vfs_write.sys_write
     27.29 ±  3%     +13.8%      31.05 ±  3%  perf-profile.cycles.pipe_write.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
      4.60 ± 18%    -100.0%       0.00 ± -1%  perf-profile.cycles.poll_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
      2.34 ± 76%    +252.6%       8.25 ±  7%  perf-profile.cycles.rw_verify_area.vfs_read.sys_read.entry_SYSCALL_64_fastpath
      5.54 ± 34%    -100.0%       0.00 ± -1%  perf-profile.cycles.rw_verify_area.vfs_read.sys_read.entry_SYSCALL_64_fastpath.__read_nocancel
      1.93 ± 75%    +261.3%       6.98 ±  7%  perf-profile.cycles.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath
      4.56 ± 34%    -100.0%       0.00 ± -1%  perf-profile.cycles.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath.__write_nocancel
      0.49 ± 88%    +223.6%       1.57 ± 11%  perf-profile.cycles.schedule.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
      2.40 ± 11%     +24.2%       2.98 ±  4%  perf-profile.cycles.schedule.pipe_wait.pipe_write.__vfs_write.vfs_write
      1.56 ± 14%     +80.1%       2.82 ± 13%  perf-profile.cycles.select_idle_sibling.select_task_rq_fair.try_to_wake_up.default_wake_function.autoremove_wake_function
      2.70 ± 16%     +50.0%       4.04 ± 14%  perf-profile.cycles.select_task_rq_fair.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
      5.14 ±  7%     +10.2%       5.67 ±  7%  perf-profile.cycles.selinux_file_permission.security_file_permission.rw_verify_area.vfs_read.sys_read
      5.12 ±  8%     +13.1%       5.80 ±  8%  perf-profile.cycles.selinux_file_permission.security_file_permission.rw_verify_area.vfs_write.sys_write
      5.07 ± 16%     -93.8%       0.31 ±100%  perf-profile.cycles.start_secondary
     11.81 ± 76%    +234.1%      39.47 ±  2%  perf-profile.cycles.sys_read.entry_SYSCALL_64_fastpath
     25.48 ± 34%    -100.0%       0.00 ± -1%  perf-profile.cycles.sys_read.entry_SYSCALL_64_fastpath.__read_nocancel
     14.31 ± 74%    +230.0%      47.23 ±  0%  perf-profile.cycles.sys_write.entry_SYSCALL_64_fastpath
     28.76 ± 35%    -100.0%       0.00 ± -1%  perf-profile.cycles.sys_write.entry_SYSCALL_64_fastpath.__write_nocancel
      0.53 ± 88%    +224.8%       1.72 ± 11%  perf-profile.cycles.syscall_return_slowpath.entry_SYSCALL_64_fastpath
      8.56 ± 14%     +27.6%      10.92 ± 12%  perf-profile.cycles.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key
      4.07 ±  9%     +28.1%       5.21 ±  8%  perf-profile.cycles.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
     11.13 ± 76%    +233.3%      37.10 ±  1%  perf-profile.cycles.vfs_read.sys_read.entry_SYSCALL_64_fastpath
     23.80 ± 34%    -100.0%       0.00 ± -1%  perf-profile.cycles.vfs_read.sys_read.entry_SYSCALL_64_fastpath.__read_nocancel
     13.60 ± 74%    +227.9%      44.58 ±  0%  perf-profile.cycles.vfs_write.sys_write.entry_SYSCALL_64_fastpath
     27.07 ± 35%    -100.0%       0.00 ± -1%  perf-profile.cycles.vfs_write.sys_write.entry_SYSCALL_64_fastpath.__write_nocancel
      2.10 ±  5%     +18.3%       2.48 ±  6%  perf-profile.func.cycles.__fget_light
      1.07 ±  5%     +23.6%       1.33 ±  7%  perf-profile.func.cycles.__inode_security_revalidate
      1.09 ± 35%    -100.0%       0.00 ± -1%  perf-profile.func.cycles.__read_nocancel
      1.48 ±  8%     +17.9%       1.75 ±  4%  perf-profile.func.cycles.__switch_to
      0.86 ± 37%    -100.0%       0.00 ± -1%  perf-profile.func.cycles.__write_nocancel
      2.83 ±  3%     -18.5%       2.30 ±  3%  perf-profile.func.cycles.mutex_unlock
      2.07 ±  2%     +15.8%       2.39 ±  4%  perf-profile.func.cycles.pipe_read
      2.46 ±  5%     +17.0%       2.88 ±  5%  perf-profile.func.cycles.pipe_write
      4.75 ± 19%    -100.0%       0.00 ± -1%  perf-profile.func.cycles.poll_idle
      3.47 ±  8%     +26.9%       4.40 ±  6%  perf-profile.func.cycles.selinux_file_permission
      1.26 ±  8%     +12.7%       1.42 ±  4%  perf-profile.func.cycles.switch_mm_irqs_off
      0.94 ± 12%     +24.2%       1.17 ± 12%  perf-profile.func.cycles.update_curr
      1.62 ±  7%     +15.3%       1.87 ±  8%  perf-profile.func.cycles.vfs_write




                                hackbench.throughput

  250000 ++-----------------------------------------------------------------+
         *..*..*.*..*..*..*..*.*..*..*..*..*.*..*..*..*.*..*..*..*          |
         O  O  O O  O  O  O  O O  O     O  O O  O  O  O O  O  O  O  O O  O  O
  200000 ++                                                                 |
         |                                                                  |
         |                                                                  |
  150000 ++                                                                 |
         |                                                                  |
  100000 ++                                                                 |
         |                                                                  |
         |                                                                  |
   50000 ++                                                                 |
         |                                                                  |
         |                                                                  |
       0 ++--------------------------O--------------------------------------+


                    hackbench.time.percent_of_cpu_this_job_got

  3000 *+-*--*--*--*-*--*--*--*--*--*--*--*-*--*--*--*--*--*--*--*----------+
       O  O  O  O  O O  O  O  O  O     O  O O  O  O  O  O  O  O  O O  O  O  O
  2500 ++                                                                   |
       |                                                                    |
       |                                                                    |
  2000 ++                                                                   |
       |                                                                    |
  1500 ++                                                                   |
       |                                                                    |
  1000 ++                                                                   |
       |                                                                    |
       |                                                                    |
   500 ++                                                                   |
       |                                                                    |
     0 ++---------------------------O---------------------------------------+


                           hackbench.time.minor_page_faults

    3e+07 ++----------------------------------------------------------------+
          |    .*.  .*..  .*.                .*.. .*..     .*.  .*          |
  2.5e+07 *+.*.   *.    *.   *..*..*..*.*..*.    *    *..*.   *.            |
          O  O  O O  O  O  O O  O  O    O  O  O  O O  O  O  O O  O  O  O O  O
          |                                                                 |
    2e+07 ++                                                                |
          |                                                                 |
  1.5e+07 ++                                                                |
          |                                                                 |
    1e+07 ++                                                                |
          |                                                                 |
          |                                                                 |
    5e+06 ++                                                                |
          |                                                                 |
        0 ++--------------------------O-------------------------------------+

 
	[*] bisect-good sample
	[O] bisect-bad  sample



Thanks,
Xiaolong

View attachment "config-4.7.0-00001-geadd462" of type "text/plain" (150956 bytes)

View attachment "job.yaml" of type "text/plain" (3649 bytes)

View attachment "reproduce" of type "text/plain" (2375 bytes)
