Message-ID: <20180225144421.GB7144@yexl-desktop>
Date:   Sun, 25 Feb 2018 22:44:22 +0800
From:   kernel test robot <xiaolong.ye@...el.com>
To:     Shakeel Butt <shakeelb@...gle.com>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Jérôme Glisse <jglisse@...hat.com>,
        Huang Ying <ying.huang@...el.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Michal Hocko <mhocko@...nel.org>,
        Greg Thelen <gthelen@...gle.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Balbir Singh <bsingharora@...il.com>,
        Minchan Kim <minchan@...nel.org>, Shaohua Li <shli@...com>,
        Jan Kara <jack@...e.cz>, Nicholas Piggin <npiggin@...il.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Mel Gorman <mgorman@...e.de>, Hugh Dickins <hughd@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [lkp-robot] [mm, mlock, vmscan]  9c4e6b1a70:
  stress-ng.hdd.ops_per_sec -7.9% regression


Greetings,

FYI, we noticed a -7.9% regression of stress-ng.hdd.ops_per_sec due to commit:


commit: 9c4e6b1a7027f102990c0395296015a812525f4d ("mm, mlock, vmscan: no more skipping pagevecs")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master

in testcase: stress-ng
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with the following parameters (a rough stress-ng equivalent is sketched just after the list):

	nr_threads: 100%
	testtime: 1s
	class: io
	cpufreq_governor: performance
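
For a quick check outside of the LKP harness, something close to this workload
can be launched directly with stress-ng. The exact stressor set and options LKP
uses live in the attached job-script/job.yaml, so the invocation below is only
a hedged approximation:

        # Hedged approximation only; the attached job file is authoritative.
        # Run every io-class stressor, one instance per CPU, for 1 second each:
        stress-ng --class io --sequential 0 --timeout 1s --metrics-brief
        # Or target just the stressor behind the regressed metric (hdd):
        stress-ng --hdd 0 --timeout 1s --metrics-brief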


Details are as follows:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
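
To pin the change down manually rather than re-running a full bisect, the two
kernels compared below can be built and booted in turn and the same job re-run
on each. A sketch, assuming a local Linux tree and using the commit IDs quoted
in this report:

        # Sketch only; commit IDs are taken from the comparison below.
        git log --oneline -1 c3cc39118c3610eb                            # parent, before the change
        git log --oneline -1 9c4e6b1a7027f102990c0395296015a812525f4d    # suspect commit
        # Build and boot each kernel, then run the job on both and compare
        # stress-ng.hdd.ops_per_sec:
        bin/lkp run job.yaml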

=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime:
  io/gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2016-08-31.cgz/lkp-bdw-ep6/stress-ng/1s

commit: 
  c3cc39118c ("mm: memcontrol: fix NR_WRITEBACK leak in memcg and system stats")
  9c4e6b1a70 ("mm, mlock, vmscan: no more skipping pagevecs")

c3cc39118c3610eb 9c4e6b1a7027f102990c039529 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
   1242815           -14.4%    1063699        stress-ng.hdd.ops
    645914            -7.9%     594931        stress-ng.hdd.ops_per_sec
    473.69            -2.5%     461.99        stress-ng.time.system_time
  12569476            -9.4%   11392192        vmstat.memory.cache
    802679 ±  7%     +40.2%    1125287 ± 12%  cpuidle.C3.time
      2423 ±  4%     +38.7%       3360 ± 15%  cpuidle.C3.usage
  16584420            -9.1%   15071340        numa-numastat.node1.local_node
  16594335            -9.1%   15081279        numa-numastat.node1.numa_hit
      2164 ±  6%     +42.7%       3087 ± 17%  turbostat.C3
      0.10 ±  8%      +0.0        0.14 ± 15%  turbostat.C3%
      0.05 ±  8%     +33.3%       0.07 ± 17%  turbostat.CPU%c3
 1.911e+09            -1.1%   1.89e+09        perf-stat.branch-misses
 7.536e+09            -3.3%   7.29e+09        perf-stat.cache-references
 1.633e+12            -1.6%  1.607e+12        perf-stat.cpu-cycles
      0.61            +1.7%       0.62        perf-stat.ipc
 2.576e+08            -2.5%  2.513e+08        perf-stat.node-loads
    394861 ± 11%     -53.2%     184617 ± 22%  meminfo.Active
    205118 ±  4%    -100.0%       4.00        meminfo.Active(file)
  14015193 ±  4%     -99.8%      21126 ±  2%  meminfo.Inactive
  13995095 ±  4%    -100.0%     403.25 ± 11%  meminfo.Inactive(file)
 1.303e+08           -10.1%  1.171e+08        meminfo.MemAvailable
      3.00        +4.5e+08%   13428597 ±  4%  meminfo.Unevictable
      2816 ± 53%    +561.9%      18643 ±134%  sched_debug.cpu.avg_idle.min
      3.68 ±  3%     -13.5%       3.18 ±  3%  sched_debug.cpu.clock.stddev
      3.68 ±  3%     -13.5%       3.18 ±  3%  sched_debug.cpu.clock_task.stddev
      0.12 ±  3%     -19.3%       0.09 ± 21%  sched_debug.rt_rq:/.rt_time.avg
     10.04 ±  3%     -21.5%       7.88 ± 22%  sched_debug.rt_rq:/.rt_time.max
      1.06 ±  3%     -21.5%       0.84 ± 22%  sched_debug.rt_rq:/.rt_time.stddev
    204608 ± 11%     -62.2%      77341 ± 29%  numa-meminfo.node0.Active
    102698 ±  4%    -100.0%       0.00        numa-meminfo.node0.Active(file)
   6913218 ±  5%     -99.8%      13623 ± 55%  numa-meminfo.node0.Inactive
   6903513 ±  5%    -100.0%     197.75 ± 24%  numa-meminfo.node0.Inactive(file)
    190923 ± 14%     -43.5%     107793 ± 19%  numa-meminfo.node1.Active
    102681 ±  4%    -100.0%       4.00        numa-meminfo.node1.Active(file)
   7063606 ±  4%     -99.9%       7736 ±100%  numa-meminfo.node1.Inactive
   7053168 ±  4%    -100.0%     204.00 ± 45%  numa-meminfo.node1.Inactive(file)
      3.00        +2.2e+08%    6713147 ±  4%  numa-meminfo.node1.Unevictable
     25680 ±  4%    -100.0%       0.00        numa-vmstat.node0.nr_active_file
   1730227 ±  6%    -100.0%      49.00 ± 24%  numa-vmstat.node0.nr_inactive_file
     25680 ±  4%    -100.0%       0.00        numa-vmstat.node0.nr_zone_active_file
   1730144 ±  6%    -100.0%      49.00 ± 24%  numa-vmstat.node0.nr_zone_inactive_file
   7950112 ±  5%     -13.4%    6886737 ±  6%  numa-vmstat.node0.numa_hit
   7943736 ±  5%     -13.4%    6880367 ±  6%  numa-vmstat.node0.numa_local
     25670 ±  4%    -100.0%       1.00        numa-vmstat.node1.nr_active_file
   1765886 ±  5%    -100.0%      50.50 ± 45%  numa-vmstat.node1.nr_inactive_file
      0.00            +Inf%    1670961 ±  4%  numa-vmstat.node1.nr_unevictable
     25670 ±  4%    -100.0%       1.00        numa-vmstat.node1.nr_zone_active_file
   1765833 ±  5%    -100.0%      50.50 ± 45%  numa-vmstat.node1.nr_zone_inactive_file
      0.00            +Inf%    1670970 ±  4%  numa-vmstat.node1.nr_zone_unevictable
   8050425 ±  5%     -13.9%    6933412 ±  5%  numa-vmstat.node1.numa_hit
   7878053 ±  6%     -14.2%    6761044 ±  6%  numa-vmstat.node1.numa_local
     51353 ±  4%    -100.0%       1.00        proc-vmstat.nr_active_file
   3257694           -10.2%    2924036        proc-vmstat.nr_dirty_background_threshold
   6523354           -10.2%    5855222        proc-vmstat.nr_dirty_threshold
   3493404 ±  5%    -100.0%     100.25 ± 11%  proc-vmstat.nr_inactive_file
      0.00            +Inf%    3344234 ±  4%  proc-vmstat.nr_unevictable
     51353 ±  4%    -100.0%       1.00        proc-vmstat.nr_zone_active_file
   3493404 ±  5%    -100.0%     100.25 ± 11%  proc-vmstat.nr_zone_inactive_file
      0.00            +Inf%    3344234 ±  4%  proc-vmstat.nr_zone_unevictable
      1014 ± 32%     -40.5%     603.75 ± 26%  proc-vmstat.numa_hint_faults
  33019247            -9.7%   29815458        proc-vmstat.numa_hit
  33002235            -9.7%   29798428        proc-vmstat.numa_local
      1298 ± 57%     -60.9%     507.75 ± 31%  proc-vmstat.numa_pages_migrated
    213856 ± 38%     -31.2%     147128 ± 12%  proc-vmstat.numa_pte_updates
    361381           -99.8%     659.50 ±  3%  proc-vmstat.pgactivate
  33235386            -9.4%   30127871        proc-vmstat.pgfree
      1298 ± 57%     -60.9%     507.75 ± 31%  proc-vmstat.pgmigrate_success
      1.00          +3e+09%   30018233        proc-vmstat.unevictable_pgs_culled
     28.62 ± 41%     -27.8        0.77 ±100%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
     28.62 ± 41%     -27.8        0.77 ±100%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     28.62 ± 41%     -27.8        0.77 ±100%  perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     28.85 ± 42%     -27.4        1.49 ±117%  perf-profile.calltrace.cycles-pp.write
     28.12 ± 44%     -27.3        0.77 ±100%  perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     28.12 ± 44%     -27.3        0.77 ±100%  perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
     26.31 ± 53%     -26.3        0.00        perf-profile.calltrace.cycles-pp.devkmsg_write.__vfs_write.vfs_write.sys_write.do_syscall_64
     26.31 ± 53%     -26.3        0.00        perf-profile.calltrace.cycles-pp.printk_emit.devkmsg_write.__vfs_write.vfs_write.sys_write
     26.31 ± 53%     -26.3        0.00        perf-profile.calltrace.cycles-pp.vprintk_emit.printk_emit.devkmsg_write.__vfs_write.vfs_write
     26.31 ± 53%     -26.3        0.00        perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.printk_emit.devkmsg_write.__vfs_write
     22.00 ± 68%     -22.0        0.00        perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.printk_emit.devkmsg_write
     21.07 ± 71%     -21.1        0.00        perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.printk_emit
     21.07 ± 71%     -21.1        0.00        perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
     21.07 ± 71%     -21.1        0.00        perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
     14.27 ± 72%     -14.3        0.00        perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
      6.80 ± 73%      -6.8        0.00        perf-profile.calltrace.cycles-pp.delay_tsc.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
      0.81 ±173%      +5.1        5.89 ± 40%  perf-profile.calltrace.cycles-pp.link_path_walk.path_openat.do_filp_open.do_sys_open.do_syscall_64
     33.02 ± 12%     +16.8       49.84 ±  5%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
     34.45 ± 12%     +17.0       51.41 ±  5%  perf-profile.calltrace.cycles-pp.secondary_startup_64
     33.25 ± 12%     +17.7       51.00 ±  6%  perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
     33.25 ± 12%     +17.7       51.00 ±  6%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
     33.25 ± 12%     +17.7       51.00 ±  6%  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
     28.62 ± 41%     -27.8        0.77 ±100%  perf-profile.children.cycles-pp.sys_write
     28.85 ± 42%     -27.4        1.49 ±117%  perf-profile.children.cycles-pp.write
     28.12 ± 44%     -27.3        0.77 ±100%  perf-profile.children.cycles-pp.vfs_write
     28.12 ± 44%     -27.3        0.77 ±100%  perf-profile.children.cycles-pp.__vfs_write
     26.31 ± 53%     -26.3        0.00        perf-profile.children.cycles-pp.devkmsg_write
     26.31 ± 53%     -26.3        0.00        perf-profile.children.cycles-pp.printk_emit
     26.31 ± 53%     -26.3        0.00        perf-profile.children.cycles-pp.vprintk_emit
     26.31 ± 53%     -26.3        0.00        perf-profile.children.cycles-pp.console_unlock
     22.24 ± 68%     -22.2        0.00        perf-profile.children.cycles-pp.wait_for_xmitr
     22.00 ± 68%     -22.0        0.00        perf-profile.children.cycles-pp.serial8250_console_write
     21.30 ± 71%     -21.3        0.00        perf-profile.children.cycles-pp.serial8250_console_putchar
     21.07 ± 71%     -21.1        0.00        perf-profile.children.cycles-pp.uart_console_write
     59.80 ± 10%     -19.5       40.33 ±  5%  perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     59.56 ± 10%     -19.2       40.33 ±  5%  perf-profile.children.cycles-pp.do_syscall_64
     14.51 ± 71%     -14.5        0.00        perf-profile.children.cycles-pp.io_serial_in
      7.49 ± 63%      -7.5        0.00        perf-profile.children.cycles-pp.delay_tsc
      0.81 ±173%      +5.5        6.31 ± 41%  perf-profile.children.cycles-pp.link_path_walk
     34.21 ± 12%     +16.8       51.03 ±  5%  perf-profile.children.cycles-pp.cpuidle_enter_state
     34.45 ± 12%     +17.0       51.41 ±  5%  perf-profile.children.cycles-pp.secondary_startup_64
     34.45 ± 12%     +17.0       51.41 ±  5%  perf-profile.children.cycles-pp.cpu_startup_entry
     34.45 ± 12%     +17.0       51.41 ±  5%  perf-profile.children.cycles-pp.do_idle
     33.25 ± 12%     +17.7       51.00 ±  6%  perf-profile.children.cycles-pp.start_secondary
     14.51 ± 71%     -14.5        0.00        perf-profile.self.cycles-pp.io_serial_in
      7.49 ± 63%      -7.5        0.00        perf-profile.self.cycles-pp.delay_tsc
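
The largest shifts in the table are in LRU accounting: with the suspect commit,
Active(file) and Inactive(file) collapse to almost zero while Unevictable and
unevictable_pgs_culled grow by millions of pages. On a booted kernel the same
counters can be read directly; the sketch below uses only fields already quoted
above:

        # Read the memory counters quoted above on a running kernel:
        grep -E 'Unevictable|Active\(file\)|Inactive\(file\)' /proc/meminfo
        grep -E 'nr_unevictable|nr_active_file|nr_inactive_file|unevictable_pgs_culled' /proc/vmstat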


                                                                                
                                   stress-ng.hdd.ops                            
                                                                                
  1.4e+06 +-+---------------------------------------------------------------+   
          | .+..                     .+.          .+..          .+..  .+.+..|   
  1.2e+06 +-+   +.+..+..+..+    +..+.   +..+..+..+    +..+..+.+.    +.      |   
          O  O  O       O  : O  :  O  O    O     O O  O  O  O O  O  O  O O  O   
    1e+06 +-+     O  O     O    O       O     O                             |   
          |                 :  :                                            |   
   800000 +-+               :  :                                            |   
          |                 :  :                                            |   
   600000 +-+               :  :                                            |   
          |                 : :                                             |   
   400000 +-+               : :                                             |   
          |                  ::                                             |   
   200000 +-+                ::                                             |   
          |                  :                                              |   
        0 +-+---------------------------------------------------------------+   
                                                                                
                                                                                                                                                                
                              stress-ng.hdd.ops_per_sec                         
                                                                                
  700000 +-+----------------------------------------------------------------+   
         |..+..+.+..+..+..+    +..+..+..+..+.+..+..+..+.+..+..+..+..+.+..+..|   
  600000 O-+O  O O  O  O  O  O O  O  O  O  O O  O  O  O O  O  O  O  O O  O  O   
         |                :    :                                            |   
  500000 +-+               :   :                                            |   
         |                 :  :                                             |   
  400000 +-+               :  :                                             |   
         |                 :  :                                             |   
  300000 +-+                : :                                             |   
         |                  : :                                             |   
  200000 +-+                : :                                             |   
         |                  ::                                              |   
  100000 +-+                 :                                              |   
         |                   :                                              |   
       0 +-+----------------------------------------------------------------+   
                                                                                
                                                                                
[*] bisect-good sample
[O] bisect-bad  sample


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong

View attachment "config-4.16.0-rc2-00069-g9c4e6b1" of type "text/plain" (166095 bytes)

View attachment "job-script" of type "text/plain" (6642 bytes)

View attachment "job.yaml" of type "text/plain" (4324 bytes)

View attachment "reproduce" of type "text/plain" (254 bytes)
