Message-ID: <20160311015618.GA13081@yexl-desktop>
Date:	Fri, 11 Mar 2016 09:56:18 +0800
From:	kernel test robot <xiaolong.ye@...el.com>
To:	Jaegeuk Kim <jaegeuk@...nel.org>
Cc:	LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [lkp] [f2fs] 24b8491251: -7.1% fsmark.app_overhead

FYI, we noticed the following changes on

https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs dev-test
commit 24b8491251cde66879e74092167cc0f27a1f11ce ("f2fs: preallocate blocks for buffered aio writes")
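
For reference, the tree and the two commits compared below can be fetched and
inspected locally along these lines (the local directory name is only an
example):

        # clone the branch named above; the directory name is arbitrary
        git clone -b dev-test https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs f2fs-dev-test
        cd f2fs-dev-test
        # the commit under test
        git show 24b8491251cde66879e74092167cc0f27a1f11ce
        # the comparison base used in the tables below
        git show b439b103a6c9eb3417f34b4a609d4e00b4c59aca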


=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
  gcc-4.9/performance/1BRD_48G/4M/f2fs/1x/x86_64-rhel/64t/debian-x86_64-2015-02-07.cgz/NoSync/lkp-hsx04/40G/fsmark
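
The string above is the test matrix key: fsmark writing 4M files from 64
threads, 40G in total, with no per-file sync, on f2fs backed by a 48G brd ram
disk, cpufreq governor set to performance. A rough hand-rolled approximation
of such a run is sketched below; the mount point, per-thread file count and
exact flags are assumptions for illustration only, and the authoritative
command sequence is in the attached "reproduce" file:

        # assumed mount point; 40G total / 4M per file = 10240 files, ~160 per thread
        fs_mark -d /mnt/f2fs/fsmark -t 64 -s 4194304 -n 160 -S 0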

commit: 
  b439b103a6c9eb3417f34b4a609d4e00b4c59aca
  24b8491251cde66879e74092167cc0f27a1f11ce

b439b103a6c9eb34 24b8491251cde66879e7409216 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    612289 ±  1%      -7.1%     568876 ±  1%  fsmark.app_overhead
      1495 ±  2%     +25.1%       1871 ±  2%  fsmark.files_per_sec
      3422 ±  2%     +41.0%       4827 ±  4%  fsmark.time.percent_of_cpu_this_job_got
   1188385 ±  1%     -68.3%     377081 ±  9%  fsmark.time.voluntary_context_switches
    264601 ± 26%     -38.8%     161926 ± 43%  softirqs.SCHED
      4802 ±  5%     +84.1%       8841 ± 12%  proc-vmstat.pgactivate
     54154 ±  2%     -11.3%      48039 ±  4%  proc-vmstat.pgfault
     43810 ±  2%      -9.3%      39722 ±  2%  proc-vmstat.pgfree
     19.00 ± 20%     +50.0%      28.50 ±  8%  vmstat.procs.r
    208274 ±  6%     -66.3%      70257 ±  8%  vmstat.system.cs
     24846 ±  7%     +18.3%      29381 ±  6%  vmstat.system.in
      4254 ±  1%     +11.5%       4745 ±  4%  numa-meminfo.node1.KernelStack
      8893 ± 20%     -33.7%       5896 ± 14%  numa-meminfo.node3.Active(anon)
     28208 ±  7%     +21.6%      34295 ±  6%  numa-meminfo.node3.Active(file)
      8828 ± 20%     -34.7%       5762 ± 13%  numa-meminfo.node3.AnonPages
    654.25 ±  9%     -17.9%     537.00 ± 18%  numa-vmstat.node0.nr_alloc_batch
      2217 ± 20%     -33.7%       1470 ± 14%  numa-vmstat.node3.nr_active_anon
      7050 ±  7%     +21.6%       8573 ±  6%  numa-vmstat.node3.nr_active_file
      2208 ± 19%     -34.7%       1441 ± 13%  numa-vmstat.node3.nr_anon_pages
      6.97 ±  2%     -19.9%       5.59 ±  2%  time.elapsed_time
      6.97 ±  2%     -19.9%       5.59 ±  2%  time.elapsed_time.max
      3422 ±  2%     +41.0%       4827 ±  4%  time.percent_of_cpu_this_job_got
      0.46 ±  1%     -12.6%       0.40 ±  4%  time.user_time
   1188385 ±  1%     -68.3%     377081 ±  9%  time.voluntary_context_switches
     21.32 ±  3%     +36.3%      29.05 ±  5%  turbostat.%Busy
    616.25 ±  3%     +36.4%     840.75 ±  5%  turbostat.Avg_MHz
     26.38 ±  5%     -23.9%      20.06 ± 10%  turbostat.CPU%c1
      2.77 ± 20%     -59.9%       1.11 ± 12%  turbostat.CPU%c3
    309.69 ±  2%      +4.5%     323.48 ±  0%  turbostat.PkgWatt
  15951637 ± 16%    +127.1%   36232857 ± 28%  cpuidle.C1-HSW.time
  93867067 ± 12%     -37.6%   58584188 ± 20%  cpuidle.C1E-HSW.time
    337610 ±  3%     -70.0%     101408 ±  7%  cpuidle.C1E-HSW.usage
  37574978 ± 18%     -71.3%   10766602 ±  7%  cpuidle.C3-HSW.time
    140480 ± 14%     -75.2%      34776 ± 14%  cpuidle.C3-HSW.usage
 7.511e+08 ±  1%     -24.4%  5.681e+08 ±  1%  cpuidle.C6-HSW.time
    790092 ±  3%     -78.7%     168250 ± 15%  cpuidle.C6-HSW.usage
      0.00 ± -1%      +Inf%     346080 ±  9%  latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_map_blocks.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
     36235 ±  4%    -100.0%       0.00 ± -1%  latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs]
   1110160 ±  2%    -100.0%       0.00 ± -1%  latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
     10727 ±  2%    -100.0%       0.00 ± -1%  latency_stats.hits.call_rwsem_down_write_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
      0.00 ± -1%      +Inf%     104220 ± 12%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_map_blocks.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
     88684 ±  2%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
      0.00 ± -1%      +Inf%   62708911 ± 13%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_map_blocks.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
   6532927 ±  7%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs]
      0.00 ± -1%      +Inf%     860616 ± 16%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_node_page.[f2fs].f2fs_convert_inline_inode.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
 1.663e+08 ±  0%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
      0.00 ± -1%      +Inf%     434819 ± 14%  latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_map_blocks.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    436260 ±  2%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
      0.00 ± -1%      +Inf%     375976 ± 14%  latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_map_blocks.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    827019 ±  5%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write

=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
  gcc-4.9/performance/1BRD_48G/4M/f2fs/1x/x86_64-rhel/64t/debian-x86_64-2015-02-07.cgz/NoSync/lkp-hsw-ep2/40G/fsmark

commit: 
  b439b103a6c9eb3417f34b4a609d4e00b4c59aca
  24b8491251cde66879e74092167cc0f27a1f11ce

b439b103a6c9eb34 24b8491251cde66879e7409216 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    625.40 ±  1%      +6.7%     667.52 ±  0%  fsmark.files_per_sec
      2702 ±  7%     -36.6%       1714 ± 12%  fsmark.time.involuntary_context_switches
    770.50 ±  8%     +52.2%       1172 ± 12%  fsmark.time.percent_of_cpu_this_job_got
    133.18 ±  7%     +40.7%     187.40 ± 11%  fsmark.time.system_time
    686680 ±  3%     -67.3%     224529 ± 18%  fsmark.time.voluntary_context_switches
     80197 ± 13%     +19.3%      95683 ±  6%  numa-meminfo.node1.Active
    106550 ± 19%     -38.6%      65433 ±  5%  softirqs.SCHED
     37713 ±  0%     +17.5%      44314 ±  0%  proc-vmstat.nr_active_file
      5074 ±  4%     +30.5%       6620 ± 20%  proc-vmstat.pgactivate
    178348 ±  1%     +14.9%     204838 ±  0%  meminfo.Active
    150432 ±  0%     +17.9%     177429 ±  0%  meminfo.Active(file)
      4898 ± 21%     -12.5%       4284 ± 18%  meminfo.AnonHugePages
     15.63 ±  7%     -42.0%       9.06 ±  6%  turbostat.CPU%c1
      3.34 ± 15%     -22.2%       2.60 ±  6%  turbostat.CPU%c3
     10.18 ± 45%     +89.7%      19.30 ± 14%  turbostat.Pkg%pc2
   1161365 ±  1%      +2.6%    1192137 ±  0%  vmstat.io.bo
      3.25 ± 59%    +138.5%       7.75 ± 10%  vmstat.procs.r
     65985 ±  5%     -60.5%      26040 ± 12%  vmstat.system.cs
      8603 ±  3%     +41.5%      12173 ± 11%  vmstat.system.in
      2702 ±  7%     -36.6%       1714 ± 12%  time.involuntary_context_switches
    770.50 ±  8%     +52.2%       1172 ± 12%  time.percent_of_cpu_this_job_got
    133.18 ±  7%     +40.7%     187.40 ± 11%  time.system_time
      0.36 ±  6%     -17.1%       0.30 ±  4%  time.user_time
    686680 ±  3%     -67.3%     224529 ± 18%  time.voluntary_context_switches
  42510356 ± 16%     -56.7%   18410803 ± 34%  cpuidle.C1E-HSW.time
    187168 ±  5%     -63.9%      67587 ±  5%  cpuidle.C1E-HSW.usage
  34590282 ± 12%     -42.9%   19767899 ± 11%  cpuidle.C3-HSW.time
     88675 ± 13%     -73.0%      23973 ± 33%  cpuidle.C3-HSW.usage
 1.039e+09 ±  3%      -8.8%  9.474e+08 ±  3%  cpuidle.C6-HSW.time
    380449 ± 12%     -70.5%     112252 ± 17%  cpuidle.C6-HSW.usage
  42507793 ± 13%     -66.1%   14427275 ± 19%  cpuidle.POLL.time
      7918 ± 50%     -49.1%       4032 ± 31%  cpuidle.POLL.usage
      0.00 ± -1%      +Inf%     194173 ± 20%  latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_map_blocks.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
     27391 ±  4%    -100.0%       0.00 ± -1%  latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs]
    621245 ±  4%    -100.0%       0.00 ± -1%  latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
      0.00 ± -1%      +Inf%      42514 ± 44%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_map_blocks.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
     71010 ±  9%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
      0.00 ± -1%      +Inf%   29050748 ± 40%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_map_blocks.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
   5471586 ±  7%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs]
      0.00 ± -1%      +Inf%     379285 ± 50%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_node_page.[f2fs].f2fs_convert_inline_inode.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
 1.014e+08 ±  0%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
      0.00 ± -1%      +Inf%     160093 ± 47%  latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_map_blocks.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    280946 ±  3%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
      0.00 ± -1%      +Inf%     168921 ± 36%  latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_map_blocks.[f2fs].f2fs_preallocate_blocks.[f2fs].f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
    609595 ±  5%    -100.0%       0.00 ± -1%  latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write

lkp-hsx04: Brickland Haswell-EX
Memory: 512G

lkp-hsw-ep2: Grantley Haswell-EP
Memory: 128G


                        fsmark.time.voluntary_context_switches

  1.6e+06 ++----------------------------------------------------------------+
          |                                                                 |
  1.4e+06 *+.*.*.. .*..*.*..*.*..*.*..*.*..*..*                             |
          |       *                            +       .*.            .*.  .*
  1.2e+06 ++                                    *..*.*.   *..*.*..*.*.   *. |
    1e+06 ++                                                                |
          |                                                                 |
   800000 ++                                                                |
          |                                                                 |
   600000 ++                                                                |
   400000 ++                                         O    O                 |
          |            O    O      O                    O    O O  O         |
   200000 O+ O O  O O    O    O       O O  O  O O  O                        |
          |                      O                                          |
        0 ++----------------------------------------------------------------+


                            time.voluntary_context_switches

  1.6e+06 ++----------------------------------------------------------------+
          |                                                                 |
  1.4e+06 *+.*.*.. .*..*.*..*.*..*.*..*.*..*..*                             |
          |       *                            +       .*.            .*.  .*
  1.2e+06 ++                                    *..*.*.   *..*.*..*.*.   *. |
    1e+06 ++                                                                |
          |                                                                 |
   800000 ++                                                                |
          |                                                                 |
   600000 ++                                                                |
   400000 ++                                         O    O                 |
          |            O    O      O                    O    O O  O         |
   200000 O+ O O  O O    O    O       O O  O  O O  O                        |
          |                      O                                          |
        0 ++----------------------------------------------------------------+



	[*] bisect-good sample
	[O] bisect-bad  sample

To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
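
Because the figures above carry a %stddev column, a single run may not be
representative; repeating the job a few times, e.g. with a small loop like the
one below, gives a better feel for run-to-run variance:

        # repeat the attached job to sample run-to-run variance
        for i in 1 2 3 4 5; do
                bin/lkp run job.yaml
        done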


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong Ye

View attachment "job.yaml" of type "text/plain" (3646 bytes)

View attachment "reproduce" of type "text/plain" (14676 bytes)
