Date:   Wed, 12 Oct 2022 09:25:17 +0800
From:   kernel test robot <yujie.liu@...el.com>
To:     Chao Yu <chao@...nel.org>
CC:     <lkp@...ts.01.org>, <lkp@...el.com>,
        Jaegeuk Kim <jaegeuk@...nel.org>,
        Ming Yan <yanming@....edu.cn>, Chao Yu <chao.yu@...o.com>,
        <linux-kernel@...r.kernel.org>,
        <linux-f2fs-devel@...ts.sourceforge.net>, <ying.huang@...el.com>,
        <feng.tang@...el.com>, <zhengjun.xing@...ux.intel.com>,
        <fengwei.yin@...el.com>
Subject: [f2fs] cfd66bb715: fxmark.ssd_f2fs_DWTL_72_directio.works/sec 130.6%
 improvement

Greetings,

FYI, we noticed a 130.6% improvement of fxmark.ssd_f2fs_DWTL_72_directio.works/sec due to commit:

commit: cfd66bb715fd11fde3338d0660cffa1396adc27d ("f2fs: fix deadloop in foreground GC")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
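
For reference, the fix can be inspected locally with standard git commands
(the /pub/scm/ URL below is the canonical clone mirror of the cgit link above):

        git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
        cd linux
        git show --stat cfd66bb715fd                # "f2fs: fix deadloop in foreground GC"
        git log --oneline 25f8236213..cfd66bb715    # everything between the two kernels compared below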

in testcase: fxmark
on test machine: 128 threads, 2 sockets, Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz (Ice Lake) with 128G memory
with the following parameters:

	disk: 1SSD
	media: ssd
	test: DWTL
	fstype: f2fs
	directio: directio
	cpufreq_governor: performance

test-description: FxMark is a filesystem benchmark that tests multicore scalability.
test-url: https://github.com/sslab-gatech/fxmark
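
For orientation, the parameters above amount to a single-SSD f2fs filesystem
exercised with O_DIRECT I/O under the performance cpufreq governor. A minimal
manual sketch of that setup (the device path /dev/nvme0n1 is a placeholder,
not taken from the attached job file) would be:

        cpupower frequency-set -g performance   # pin every CPU to the performance governor
        mkfs.f2fs -f /dev/nvme0n1               # format the test SSD with f2fs
        mkdir -p /mnt/f2fs
        mount -t f2fs /dev/nvme0n1 /mnt/f2fs    # mount point the DWTL workload writes into

The directio parameter is consumed by fxmark itself, which presumably opens
its test files with O_DIRECT rather than relying on a mount option.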


Details are as follows:

=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase:
  gcc-11/performance/directio/1SSD/f2fs/x86_64-rhel-8.3/ssd/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp5/DWTL/fxmark

commit: 
  25f8236213 ("f2fs: fix to do sanity check on block address in f2fs_do_zero_range()")
  cfd66bb715 ("f2fs: fix deadloop in foreground GC")

In the table below, the left-hand column is the parent commit (25f8236213) and
the right-hand column is the commit under test (cfd66bb715); each row shows the
mean value for both kernels, the relative %stddev where it is significant, and
the %change between the two.

25f8236213a91efd cfd66bb715fd11fde3338d0660c 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      0.63 ±  2%     -27.9%       0.45 ±  3%  fxmark.ssd_f2fs_DWTL_18_directio.idle_sec
      0.01 ± 31%    -100.0%       0.00        fxmark.ssd_f2fs_DWTL_18_directio.iowait_sec
      1.32 ± 31%    -100.0%       0.00        fxmark.ssd_f2fs_DWTL_18_directio.iowait_util
      0.05           -21.9%       0.04 ±  2%  fxmark.ssd_f2fs_DWTL_18_directio.real_sec
      0.02           -61.1%       0.01        fxmark.ssd_f2fs_DWTL_18_directio.secs
     14.50 ± 10%     +24.7%      18.08 ±  6%  fxmark.ssd_f2fs_DWTL_18_directio.sys_util
     12.06 ±  5%     +32.0%      15.91 ±  2%  fxmark.ssd_f2fs_DWTL_18_directio.user_util
    595640          +157.3%    1532350        fxmark.ssd_f2fs_DWTL_18_directio.works/sec
      0.05 ± 10%    -100.0%       0.00        fxmark.ssd_f2fs_DWTL_1_directio.iowait_sec
     38.86 ±  8%    -100.0%       0.00        fxmark.ssd_f2fs_DWTL_1_directio.iowait_util
      0.12           -44.7%       0.07 ±  5%  fxmark.ssd_f2fs_DWTL_1_directio.real_sec
      0.09           -61.1%       0.04 ±  7%  fxmark.ssd_f2fs_DWTL_1_directio.secs
     38.99 ± 11%     +56.3%      60.95 ±  7%  fxmark.ssd_f2fs_DWTL_1_directio.sys_util
     22.15 ± 15%     +57.2%      34.82 ± 12%  fxmark.ssd_f2fs_DWTL_1_directio.user_util
    110377          +158.9%     285786 ±  7%  fxmark.ssd_f2fs_DWTL_1_directio.works/sec
      0.01 ± 31%    -100.0%       0.00        fxmark.ssd_f2fs_DWTL_2_directio.iowait_sec
      9.63 ± 26%    -100.0%       0.00        fxmark.ssd_f2fs_DWTL_2_directio.iowait_util
      0.06 ±  3%     -19.5%       0.05 ±  2%  fxmark.ssd_f2fs_DWTL_2_directio.real_sec
      0.03 ±  7%     -41.1%       0.02        fxmark.ssd_f2fs_DWTL_2_directio.secs
     23.54 ± 10%     +30.2%      30.66 ±  6%  fxmark.ssd_f2fs_DWTL_2_directio.user_util
    357310 ±  6%     +69.1%     604308        fxmark.ssd_f2fs_DWTL_2_directio.works/sec
      1.26 ±  9%     -26.1%       0.94 ±  2%  fxmark.ssd_f2fs_DWTL_36_directio.idle_sec
      0.02 ± 40%    -100.0%       0.00        fxmark.ssd_f2fs_DWTL_36_directio.iowait_sec
      1.13 ± 43%    -100.0%       0.00        fxmark.ssd_f2fs_DWTL_36_directio.iowait_util
      0.05 ±  7%     -18.6%       0.04        fxmark.ssd_f2fs_DWTL_36_directio.real_sec
      0.02 ± 13%     -53.9%       0.01 ±  7%  fxmark.ssd_f2fs_DWTL_36_directio.secs
     15.56 ± 11%     +39.5%      21.71 ±  6%  fxmark.ssd_f2fs_DWTL_36_directio.sys_util
     11.19 ±  7%     +17.1%      13.11 ±  7%  fxmark.ssd_f2fs_DWTL_36_directio.user_util
    566930 ± 17%    +112.6%    1205463 ±  6%  fxmark.ssd_f2fs_DWTL_36_directio.works/sec
      0.10 ±  5%     -21.7%       0.08 ±  4%  fxmark.ssd_f2fs_DWTL_4_directio.idle_sec
      0.01          -100.0%       0.00        fxmark.ssd_f2fs_DWTL_4_directio.iowait_sec
      5.27 ±  3%    -100.0%       0.00        fxmark.ssd_f2fs_DWTL_4_directio.iowait_util
      0.05           -22.3%       0.04 ±  3%  fxmark.ssd_f2fs_DWTL_4_directio.real_sec
      0.02 ±  2%     -56.9%       0.01 ±  3%  fxmark.ssd_f2fs_DWTL_4_directio.secs
    582453 ±  2%    +132.3%    1353108 ±  3%  fxmark.ssd_f2fs_DWTL_4_directio.works/sec
      1.88 ±  6%     -25.2%       1.40        fxmark.ssd_f2fs_DWTL_54_directio.idle_sec
     69.13 ±  4%      -9.5%      62.59        fxmark.ssd_f2fs_DWTL_54_directio.idle_util
      0.05 ±  2%     -17.4%       0.04        fxmark.ssd_f2fs_DWTL_54_directio.real_sec
      0.02 ± 11%     -51.6%       0.01 ±  6%  fxmark.ssd_f2fs_DWTL_54_directio.secs
     19.12 ± 18%     +27.6%      24.39 ±  3%  fxmark.ssd_f2fs_DWTL_54_directio.sys_util
      9.78 ±  9%     +19.4%      11.68 ±  9%  fxmark.ssd_f2fs_DWTL_54_directio.user_util
    505108 ± 13%    +104.2%    1031595 ±  6%  fxmark.ssd_f2fs_DWTL_54_directio.works/sec
      2.86 ±  7%     -35.1%       1.86 ±  3%  fxmark.ssd_f2fs_DWTL_72_directio.idle_sec
     70.68 ±  4%     -14.2%      60.66 ±  3%  fxmark.ssd_f2fs_DWTL_72_directio.idle_util
      0.03 ± 55%    -100.0%       0.00        fxmark.ssd_f2fs_DWTL_72_directio.iowait_sec
      0.69 ± 53%    -100.0%       0.00        fxmark.ssd_f2fs_DWTL_72_directio.iowait_util
      0.06 ±  5%     -24.3%       0.04 ±  4%  fxmark.ssd_f2fs_DWTL_72_directio.real_sec
      0.02 ± 10%     -56.4%       0.01 ± 12%  fxmark.ssd_f2fs_DWTL_72_directio.secs
     16.69 ± 19%     +50.2%      25.06 ±  7%  fxmark.ssd_f2fs_DWTL_72_directio.sys_util
     10.34 ± 12%     +26.6%      13.08 ±  3%  fxmark.ssd_f2fs_DWTL_72_directio.user_util
    423632 ±  9%    +130.6%     976850 ± 12%  fxmark.ssd_f2fs_DWTL_72_directio.works/sec
     50.20           -10.9%      44.72        fxmark.time.elapsed_time
     50.20           -10.9%      44.72        fxmark.time.elapsed_time.max
     34124 ±  8%     -51.9%      16405        fxmark.time.file_system_inputs
    682953            -3.1%     661972        fxmark.time.file_system_outputs
     10.33 ±  4%     +16.1%      12.00        fxmark.time.percent_of_cpu_this_job_got
 5.138e+09 ±  2%     -13.1%  4.462e+09        cpuidle..time
      2143 ±  4%     -23.4%       1642 ±  5%  meminfo.Active
      2.23 ±  3%      +0.2        2.47 ±  7%  mpstat.cpu.all.sys%
    500512 ± 10%     -20.3%     399144 ± 20%  numa-numastat.node0.numa_hit
    935.00 ±  3%     -51.1%     457.00 ±  3%  vmstat.io.bi
     18107 ±  3%      +9.9%      19896 ±  3%  vmstat.io.bo
      7339 ±  3%     +14.7%       8417 ±  3%  vmstat.system.cs
      1262 ± 10%     -20.5%       1004 ±  8%  numa-meminfo.node0.Active
      2619 ± 12%     +26.1%       3302 ±  8%  numa-meminfo.node0.PageTables
     28944 ± 32%     -50.5%      14326 ± 18%  numa-meminfo.node1.AnonHugePages
      2408 ± 15%     -28.2%       1730 ± 17%  numa-meminfo.node1.PageTables
    126.83 ± 23%     -76.7%      29.50 ±  6%  numa-vmstat.node0.nr_active_file
    654.00 ± 12%     +26.2%     825.67 ±  8%  numa-vmstat.node0.nr_page_table_pages
    126.83 ± 23%     -76.7%      29.50 ±  6%  numa-vmstat.node0.nr_zone_active_file
    601.67 ± 15%     -28.2%     432.00 ± 17%  numa-vmstat.node1.nr_page_table_pages
     33.51 ±128%      -0.6       32.93 ±141%  perf-profile.calltrace.cycles-pp.exit_to_user_mode_loop.exit_to_user_mode_prepare.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe
     11.25 ±130%      -8.8        2.47 ±144%  perf-profile.children.cycles-pp.mutex_lock
     33.51 ±128%      -3.6       29.94 ±141%  perf-profile.children.cycles-pp.__fput
     33.51 ±128%      -2.6       30.87 ±141%  perf-profile.children.cycles-pp.task_work_run
     33.51 ±128%      -0.6       32.93 ±141%  perf-profile.children.cycles-pp.syscall_exit_to_user_mode
     33.51 ±128%      -0.6       32.93 ±141%  perf-profile.children.cycles-pp.exit_to_user_mode_prepare
     33.51 ±128%      -0.6       32.93 ±141%  perf-profile.children.cycles-pp.exit_to_user_mode_loop
      8.20 ±121%      -5.9        2.26 ±149%  perf-profile.self.cycles-pp.mutex_lock
      6533 ±  3%      +7.7%       7037 ±  2%  perf-stat.i.context-switches
     89575            -3.5%      86448        perf-stat.i.cpu-clock
      5357            +8.9%       5835        perf-stat.i.minor-faults
     51.62            -2.8       48.79 ±  4%  perf-stat.i.node-load-miss-rate%
      5365            +8.9%       5842        perf-stat.i.page-faults
     89575            -3.5%      86448        perf-stat.i.task-clock
      6392 ±  3%      +7.6%       6876 ±  2%  perf-stat.ps.context-switches
     87853            -3.8%      84536        perf-stat.ps.cpu-clock
      5240            +8.8%       5702        perf-stat.ps.minor-faults
      5247            +8.8%       5709        perf-stat.ps.page-faults
     87853            -3.8%      84536        perf-stat.ps.task-clock
    149.83 ± 19%     -74.4%      38.33 ± 35%  proc-vmstat.nr_active_file
    163134           -11.3%     144709        proc-vmstat.nr_dirtied
      7977            +1.8%       8119        proc-vmstat.nr_mapped
    163103           -11.3%     144678        proc-vmstat.nr_written
    149.83 ± 19%     -74.4%      38.33 ± 35%  proc-vmstat.nr_zone_active_file
    241.83 ± 76%    +140.3%     581.17 ± 23%  proc-vmstat.nr_zone_write_pending
    756870            -3.0%     734345        proc-vmstat.numa_hit
    640117            -3.5%     617496        proc-vmstat.numa_local
     10158           -70.5%       2998        proc-vmstat.pgactivate
    758086            -3.0%     735540        proc-vmstat.pgalloc_normal
      3845 ±  9%     -74.2%     993.33 ±  3%  proc-vmstat.pgdeactivate
    507575            -2.6%     494609        proc-vmstat.pgfault
    548605            -4.0%     526561        proc-vmstat.pgfree
     48618           -59.1%      19882        proc-vmstat.pgpgin
    938208            -7.9%     864503        proc-vmstat.pgpgout
     31716           -15.7%      26741        proc-vmstat.pgrotated


To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        sudo bin/lkp install job.yaml           # job file is attached in this email
        bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
        sudo bin/lkp run generated-yaml-file

        # If you come across any failure that blocks the test,
        # please remove ~/.lkp and the /lkp directory to run from a clean state.
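
After booting into the generated kernel, the release string should match the
attached config name; this is a quick sanity check that the fix commit is
actually in the image being measured:

        uname -r   # expected: 5.18.0-rc4-00025-gcfd66bb715fd (g<hash> = the tested commit)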


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

View attachment "config-5.18.0-rc4-00025-gcfd66bb715fd" of type "text/plain" (161685 bytes)

View attachment "job-script" of type "text/plain" (7977 bytes)

View attachment "job.yaml" of type "text/plain" (5396 bytes)

View attachment "reproduce" of type "text/plain" (254 bytes)
