Message-ID: <20150318050319.GD30894@yliu-dev.sh.intel.com>
Date:	Wed, 18 Mar 2015 13:03:19 +0800
From:	Yuanhan Liu <yuanhan.liu@...ux.intel.com>
To:	"shli@...nel.org" <shli@...nel.org>
Cc:	NeilBrown <neilb@...e.de>, lkp@...org, lkp@...ux.intel.com,
	linux-kernel@...r.kernel.org, Jaegeuk Kim <jaegeuk@...nel.org>,
	linux-f2fs-devel@...ts.sourceforge.net
Subject: performance changes on d4b4c2cd: +37.6% fsmark.files_per_sec, -15.9%
 fsmark.files_per_sec, and a few more

Hi,

FYI, we noticed performance changes in `fsmark.files_per_sec' caused by commit d4b4c2cdffab86f5c7594c44635286a6d277d5c6:

    > commit d4b4c2cdffab86f5c7594c44635286a6d277d5c6
    > Author:     shli@...nel.org <shli@...nel.org>
    > AuthorDate: Mon Dec 15 12:57:03 2014 +1100
    > Commit:     NeilBrown <neilb@...e.de>
    > CommitDate: Wed Mar 4 13:40:17 2015 +1100
    > 
    >     RAID5: batch adjacent full stripe write
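
For readers who have not looked at the patch, the idea named in the commit
title can be sketched roughly as below.  This is only a conceptual sketch,
NOT the actual md/raid5 code; all structures and names are hypothetical:

    /*
     * When several physically adjacent stripes are all full-stripe writes
     * (every data block is overwritten, so no read-modify-write is needed),
     * they can be grouped and submitted to the member disks as one batch
     * instead of being handled one stripe at a time.
     */
    #define MAX_BATCH 16                       /* hypothetical batch limit */

    struct stripe {
            unsigned long long sector;         /* first sector of the stripe            */
            int full_write;                    /* all data blocks are being overwritten */
    };

    struct stripe_batch {
            struct stripe *stripes[MAX_BATCH];
            int count;
            unsigned long long next_sector;    /* where the next adjacent stripe must start */
    };

    /* Add @sh to @batch if it is a full-stripe write immediately following it. */
    static int try_add_to_batch(struct stripe_batch *batch, struct stripe *sh,
                                unsigned long long stripe_sectors)
    {
            if (!sh->full_write || batch->count == MAX_BATCH)
                    return 0;
            if (batch->count && sh->sector != batch->next_sector)
                    return 0;                  /* not adjacent to the end of the batch */
            batch->stripes[batch->count++] = sh;
            batch->next_sector = sh->sector + stripe_sectors;
            return 1;                          /* parity and disk I/O are then issued per batch */
    }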

c1dfe87e41d9c2926fe92f803f02c733ddbccf0b     d4b4c2cdffab86f5c7594c44635286a6d277d5c6
----------------------------------------     ----------------------------------------
run time(m)     metric_value     ±stddev     run time(m)     metric_value     ±stddev     change   testbox/benchmark/sub-testcase
--- ------  ----------------------------     --- ------  ----------------------------     -------- ------------------------------
4   15.3              33.525     ±3.0%       6   11.1              46.133     ±5.0%          37.6% ivb44/fsmark/1x-1t-3HDD-RAID5-xfs-4M-120G-NoSync
3   0.5              262.800     ±1.5%       3   0.4              307.367     ±1.2%          17.0% ivb44/fsmark/1x-1t-4BRD_12G-RAID5-f2fs-4M-30G-NoSync
3   0.5              289.900     ±0.3%       3   0.4              323.367     ±2.4%          11.5% ivb44/fsmark/1x-64t-4BRD_12G-RAID5-f2fs-4M-30G-NoSync
3   0.5              325.667     ±2.2%       3   0.5              358.800     ±1.8%          10.2% ivb44/fsmark/1x-64t-4BRD_12G-RAID5-ext4-4M-30G-NoSync
3   0.6              216.100     ±0.4%       3   0.6              230.100     ±0.4%           6.5% ivb44/fsmark/1x-64t-4BRD_12G-RAID5-f2fs-4M-30G-fsyncBeforeClose
3   0.5              309.900     ±0.3%       3   0.5              328.500     ±1.1%           6.0% ivb44/fsmark/1x-64t-4BRD_12G-RAID5-xfs-4M-30G-NoSync

3   13.8              37.000     ±0.2%       3   16.5              31.100     ±0.3%         -15.9% ivb44/fsmark/1x-1t-3HDD-RAID5-f2fs-4M-120G-NoSync

NOTE: here is some more info about those test parameters to help you
      understand the test cases better (an illustrative fs_mark invocation
      follows this list):

      1x : 'x' means iterations (loops), corresponding to the '-L' option of fsmark
      64t: 't' means threads
      4M : the size of each single file, corresponding to the '-s' option of fsmark
      120G, 30G: the total amount of data written per test

      4BRD_12G: BRD is the ram-backed block device (ramdisk); '4' means 4 ramdisks
                and '12G' is the size of each one, so 48G in total. The RAID array
                is built on top of those ramdisks.
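
      To make the naming concrete, here is a small illustrative snippet showing
      how the fields of, say, 1x-1t-3HDD-RAID5-xfs-4M-120G-NoSync could map onto
      an fs_mark invocation.  The mount point, the file-count derivation and the
      exact option spellings are assumptions for illustration, not taken from the
      actual LKP job files:

      /* Illustrative only: decode "1x-1t-...-4M-120G-NoSync" into fs_mark options. */
      #include <stdio.h>

      int main(void)
      {
              unsigned long long file_size  = 4ULL << 20;    /* "4M":   size of each file  */
              unsigned long long total_size = 120ULL << 30;  /* "120G": total data written */
              int loops   = 1;                               /* "1x":   fs_mark -L         */
              int threads = 1;                               /* "1t":   assumed to map to fs_mark -t */
              unsigned long long nfiles = total_size / file_size;  /* files needed to reach 120G */

              /* -S 0 would correspond to the "NoSync" variant; /fs/md0 is a hypothetical mount point. */
              printf("fs_mark -d /fs/md0 -s %llu -n %llu -t %d -L %d -S 0\n",
                     file_size, nfiles, threads, loops);
              return 0;
      }

      The fsyncBeforeClose variants would presumably use a non-zero '-S' sync
      method instead of '-S 0'.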


And FYI, below I have listed more detailed changes for the largest positive and negative changes.


more detailed changes for ivb44/fsmark/1x-1t-3HDD-RAID5-xfs-4M-120G-NoSync
---------

c1dfe87e41d9c292  d4b4c2cdffab86f5c7594c4463  
----------------  --------------------------  
         %stddev     %change         %stddev
             \          |                \  
     33.53 ±  3%     +37.6%      46.13 ±  4%  fsmark.files_per_sec
       916 ±  3%     -27.2%        667 ±  5%  fsmark.time.elapsed_time.max
       916 ±  3%     -27.2%        667 ±  5%  fsmark.time.elapsed_time
         7 ±  5%     +37.6%         10 ±  6%  fsmark.time.percent_of_cpu_this_job_got
     92097 ±  2%     -23.1%      70865 ±  4%  fsmark.time.voluntary_context_switches
      0.04 ± 42%    +681.0%       0.27 ± 22%  turbostat.Pkg%pc3
    716062 ±  3%     -82.7%     124210 ± 21%  cpuidle.C1-IVT.usage
 6.883e+08 ±  2%     -86.8%   91146705 ± 34%  cpuidle.C1-IVT.time
      0.04 ± 30%    +145.8%       0.10 ± 25%  turbostat.CPU%c3
       404 ± 16%     -58.4%        168 ± 14%  cpuidle.POLL.usage
       159 ± 47%    +179.5%        444 ± 23%  proc-vmstat.kswapd_low_wmark_hit_quickly
     11133 ± 23%    +100.3%      22298 ± 30%  cpuidle.C3-IVT.usage
  10286681 ± 27%     +95.6%   20116924 ± 27%  cpuidle.C3-IVT.time
      7.92 ± 16%     +77.4%      14.05 ±  6%  turbostat.Pkg%pc6
      4.93 ±  3%     -38.6%       3.03 ±  2%  turbostat.CPU%c1
       916 ±  3%     -27.2%        667 ±  5%  time.elapsed_time.max
       916 ±  3%     -27.2%        667 ±  5%  time.elapsed_time
   2137390 ±  3%     -26.7%    1566752 ±  5%  proc-vmstat.pgfault
         7 ±  5%     +37.6%         10 ±  6%  time.percent_of_cpu_this_job_got
 4.309e+10 ±  3%     -26.3%  3.176e+10 ±  5%  cpuidle.C6-IVT.time
     49038 ±  2%     -23.9%      37334 ±  4%  uptime.idle
      1047 ±  2%     -23.8%        797 ±  4%  uptime.boot
     92097 ±  2%     -23.1%      70865 ±  4%  time.voluntary_context_switches
   4005888 ±  0%     +13.3%    4537685 ± 11%  meminfo.DirectMap2M
      3917 ±  2%     -16.3%       3278 ±  5%  proc-vmstat.pageoutrun
    213737 ±  1%     -13.9%     183969 ±  3%  softirqs.SCHED
     46.86 ±  1%     +16.5%      54.59 ±  1%  turbostat.Pkg%pc2
     32603 ±  3%     -11.7%      28781 ±  5%  numa-vmstat.node1.nr_unevictable
    130415 ±  3%     -11.7%     115127 ±  5%  numa-meminfo.node1.Unevictable
    256781 ±  2%      -8.8%     234146 ±  3%  softirqs.TASKLET
    253606 ±  2%      -8.9%     231108 ±  3%  softirqs.BLOCK
    119.10 ±  2%     -70.0%      35.78 ± 13%  iostat.sdc.rrqm/s
    119.86 ±  1%     -70.3%      35.64 ± 12%  iostat.sdb.rrqm/s
    117.13 ±  2%     -70.2%      34.96 ± 11%  iostat.sda.rrqm/s
       504 ±  2%     -67.6%        163 ± 12%  iostat.sdc.rkB/s
       507 ±  1%     -67.9%        163 ± 12%  iostat.sdb.rkB/s
       496 ±  2%     -67.7%        160 ± 11%  iostat.sda.rkB/s
     15392 ±  3%     +37.8%      21203 ±  5%  iostat.sdb.wrqm/s
     15393 ±  3%     +37.7%      21203 ±  5%  iostat.sdc.wrqm/s
     15392 ±  3%     +37.7%      21203 ±  5%  iostat.sda.wrqm/s
    125236 ±  3%     +37.7%     172422 ±  4%  vmstat.io.bo
    125181 ±  3%     +37.6%     172303 ±  4%  iostat.md0.wkB/s
       552 ±  3%     +37.6%        760 ±  4%  iostat.md0.w/s
     62611 ±  3%     +37.6%      86167 ±  4%  iostat.sdb.wkB/s
     62613 ±  3%     +37.6%      86167 ±  4%  iostat.sdc.wkB/s
     62613 ±  3%     +37.6%      86168 ±  4%  iostat.sda.wkB/s
     40.24 ±  1%     -18.5%      32.81 ±  2%  turbostat.CorWatt
       200 ±  0%     +22.2%        245 ±  2%  iostat.sdc.w/s
      1020 ±  2%     +21.7%       1242 ±  2%  vmstat.system.in
       200 ±  0%     +22.1%        245 ±  2%  iostat.sda.w/s
       200 ±  0%     +22.2%        245 ±  2%  iostat.sdb.w/s
     69.99 ±  0%     -12.4%      61.34 ±  2%  turbostat.PkgWatt
      3943 ±  2%      -8.9%       3593 ±  1%  vmstat.system.cs
      1.51 ±  1%      +6.1%       1.60 ±  2%  iostat.sdb.avgqu-sz
      3.21 ±  0%      +5.4%       3.39 ±  1%  turbostat.RAMWatt
    256182 ±  1%      -4.2%     245424 ±  1%  iostat.md0.avgqu-sz



more detailed changes for ivb44/fsmark/1x-1t-3HDD-RAID5-f2fs-4M-120G-NoSync
---------

c1dfe87e41d9c292  d4b4c2cdffab86f5c7594c4463  
----------------  --------------------------  
         %stddev     %change         %stddev
             \          |                \  
     37.00 ±  0%     -15.9%      31.10 ±  0%  fsmark.files_per_sec
     63414 ±  4%     +57.6%      99945 ±  1%  fsmark.time.voluntary_context_switches
       830 ±  0%     +18.8%        987 ±  0%  fsmark.time.elapsed_time
       830 ±  0%     +18.8%        987 ±  0%  fsmark.time.elapsed_time.max
         9 ±  0%     -14.8%          7 ±  6%  fsmark.time.percent_of_cpu_this_job_got
      1.48 ± 20%    +357.3%       6.75 ±  5%  turbostat.Pkg%pc6
     63414 ±  4%     +57.6%      99945 ±  1%  time.voluntary_context_switches
       109 ± 15%     -37.8%         68 ± 20%  time.involuntary_context_switches
       338 ± 17%     +57.6%        533 ±  0%  cpuidle.POLL.usage
      2691 ±  1%     -20.3%       2144 ± 12%  proc-vmstat.kswapd_high_wmark_hit_quickly
   1060792 ±  0%     +20.2%    1275544 ±  0%  cpuidle.C6-IVT.usage
 3.876e+10 ±  0%     +19.3%  4.625e+10 ±  0%  cpuidle.C6-IVT.time
       830 ±  0%     +18.8%        987 ±  0%  time.elapsed_time.max
       830 ±  0%     +18.8%        987 ±  0%  time.elapsed_time
     39984 ±  0%     +18.6%      47434 ±  0%  uptime.idle
       856 ±  0%     +18.4%       1014 ±  0%  uptime.boot
     15874 ± 12%     +20.9%      19188 ±  6%  slabinfo.anon_vma.active_objs
   1942445 ±  0%     +18.1%    2293524 ±  0%  proc-vmstat.pgfault
     15977 ± 12%     +20.1%      19188 ±  6%  slabinfo.anon_vma.num_objs
    110388 ±  9%     +13.0%     124724 ±  4%  meminfo.DirectMap4k
      3107 ±  8%     -20.9%       2459 ± 15%  numa-meminfo.node0.AnonHugePages
     18408 ± 11%     +15.0%      21165 ±  3%  slabinfo.free_nid.active_objs
     18880 ± 11%     +13.7%      21465 ±  4%  slabinfo.free_nid.num_objs
   1125535 ±  0%     -11.5%     996605 ±  1%  cpuidle.C1-IVT.usage
         9 ±  0%     -14.8%          7 ±  6%  time.percent_of_cpu_this_job_got
    198260 ±  1%     +11.7%     221366 ±  0%  softirqs.SCHED
      6.09 ±  2%     -12.2%       5.34 ±  0%  turbostat.CPU%c1
     14203 ±  2%     -13.1%      12346 ±  8%  slabinfo.kmalloc-256.num_objs
     13763 ±  3%     -13.3%      11937 ±  9%  slabinfo.kmalloc-256.active_objs
      1255 ±  6%     +10.1%       1383 ±  1%  slabinfo.RAW.num_objs
      1255 ±  6%     +10.1%       1383 ±  1%  slabinfo.RAW.active_objs
     30.37 ±  3%     +30.5%      39.62 ±  0%  iostat.sdc.rrqm/s
     31.23 ±  5%     +28.0%      39.98 ±  1%  iostat.sdb.rrqm/s
     33.37 ±  3%     +19.0%      39.72 ±  2%  iostat.sda.rrqm/s
       562 ±  0%     -15.9%        472 ±  0%  iostat.md0.w/s
     17106 ±  0%     -15.9%      14382 ±  0%  iostat.sda.wrqm/s
     17106 ±  0%     -15.9%      14382 ±  0%  iostat.sdc.wrqm/s
     17106 ±  0%     -15.9%      14382 ±  0%  iostat.sdb.wrqm/s
     69317 ±  0%     -15.9%      58284 ±  0%  iostat.sdc.wkB/s
     69316 ±  0%     -15.9%      58284 ±  0%  iostat.sda.wkB/s
     69317 ±  0%     -15.9%      58284 ±  0%  iostat.sdb.wkB/s
    138603 ±  0%     -15.9%     116543 ±  0%  iostat.md0.wkB/s
    138705 ±  0%     -15.9%     116633 ±  0%  vmstat.io.bo
       213 ±  0%     -14.5%        182 ±  0%  iostat.sdb.w/s
       213 ±  0%     -14.5%        182 ±  0%  iostat.sda.w/s
       213 ±  0%     -14.6%        182 ±  0%  iostat.sdc.w/s
      4731 ±  0%     -12.7%       4131 ±  0%  vmstat.system.cs
      1133 ±  2%     -12.3%        993 ±  0%  vmstat.system.in
      3.02 ±  3%      -8.6%       2.76 ±  3%  iostat.sdc.avgqu-sz
      3.29 ±  2%      -9.4%       2.98 ±  3%  iostat.sdb.avgqu-sz
        25 ± 19%     -21.3%         19 ±  2%  turbostat.Avg_MHz
      3.10 ±  1%      -9.4%       2.81 ±  1%  iostat.sda.avgqu-sz
     44.45 ±  1%      -5.6%      41.94 ±  2%  turbostat.CorWatt
      0.75 ± 19%     -20.1%       0.60 ±  4%  turbostat.%Busy
     74.92 ±  1%      -4.9%      71.23 ±  2%  turbostat.PkgWatt
