Message-ID: <20161018025900.GE6107@yexl-desktop>
Date:   Tue, 18 Oct 2016 10:59:00 +0800
From:   kernel test robot <xiaolong.ye@...el.com>
To:     Jens Axboe <axboe@...com>
Cc:     LKML <linux-kernel@...r.kernel.org>, Jens Axboe <axboe@...com>,
        Jens Axboe <axboe@...nel.dk>, lkp@...org
Subject: [lkp] [writeback]  1a2af575cf:  fio.latency_10us% -20.4% improvement


FYI, we noticed a -20.4% improvement of fio.latency_10us% due to commit:

commit 1a2af575cf12136298f751b656adc2fa8456f804 ("writeback: throttle buffered writeback")
https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git wb-buf-throttle

in testcase: fio-basic
on test machine: 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 8G memory
with following parameters:

	runtime: 300s
	nr_task: 8
	disk: 1SSD
	fs: ext4
	rw: write
	bs: 4k
	ioengine: sync
	test_size: 400g
	cpufreq_governor: performance

Fio is a tool that spawns a number of threads or processes performing a particular type of I/O action as specified by the user.
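The parameters above map roughly onto a fio job file along these lines. This is a hedged sketch using fio's standard job-file option names; the exact job that lkp generates is in the attached job-script, and the mount-point path here is an assumption:

```ini
# Hypothetical fio job approximating the lkp parameters above;
# the authoritative job is in the attached job-script.
[global]
bs=4k              ; block size
ioengine=sync      ; synchronous (buffered) I/O engine
rw=write           ; sequential writes
size=400g          ; test_size
runtime=300        ; runtime in seconds
directory=/fs/sda1 ; assumed ext4 mount point on the SSD

[buffered-write]
numjobs=8          ; nr_task
```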


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Details are as below:
-------------------------------------------------------------------------------------------------->


To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml

=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase:
  4k/gcc-6/performance/1SSD/ext4/sync/x86_64-rhel-7.2/8/debian-x86_64-2016-08-31.cgz/300s/write/lkp-bdw-de1/400g/fio-basic

commit: 
  1513fc26a0 ("wbt: add general throttling mechanism")
  1a2af575cf ("writeback: throttle buffered writeback")

1513fc26a0f3c20e 1a2af575cf12136298f751b656 
---------------- -------------------------- 
         %stddev     %change         %stddev
             \          |                \  
      4.38 ±  3%     -20.4%       3.49 ±  3%  fio.latency_10us%
      0.01 ±  0%    -100.0%       0.00 ± -1%  fio.latency_250ms%
      0.01 ±  0%    -100.0%       0.00 ± -1%  fio.latency_500ms%
     91086 ±  0%      +2.3%      93150 ±  0%  fio.time.voluntary_context_switches
      4.00 ±  0%     -25.0%       3.00 ±  0%  fio.write_clat_95%_us
      1770 ±  0%      -9.5%       1603 ±  0%  fio.write_clat_stddev
     55001 ±  0%      -1.5%      54177 ±  0%  interrupts.CAL:Function_call_interrupts
   1135308 ±  0%     +10.7%    1256908 ±  0%  meminfo.Dirty
    137881 ±  0%     -87.6%      17158 ±  1%  meminfo.Writeback
  97812142 ±  6%     +28.1%  1.253e+08 ±  9%  cpuidle.C1E-BDW.time
    178531 ±  3%     +17.6%     210005 ±  5%  cpuidle.C1E-BDW.usage
  71044513 ±  7%     -22.4%   55107352 ± 10%  cpuidle.POLL.time
      6.45 ±  1%      -4.2%       6.18 ±  1%  turbostat.%Busy
    140.25 ±  2%      -6.4%     131.25 ±  2%  turbostat.Avg_MHz
     16.97 ±  6%     +12.3%      19.07 ±  5%  turbostat.CPU%c6
    135.51 ±  0%     -88.2%      16.03 ±  0%  iostat.sda.avgqu-sz
    300.94 ±  0%     -88.1%      35.87 ±  0%  iostat.sda.await
    301.11 ±  0%     -88.1%      35.97 ±  0%  iostat.sda.w_await
     27.90 ±  0%      -2.1%      27.32 ±  0%  iostat.sda.wrqm/s
    283778 ±  0%     +10.7%     314221 ±  0%  proc-vmstat.nr_dirty
     34550 ±  0%     -87.5%       4308 ±  0%  proc-vmstat.nr_writeback
    123061 ± 37%     -98.0%       2515 ± 35%  proc-vmstat.pgscan_direct
    123061 ± 37%     -98.0%       2515 ± 35%  proc-vmstat.pgsteal_direct
    841.25 ±  3%     -27.5%     610.00 ±  7%  slabinfo.blkdev_requests.active_objs
    880.00 ±  2%     -30.7%     610.00 ±  7%  slabinfo.blkdev_requests.num_objs
    514.25 ±  2%     -25.3%     384.25 ±  4%  slabinfo.kmalloc-4096.active_objs
    531.25 ±  2%     -25.5%     396.00 ±  4%  slabinfo.kmalloc-4096.num_objs
    277718 ±  0%     -91.7%      23019 ±  3%  latency_stats.avg.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
    277718 ±  0%     -86.7%      36903 ±  0%  latency_stats.avg.max
     26918 ±  5%     -66.1%       9126 ±  3%  latency_stats.avg.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
    417291 ± 10%     -87.8%      50997 ±  6%  latency_stats.max.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
    417291 ± 10%     -85.9%      58843 ±  1%  latency_stats.max.max
  69764347 ±  6%     -89.7%    7217067 ±  8%  latency_stats.sum.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
    112945 ± 36%     -84.0%      18037 ±109%  latency_stats.sum.lru_add_drain_all.SyS_fadvise64.entry_SYSCALL_64_fastpath
   6133127 ±  8%     -79.0%    1286420 ±  7%  latency_stats.sum.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
    447.79 ±  3%     +33.5%     597.67 ±  8%  sched_debug.cfs_rq:/.load_avg.max
     49.21 ±  8%     +67.5%      82.41 ± 20%  sched_debug.cfs_rq:/.load_avg.stddev
    536913 ±  3%     +10.3%     592288 ±  6%  sched_debug.cpu.avg_idle.min
     93.46 ± 22%     -40.2%      55.92 ± 13%  sched_debug.cpu.cpu_load[2].max
     22.64 ± 22%     -38.9%      13.83 ± 12%  sched_debug.cpu.cpu_load[2].stddev
     83.75 ± 21%     -35.7%      53.88 ±  7%  sched_debug.cpu.cpu_load[3].max
     20.09 ± 21%     -34.7%      13.11 ±  6%  sched_debug.cpu.cpu_load[3].stddev
      8881 ±  3%     +37.6%      12220 ±  1%  sched_debug.cpu.ttwu_local.max
      1306 ± 17%     +42.3%       1859 ±  6%  sched_debug.cpu.ttwu_local.stddev
      0.32 ±  2%      +8.1%       0.35 ±  2%  perf-stat.branch-miss-rate%
 3.346e+08 ±  2%      +4.7%  3.503e+08 ±  2%  perf-stat.branch-misses
  5.84e+09 ±  0%      -1.9%   5.73e+09 ±  0%  perf-stat.cache-misses
  5.84e+09 ±  0%      -1.9%   5.73e+09 ±  0%  perf-stat.cache-references
 6.502e+11 ±  2%      -7.9%  5.987e+11 ±  2%  perf-stat.cpu-cycles
      0.12 ±  2%      +3.2%       0.12 ±  0%  perf-stat.dTLB-load-miss-rate%
 1.712e+08 ±  0%      +2.3%  1.751e+08 ±  0%  perf-stat.dTLB-load-misses
  21716662 ±  0%      +2.7%   22312790 ±  0%  perf-stat.dTLB-store-misses
     64.73 ±  0%      -2.4%      63.14 ±  0%  perf-stat.iTLB-load-miss-rate%
  43799790 ±  0%      -1.6%   43082890 ±  0%  perf-stat.iTLB-load-misses
  23870161 ±  0%      +5.4%   25151226 ±  2%  perf-stat.iTLB-loads
      0.82 ±  1%      +5.9%       0.87 ±  2%  perf-stat.ipc
      0.86 ± 10%     +30.8%       1.12 ± 10%  perf-profile.calltrace.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin
      2.28 ±  1%     +27.1%       2.90 ±  4%  perf-profile.calltrace.cycles-pp.__block_commit_write.block_write_end.generic_write_end.ext4_da_write_end.generic_perform_write
      0.78 ±  0%     +26.4%       0.98 ±  6%  perf-profile.calltrace.cycles-pp.__set_page_dirty.mark_buffer_dirty.__block_commit_write.block_write_end.generic_write_end
      2.35 ±  2%     +27.8%       3.00 ±  3%  perf-profile.calltrace.cycles-pp.block_write_end.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter
     20.47 ± 11%     -28.3%      14.68 ±  6%  perf-profile.calltrace.cycles-pp.call_cpuidle.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations
     20.54 ± 11%     -28.1%      14.77 ±  7%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
     20.47 ± 11%     -28.3%      14.68 ±  6%  perf-profile.calltrace.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init.start_kernel
     17.98 ± 10%     -32.7%      12.11 ±  7%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init
      3.60 ±  3%     +17.6%       4.23 ±  4%  perf-profile.calltrace.cycles-pp.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
      2.91 ±  3%     +21.2%       3.52 ±  2%  perf-profile.calltrace.cycles-pp.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
      1.32 ±  3%     +27.0%       1.68 ±  4%  perf-profile.calltrace.cycles-pp.mark_buffer_dirty.__block_commit_write.block_write_end.generic_write_end.ext4_da_write_end
     19.44 ± 15%     -30.1%      13.59 ± 14%  perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
     20.54 ± 11%     -28.1%      14.77 ±  7%  perf-profile.calltrace.cycles-pp.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
     20.54 ± 11%     -28.1%      14.77 ±  7%  perf-profile.calltrace.cycles-pp.start_kernel.x86_64_start_reservations.x86_64_start_kernel
     20.54 ± 11%     -28.1%      14.77 ±  7%  perf-profile.calltrace.cycles-pp.x86_64_start_kernel
     20.54 ± 11%     -28.1%      14.77 ±  7%  perf-profile.calltrace.cycles-pp.x86_64_start_reservations.x86_64_start_kernel
      0.86 ± 10%     +30.8%       1.12 ± 10%  perf-profile.children.cycles-pp.__add_to_page_cache_locked
      2.29 ±  1%     +27.1%       2.91 ±  3%  perf-profile.children.cycles-pp.__block_commit_write
      0.79 ±  2%     +25.0%       0.99 ±  6%  perf-profile.children.cycles-pp.__set_page_dirty
      3.54 ±  5%     +14.3%       4.05 ±  8%  perf-profile.children.cycles-pp.apic_timer_interrupt
      2.38 ±  3%     +26.8%       3.01 ±  4%  perf-profile.children.cycles-pp.block_write_end
      3.60 ±  3%     +17.7%       4.23 ±  4%  perf-profile.children.cycles-pp.ext4_da_write_end
      2.94 ±  3%     +20.3%       3.54 ±  2%  perf-profile.children.cycles-pp.generic_write_end
      1.32 ±  3%     +27.6%       1.69 ±  4%  perf-profile.children.cycles-pp.mark_buffer_dirty
     19.44 ± 15%     -30.1%      13.59 ± 14%  perf-profile.children.cycles-pp.poll_idle
     20.54 ± 11%     -28.1%      14.77 ±  7%  perf-profile.children.cycles-pp.rest_init
      3.43 ±  5%     +14.3%       3.92 ±  8%  perf-profile.children.cycles-pp.smp_apic_timer_interrupt
     20.54 ± 11%     -28.1%      14.77 ±  7%  perf-profile.children.cycles-pp.start_kernel
      0.86 ± 16%     +19.2%       1.03 ±  3%  perf-profile.children.cycles-pp.tick_nohz_irq_exit
     20.54 ± 11%     -28.1%      14.77 ±  7%  perf-profile.children.cycles-pp.x86_64_start_kernel
     20.54 ± 11%     -28.1%      14.77 ±  7%  perf-profile.children.cycles-pp.x86_64_start_reservations
      0.94 ±  4%     +25.5%       1.18 ±  9%  perf-profile.self.cycles-pp.__block_commit_write
     19.44 ± 15%     -30.1%      13.59 ± 14%  perf-profile.self.cycles-pp.poll_idle
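The %change column in the tables above is the relative difference between the means measured on the two commits. As a small illustrative check of the headline metric (rounding of the robot's unrounded means may differ slightly):

```python
def pct_change(base: float, new: float) -> float:
    """Relative change of `new` versus `base`, in percent."""
    return (new - base) / base * 100.0

# Headline metric from the table: fio.latency_10us% went from 4.38 to 3.49
print(round(pct_change(4.38, 3.49), 1))  # -> -20.3 (table reports -20.4% from unrounded means)
```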

Thanks,
Xiaolong

View attachment "config-4.8.0-14612-g1a2af57" of type "text/plain" (153678 bytes)

View attachment "job-script" of type "text/plain" (6995 bytes)

View attachment "job.yaml" of type "text/plain" (4557 bytes)

View attachment "reproduce" of type "text/plain" (431 bytes)
