lists.openwall.net - Open Source and information security mailing list archives
Date: Wed, 15 Feb 2017 09:34:08 +0800
From: kernel test robot <xiaolong.ye@...el.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Stephen Rothwell <sfr@...b.auug.org.au>, Minchan Kim <minchan@...nel.org>,
	Michal Hocko <mhocko@...e.com>, Mel Gorman <mgorman@...e.de>,
	Hillf Danton <hillf.zj@...baba-inc.com>, Rik van Riel <riel@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>, lkp@...org
Subject: [lkp-robot] [mm] e932fd3f17: 24% improvement of fio.write_bw_MBps

Greeting,

FYI, we noticed a 24% improvement of fio.write_bw_MBps due to commit:

commit: e932fd3f1772e1fbc0b90dcfb4fbc689729a48f8 ("mm: vmscan: kick flushers when we encounter dirty pages on the LRU")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master

in testcase: fio-basic
on test machine: 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 8G memory
with following parameters:

	runtime: 300s
	disk: 1SSD
	fs: btrfs
	nr_task: 8
	rw: randwrite
	bs: 4k
	ioengine: sync
	test_size: 400g
	cpufreq_governor: performance

test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
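For readers who want to approximate the workload without the lkp-tests harness, the parameters above map onto an fio job file roughly like the following. This is a hedged sketch, not the attached job.yaml: the section name, comments, and the `directory=` mount point are illustrative assumptions.

```ini
; Approximate fio job matching the reported parameters.
; The [randwrite-4k] section name and directory= path are illustrative,
; not taken from the attached job.yaml.
[global]
ioengine=sync          ; ioengine: sync
rw=randwrite           ; rw: randwrite
bs=4k                  ; bs: 4k
size=400g              ; test_size: 400g
runtime=300            ; runtime: 300s
time_based=1
numjobs=8              ; nr_task: 8
directory=/mnt/btrfs   ; assumed mount point of the btrfs-formatted SSD

[randwrite-4k]
```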
test-url: https://github.com/axboe/fio

Details are as below:
-------------------------------------------------------------------------------------------------->

To reproduce:

	git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
	cd lkp-tests
	bin/lkp install job.yaml  # job file is attached in this email
	bin/lkp run     job.yaml

testcase/path_params/tbox_group/run:
  fio-basic/300s-1SSD-btrfs-8-randwrite-4k-sync-400g-performance/lkp-bdw-de1

2b837027cdc60ac3  e932fd3f1772e1fbc0b90dcfb4
----------------  --------------------------
       %stddev         change          %stddev
           \             |                 \
      4.07 ± 17%        70%           6.92 ± 23%  fio.latency_10us%
      0.29 ± 12%        28%           0.37 ± 10%  fio.latency_100us%
     66.58 ±  5%        24%          82.64        fio.write_bw_MBps
     17043 ±  5%        24%          21155        fio.write_iops
     43.93 ±  3%        11%          48.92 ±  3%  fio.latency_250us%
     10465 ±  5%       -14%           8975 ± 10%  fio.write_clat_stddev
       469 ±  5%       -20%            377        fio.write_clat_mean_us
       570 ± 15%       -26%            423 ±  3%  fio.write_clat_90%_us
       883 ± 23%       -41%            519 ±  9%  fio.write_clat_95%_us
      6.24 ± 20%       -44%           3.48 ± 11%  fio.latency_750us%
      2414 ± 14%       -50%           1199 ± 27%  fio.write_clat_99%_us
      0.06 ± 24%       -65%           0.02 ± 34%  fio.latency_10ms%
      2.64 ± 29%       -67%           0.86 ± 38%  fio.latency_2ms%
      2.03 ± 33%       -69%           0.62 ± 44%  fio.latency_1000us%
  40927209 ±  5%        11%       45440361 ± 35%  fio.time.file_system_outputs
   4913388 ±  4%         6%        5228420 ± 35%  fio.time.voluntary_context_switches
     56.25 ±  3%         5%          58.87 ± 35%  fio.time.system_time
        22 ±  4%         4%             23 ± 35%  fio.time.percent_of_cpu_this_job_got

 fail:runs  %reproduction  fail:runs
     |            |            |
    :10          11%          1:9   last_state.OOM
    :10          11%          1:9   last_state.is_incomplete_run
    :10          11%          1:9   dmesg.invoked_oom-killer:gfp_mask=0x
    :10          11%          1:9   dmesg.Mem-Info
    :10          11%          1:9   dmesg.Out_of_memory:Kill_process

       562 ±  7%        24%            696 ±  3%  turbostat.Avg_MHz
     22.79 ±  7%        23%          28.09 ±  3%  turbostat.%Busy
     26.58               5%          28.03        turbostat.PkgWatt
     10.78               4%          11.16        turbostat.RAMWatt

      1659 ±300%      2e+05         187515 ±  9%  latency_stats.avg.balance_dirty_pages.balance_dirty_pages_ratelimited.__btrfs_btree_balance_dirty.[btrfs].btrfs_btree_balance_dirty.[btrfs].__btrfs_buffered_write.[btrfs].btrfs_file_write_iter.[btrfs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
      1659 ±300%      2e+05         203157        latency_stats.max.balance_dirty_pages.balance_dirty_pages_ratelimited.__btrfs_btree_balance_dirty.[btrfs].btrfs_btree_balance_dirty.[btrfs].__btrfs_buffered_write.[btrfs].btrfs_file_write_iter.[btrfs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
   6217883 ±157%      1e+08      1.231e+08 ± 51%  latency_stats.sum.balance_dirty_pages.balance_dirty_pages_ratelimited.__btrfs_buffered_write.[btrfs].btrfs_file_write_iter.[btrfs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
      1659 ±300%      3e+06        3100846 ± 57%  latency_stats.sum.balance_dirty_pages.balance_dirty_pages_ratelimited.__btrfs_btree_balance_dirty.[btrfs].btrfs_btree_balance_dirty.[btrfs].__btrfs_buffered_write.[btrfs].btrfs_file_write_iter.[btrfs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath

    182633 ±  8%        12%         204509 ±  5%  vmstat.io.bo
    314624 ±  6%        23%         387499 ±  3%  vmstat.system.cs
     36487              12%          40702        vmstat.system.in

     16960 ±  5%        25%          21138        iostat.sda.w/s
     46.04 ± 11%        16%          53.46 ±  7%  iostat.sda.avgqu-sz
    204068 ±  7%        11%         225809 ±  4%  iostat.sda.wkB/s

      8928 ±  8%        27%          11302 ±  3%  perf-stat.instructions-per-iTLB-miss
   2.7e+12 ±  7%        24%      3.344e+12 ±  4%  perf-stat.cpu-cycles
  95320324 ±  6%        23%      1.171e+08 ±  3%  perf-stat.context-switches
 3.384e+11 ±  6%        22%      4.135e+11 ±  3%  perf-stat.branch-instructions
 1.724e+12 ±  6%        21%      2.093e+12 ±  3%  perf-stat.instructions
 4.565e+11 ±  7%        21%       5.54e+11        perf-stat.dTLB-loads
 1.656e+09 ±  5%        20%      1.993e+09 ±  4%  perf-stat.iTLB-loads
 2.698e+11 ±  5%        20%      3.225e+11        perf-stat.dTLB-stores
 3.769e+10 ±  5%        16%       4.39e+10 ±  3%  perf-stat.cache-misses
 3.769e+10 ±  5%        16%       4.39e+10 ±  3%  perf-stat.cache-references
 3.547e+09 ±  4%        15%      4.086e+09        perf-stat.branch-misses
    502899 ±  3%        13%         565994 ±  3%  perf-stat.cpu-migrations
 4.906e+08 ±  3%         6%      5.212e+08 ±  3%  perf-stat.dTLB-load-misses
      1.05              -6%           0.99        perf-stat.branch-miss-rate%
      0.02 ±  3%       -10%           0.02        perf-stat.dTLB-store-miss-rate%
      0.11 ±  8%       -13%           0.09 ±  3%  perf-stat.dTLB-load-miss-rate%
     10.49 ±  6%       -19%           8.51 ±  3%  perf-stat.iTLB-load-miss-rate%

[*] bisect-good sample
[O] bisect-bad sample

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Thanks,
Xiaolong

View attachment "config-4.10.0-rc7-00263-ge932fd3" of type "text/plain" (155610 bytes)
View attachment "job-script" of type "text/plain" (7031 bytes)
View attachment "job.yaml" of type "text/plain" (4690 bytes)
View attachment "reproduce" of type "text/plain" (431 bytes)
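The headline 24% in the subject line can be re-derived from the fio.write_bw_MBps and fio.write_iops means in the comparison table above. This is just a quick sanity check; the values are copied verbatim from this report.

```python
# Recompute the headline improvement from the report's mean values.
# Columns: (parent commit 2b837027cdc60ac3, patched commit e932fd3f17).
base_bw, new_bw = 66.58, 82.64        # fio.write_bw_MBps
base_iops, new_iops = 17043, 21155    # fio.write_iops

bw_gain = (new_bw - base_bw) / base_bw * 100
iops_gain = (new_iops - base_iops) / base_iops * 100

print(f"write_bw_MBps: +{bw_gain:.0f}%")   # ~24%, matching the subject line
print(f"write_iops:    +{iops_gain:.0f}%")  # bandwidth and IOPS move together at bs=4k
```

That bandwidth and IOPS scale by the same factor is expected here, since every I/O is a fixed 4k block.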