lists.openwall.net - Open Source and information security mailing list archives
Date: Tue, 10 Oct 2017 16:10:54 +0100
From: John Garry <john.garry@...wei.com>
To: Ming Lei <ming.lei@...hat.com>
CC: Jens Axboe <axboe@...com>, <linux-block@...r.kernel.org>,
	Christoph Hellwig <hch@...radead.org>, Mike Snitzer <snitzer@...hat.com>,
	<dm-devel@...hat.com>, Bart Van Assche <bart.vanassche@...disk.com>,
	Laurence Oberman <loberman@...hat.com>, Paolo Valente <paolo.valente@...aro.org>,
	Oleksandr Natalenko <oleksandr@...alenko.name>, Tom Nguyen <tom81094@...il.com>,
	<linux-kernel@...r.kernel.org>, <linux-scsi@...r.kernel.org>,
	Omar Sandoval <osandov@...com>, Linuxarm <linuxarm@...wei.com>
Subject: Re: [PATCH V5 00/14] blk-mq-sched: improve sequential I/O performance (part 1)

On 10/10/2017 14:45, Ming Lei wrote:
> Hi John,
>
> All changes in V6.2 are blk-mq/scsi-mq only, which shouldn't
> affect non SCSI_MQ, so I suggest you compare the perf
> between deadline and mq-deadline, as Johannes mentioned.
>
>> V6.2 series with default SCSI_MQ
>> read, rw, write IOPS
>> 700K, 130K/128K, 640K
>
> If possible, could you provide your fio script and log on both
> non SCSI_MQ (deadline) and SCSI_MQ (mq-deadline)? Maybe some clues
> can be figured out.
>
> Also, I just put another patch on the V6.2 branch, which may improve
> things a bit too. You may try that in your test.
>
> https://github.com/ming1/linux/commit/e31e2eec46c9b5ae7cfa181e9b77adad2c6a97ce
>
> --
> Ming

Hi Ming Lei,

OK, I have tested deadline vs mq-deadline for your V6.2 branch and for 4.14-rc2. Unfortunately I don't have time now to test your experimental patches.
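For reference, the scheduler being compared is selected per block device through sysfs. A minimal sketch of the helpers used for that kind of switching; the SYSFS_ROOT variable is my own addition so the snippet can be exercised against a fake tree, and on a live system it is simply /sys:

```shell
#!/bin/sh
# Sketch: switch the I/O scheduler of a block device via sysfs.
# SYSFS_ROOT defaults to /sys; it is overridable only so these
# helpers can be tried without real block devices.
SYSFS_ROOT=${SYSFS_ROOT:-/sys}

set_scheduler() {
    # $1 = device name (e.g. sdb), $2 = scheduler (e.g. mq-deadline)
    echo "$2" > "$SYSFS_ROOT/block/$1/queue/scheduler"
}

current_scheduler() {
    # $1 = device name; on a real system the active scheduler is
    # shown in brackets, e.g. "[mq-deadline] kyber none"
    cat "$SYSFS_ROOT/block/$1/queue/scheduler"
}
```

Note that mq-deadline is only offered when the device goes through blk-mq (e.g. booted with scsi_mod.use_blk_mq=1), while the legacy deadline scheduler is only available in the non-mq case.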
4.14-rc2 without default SCSI_MQ, deadline scheduler
read, rw, write IOPS
920K, 115K/115K, 806K

4.14-rc2 with default SCSI_MQ, mq-deadline scheduler
read, rw, write IOPS
280K, 99K/99K, 300K

V6.2 series without default SCSI_MQ, deadline scheduler
read, rw, write IOPS
919K, 117K/117K, 806K

V6.2 series with default SCSI_MQ, mq-deadline scheduler
read, rw, write IOPS
688K, 128K/128K, 630K

I think that the non-mq results look a bit more sensible - that is, they are consistent across kernels.

Here's my fio script:

[global]
rw=rw
direct=1
ioengine=libaio
iodepth=2048
numjobs=1
bs=4k
;size=10240000m
;zero_buffers=1
group_reporting=1
;ioscheduler=noop
cpumask=0xff
;cpus_allowed=0-3
;gtod_reduce=1
;iodepth_batch=2
;iodepth_batch_complete=2
runtime=100000000
;thread
loops=10000

[job1]
filename=/dev/sdb:
[job1]
filename=/dev/sdc:
[job1]
filename=/dev/sdd:
[job1]
filename=/dev/sde:
[job1]
filename=/dev/sdf:
[job1]
filename=/dev/sdg:

John
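For a quick sense of the gap, the read-IOPS figures reported above can be compared directly. A small sketch, using the numbers from the results (in thousands of IOPS); integer division, so the percentages are truncated:

```shell
#!/bin/sh
# Compare mq-deadline read IOPS against deadline read IOPS,
# using the figures reported above (in thousands of IOPS).
ratio() {
    # $1 = mq-deadline IOPS, $2 = deadline IOPS, $3 = label
    pct=$(( $1 * 100 / $2 ))
    echo "$3: mq-deadline reaches ${pct}% of deadline"
}
ratio 280 920 "4.14-rc2 read"   # -> 30%
ratio 688 919 "V6.2 read"       # -> 74%
```

So on these numbers the V6.2 series roughly halves the read-IOPS gap between mq-deadline and the legacy deadline scheduler, without fully closing it.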