Message-ID: <20171009042835.GA19029@ming.t460p>
Date: Mon, 9 Oct 2017 12:28:37 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: Jens Axboe <axboe@...com>, linux-block@...r.kernel.org,
Mike Snitzer <snitzer@...hat.com>, dm-devel@...hat.com,
Bart Van Assche <bart.vanassche@...disk.com>,
Laurence Oberman <loberman@...hat.com>,
Paolo Valente <paolo.valente@...aro.org>,
Oleksandr Natalenko <oleksandr@...alenko.name>,
Tom Nguyen <tom81094@...il.com>, linux-kernel@...r.kernel.org,
Omar Sandoval <osandov@...com>
Subject: Re: [PATCH V5 8/8] blk-mq: improve bio merge from blk-mq sw queue
On Tue, Oct 03, 2017 at 02:21:43AM -0700, Christoph Hellwig wrote:
> This looks generally good to me, but I really worry about the impact
> on very high iops devices. Did you try this e.g. for random reads
> from unallocated blocks on an enterprise NVMe SSD?
There looks to be no such impact; please see the following data from the
fio test (libaio, direct, bs=4k, 64 jobs, randread, none scheduler):
[root@...rageqe-62 results]# ../parse_fio 4.14.0-rc2.no_blk_mq_perf+-nvme-64jobs-mq-none.log 4.14.0-rc2.BLK_MQ_PERF_V5+-nvme-64jobs-mq-none.log
-----------------------------------------------------
 IOPS(K)  | none (before patch) | none (V5 patched)
-----------------------------------------------------
 randread |       650.98        |       653.15
-----------------------------------------------------
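
For reference, a fio command line matching those parameters would look
roughly like the one below; the device path, iodepth and runtime are not
spelled out in this mail, so treat them as placeholders:

    fio --name=randread --filename=/dev/nvme0n1 --ioengine=libaio \
        --direct=1 --rw=randread --bs=4k --iodepth=64 --numjobs=64 \
        --runtime=60 --time_based --group_reporting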
Alternatively:

If you are worried about this impact, can we simply disable merging on
NVMe when the none scheduler is in use? It is basically impossible to
merge NVMe requests/bios under none, but merging is doable with the
kyber scheduler.
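
To illustrate that opt-out, here is a minimal sketch modeled on the
4.14-era __blk_mq_sched_bio_merge(): the assumption is that the
per-sw-queue merge attempt is gated on BLK_MQ_F_SHOULD_MERGE, so a
driver such as NVMe could opt out simply by not setting that flag in
its tag set. This is an illustration, not a tested patch:

    /*
     * Sketch only: with the "none" scheduler (no elevator attached),
     * only attempt the per-sw-queue merge when the driver asked for
     * it via BLK_MQ_F_SHOULD_MERGE.
     */
    bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio)
    {
            struct elevator_queue *e = q->elevator;
            struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
            struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
            bool ret = false;

            if (e && e->type->ops.mq.bio_merge) {
                    blk_mq_put_ctx(ctx);
                    return e->type->ops.mq.bio_merge(hctx, bio);
            }

            if (hctx->flags & BLK_MQ_F_SHOULD_MERGE) {
                    /* merge against requests parked in this sw queue */
                    spin_lock(&ctx->lock);
                    ret = blk_mq_attempt_merge(q, ctx, bio);
                    spin_unlock(&ctx->lock);
            }

            blk_mq_put_ctx(ctx);
            return ret;
    }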
--
Ming