Message-ID: <A1D98E0E70C35541AEBDE192A520C5434DC0B9@AMSPEX01CL03.citrite.net>
Date: Wed, 12 Aug 2015 16:46:10 +0000
From: Rafal Mielniczuk <rafal.mielniczuk@...rix.com>
To: Bob Liu <bob.liu@...cle.com>, Jens Axboe <axboe@...com>
CC: Marcus Granado <Marcus.Granado@...rix.com>,
Arianna Avanzini <avanzini.arianna@...il.com>,
Felipe Franciosi <felipe.franciosi@...rix.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Christoph Hellwig" <hch@...radead.org>,
David Vrabel <david.vrabel@...rix.com>,
"xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
"boris.ostrovsky@...cle.com" <boris.ostrovsky@...cle.com>,
Jonathan Davies <Jonathan.Davies@...rix.com>
Subject: Re: [Xen-devel] [PATCH RFC v2 0/5] Multi-queue support for
xen-blkfront and xen-blkback
On 12/08/15 11:17, Bob Liu wrote:
> On 08/12/2015 01:32 AM, Jens Axboe wrote:
>> On 08/11/2015 03:45 AM, Rafal Mielniczuk wrote:
>>> On 11/08/15 07:08, Bob Liu wrote:
>>>> On 08/10/2015 11:52 PM, Jens Axboe wrote:
>>>>> On 08/10/2015 05:03 AM, Rafal Mielniczuk wrote:
> ...
>>>>>> Hello,
>>>>>>
>>>>>> We reran the tests for sequential reads with identical settings, but with Bob Liu's multiqueue patches reverted from the dom0 and guest kernels.
>>>>>> The results we obtained were *better* than those we got with the multiqueue patches applied:
>>>>>>
>>>>>> fio_threads  io_depth  block_size  1-queue_iops  8-queue_iops  *no-mq-patches_iops*
>>>>>>           8        32        512          158K          264K                  321K
>>>>>>           8        32         1K          157K          260K                  328K
>>>>>>           8        32         2K          157K          258K                  336K
>>>>>>           8        32         4K          148K          257K                  308K
>>>>>>           8        32         8K          124K          207K                  188K
>>>>>>           8        32        16K           84K          105K                   82K
>>>>>>           8        32        32K           50K           54K                   36K
>>>>>>           8        32        64K           24K           27K                   16K
>>>>>>           8        32       128K           11K           13K                   11K
>>>>>>
>>>>>> We noticed that the requests are not merged by the guest when the multiqueue patches are applied,
>>>>>> which results in a regression for small block sizes (RealSSD P320h's optimal block size is around 32-64KB).
>>>>>>
>>>>>> We observed a similar regression for the Dell MZ-5EA1000-0D3 100 GB 2.5" internal SSD.
>>>>>>
>>>>>> As I understand it, the blk-mq layer bypasses the I/O scheduler, which also effectively disables merges.
>>>>>> Could you explain why it is difficult to enable merging in the blk-mq layer?
>>>>>> That could help close the performance gap we observed.
>>>>>>
>>>>>> Otherwise, the tests show that the multiqueue patches do not improve performance,
>>>>>> at least when it comes to sequential read/write operations.
>>>>> blk-mq still provides merging, there should be no difference there. Do the xen patches set BLK_MQ_F_SHOULD_MERGE?
>>>>>
>>>> Yes.
>>>> Is it possible that the xen-blkfront driver dequeues requests too fast once we have multiple hardware queues?
>>>> New requests then don't get the chance to merge with old requests which were already dequeued and issued.
>>>>
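(For reference, BLK_MQ_F_SHOULD_MERGE is requested when a driver sets up its blk-mq tag set.
Below is a minimal, hypothetical sketch of the 4.1-era pattern; the example_* names and the
queue depth of 64 are made up, and this is not the actual xen-blkfront code:

    #include <linux/blk-mq.h>
    #include <linux/blkdev.h>

    static int example_queue_rq(struct blk_mq_hw_ctx *hctx,
                                const struct blk_mq_queue_data *bd)
    {
            blk_mq_start_request(bd->rq);
            /* A real driver would map the request onto its ring here. */
            blk_mq_end_request(bd->rq, 0);        /* complete immediately, null-device style */
            return BLK_MQ_RQ_QUEUE_OK;
    }

    static struct blk_mq_ops example_mq_ops = {
            .queue_rq  = example_queue_rq,
            .map_queue = blk_mq_map_queue,        /* default ctx-to-hctx mapping in 4.1 */
    };

    static struct blk_mq_tag_set example_tag_set;

    static struct request_queue *example_init_queue(void)
    {
            memset(&example_tag_set, 0, sizeof(example_tag_set));
            example_tag_set.ops          = &example_mq_ops;
            example_tag_set.nr_hw_queues = 1;             /* one ring -> one hardware queue */
            example_tag_set.queue_depth  = 64;
            example_tag_set.numa_node    = NUMA_NO_NODE;
            /* Ask the blk-mq core to try merging incoming bios into queued requests. */
            example_tag_set.flags        = BLK_MQ_F_SHOULD_MERGE;

            if (blk_mq_alloc_tag_set(&example_tag_set))
                    return NULL;

            return blk_mq_init_queue(&example_tag_set);   /* may return an ERR_PTR() */
    }

Setting the flag only enables the attempt; whether merging actually happens also depends on
the submission pattern, as Jens notes further down about plugging.)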
>>> For some reason we don't see merges even when we set multiqueue to 1.
>>> Below are some stats from the guest system when doing sequential 4KB reads:
>>>
>>> $ fio --name=test --ioengine=libaio --direct=1 --rw=read --numjobs=8 \
>>>       --iodepth=32 --time_based=1 --runtime=300 --bs=4KB \
>>>       --filename=/dev/xvdb
>>>
>>> $ iostat -xt 5 /dev/xvdb
>>> avg-cpu:  %user   %nice  %system  %iowait  %steal   %idle
>>>            0.50    0.00     2.73    85.14    2.00    9.63
>>>
>>> Device:  rrqm/s  wrqm/s        r/s   w/s      rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm   %util
>>> xvdb       0.00    0.00  156926.00  0.00  627704.00   0.00      8.00     30.06   0.19     0.19     0.00   0.01  100.48
>>>
>>> $ cat /sys/block/xvdb/queue/scheduler
>>> none
>>>
>>> $ cat /sys/block/xvdb/queue/nomerges
>>> 0
>>>
>>> Relevant bits from the xenstore configuration on the dom0:
>>>
>>> /local/domain/0/backend/vbd/2/51728/dev = "xvdb"
>>> /local/domain/0/backend/vbd/2/51728/backend-kind = "vbd"
>>> /local/domain/0/backend/vbd/2/51728/type = "phy"
>>> /local/domain/0/backend/vbd/2/51728/multi-queue-max-queues = "1"
>>>
>>> /local/domain/2/device/vbd/51728/multi-queue-num-queues = "1"
>>> /local/domain/2/device/vbd/51728/ring-ref = "9"
>>> /local/domain/2/device/vbd/51728/event-channel = "60"
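(A side note on the two multi-queue keys above: presumably the backend advertises the maximum
number of queues it supports and the frontend writes back how many it will actually use. A
hypothetical sketch of that negotiation, using the key names from the listing but not
necessarily how the RFC series implements it:

    #include <linux/device.h>
    #include <xen/xenbus.h>

    /* Illustrative only: pick a queue count no larger than the backend's advertised maximum. */
    static unsigned int example_negotiate_queues(struct xenbus_device *dev,
                                                 unsigned int wanted)
    {
            unsigned int backend_max = 1;
            int err;

            /* If the backend does not advertise the key, fall back to a single queue. */
            if (xenbus_scanf(XBT_NIL, dev->otherend,
                             "multi-queue-max-queues", "%u", &backend_max) != 1)
                    backend_max = 1;

            if (wanted > backend_max)
                    wanted = backend_max;

            err = xenbus_printf(XBT_NIL, dev->nodename,
                                "multi-queue-num-queues", "%u", wanted);
            if (err)
                    dev_warn(&dev->dev,
                             "writing multi-queue-num-queues failed (%d)\n", err);
            return wanted;
    }

With multi-queue-max-queues = "1" as above, such a frontend would end up with one queue.)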
>> What if you add --iodepth_batch=16 to that fio command line? Both mq and non-mq rely on plugging to get
>> batching in the use case above; otherwise IO is dispatched immediately, and O_DIRECT is immediate.
>> I'd be more interested in seeing a test case with buffered IO on a file system on top of the xvdb device:
>> if we're missing merging for that case, then that's a much bigger issue.
>>
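(To unpack "relies on plugging": bios submitted while a task holds a plug sit on a per-task
plug list, where contiguous ones can be merged into larger requests before being dispatched
to the driver. A minimal, hypothetical in-kernel sketch of the 4.1-era pattern, not tied to
any particular driver:

    #include <linux/bio.h>
    #include <linux/blkdev.h>
    #include <linux/fs.h>

    /* Hypothetical helper: submit a batch of already-built read bios under one plug. */
    static void example_submit_batch(struct bio **bios, int nr)
    {
            struct blk_plug plug;
            int i;

            blk_start_plug(&plug);
            for (i = 0; i < nr; i++)
                    submit_bio(READ, bios[i]);   /* 4.1-era submit_bio(rw, bio) */
            blk_finish_plug(&plug);              /* flush the plug list; merging happens before dispatch */
    }

Roughly speaking, fio's --iodepth_batch has a similar effect from userspace: several iocbs go
down in a single io_submit() call, so the block layer sees a batch rather than one I/O at a time.)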
>
> I was using the null block driver for the xen blk-mq test.
>
> There were no merges happening any more, even after this patch:
> https://lkml.org/lkml/2015/7/13/185
> (which just converted the xen block driver to use the blk-mq APIs).
>
> Will try a file system soon.
>
I have more results for the guest, with and without the patch
https://lkml.org/lkml/2015/7/13/185
applied to the latest stable kernel (4.1.5).

The command line used was:
fio --name=test --ioengine=libaio --rw=read --numjobs=8 \
--iodepth=32 --time_based=1 --runtime=300 --bs=4KB \
--filename=/dev/xvdb --direct=(0 and 1) --iodepth_batch=16

without patch (--direct=1):
  xvdb: ios=18696304/0, merge=75763177/0, ticks=11323872/0, in_queue=11344352, util=100.00%

with patch (--direct=1):
  xvdb: ios=43709976/0, merge=97/0, ticks=8851972/0, in_queue=8902928, util=100.00%

without patch, buffered (--direct=0):
  xvdb: ios=1079051/0, merge=76/0, ticks=749364/0, in_queue=748840, util=94.60%

with patch, buffered (--direct=0):
  xvdb: ios=1132932/0, merge=0/0, ticks=689108/0, in_queue=688488, util=93.32%