Message-ID: <2d4ad724-f9da-4502-9079-432935f5719d@linux.alibaba.com>
Date: Mon, 16 Dec 2024 10:01:21 +0800
From: Ferry Meng <mengferry@...ux.alibaba.com>
To: "Michael S . Tsirkin" <mst@...hat.com>, Jason Wang <jasowang@...hat.com>,
linux-block@...r.kernel.org, Jens Axboe <axboe@...nel.dk>,
virtualization@...ts.linux.dev
Cc: linux-kernel@...r.kernel.org, io-uring@...r.kernel.org,
Joseph Qi <joseph.qi@...ux.alibaba.com>,
Jeffle Xu <jefflexu@...ux.alibaba.com>
Subject: Re: [PATCH 0/3][RFC] virtio-blk: add io_uring passthrough support for
virtio-blk
On 12/3/24 8:14 PM, Ferry Meng wrote:
> We want a more flexible way to use virtio-blk that bypasses the block
> layer logic in order to enable certain performance optimizations. To that
> end, we followed the io_uring passthrough implementation in NVMe and
> applied the same approach to the virtio-blk driver. This patch series adds
> io_uring passthrough support for virtio-blk devices, resulting in lower
> submission latency and more flexibility when using virtio-blk.
>
> To test this patch series, I changed fio's code:
> 1. Added virtio-blk support to engines/io_uring.c.
> 2. Added virtio-blk support to the t/io_uring.c testing tool.
> Link: https://github.com/jdmfr/fio
>
> Using t/io_uring-vblk, virtio-blk access based on uring_cmd scales better
> than access through the block device. (Results below: virtio-blk under
> QEMU, 1-depth fio.)
> (passthru) read: IOPS=17.2k, BW=67.4MiB/s (70.6MB/s)
> slat (nsec): min=2907, max=43592, avg=3981.87, stdev=595.10
> clat (usec): min=38, max=285, avg=53.47, stdev= 8.28
> lat (usec): min=44, max=288, avg=57.45, stdev= 8.28
> (block) read: IOPS=15.3k, BW=59.8MiB/s (62.7MB/s)
> slat (nsec): min=3408, max=35366, avg=5102.17, stdev=790.79
> clat (usec): min=35, max=343, avg=59.63, stdev=10.26
> lat (usec): min=43, max=349, avg=64.73, stdev=10.21
>
> Testing the virtio-blk device with fio using 'ioengine=io_uring_cmd'
> and 'ioengine=io_uring' also demonstrates the improvement in submission latency.
> (passthru) taskset -c 0 t/io_uring-vblk -b4096 -d8 -c4 -s4 -p0 -F1 -B0 -O0 -n1 -u1 /dev/vdcc0
> IOPS=189.80K, BW=741MiB/s, IOS/call=4/3
> IOPS=187.68K, BW=733MiB/s, IOS/call=4/3
> (block) taskset -c 0 t/io_uring-vblk -b4096 -d8 -c4 -s4 -p0 -F1 -B0 -O0 -n1 -u0 /dev/vdc
> IOPS=101.51K, BW=396MiB/s, IOS/call=4/3
> IOPS=100.01K, BW=390MiB/s, IOS/call=4/4
>
> Overall, this patch series reduces the overhead of submitting I/O by about
> 25%. The implementation mainly follows the NVMe io_uring passthrough
> design, supporting io_uring_cmd through a separate character device
> (temporarily named /dev/vdXc0); a rough userspace usage sketch follows the
> list below. Since this is an early version, many details still need to be
> considered and redesigned, for example:
> ● Currently only READ/WRITE are handled; more complex operations such as
> discard or zone ops are not covered yet. (In my opinion the normal sqe64 is
> sufficient, so sqe128 and cqe32 should not be needed for later extensions.)
> ● ......
>
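> For reference, here is a rough userspace sketch of driving the passthrough
> interface with liburing on a 64-byte SQE. The virtblk_uring_cmd layout, the
> VIRTBLK_URING_CMD_IO cmd_op value and the /dev/vdcc0 node name are only
> placeholders standing in for whatever the uapi additions in this series
> actually define, and it assumes a kernel/liburing recent enough to expose
> IORING_OP_URING_CMD:
>
>     /* Sketch only: virtblk_uring_cmd and VIRTBLK_URING_CMD_IO below are
>      * placeholders, not the real uapi definitions from this series. */
>     #include <fcntl.h>
>     #include <stdint.h>
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <string.h>
>     #include <liburing.h>
>
>     struct virtblk_uring_cmd {     /* fits the 16-byte cmd area of sqe64 */
>             __u32 type;            /* VIRTIO_BLK_T_IN / VIRTIO_BLK_T_OUT */
>             __u32 reserved;
>             __u64 sector;
>     };
>
>     #define VIRTBLK_URING_CMD_IO 0 /* placeholder cmd_op value */
>
>     int main(void)
>     {
>             struct io_uring ring;
>             struct io_uring_sqe *sqe;
>             struct io_uring_cqe *cqe;
>             void *buf;
>             int fd;
>
>             if (posix_memalign(&buf, 4096, 4096))
>                     return 1;
>             fd = open("/dev/vdcc0", O_RDONLY); /* char node from patch 1 */
>             if (fd < 0 || io_uring_queue_init(8, &ring, 0))
>                     return 1;
>
>             sqe = io_uring_get_sqe(&ring);
>             memset(sqe, 0, sizeof(*sqe));
>             sqe->opcode = IORING_OP_URING_CMD;
>             sqe->fd = fd;
>             sqe->cmd_op = VIRTBLK_URING_CMD_IO;
>             sqe->addr = (__u64)(uintptr_t)buf; /* data buffer */
>             sqe->len = 4096;                   /* data length */
>
>             struct virtblk_uring_cmd *cmd = (void *)sqe->cmd;
>             cmd->type = 0;                     /* VIRTIO_BLK_T_IN: read */
>             cmd->sector = 0;
>
>             io_uring_submit(&ring);
>             io_uring_wait_cqe(&ring, &cqe);
>             printf("res %d\n", cqe->res);      /* completion result */
>             io_uring_cqe_seen(&ring, cqe);
>             return 0;
>     }
>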
> I would appreciate any useful recommendations.
>
> Ferry Meng (3):
> virtio-blk: add virtio-blk chardev support.
> virtio-blk: add uring_cmd support for I/O passthru on chardev.
> virtio-blk: add uring_cmd iopoll support.
>
> drivers/block/virtio_blk.c | 325 +++++++++++++++++++++++++++++++-
> include/uapi/linux/virtio_blk.h | 16 ++
> 2 files changed, 336 insertions(+), 5 deletions(-)
Hi Michael & Jason,
What is your opinion, as the virtio-blk maintainers? Looking forward to
your reply.
Thanks