Message-ID: <CAJSP0QXU_uNqL-9LmLRkDdPPSdUAGdesQ2DFuCMHnjyEuREvXQ@mail.gmail.com>
Date: Mon, 16 Dec 2024 11:13:57 -0500
From: Stefan Hajnoczi <stefanha@...il.com>
To: Christoph Hellwig <hch@...radead.org>, Jens Axboe <axboe@...nel.dk>
Cc: Ferry Meng <mengferry@...ux.alibaba.com>, "Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>, linux-block@...r.kernel.org,
virtualization@...ts.linux.dev, linux-kernel@...r.kernel.org,
io-uring@...r.kernel.org, Joseph Qi <joseph.qi@...ux.alibaba.com>,
Jeffle Xu <jefflexu@...ux.alibaba.com>
Subject: Re: [PATCH 0/3][RFC] virtio-blk: add io_uring passthrough support for virtio-blk

On Mon, 16 Dec 2024 at 10:54, Christoph Hellwig <hch@...radead.org> wrote:
>
> Hacking passthrough into virtio_blk seems like not very good layering.
> If you have a use case where you want to use the core kernel virtio code
> but not the protocol drivers we'll probably need a virtqueue passthrough
> option of some kind.

I think people are finding that submitting I/O via uring_cmd is faster
than the traditional io_uring read/write path. The use case isn't really
passthrough, it's bypass :).
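
For anyone unfamiliar with the uring_cmd style of submission: the in-tree
NVMe generic char device already works this way. Below is a minimal sketch
(mine, not from this patch series) of queueing a passthrough read with
IORING_OP_URING_CMD; /dev/ng0n1, namespace 1, and 512-byte LBAs are
assumptions here:

#include <liburing.h>
#include <linux/nvme_ioctl.h>
#include <stdint.h>
#include <string.h>

/*
 * Queue one passthrough read on an NVMe generic char device (e.g.
 * /dev/ng0n1). The ring must be created with IORING_SETUP_SQE128
 * because the NVMe command is carried inline in the big SQE.
 */
static int queue_nvme_read(struct io_uring *ring, int fd,
                           void *buf, __u32 len, __u64 slba)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
    struct nvme_uring_cmd *cmd;

    if (!sqe)
        return -1;

    memset(sqe, 0, 2 * sizeof(*sqe));    /* SQE128 slot is 128 bytes */
    sqe->opcode = IORING_OP_URING_CMD;
    sqe->fd = fd;
    sqe->cmd_op = NVME_URING_CMD_IO;     /* driver-defined command opcode */

    /* The device-specific command lives in the SQE's payload area. */
    cmd = (struct nvme_uring_cmd *)sqe->cmd;
    cmd->opcode = 0x02;                  /* NVMe Read */
    cmd->nsid = 1;                       /* namespace 1, assumed */
    cmd->addr = (__u64)(uintptr_t)buf;
    cmd->data_len = len;
    cmd->cdw10 = (__u32)slba;            /* starting LBA, low 32 bits */
    cmd->cdw11 = (__u32)(slba >> 32);
    cmd->cdw12 = len / 512 - 1;          /* 0-based NLB, 512B LBAs assumed */

    return io_uring_submit(ring);
}

Completions come back through the normal CQ (io_uring_wait_cqe()). The
contrast with io_uring_prep_read() is that the generic block I/O path is
skipped and the driver handles the command directly, which is exactly the
bypass property being discussed.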

That's why I asked Jens to weigh in on whether there is a generic
block layer solution here. If uring_cmd is faster, then maybe a generic
uring_cmd I/O interface can be defined without tying applications to
device-specific commands. Or maybe the traditional io_uring code path
can be optimized so that bypass is no longer attractive.
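
To make "generic uring_cmd I/O interface" concrete, here is a purely
hypothetical sketch (none of these names exist in the kernel today) of an
LBA-based command set carried in the big SQE, so that applications would
not need to know NVMe or virtio specifics:

/* Hypothetical sketch only: neither blk_uring_cmd nor the
 * BLOCK_URING_CMD_* opcodes exist; they illustrate the shape
 * such an interface might take. */
#include <linux/types.h>

struct blk_uring_cmd {
    __u16 op;        /* BLOCK_URING_CMD_READ / _WRITE / _FLUSH */
    __u16 flags;
    __u32 len;       /* transfer length in bytes */
    __u64 sector;    /* 512-byte sector offset */
    __u64 addr;      /* user buffer address */
};

enum {
    BLOCK_URING_CMD_READ  = 1,
    BLOCK_URING_CMD_WRITE = 2,
    BLOCK_URING_CMD_FLUSH = 3,
};

The open question is whether such an interface would keep the speed
advantage of uring_cmd once it goes through a generic dispatch layer.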

The virtio-level virtqueue passthrough idea is interesting for use
cases that mix passthrough applications with non-passthrough
applications. VFIO isn't enough because it hands the whole device to
one process, preventing sharing and excluding non-passthrough
applications. Something similar to vDPA might be able to pass through
just a subset of virtqueues that userspace could access via the
vhost_vdpa driver. However, this approach doesn't scale when many
applications run at the same time, because the number of virtqueues is
finite and often equal to the number of CPUs.
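
For reference, userspace access via vhost_vdpa today is ioctl-based on a
per-device char node. A minimal probe sketch (assuming /dev/vhost-vdpa-0
exists; error handling trimmed for brevity):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vhost.h>

int main(void)
{
    int fd = open("/dev/vhost-vdpa-0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Take ownership of the device for this process */
    if (ioctl(fd, VHOST_SET_OWNER, NULL) < 0)
        perror("VHOST_SET_OWNER");

    uint32_t dev_id = 0;
    uint64_t features = 0;

    /* virtio device ID, e.g. 2 for virtio-blk */
    if (ioctl(fd, VHOST_VDPA_GET_DEVICE_ID, &dev_id) == 0)
        printf("virtio device id: %u\n", dev_id);

    if (ioctl(fd, VHOST_GET_FEATURES, &features) == 0)
        printf("device features: 0x%llx\n", (unsigned long long)features);

    close(fd);
    return 0;
}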

Stefan