Message-ID: <2745077.ukKBhl4x9b@silver>
Date: Sun, 03 Apr 2022 16:00:56 +0200
From: Christian Schoenebeck <linux_oss@...debyte.com>
To: Dominique Martinet <asmadeus@...ewreck.org>
Cc: v9fs-developer@...ts.sourceforge.net, netdev@...r.kernel.org,
Eric Van Hensbergen <ericvh@...il.com>,
Latchesar Ionkov <lucho@...kov.net>,
Greg Kurz <groug@...d.org>, Vivek Goyal <vgoyal@...hat.com>,
Nikolay Kichukov <nikolay@...um.net>
Subject: Re: [PATCH v4 12/12] net/9p: allocate appropriate reduced message buffers
On Sunday, 3 April 2022 14:37:55 CEST Dominique Martinet wrote:
> Christian Schoenebeck wrote on Sun, Apr 03, 2022 at 01:29:53PM +0200:
> > So maybe I should just exclude the 9p RDMA transport from this 9p message
> > size reduction change in v5 until somebody had a chance to test this
> > change with RDMA.
>
> Yes, I'm pretty certain it won't work so we'll want to exclude it unless
> we can extend the RDMA protocol to address buffers.
OK, agreed. It only needs a minor adjustment to this patch 12 to exclude the
RDMA transport (+2 lines or so). So no big deal.
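Roughly along these lines (just a sketch to illustrate the direction, not
the actual v5 change; the 'pooled_rbuffers' flag and the '_sketch' names
are made up here): the RDMA transport would flag that its receive buffers
are posted to a shared pool, and the client would then keep allocating
full msize response buffers for such transports:

#include <stdbool.h>

/* Sketch only -- the "_sketch" names are placeholders, not the real
 * net/9p definitions. */

/* like include/net/9p/transport.h: let a transport advertise that its
 * receive buffers come from a shared pool (as with 9p RDMA), in which
 * case they must remain msize bytes large */
struct p9_trans_module_sketch {
	const char *name;
	bool pooled_rbuffers;
};

/* like net/9p/trans_rdma.c: RDMA opts out of the size reduction */
static const struct p9_trans_module_sketch p9_rdma_trans_sketch = {
	.name = "rdma",
	.pooled_rbuffers = true,
};

/* like net/9p/client.c: choose the response buffer size per request */
static unsigned int p9_rsize_sketch(const struct p9_trans_module_sketch *t,
				    unsigned int msize,
				    unsigned int reduced_rsize)
{
	/* pooled receive buffers cannot be shrunk per message */
	return t->pooled_rbuffers ? msize : reduced_rsize;
}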
> > On the long-term I can imagine to add RDMA transport support on QEMU 9p
> > side.
> What would you expect it to be used for?
There are several potential use cases that come to mind, e.g.:
- Separating storage hardware from host hardware. With virtio we are
constrained to the same machine.
- Maybe also a candidate to achieve what the 9p 'proxy' driver in QEMU tried
to achieve? That 'proxy' driver runs in a separate process from the QEMU
process, with the goal of increasing safety. However, it is currently more or
less abandoned, as it is extremely slow: 9p requests have to be dispatched
like:
guest -> QEMU (9p server) -> proxy daemon -> QEMU (9p server) -> guest
Maybe we could get rid of those costly extra hops with RDMA, not sure though.
- Maybe also an alternative to virtio on the same machine: there are some
shortcomings in virtio that are tedious to address (see e.g. the current
struggle with purely formal negotiation issues just to relax the virtio
spec's "Queue Size" requirements so that we could achieve larger message
sizes). I'm also not a big fan of virtio's assumption that the guest should
guess the host's response size in advance (see the sketch after this list).
- Maybe as a transport for macOS guest support in the future? The upcoming
QEMU 7.0 adds 9p server support on macOS hosts, which revives the plan to add
9p support to macOS guests as well. The open question is what transport to
use for macOS guests then.
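To make the virtio point above a bit more concrete, here is a toy
user-space model (not actual net/9p code; the 24 bytes match a fixed-size
Rlopen reply: size[4] type[1] tag[2] qid[13] iounit[4]):

#include <stdio.h>
#include <stdlib.h>

#define MSIZE (128 * 1024)	/* negotiated maximum 9p message size */

/* Status quo: the guest must post the reply buffer before it knows the
 * reply size, so it reserves the full msize for every request. */
static char *post_rbuf_worst_case(size_t *capacity)
{
	*capacity = MSIZE;
	return malloc(*capacity);
}

/* What this series enables for non-pooled transports: derive the reply
 * buffer size from the request type, e.g. 24 bytes for Rlopen, and only
 * reserve msize for payload-carrying replies like Rread. */
static char *post_rbuf_reduced(size_t expected_rsize, size_t *capacity)
{
	*capacity = expected_rsize;
	return malloc(*capacity);
}

int main(void)
{
	size_t cap;
	char *buf;

	buf = post_rbuf_worst_case(&cap);
	printf("worst case: %zu bytes reserved\n", cap);
	free(buf);

	buf = post_rbuf_reduced(24, &cap);	/* e.g. an Rlopen reply */
	printf("reduced:    %zu bytes reserved\n", cap);
	free(buf);
	return 0;
}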
However, I don't know the details of RDMA yet, and as you already outlined,
it probably has some shortcomings that would need to be addressed with
protocol changes as well.
Best regards,
Christian Schoenebeck