Message-ID: <1953222.pKi1t3aLRd@silver>
Date: Sun, 03 Apr 2022 13:29:53 +0200
From: Christian Schoenebeck <linux_oss@...debyte.com>
To: Dominique Martinet <asmadeus@...ewreck.org>
Cc: v9fs-developer@...ts.sourceforge.net, netdev@...r.kernel.org,
Eric Van Hensbergen <ericvh@...il.com>,
Latchesar Ionkov <lucho@...kov.net>,
Greg Kurz <groug@...d.org>, Vivek Goyal <vgoyal@...hat.com>,
Nikolay Kichukov <nikolay@...um.net>
Subject: Re: [PATCH v4 12/12] net/9p: allocate appropriate reduced message buffers
On Saturday, April 2, 2022 16:05:36 CEST Dominique Martinet wrote:
> Christian Schoenebeck wrote on Thu, Dec 30, 2021 at 02:23:18PM +0100:
> > So far 'msize' was simply used as the buffer size for all 9p message
> > types, which is far too much and slowed down performance tremendously
> > with large values for the user-configurable 'msize' option.
> >
> > Let's stop this waste by using the new p9_msg_buf_size() function for
> > allocating more appropriate, smaller buffers according to what is
> > actually sent over the wire.
>
> By the way, thinking of protocols earlier made me realize this won't
> work on RDMA transport...
>
> unlike virtio/tcp/xen, RDMA doesn't "mailbox" messages: there's a pool
> of posted receive buffers, and once a message has been received the
> transport looks at the header of the received message and associates it
> with the matching request, but there's no guarantee a small message
> will land in a small buffer...
>
> This is also going to need some thought. Perhaps just copy small
> messages out and recycle the buffer if a large one was used? But there
> might be a window with no posted buffer available, and I'm not sure
> what would happen then; I don't have any RDMA hardware available to
> test this right now, so this will be fun.
>
>
> I'm not shooting this down (it's definitely interesting), but we might
> need to make it optional until someone with RDMA hardware can validate a
> solution.
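If I understand the recycling idea correctly, it would be something like
the following on the receive completion path (a rough, completely untested
sketch: post_recv() exists in net/9p/trans_rdma.c, but the helper itself,
its call site and the error handling are pure assumptions on my side):

/*
 * Sketch only: called from recv_done() after the pool buffer has been
 * DMA-unmapped and matched to its request. Returns true if the reply
 * was copied out and the pool buffer immediately recycled.
 */
static bool p9_rdma_try_recycle(struct p9_client *client,
				struct p9_rdma_context *c,
				struct p9_req_t *req, u32 byte_len)
{
	/* large reply: the existing path has to keep the pool buffer */
	if (byte_len > req->rc.capacity)
		return false;

	/* small reply: copy it into the request's small buffer ... */
	memcpy(req->rc.sdata, c->rc->sdata, byte_len);
	req->rc.size = byte_len;

	/* ... and re-post the large pool buffer right away, which
	 * narrows the window in which the pool could run dry */
	if (post_recv(client, c))
		p9_debug(P9_DEBUG_ERROR, "re-posting recv buffer failed\n");
	return true;
}

The large-reply case would still have to hand the pool buffer over to the
request and post a freshly allocated replacement, which is exactly where
the no-buffer-available window you mention comes in.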
So maybe I should just exclude the 9p RDMA transport from this 9p message
size reduction change in v5, until somebody has had a chance to test this
change with RDMA.
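The exclusion itself could be as simple as a per-transport opt-out flag
that makes the client fall back to full 'msize' buffers. Rough sketch
('pooled_rcv_buffers' is just a name I made up here, and the
p9_msg_buf_size() argument list is abbreviated):

/* include/net/9p/transport.h: hypothetical new field */
struct p9_trans_module {
	/* ... existing fields (name, maxsize, create, request, ...) ... */
	bool pooled_rcv_buffers; /* true: always allocate 'msize' buffers */
};

/* net/9p/trans_rdma.c would then opt out of the size reduction: */
static struct p9_trans_module p9_rdma_trans = {
	.name = "rdma",
	/* ... */
	.pooled_rcv_buffers = true,
};

/* and the buffer allocation in net/9p/client.c would check the flag: */
	if (clnt->trans_mod->pooled_rcv_buffers)
		alloc_msize = clnt->msize;	/* e.g. RDMA */
	else
		alloc_msize = p9_msg_buf_size(clnt, type /* , ... */);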
Which makes me wonder: what exact hardware, hypervisor and OS combination
actually supports 9p over RDMA?
In the long term I could imagine adding RDMA transport support on the QEMU
9p side. There is already RDMA code in QEMU, but I think it is so far only
used for migration.
Best regards,
Christian Schoenebeck