Message-ID: <7969175.Y4Flz6HuuJ@silver>
Date: Fri, 08 Jul 2022 15:00:45 +0200
From: Christian Schoenebeck <linux_oss@...debyte.com>
To: Dominique Martinet <asmadeus@...ewreck.org>,
Eric Van Hensbergen <ericvh@...il.com>
Cc: Greg Kurz <groug@...d.org>, Latchesar Ionkov <lucho@...kov.net>,
Nikolay Kichukov <nikolay@...um.net>, netdev@...r.kernel.org,
v9fs-developer@...ts.sourceforge.net
Subject: Re: [PATCH v4 00/12] remove msize limit in virtio transport
On Freitag, 8. Juli 2022 13:40:36 CEST Dominique Martinet wrote:
> Christian Schoenebeck wrote on Fri, Jul 08, 2022 at 01:18:40PM +0200:
> > On Freitag, 8. Juli 2022 04:26:40 CEST Eric Van Hensbergen wrote:
[...]
> https://github.com/kvmtool/kvmtool indeed has a 9p server, I think I
> used to run it ages ago.
> I'll give it a fresh spin, thanks for the reminder.
>
> For this one it defines VIRTQUEUE_NUM to 128, so not quite 1024.
Yes, and it does *not* limit the client-supplied 'msize' either. It just always
sends the same 'msize' value back to the client as-is. :/ So I would expect it to
error (or worse) if the client tries msize > 512kB.
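A server is supposed to clamp the client's proposed 'msize' in its Rversion reply instead of echoing it back unchanged. A minimal sketch of what that negotiation step should look like (function name and the SERVER_MAX_MSIZE constant are illustrative, not from kvmtool; 512 KiB here matches 128 descriptors x 4 KiB pages):

```c
#include <stdint.h>

/* Illustrative transport limit: what the server can actually back with
 * virtio descriptors, e.g. 128 descriptors x 4 KiB pages = 512 KiB. */
#define SERVER_MAX_MSIZE (128u * 4096u)

/* On Tversion, reply with min(client msize, own limit) rather than
 * echoing the client's value back as-is. */
static uint32_t negotiate_msize(uint32_t client_msize)
{
    return client_msize < SERVER_MAX_MSIZE ? client_msize : SERVER_MAX_MSIZE;
}
```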
> > > > I found https://github.com/moby/hyperkit for OSX but that doesn't
> > > > really
> > > > help me, and can't see much else relevant in a quick search
> >
> > So that appears to be a 9p (@virtio-PCI) client for xhyve,
>
> oh the 9p part is client code?
> the readme says it's a server:
> "It includes a complete hypervisor, based on xhyve/bhyve"
> but I can't run it anyway, so I didn't check very hard.
Hmm, I had actually interpreted it as being a client because of this:
fprintf(stderr, "virtio-9p: unexpected EOF writing to server-- did the 9P server crash?\n");
But looking at it again, it seems you are right; at least I see that it also
handles even 9p message type numbers (i.e. T-messages, which a server handles),
but only Twrite and Tflush? I don't see any Tversion or msize handling in
general. [shrug]
> > with max. 256kB buffers <=> max. 68 virtio descriptors (memory segments)
[1]:
> huh...
>
> Well, as long as msize is set I assume it'll work out anyway?
If the server limits 'msize' appropriately, yes. But as the kvmtool example
shows, that should probably not be taken for granted.
> How does virtio queue size work with e.g. parallel messages?
Simple question, complicated to answer.
From the virtio spec's PoV (and current virtio <= v1.2), the negotiated virtio
queue size defines both the max. number of parallel (round-trip) messages *and*
the max. number of virtio descriptors (memory segments) of *all* currently
active/parallel messages in total. I *think* because of virtio's origin in
virtualized network devices?
So yes, if you are very strict about what the virtio spec <= v1.2 says, and say
you have a virtio queue size of 128 (e.g. hard coded by QEMU, kvmtool), and the
client sends out a first 9p request with 128 memory segments, then the next
(i.e. second) parallel 9p request sent to the server would already exceed the
theoretically allowed max. number of virtio descriptors.
But in practice, I don't see this theoretical limitation in actual 9p virtio
server implementations. At least all server implementations I have seen so far
seem to enforce the max. virtio descriptor count for each request separately.
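The strict-spec reading above can be sketched as a toy model where all in-flight requests draw from one shared pool of queue_size descriptors (the struct and function here are purely illustrative, not actual virtio code): with a queue size of 128, the first request with 128 segments drains the pool, and any second parallel request would already be over budget.

```c
/* Toy model of the strict virtio <= v1.2 reading: all in-flight requests
 * share a single pool of queue_size descriptors. */
struct vq_budget {
    int free_desc;   /* descriptors still available in the shared pool */
};

/* Returns 0 if the request fits the remaining budget, -1 otherwise. */
static int try_submit(struct vq_budget *vq, int nr_segments)
{
    if (nr_segments > vq->free_desc)
        return -1;   /* would exceed the shared descriptor pool */
    vq->free_desc -= nr_segments;
    return 0;
}
```

In contrast, the servers described above effectively reset this budget per request rather than sharing it across all parallel messages.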
> Anyway, even if the negotiation part gets done servers won't all get
> implemented in a day, so we need to think of other servers a bit..
OTOH the kernel should have the name of the hypervisor/emulator somewhere,
right? So the Linux client's max. virtio descriptors could probably be made
dependent on that name?
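That idea could be sketched as a simple lookup on the client side (everything here is hypothetical: the function, the whitelist entry, and the detection mechanism; only the 128 value is grounded, matching e.g. kvmtool's VIRTQUEUE_NUM):

```c
#include <string.h>

/* Hypothetical sketch: pick the client's max. virtio descriptor count
 * based on a detected hypervisor/emulator name. Values illustrative. */
static int max_virtio_descs(const char *hypervisor)
{
    /* Servers verified to cope with more could be whitelisted here. */
    if (hypervisor && strcmp(hypervisor, "verified-server") == 0)
        return 1024;   /* hypothetical whitelisted server */
    return 128;        /* conservative default, matches e.g. kvmtool's
                        * VIRTQUEUE_NUM */
}
```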
Best regards,
Christian Schoenebeck