Message-ID: <YTU7FJuooYSjISlq@codewreck.org>
Date: Mon, 6 Sep 2021 06:48:04 +0900
From: Dominique Martinet <asmadeus@...ewreck.org>
To: Christian Schoenebeck <linux_oss@...debyte.com>
Cc: v9fs-developer@...ts.sourceforge.net, netdev@...r.kernel.org,
Eric Van Hensbergen <ericvh@...il.com>,
Latchesar Ionkov <lucho@...kov.net>, Greg Kurz <groug@...d.org>
Subject: Re: [PATCH 2/2] net/9p: increase default msize to 128k

Christian Schoenebeck wrote on Sun, Sep 05, 2021 at 03:33:11PM +0200:
> > Subject: [PATCH] net/9p: increase tcp max msize to 1MB
>
> Yes, makes sense.
>
> Is the TCP transport driver of the Linux 9p client roughly as mature as
> the virtio transport driver? I still have macOS support for 9p in QEMU on
> my TODO list, so I will need to decide on a specific transport type for
> macOS accordingly.

I'd say it should be just about as stable as virtio. It definitely gets
a lot of testing from tools like syzkaller, as it's easy to open an
arbitrary fd and pass it to the kernel for fuzzing.

Performance-wise you won't have zero-copy, and the requirement for
monolithic memory blocks is the same, so it won't be as good as virtio,
but it can probably be used. The problem will be more one of setup --
for user networking we can always use qemu's networking implementation,
but for passthrough or tap networking qemu won't easily be discoverable
from the VM; I'm not sure about that.
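
For illustration, a 9p mount over the tcp transport with a bigger msize
could look roughly like this from C (a sketch only; the server address,
port and mount point are made up, and msize is still capped by the
transport's maxsize):

    /* sketch: mount a 9p export over tcp with a 1MB msize */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        const char *opts =
            "trans=tcp,port=564,msize=1048576,version=9p2000.L";

        if (mount("10.0.2.2", "/mnt/9p", "9p", 0, opts) < 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }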

Does AF_VSOCK work on macOS? That could probably easily be added to
trans_fd...
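
If it does, a guest could in principle open the socket itself and hand
the fd to the existing fd transport; a rough, untested sketch (the CID
and port are arbitrary examples, and whether a vsock fd works well
through the generic rfdno/wfdno path is exactly the open question here):

    /* sketch: connect an AF_VSOCK stream to the host's 9p server and
     * pass the fd to the 9p fd transport via rfdno/wfdno */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/mount.h>
    #include <linux/vm_sockets.h>

    int main(void)
    {
        struct sockaddr_vm addr = {
            .svm_family = AF_VSOCK,
            .svm_cid = VMADDR_CID_HOST, /* host side, CID 2 */
            .svm_port = 9999,           /* arbitrary example port */
        };
        char opts[128];
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("vsock");
            return 1;
        }
        snprintf(opts, sizeof(opts),
                 "trans=fd,rfdno=%d,wfdno=%d,version=9p2000.L", fd, fd);
        if (mount("none", "/mnt/9p", "9p", 0, opts) < 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }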

> For the next series I mentioned (getting rid of the 512k msize capping) I
> first need to read up on the Linux kernel code. According to the discussion
> we already had about that issue, the reason for the capping is that the
> number of virtio elements is currently hard-coded in the Linux client's
> virtio transport:
>
> On Saturday, 27 February 2021 01:03:40 CEST Dominique Martinet wrote:
> > Christian Schoenebeck wrote on Fri, Feb 26, 2021 at 02:49:12PM +0100:
> > > Right now the client uses a hard-coded count of 128 elements. So what
> > > about replacing VIRTQUEUE_NUM with a variable that is initialized
> > > according to the user's requested 'msize' option at init time?
> > >
> > > According to the virtio spec the maximum number of elements in a
> > > virtqueue is 32768. So 32768 * 4k = 128M as the new upper limit would
> > > already be a significant improvement and would not require too many
> > > changes to the client code, right?
> > The current code inits the chan->sg at probe time (when the driver is
> > loaded) and not at mount time, and it is currently embedded in the chan
> > struct, so that would need allocating at mount time (p9_client_create;
> > either resizing if required or not sharing), but it doesn't sound too
> > intrusive, yes.
> >
> > I don't see any other dependencies on VIRTQUEUE_NUM that would hurt trying.
>
> So probably, as a first step, I would only re-allocate the virtio elements
> according to the user-supplied (i.e. large) 'msize' value if the 9p driver
> is not in use at that point, and stick to the capping otherwise. That
> should probably be fine for many users and would avoid the need for
> synchronization measures and the traps they might bring. But again, I
> still need to read more of the Linux client code first.

Right, looking at it again, p9_virtio_create would already allow that, so
you would just need to pick the most appropriate channel or create a new
one if required; synchronization shouldn't be too much of a problem.

The 9p code is sometimes impressive in its foresight ;)
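
Not the actual change, but as a rough sketch of that direction -- assuming
chan->sg in trans_virtio.c becomes a pointer plus a chan->sg_n count sized
at mount time from the requested msize (kernel context, untested, and
p9_virtio_alloc_sg is just a made-up helper name):

    /* sketch: size the scatterlist from msize instead of VIRTQUEUE_NUM,
     * one 4k page per descriptor, bounded by the virtio spec's 32768 */
    static int p9_virtio_alloc_sg(struct virtio_chan *chan, size_t msize)
    {
        size_t n = clamp_t(size_t, DIV_ROUND_UP(msize, PAGE_SIZE), 2, 32768);

        chan->sg = kmalloc_array(n, sizeof(*chan->sg), GFP_KERNEL);
        if (!chan->sg)
            return -ENOMEM;

        sg_init_table(chan->sg, n);
        chan->sg_n = n;
        return 0;
    }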
> BTW just in case I have not mentioned it before: there are some developer docs
> for the QEMU 9p server implementation now:
> https://wiki.qemu.org/Documentation/9p

Wow, that's an impressive wiki page.
I've never been good with documentation (be it writing or using it), but
hopefully it'll help people take the first step -- good work!

--
Dominique