Message-ID: <1915472.2DI3jHSlUk@silver>
Date:   Sun, 05 Sep 2021 15:33:11 +0200
From:   Christian Schoenebeck <linux_oss@...debyte.com>
To:     Dominique Martinet <asmadeus@...ewreck.org>
Cc:     v9fs-developer@...ts.sourceforge.net, netdev@...r.kernel.org,
        Eric Van Hensbergen <ericvh@...il.com>,
        Latchesar Ionkov <lucho@...kov.net>, Greg Kurz <groug@...d.org>
Subject: Re: [PATCH 2/2] net/9p: increase default msize to 128k

On Sunday, 5 September 2021 01:31:50 CEST Dominique Martinet wrote:
> Hi Christian,
> 
> thanks for the follow up, this has been on my todolist forever and
> despite how simple it is I never took the time for it...
> 
> 
> I've applied both patches to my -next branch... We're a bit tight for
> this merge window (v5.15) but I'll send it to Linus mid next week anyway.

That would be great!

> I've also added this patch (sorry for laziness) for TCP, other transports
> are OK iirc:
> 
> From 657e35583c70bed86526cb8eb207abe3d55ea4ea Mon Sep 17 00:00:00 2001
> From: Dominique Martinet <asmadeus@...ewreck.org>
> Date: Sun, 5 Sep 2021 08:29:22 +0900
> Subject: [PATCH] net/9p: increase tcp max msize to 1MB
> 
> Historically TCP has been limited to 64K buffers, but increasing msize
> provides huge performance benefits especially as latency increases, so
> allow for bigger buffers.
> 
> Ideally further improvements could change the allocation from the current
> contiguous chunk in slab (kmem_cache) to some scatter-gather compatible
> API...
> 
> Signed-off-by: Dominique Martinet <asmadeus@...ewreck.org>
> 
> diff --git a/net/9p/trans_fd.c b/net/9p/trans_fd.c
> index f4dd0456beaf..007bbcc68010 100644
> --- a/net/9p/trans_fd.c
> +++ b/net/9p/trans_fd.c
> @@ -34,7 +34,7 @@
>  #include <linux/syscalls.h> /* killme */
> 
>  #define P9_PORT 564
> -#define MAX_SOCK_BUF (64*1024)
> +#define MAX_SOCK_BUF (1024*1024)
>  #define MAXPOLLWADDR   2
> 
>  static struct p9_trans_module p9_tcp_trans;

Yes, makes sense.
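
Just to spell out how I read this change: MAX_SOCK_BUF is the ceiling the
fd based transports advertise to the client, so with the patch a TCP mount
should be able to negotiate an msize of up to 1 MiB instead of 64 KiB.
Roughly along these lines (purely illustrative sketch, the names are mine
and not the actual client code):

/* Illustrative only, not the actual 9p client code: a user requested
 * msize is clamped to the maximum advertised by the chosen transport,
 * which for TCP is derived from MAX_SOCK_BUF. */
#define MAX_SOCK_BUF (1024*1024)	/* new TCP ceiling from the patch above */

static unsigned int negotiated_msize(unsigned int requested_msize)
{
	return requested_msize > MAX_SOCK_BUF ? MAX_SOCK_BUF : requested_msize;
}

So e.g. mounting with -o trans=tcp,msize=1048576 should then actually take
effect instead of being silently capped at 64k.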

Is the TCP transport driver of the Linux 9p client roughly as mature as the
virtio transport driver? I'm asking because I still have macOS support for 9p
in QEMU on my TODO list, and a decision on a specific transport type for
macOS will be needed accordingly.

For the next series I mentioned (getting rid of the 512k msize capping) I
first need to read up on the Linux kernel code. According to the discussion
we already had about that issue, the reason for that capping was that the
number of virtio elements is currently hard-coded in the Linux client's
virtio transport:

On Saturday, 27 February 2021 01:03:40 CEST Dominique Martinet wrote:
> Christian Schoenebeck wrote on Fri, Feb 26, 2021 at 02:49:12PM +0100:
> > Right now the client uses a hard coded amount of 128 elements. So what
> > about replacing VIRTQUEUE_NUM by a variable which is initialized with a
> > value according to the user's requested 'msize' option at init time?
> > 
> > According to the virtio specs the max. amount of elements in a virtqueue
> > is 32768. So 32768 * 4k = 128M as new upper limit would already be a
> > significant improvement and would not require too many changes to the
> > client code, right?
> The current code inits the chan->sg at probe time (when the driver is
> loaded) and not at mount time, and it is currently embedded in the chan
> struct, so that would need allocating at mount time (p9_client_create;
> either resizing if required or not sharing) but it doesn't sound too
> intrusive, yes.
> 
> I don't see more adherences to VIRTQUEUE_NUM that would hurt trying.

So probably as a first step I would only re-allocate the virtio elements
according to the user-supplied (i.e. large) 'msize' value if the 9p driver is
not in use at that point, and stick to the capping otherwise. That should
probably be fine for many users and would avoid the need for synchronization
measures and the pitfalls they might bring. But again, I still need to read
more of the Linux client code first.
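
To make that a bit more concrete, the rough sizing I have in mind would be
something along these lines (purely illustrative, the names are made up and
not taken from the actual virtio transport code):

/* Illustrative only: derive the number of scatter-gather descriptors from
 * the requested msize instead of the hard coded 128 (VIRTQUEUE_NUM),
 * clamped to the virtio spec limit of 32768 descriptors per virtqueue
 * (32768 * 4k = 128M, as discussed above). */
#define VIRTQUEUE_MAX_DESCRIPTORS	32768
#define PAGE_SIZE_4K			4096

static unsigned int sg_count_for_msize(unsigned int msize)
{
	unsigned int n = (msize + PAGE_SIZE_4K - 1) / PAGE_SIZE_4K;

	return n > VIRTQUEUE_MAX_DESCRIPTORS ? VIRTQUEUE_MAX_DESCRIPTORS : n;
}

The allocation of that many sg entries would then happen at mount time
(p9_client_create), as you suggested, instead of at probe time.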

BTW just in case I have not mentioned it before: there are some developer docs 
for the QEMU 9p server implementation now:
https://wiki.qemu.org/Documentation/9p

Best regards,
Christian Schoenebeck

