Date:   Sat, 20 Nov 2021 12:20:35 +0100
From:   Nikolay Kichukov <nikolay@...um.net>
To:     Christian Schoenebeck <linux_oss@...debyte.com>,
        v9fs-developer@...ts.sourceforge.net
Cc:     netdev@...r.kernel.org,
        Dominique Martinet <asmadeus@...ewreck.org>,
        Eric Van Hensbergen <ericvh@...il.com>,
        Latchesar Ionkov <lucho@...kov.net>,
        Greg Kurz <groug@...d.org>, Vivek Goyal <vgoyal@...hat.com>
Subject: Re: [PATCH v3 6/7] 9p/trans_virtio: support larger msize values

Thanks for the patches and sorry for top-posting.

I've tested them on Gentoo GNU/Linux, kernel 5.15.3, on amd64, on both
the guest and the KVM host.

The patches from this series (v3) have been applied to both the host
and the guest kernel. The guest kernel is compiled with clang, the host
kernel with gcc-11.

The host also received the qemu patches:
https://github.com/cschoenebeck/qemu/commit/04a7f9e55e0930b87805f7c97851eea4610e78fc
https://github.com/cschoenebeck/qemu/commit/b565bccb00afe8b73d529bbc3a38682996dac5c7
https://github.com/cschoenebeck/qemu/commit/669ced09b3b6070d478acce51810591b78ab0ccd

Qemu version on the host is 6.0.0-r54.
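
For anyone trying to reproduce the setup, a minimal sketch of how such
a share can be exported to the guest (the path, fsdev id and mount tag
below are placeholders, not the exact configuration used here):

    qemu-system-x86_64 \
        ... \
        -fsdev local,id=fsdev0,path=/srv/export,security_model=mapped-xattr \
        -device virtio-9p-pci,fsdev=fsdev0,mount_tag=hostshare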

When the client mounts the share via virtio with a requested msize of
10485760 or 104857600, the mount nevertheless succeeds with
msize=507904, as reported in the /proc filesystem. This is less than
the previous maximum value.
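
For context: 507904 bytes is exactly 124 pages of 4096 bytes. The mount
request and the subsequent check look like this (mount tag and mount
point are placeholders):

    mount -t 9p -o trans=virtio,msize=10485760 hostshare /mnt/host
    grep 9p /proc/mounts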

However, I do see an improvement in throughput, perhaps due to the qemu
patches or to some combination of the changes.

In addition to the above, when the guest kernel boots and loads 9pfs
support, the attached memory allocation failure trace is generated.

Is anyone else seeing something similar, and has anybody been able to
get msize set to 10MB over the virtio transport with these patches?

Thank you,
-Nikolay

On Wed, 2021-09-22 at 18:00 +0200, Christian Schoenebeck wrote:
> The virtio transport supports by default a 9p 'msize' of up to
> approximately 500 kB. This patch adds support for larger 'msize'
> values by resizing the amount of scatter/gather lists if required.
> 
> Signed-off-by: Christian Schoenebeck <linux_oss@...debyte.com>
> ---
>  net/9p/trans_virtio.c | 61 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 61 insertions(+)
> 
> diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
> index e478a34326f1..147ebf647a95 100644
> --- a/net/9p/trans_virtio.c
> +++ b/net/9p/trans_virtio.c
> @@ -203,6 +203,31 @@ static struct virtqueue_sg *vq_sg_alloc(unsigned int nsgl)
>         return vq_sg;
>  }
>  
> +/**
> + * vq_sg_resize - resize passed virtqueue scatter/gather lists to the passed
> + * amount of lists
> + * @_vq_sg: scatter/gather lists to be resized
> + * @nsgl: new amount of scatter/gather lists
> + */
> +static int vq_sg_resize(struct virtqueue_sg **_vq_sg, unsigned int nsgl)
> +{
> +       struct virtqueue_sg *vq_sg;
> +
> +       BUG_ON(!_vq_sg || !nsgl);
> +       vq_sg = *_vq_sg;
> +       if (vq_sg->nsgl == nsgl)
> +               return 0;
> +
> +       /* lazy resize implementation for now */
> +       vq_sg = vq_sg_alloc(nsgl);
> +       if (!vq_sg)
> +               return -ENOMEM;
> +
> +       kfree(*_vq_sg);
> +       *_vq_sg = vq_sg;
> +       return 0;
> +}
> +
>  /**
>   * p9_virtio_close - reclaim resources of a channel
>   * @client: client instance
> @@ -774,6 +799,10 @@ p9_virtio_create(struct p9_client *client, const char *devname, char *args)
>         struct virtio_chan *chan;
>         int ret = -ENOENT;
>         int found = 0;
> +#if !defined(CONFIG_ARCH_NO_SG_CHAIN)
> +       size_t npages;
> +       size_t nsgl;
> +#endif
>  
>         if (devname == NULL)
>                 return -EINVAL;
> @@ -796,6 +825,38 @@ p9_virtio_create(struct p9_client *client, const char *devname, char *args)
>                 return ret;
>         }
>  
> +       /*
> +        * if user supplied an 'msize' option that's larger than what this
> +        * transport supports by default, then try to allocate more sg lists
> +        */
> +       if (client->msize > client->trans_maxsize) {
> +#ifdef CONFIG_ARCH_NO_SG_CHAIN
> +               pr_info("limiting 'msize' to %d because architecture does not "
> +                       "support chained scatter gather lists\n",
> +                       client->trans_maxsize);
> +#else
> +               npages = DIV_ROUND_UP(client->msize, PAGE_SIZE);
> +               if (npages > chan->p9_max_pages) {
> +                       npages = chan->p9_max_pages;
> +                       pr_info("limiting 'msize' as it would exceed the max. "
> +                               "of %lu pages allowed on this system\n",
> +                               chan->p9_max_pages);
> +               }
> +               nsgl = DIV_ROUND_UP(npages, SG_USER_PAGES_PER_LIST);
> +               if (nsgl > chan->vq_sg->nsgl) {
> +                       /*
> +                        * if resize fails, no big deal, then just
> +                        * continue with default msize instead
> +                        */
> +                       if (!vq_sg_resize(&chan->vq_sg, nsgl)) {
> +                               client->trans_maxsize =
> +                                       PAGE_SIZE *
> +                                       ((nsgl * SG_USER_PAGES_PER_LIST) - 3);
> +                       }
> +               }
> +#endif /* CONFIG_ARCH_NO_SG_CHAIN */
> +       }
> +
>         client->trans = (void *)chan;
>         client->status = Connected;
>         chan->client = client;
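
To make the arithmetic in the last hunk concrete, here is a small
standalone sketch of how a requested msize maps to sg lists and the
effective trans_maxsize. SG_USER_PAGES_PER_LIST is defined elsewhere in
this series; the value 127 below is an assumption, chosen so that the
default of one sg list yields exactly the msize=507904 observed above
(the p9_max_pages clamp is omitted for brevity):

    #include <stdio.h>

    #define PAGE_SIZE              4096UL
    /* assumed value, defined elsewhere in this patch series */
    #define SG_USER_PAGES_PER_LIST 127UL

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
            /* msize values from the report above */
            unsigned long requested[] = { 507904, 10485760, 104857600 };

            for (int i = 0; i < 3; i++) {
                    unsigned long npages = DIV_ROUND_UP(requested[i], PAGE_SIZE);
                    unsigned long nsgl   = DIV_ROUND_UP(npages, SG_USER_PAGES_PER_LIST);
                    /* the '- 3' mirrors the hunk above (header overhead) */
                    unsigned long maxsize =
                            PAGE_SIZE * (nsgl * SG_USER_PAGES_PER_LIST - 3);

                    printf("msize %9lu -> %5lu pages, %3lu sg lists, trans_maxsize %lu\n",
                           requested[i], npages, nsgl, maxsize);
            }
            return 0;
    }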


Attachment: "9p-msize.txt" (text/plain, 3403 bytes)
