Message-ID: <PH0PR12MB54812EC7F4711C1EA4CAA119DC419@PH0PR12MB5481.namprd12.prod.outlook.com>
Date: Wed, 7 Sep 2022 14:08:18 +0000
From: Parav Pandit <parav@...dia.com>
To: "Michael S. Tsirkin" <mst@...hat.com>, Gavin Li <gavinl@...dia.com>
CC: "stephen@...workplumber.org" <stephen@...workplumber.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"jesse.brandeburg@...el.com" <jesse.brandeburg@...el.com>,
"kuba@...nel.org" <kuba@...nel.org>,
"sridhar.samudrala@...el.com" <sridhar.samudrala@...el.com>,
"jasowang@...hat.com" <jasowang@...hat.com>,
"loseweigh@...il.com" <loseweigh@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
"virtio-dev@...ts.oasis-open.org" <virtio-dev@...ts.oasis-open.org>,
Gavi Teitz <gavi@...dia.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Si-Wei Liu <si-wei.liu@...cle.com>
Subject: RE: [PATCH v5 2/2] virtio-net: use mtu size as buffer length for big
packets
> From: Michael S. Tsirkin <mst@...hat.com>
> Sent: Wednesday, September 7, 2022 5:27 AM
>
> On Wed, Sep 07, 2022 at 04:08:54PM +0800, Gavin Li wrote:
> >
> > On 9/7/2022 1:31 PM, Michael S. Tsirkin wrote:
> > > External email: Use caution opening links or attachments
> > >
> > >
> > > On Thu, Sep 01, 2022 at 05:10:38AM +0300, Gavin Li wrote:
> > > > Currently add_recvbuf_big() allocates MAX_SKB_FRAGS segments for
> > > > big packets even when GUEST_* offloads are not present on the
> device.
> > > > However, if guest GSO is not supported, it would be sufficient to
> > > > allocate segments to cover just up to the MTU size and no further.
> > > > Allocating the maximum amount of segments results in a large waste
> > > > of buffer space in the queue, which limits the number of packets
> > > > that can be buffered and can result in reduced performance.
>
> actually how does this waste space? Is this because your device does not
> have INDIRECT?
The VQ is 256 entries deep. The driver posts a total of 256 descriptors, each pointing to a 4K page, chained in groups of 16 (16 x 4K = 64K per buffer). So the total number of packets that can be serviced is 256/16 = 16, i.e. the effective queue depth is 16.
When GSO is off, a packet buffer for a 9K MTU needs only 3 pages (12K), so 13 descriptors (13 x 4K = 52K) are wasted per packet buffer. After this improvement, those 13 descriptors become available, increasing the effective queue depth to 256/3 = 85.
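The arithmetic above can be sketched as follows. This is a minimal, illustrative Python sketch using the example values from this thread (256-entry VQ, 4K pages, 16-page chains); the names are hypothetical and nothing here is read from the driver:

```python
PAGE_SIZE = 4096
VQ_DEPTH = 256          # receive virtqueue is 256 descriptors deep
MAX_SKB_FRAGS = 16      # pages chained per big-packet buffer before the patch

def effective_depth(pages_per_buffer, vq_depth=VQ_DEPTH):
    """Packets the queue can hold when each buffer chains this many pages."""
    return vq_depth // pages_per_buffer

def pages_for_mtu(mtu):
    """Pages needed to cover one MTU-sized packet when GSO is off."""
    return -(-mtu // PAGE_SIZE)   # ceiling division

before = effective_depth(MAX_SKB_FRAGS)        # 256 / 16 = 16 packets
after = effective_depth(pages_for_mtu(9000))   # 9K MTU -> 3 pages -> 256 / 3 = 85
```

With the patch, the same 256-descriptor queue holds 85 in-flight 9K packets instead of 16, which is where the bandwidth improvement comes from.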
[..]
> > > >
> > > > MTU(Bytes)/Bandwidth (Gbit/s)
> > > > Before After
> > > > 1500 22.5 22.4
> > > > 9000 12.8 25.9
>
>
> is this buffer space?
The performance numbers above show the improvement in bandwidth, in Gbit/s.
> just the overhead of allocating/freeing the buffers?
> of using INDIRECT?
The effective queue depth is so small that the device cannot receive all the packets for the given bandwidth-delay product.
> > >
> > > Which configurations were tested?
> > I tested it with DPDK vDPA + qemu vhost. Do you mean the feature set
> > of the VM?
>
The configuration of interest is the MTU, not the backend; the different MTUs are shown in the perf numbers above.
> > > Did you test devices without VIRTIO_NET_F_MTU ?
> > No. It will need code changes.
No, it doesn't need any code changes; that answer is misleading. This patch has no bearing on a device that doesn't offer VIRTIO_NET_F_MTU. Only the code restructuring touches this area, which may warrant running some existing tests. I assume the virtio tree has some automated tests for such a device?
> > > >
> > > > @@ -3853,12 +3866,10 @@ static int virtnet_probe(struct
> > > > virtio_device *vdev)
> > > >
> > > > dev->mtu = mtu;
> > > > dev->max_mtu = mtu;
> > > > -
> > > > - /* TODO: size buffers correctly in this case. */
> > > > - if (dev->mtu > ETH_DATA_LEN)
> > > > - vi->big_packets = true;
> > > > }
> > > >
> > > > + virtnet_set_big_packets_fields(vi, mtu);
> > > > +
> > > If VIRTIO_NET_F_MTU is off, then mtu is uninitialized.
> > > You should move it to within if () above to fix.
> > mtu was initialized to 0 at the beginning of probe if VIRTIO_NET_F_MTU
> > is off.
> >
> > In this case, big_packets_num_skbfrags will be set according to guest gso.
> >
> > If guest gso is supported, it will be set to MAX_SKB_FRAGS, else
> > zero. Do you think this is a bug to be fixed?
>
>
> yes I think with no mtu this should behave as it did historically.
>
Michael is right.
It should behave as it does today; this patch introduces no new bug.
dev->mtu and dev->max_mtu are set only when VIRTIO_NET_F_MTU is offered, both with and without this patch.
Please put any MTU-related fix/change in a different patch.
> > >
> > > > if (vi->any_header_sg)
> > > > dev->needed_headroom = vi->hdr_len;
> > > >
> > > > --
> > > > 2.31.1