Message-ID: <1010358496.165709.1662626474492.JavaMail.zimbra@sjtu.edu.cn>
Date: Thu, 8 Sep 2022 16:41:14 +0800 (CST)
From: Guo Zhi <qtxuning1999@...u.edu.cn>
To: jasowang <jasowang@...hat.com>
Cc: eperezma <eperezma@...hat.com>, sgarzare <sgarzare@...hat.com>,
Michael Tsirkin <mst@...hat.com>,
netdev <netdev@...r.kernel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
kvm list <kvm@...r.kernel.org>,
virtualization <virtualization@...ts.linux-foundation.org>
Subject: Re: [RFC v3 3/7] vsock: batch buffers in tx
----- Original Message -----
> From: "jasowang" <jasowang@...hat.com>
> To: "Guo Zhi" <qtxuning1999@...u.edu.cn>, "eperezma" <eperezma@...hat.com>, "sgarzare" <sgarzare@...hat.com>, "Michael
> Tsirkin" <mst@...hat.com>
> Cc: "netdev" <netdev@...r.kernel.org>, "linux-kernel" <linux-kernel@...r.kernel.org>, "kvm list" <kvm@...r.kernel.org>,
> "virtualization" <virtualization@...ts.linux-foundation.org>
> Sent: Wednesday, September 7, 2022 12:27:40 PM
> Subject: Re: [RFC v3 3/7] vsock: batch buffers in tx
> On 2022/9/1 13:54, Guo Zhi wrote:
>> Vsock uses buffers in order, and for tx the driver doesn't have to
>> know the length of the buffer. So if in-order is negotiated, we can
>> batch for vsock and write only one used ring entry for a whole batch
>> of buffers.
>>
>> Signed-off-by: Guo Zhi <qtxuning1999@...u.edu.cn>
>> ---
>> drivers/vhost/vsock.c | 12 ++++++++++--
>> 1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
>> index 368330417bde..e08fbbb5439e 100644
>> --- a/drivers/vhost/vsock.c
>> +++ b/drivers/vhost/vsock.c
>> @@ -497,7 +497,7 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
>>  	struct vhost_vsock *vsock = container_of(vq->dev, struct vhost_vsock,
>>  						 dev);
>>  	struct virtio_vsock_pkt *pkt;
>> -	int head, pkts = 0, total_len = 0;
>> +	int head, pkts = 0, total_len = 0, add = 0;
>>  	unsigned int out, in;
>>  	bool added = false;
>>
>> @@ -551,10 +551,18 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
>>  		else
>>  			virtio_transport_free_pkt(pkt);
>>
>> -		vhost_add_used(vq, head, 0);
>> +		if (!vhost_has_feature(vq, VIRTIO_F_IN_ORDER)) {
>> +			vhost_add_used(vq, head, 0);
>
>
> I'd do this step by step.
>
> 1) switch to use vhost_add_used_n() for vsock, less copy_to_user()
> better performance
> 2) do in-order on top
>
>
LGTM! I think that is the correct way.
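
For step 1 I'd try something like the sketch below (untested, only to
confirm the direction; vhost_add_used_n() and vq->heads already exist
in vhost core, and I'm assuming vq->heads holds iov_limit entries as
allocated by vhost_dev_alloc_iovecs()):

	do {
		...
		/* Collect heads instead of writing the used ring one
		 * element at a time; independent of VIRTIO_F_IN_ORDER.
		 */
		vq->heads[add].id = cpu_to_vhost32(vq, head);
		vq->heads[add++].len = 0;
		added = true;

		/* Flush before we can run past the heads array. */
		if (add == vq->dev->iov_limit) {
			vhost_add_used_n(vq, vq->heads, add);
			add = 0;
		}
	} while (likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));

	/* One copy_to_user() for the whole remaining batch. */
	if (add)
		vhost_add_used_n(vq, vq->heads, add);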
>> +		} else {
>> +			vq->heads[add].id = head;
>> +			vq->heads[add++].len = 0;
>
>
> How can we guarantee that we stay within the bounds of the heads array?
>
> Btw, in the in-order case we don't need to record all the heads;
> we just need to know the head of the last buffer, which reduces
> the pressure on the cache.
>
> Thanks
>
Yeah, I will change this and only copy the last head when the in-order
feature is negotiated.
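
Something like this untested sketch (last_head is a new local I'd add;
the exact used idx semantics with VIRTIO_F_IN_ORDER still need to be
confirmed against the spec):

	int last_head = -1;
	...
	do {
		...
		if (vhost_has_feature(vq, VIRTIO_F_IN_ORDER))
			/* In order: only remember the last head. */
			last_head = head;
		else
			vhost_add_used(vq, head, 0);
		added = true;
	} while (likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));

	/* Expose only the last buffer; an in-order driver can reclaim
	 * everything up to and including it.
	 */
	if (vhost_has_feature(vq, VIRTIO_F_IN_ORDER) && last_head != -1)
		vhost_add_used(vq, last_head, 0);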
Thanks
>
>> +		}
>>  		added = true;
>>  	} while(likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));
>>
>> +	/* If the in order feature is negotiated, we can do a batch to increase performance */
>> +	if (vhost_has_feature(vq, VIRTIO_F_IN_ORDER) && added)
>> +		vhost_add_used_n(vq, vq->heads, add);
>>  no_more_replies:
>>  	if (added)
>>  		vhost_signal(&vsock->dev, vq);