Message-ID: <ZBThOG/nISvqbllq@bullseye>
Date: Fri, 17 Mar 2023 21:52:56 +0000
From: Bobby Eshleman <bobbyeshleman@...il.com>
To: Arseniy Krasnov <avkrasnov@...rdevices.ru>
Cc: Stefan Hajnoczi <stefanha@...hat.com>,
Stefano Garzarella <sgarzare@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Bobby Eshleman <bobby.eshleman@...edance.com>,
kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel@...rdevices.ru, oxffffaa@...il.com
Subject: Re: [RFC PATCH v1] virtio/vsock: allocate multiple skbuffs on tx
On Fri, Mar 17, 2023 at 01:38:39PM +0300, Arseniy Krasnov wrote:
> This adds a small optimization to the tx path: instead of allocating a
> single skbuff on every call to the transport, allocate multiple skbuffs
> while credit space allows, thus trying to send as much data as possible
> without returning to af_vsock.c.
Hey Arseniy, I really like this optimization. I have a few
questions/comments below.
>
> Signed-off-by: Arseniy Krasnov <AVKrasnov@...rdevices.ru>
> ---
> net/vmw_vsock/virtio_transport_common.c | 45 +++++++++++++++++--------
> 1 file changed, 31 insertions(+), 14 deletions(-)
>
> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
> index 6564192e7f20..cda587196475 100644
> --- a/net/vmw_vsock/virtio_transport_common.c
> +++ b/net/vmw_vsock/virtio_transport_common.c
> @@ -196,7 +196,8 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
> const struct virtio_transport *t_ops;
> struct virtio_vsock_sock *vvs;
> u32 pkt_len = info->pkt_len;
> - struct sk_buff *skb;
> + u32 rest_len;
> + int ret;
>
> info->type = virtio_transport_get_type(sk_vsock(vsk));
>
> @@ -216,10 +217,6 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
>
> vvs = vsk->trans;
>
> - /* we can send less than pkt_len bytes */
> - if (pkt_len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
> - pkt_len = VIRTIO_VSOCK_MAX_PKT_BUF_SIZE;
> -
> /* virtio_transport_get_credit might return less than pkt_len credit */
> pkt_len = virtio_transport_get_credit(vvs, pkt_len);
>
> @@ -227,17 +224,37 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
> if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW)
> return pkt_len;
>
> - skb = virtio_transport_alloc_skb(info, pkt_len,
> - src_cid, src_port,
> - dst_cid, dst_port);
> - if (!skb) {
> - virtio_transport_put_credit(vvs, pkt_len);
> - return -ENOMEM;
> - }
> + rest_len = pkt_len;
>
> - virtio_transport_inc_tx_pkt(vvs, skb);
> + do {
> + struct sk_buff *skb;
> + size_t skb_len;
> +
> + skb_len = min_t(u32, VIRTIO_VSOCK_MAX_PKT_BUF_SIZE, rest_len);
> +
> + skb = virtio_transport_alloc_skb(info, skb_len,
> + src_cid, src_port,
> + dst_cid, dst_port);
> + if (!skb) {
> + ret = -ENOMEM;
> + goto out;
> + }
In this case, if a previous iteration of the loop already succeeded with
send_pkt(), I think we may still want to return the number of bytes that
were successfully sent so far, instead of -ENOMEM?
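Something like this in the common exit path might do it (untested
sketch, reusing the rest_len bookkeeping from this patch):

out:
	virtio_transport_put_credit(vvs, rest_len);

	/* rest_len is only decremented after a successful send_pkt(),
	 * so pkt_len - rest_len is the number of bytes actually handed
	 * to the transport.
	 */
	if (rest_len != pkt_len)
		return pkt_len - rest_len;

	return ret;

That way both failure paths (allocation and send_pkt()) would report
partial progress the same way.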
>
> - return t_ops->send_pkt(skb);
> + virtio_transport_inc_tx_pkt(vvs, skb);
> +
> + ret = t_ops->send_pkt(skb);
> +
> + if (ret < 0)
> + goto out;
Ditto here; handling it in the common exit path, as in the sketch
above, would cover this case as well.
> +
> + rest_len -= skb_len;
> + } while (rest_len);
> +
> + return pkt_len;
> +
> +out:
> + virtio_transport_put_credit(vvs, rest_len);
> + return ret;
> }
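For context, and from memory of virtio_transport_common.c (so please
double-check me): the stream enqueue path just forwards this return
value, and the sendmsg loop in af_vsock.c treats a positive return as
bytes written, so a partial-progress return should compose naturally
with the callers:

ssize_t
virtio_transport_stream_enqueue(struct vsock_sock *vsk,
				struct msghdr *msg,
				size_t len)
{
	struct virtio_vsock_pkt_info info = {
		.op = VIRTIO_VSOCK_OP_RW,
		.msg = msg,
		.pkt_len = len,
		.vsk = vsk,
	};

	return virtio_transport_send_pkt_info(vsk, &info);
}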
>
> static bool virtio_transport_inc_rx_pkt(struct virtio_vsock_sock *vvs,
> --
> 2.25.1