Message-ID: <c18d4b9220a85f8087eda15526771dac5f8b4c0a.camel@redhat.com>
Date: Thu, 11 Apr 2024 09:09:49 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Luigi Leonardi <luigi.leonardi@...look.com>, mst@...hat.com,
xuanzhuo@...ux.alibaba.com, virtualization@...ts.linux.dev,
sgarzare@...hat.com, netdev@...r.kernel.org, kuba@...nel.org,
stefanha@...hat.com, davem@...emloft.net, edumazet@...gle.com,
kvm@...r.kernel.org, jasowang@...hat.com
Subject: Re: [PATCH net-next v2 2/3] vsock/virtio: add SIOCOUTQ support for
all virtio based transports
On Mon, 2024-04-08 at 15:37 +0200, Luigi Leonardi wrote:
> This patch introduces support for the stream_bytes_unsent and
> seqpacket_bytes_unsent ioctls for virtio_transport, vhost_vsock
> and vsock_loopback.
>
> For all transports the unsent bytes counter is incremented
> in virtio_transport_send_pkt_info.
>
> In the virtio_transport (G2H) the counter is decremented each time the host
> notifies the guest that it consumed the skbuffs.
> In vhost-vsock (H2G) the counter is decremented after the skbuff is queued
> in the virtqueue.
> In vsock_loopback the counter is decremented after the skbuff is
> dequeued.
>
> Signed-off-by: Luigi Leonardi <luigi.leonardi@...look.com>
I think this deserves an explicit ack from Stefano, and Stefano can't
review patches in the next few weeks. If it's not urgent, this will have
to wait a bit.
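
As a side note for anyone who wants to try this: from userspace the new
counter should be reachable through the usual SIOCOUTQ ioctl on an
AF_VSOCK socket. A minimal, untested sketch (the CID and port are just
placeholders):

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/sockios.h>	/* SIOCOUTQ */
#include <linux/vm_sockets.h>	/* AF_VSOCK, struct sockaddr_vm */

int main(void)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_HOST,	/* placeholder peer CID */
		.svm_port = 1234,		/* placeholder port */
	};
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
	int unsent = 0;

	if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	send(fd, "ping", 4, 0);

	/* bytes queued locally but not yet consumed by the transport/peer */
	if (ioctl(fd, SIOCOUTQ, &unsent) == 0)
		printf("unsent bytes: %d\n", unsent);

	return 0;
}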
> ---
> drivers/vhost/vsock.c | 4 ++-
> include/linux/virtio_vsock.h | 7 ++++++
> net/vmw_vsock/virtio_transport.c | 4 ++-
> net/vmw_vsock/virtio_transport_common.c | 33 +++++++++++++++++++++++++
> net/vmw_vsock/vsock_loopback.c | 7 ++++++
> 5 files changed, 53 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index ec20ecff85c7..dba8b3ea37bf 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -244,7 +244,7 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
> restart_tx = true;
> }
>
> - consume_skb(skb);
> + virtio_transport_consume_skb_sent(skb, true);
> }
> } while(likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));
> if (added)
> @@ -451,6 +451,8 @@ static struct virtio_transport vhost_transport = {
> .notify_buffer_size = virtio_transport_notify_buffer_size,
> .notify_set_rcvlowat = virtio_transport_notify_set_rcvlowat,
>
> + .unsent_bytes = virtio_transport_bytes_unsent,
> +
> .read_skb = virtio_transport_read_skb,
> },
>
> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
> index c82089dee0c8..dbb22d45d203 100644
> --- a/include/linux/virtio_vsock.h
> +++ b/include/linux/virtio_vsock.h
> @@ -134,6 +134,8 @@ struct virtio_vsock_sock {
> u32 peer_fwd_cnt;
> u32 peer_buf_alloc;
>
> + atomic_t bytes_unsent;
This will add 2 atomic operations per packet, possibly on contended
cachelines. Have you considered leveraging the existing transport-level
lock to protect the counter updates?
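To illustrate what I mean (only a sketch, not a tested patch; 'pkt_len'
here stands for whatever length the send path actually accounts): the
counter could be a plain u32 living next to the other tx state and
updated under the existing tx_lock, which we already take for the credit
accounting on the send path, e.g.:

	/* in struct virtio_vsock_sock, instead of atomic_t */
	u32 bytes_unsent;	/* protected by tx_lock */

	/* on the send path, where tx_lock is already taken */
	spin_lock_bh(&vvs->tx_lock);
	vvs->bytes_unsent += pkt_len;
	spin_unlock_bh(&vvs->tx_lock);

with the symmetric decrement done where the skb is consumed or queued,
again under tx_lock.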
Thanks
Paolo