Message-ID: <4b66f91f23a3eb91c3232bc68f45bdb799217c40.camel@redhat.com>
Date: Wed, 14 Dec 2022 11:58:47 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Bobby Eshleman <bobby.eshleman@...edance.com>
Cc: Bobby Eshleman <bobbyeshleman@...il.com>,
Cong Wang <cong.wang@...edance.com>,
Jiang Wang <jiang.wang@...edance.com>,
Krasnov Arseniy <oxffffaa@...il.com>,
Stefan Hajnoczi <stefanha@...hat.com>,
Stefano Garzarella <sgarzare@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, kvm@...r.kernel.org,
virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next v7] virtio/vsock: replace virtio_vsock_pkt with
sk_buff
On Tue, 2022-12-13 at 19:28 +0000, Bobby Eshleman wrote:
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index 5703775af129..2a5994b029b2 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -51,8 +51,7 @@ struct vhost_vsock {
> struct hlist_node hash;
>
> struct vhost_work send_pkt_work;
> - spinlock_t send_pkt_list_lock;
> - struct list_head send_pkt_list; /* host->guest pending packets */
> + struct sk_buff_head send_pkt_queue; /* host->guest pending packets */
>
> atomic_t queued_replies;
>
> @@ -108,40 +107,33 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
> vhost_disable_notify(&vsock->dev, vq);
>
> do {
> - struct virtio_vsock_pkt *pkt;
> + struct virtio_vsock_hdr *hdr;
> + size_t iov_len, payload_len;
> struct iov_iter iov_iter;
> + u32 flags_to_restore = 0;
> + struct sk_buff *skb;
> unsigned out, in;
> size_t nbytes;
> - size_t iov_len, payload_len;
> int head;
> - u32 flags_to_restore = 0;
>
> - spin_lock_bh(&vsock->send_pkt_list_lock);
> - if (list_empty(&vsock->send_pkt_list)) {
> - spin_unlock_bh(&vsock->send_pkt_list_lock);
> + spin_lock(&vsock->send_pkt_queue.lock);
> + skb = __skb_dequeue(&vsock->send_pkt_queue);
> + spin_unlock(&vsock->send_pkt_queue.lock);
Here you use a plain spin_lock(), while every other lock in this file
uses the _bh() variant. A few lines above, this function acquires a
mutex, so we are in process context (not BH context): I guess you
should use _bh() here, too.
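Something along these lines (untested sketch; assuming the queue can
also be touched from BH context, as the other _bh() users suggest):

	spin_lock_bh(&vsock->send_pkt_queue.lock);
	skb = __skb_dequeue(&vsock->send_pkt_queue);
	spin_unlock_bh(&vsock->send_pkt_queue.lock);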
[...]
> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
> index 35d7eedb5e8e..0385df976d41 100644
> --- a/include/linux/virtio_vsock.h
> +++ b/include/linux/virtio_vsock.h
> @@ -3,10 +3,116 @@
> #define _LINUX_VIRTIO_VSOCK_H
>
> #include <uapi/linux/virtio_vsock.h>
> +#include <linux/bits.h>
> #include <linux/socket.h>
> #include <net/sock.h>
> #include <net/af_vsock.h>
>
> +#define VIRTIO_VSOCK_SKB_HEADROOM (sizeof(struct virtio_vsock_hdr))
> +
> +enum virtio_vsock_skb_flags {
> + VIRTIO_VSOCK_SKB_FLAGS_REPLY = BIT(0),
> + VIRTIO_VSOCK_SKB_FLAGS_TAP_DELIVERED = BIT(1),
> +};
It looks like the above enum is not used anymore; you can drop it.
[...]
> @@ -121,20 +108,18 @@ static void vsock_loopback_work(struct work_struct *work)
> {
> struct vsock_loopback *vsock =
> container_of(work, struct vsock_loopback, pkt_work);
> - LIST_HEAD(pkts);
> + struct sk_buff_head pkts;
> + struct sk_buff *skb;
> +
> + skb_queue_head_init(&pkts);
>
> spin_lock_bh(&vsock->pkt_list_lock);
> - list_splice_init(&vsock->pkt_list, &pkts);
> + skb_queue_splice_init(&vsock->pkt_queue, &pkts);
> spin_unlock_bh(&vsock->pkt_list_lock);
>
> - while (!list_empty(&pkts)) {
> - struct virtio_vsock_pkt *pkt;
> -
> - pkt = list_first_entry(&pkts, struct virtio_vsock_pkt, list);
> - list_del_init(&pkt->list);
> -
> - virtio_transport_deliver_tap_pkt(pkt);
> - virtio_transport_recv_pkt(&loopback_transport, pkt);
> + while ((skb = skb_dequeue(&pkts))) {
Minor nit: since this code has complete ownership of the pkts queue,
you can use the lockless dequeue variant here:
while ((skb = __skb_dequeue(&pkts))) {
> + virtio_transport_deliver_tap_pkt(skb);
> + virtio_transport_recv_pkt(&loopback_transport, skb);
> }
> }
>
Other than that, LGTM. @Michael: feel free to take this via your tree
once the above feedback has been addressed, thanks!
Paolo