Message-ID: <fumoatu4z5pvqntnqa6hnuripfa4zrtb5m2rsfkfsdn23pn5y5@f7yy23tjxwwa>
Date: Mon, 26 Jun 2023 18:23:32 +0200
From: Stefano Garzarella <sgarzare@...hat.com>
To: Arseniy Krasnov <AVKrasnov@...rdevices.ru>,
Bobby Eshleman <bobby.eshleman@...edance.com>
Cc: Stefan Hajnoczi <stefanha@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Bobby Eshleman <bobby.eshleman@...edance.com>,
kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel@...rdevices.ru, oxffffaa@...il.com
Subject: Re: [RFC PATCH v1 1/4] virtio/vsock: rework MSG_PEEK for SOCK_STREAM
On Sun, Jun 18, 2023 at 09:24:48AM +0300, Arseniy Krasnov wrote:
>This reworks the current implementation of the MSG_PEEK logic:
>1) Replaces 'skb_queue_walk_safe()' with 'skb_queue_walk()'. The safe
>   variant is not needed, as no skbs are removed during the loop.
>2) Removes the nested while loop - the MSG_PEEK logic can be
>   implemented without it: just iterate over the skbs without removing
>   them and copy data from each one until the destination buffer is
>   full.
>
>Signed-off-by: Arseniy Krasnov <AVKrasnov@...rdevices.ru>
>---
> net/vmw_vsock/virtio_transport_common.c | 41 ++++++++++++-------------
> 1 file changed, 19 insertions(+), 22 deletions(-)
Great cleanup!
LGTM, but @Bobby can you also take a look?
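For anyone following along, this is roughly how the whole function reads
with the patch applied. It is reconstructed from the hunk below; the
return/error tail after the loop is outside the diff context, so that
part is my assumption based on the current code:

    static ssize_t
    virtio_transport_stream_do_peek(struct vsock_sock *vsk,
                                    struct msghdr *msg,
                                    size_t len)
    {
            struct virtio_vsock_sock *vvs = vsk->trans;
            struct sk_buff *skb;
            size_t total = 0;
            int err;

            spin_lock_bh(&vvs->rx_lock);

            skb_queue_walk(&vvs->rx_queue, skb) {
                    size_t bytes;

                    /* Peek at most the whole skb, capped by the space
                     * left in the destination buffer.
                     */
                    bytes = len - total;
                    if (bytes > skb->len)
                            bytes = skb->len;

                    spin_unlock_bh(&vvs->rx_lock);

                    /* sk_lock is held by caller so no one else can dequeue.
                     * Unlock rx_lock since memcpy_to_msg() may sleep.
                     */
                    err = memcpy_to_msg(msg, skb->data, bytes);
                    if (err)
                            goto out;

                    total += bytes;

                    spin_lock_bh(&vvs->rx_lock);

                    if (total == len)
                            break;
            }

            spin_unlock_bh(&vvs->rx_lock);

            return total;

    out:
            /* Not visible in the hunk; assumed unchanged: report
             * partial progress if we copied anything before failing.
             */
            if (total)
                    err = total;
            return err;
    }

Since skbs are never dequeued here, each iteration starts copying from
skb->data with no offset, which is what lets the nested loop and the
'off' bookkeeping go away.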
Thanks,
Stefano
>
>diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>index b769fc258931..2ee40574c339 100644
>--- a/net/vmw_vsock/virtio_transport_common.c
>+++ b/net/vmw_vsock/virtio_transport_common.c
>@@ -348,37 +348,34 @@ virtio_transport_stream_do_peek(struct vsock_sock *vsk,
> size_t len)
> {
> struct virtio_vsock_sock *vvs = vsk->trans;
>- size_t bytes, total = 0, off;
>- struct sk_buff *skb, *tmp;
>- int err = -EFAULT;
>+ struct sk_buff *skb;
>+ size_t total = 0;
>+ int err;
>
> spin_lock_bh(&vvs->rx_lock);
>
>- skb_queue_walk_safe(&vvs->rx_queue, skb, tmp) {
>- off = 0;
>+ skb_queue_walk(&vvs->rx_queue, skb) {
>+ size_t bytes;
>
>- if (total == len)
>- break;
>+ bytes = len - total;
>+ if (bytes > skb->len)
>+ bytes = skb->len;
>
>- while (total < len && off < skb->len) {
>- bytes = len - total;
>- if (bytes > skb->len - off)
>- bytes = skb->len - off;
>+ spin_unlock_bh(&vvs->rx_lock);
>
>- /* sk_lock is held by caller so no one else can dequeue.
>- * Unlock rx_lock since memcpy_to_msg() may sleep.
>- */
>- spin_unlock_bh(&vvs->rx_lock);
>+ /* sk_lock is held by caller so no one else can dequeue.
>+ * Unlock rx_lock since memcpy_to_msg() may sleep.
>+ */
>+ err = memcpy_to_msg(msg, skb->data, bytes);
>+ if (err)
>+ goto out;
>
>- err = memcpy_to_msg(msg, skb->data + off, bytes);
>- if (err)
>- goto out;
>+ total += bytes;
>
>- spin_lock_bh(&vvs->rx_lock);
>+ spin_lock_bh(&vvs->rx_lock);
>
>- total += bytes;
>- off += bytes;
>- }
>+ if (total == len)
>+ break;
> }
>
> spin_unlock_bh(&vvs->rx_lock);
>--
>2.25.1
>