Message-ID: <k47d2h7dwn26eti2p6nv2fupuybabvbexwinvxv7jnfbn6o3ep@cqtbaqlqyfrq>
Date: Wed, 23 Apr 2025 18:34:18 +0200
From: Stefano Garzarella <sgarzare@...hat.com>
To: Luigi Leonardi <leonardi@...hat.com>, Michal Luczaj <mhal@...x.co>
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>,
"Michael S. Tsirkin" <mst@...hat.com>, Jason Wang <jasowang@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>, Eugenio Pérez <eperezma@...hat.com>,
Stefan Hajnoczi <stefanha@...hat.com>, virtualization@...ts.linux.dev, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH net-next v2 1/3] vsock: Linger on unsent data
On Wed, Apr 23, 2025 at 05:53:12PM +0200, Luigi Leonardi wrote:
>Hi Michal,
>
>On Mon, Apr 21, 2025 at 11:50:41PM +0200, Michal Luczaj wrote:
>>Currently vsock's lingering effectively boils down to waiting (or timing
>>out) until packets are consumed or dropped by the peer; be it by receiving
>>the data, closing or shutting down the connection.
>>
>>To align with the semantics described in the SO_LINGER section of man
>>socket(7) and to mimic AF_INET's behaviour more closely, change the logic
>>of a lingering close(): instead of waiting for all data to be handled,
>>block until data is considered sent from the vsock's transport point of
>>view. That is, until the worker picks up the packets for processing and
>>decrements virtio_vsock_sock::bytes_unsent down to 0.
>>
>>Note that such lingering is limited to transports that actually implement
>>vsock_transport::unsent_bytes() callback. This excludes Hyper-V and VMCI,
>>under which no lingering would be observed.
>>
>>The implementation does not adhere strictly to the man page's interpretation
>>of SO_LINGER: shutdown() will not trigger the lingering. This follows AF_INET.
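
For reference, userspace would opt into the lingering close() described
above with SO_LINGER, roughly as in the following minimal sketch (error
handling omitted; the CID and port are arbitrary examples, not taken
from the patch):

	#include <unistd.h>
	#include <sys/socket.h>
	#include <linux/vm_sockets.h>

	int main(void)
	{
		/* l_onoff enables lingering, l_linger is the timeout in seconds. */
		struct linger lg = { .l_onoff = 1, .l_linger = 5 };
		struct sockaddr_vm addr = {
			.svm_family = AF_VSOCK,
			.svm_cid = VMADDR_CID_HOST,	/* arbitrary example peer */
			.svm_port = 1234,		/* arbitrary example port */
		};
		int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

		connect(fd, (struct sockaddr *)&addr, sizeof(addr));
		setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
		write(fd, "hello", 5);

		/* With this change, close() blocks (up to l_linger seconds)
		 * until the transport reports zero unsent bytes, instead of
		 * waiting for the peer to consume the data.
		 */
		close(fd);
		return 0;
	}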
>>
>>Signed-off-by: Michal Luczaj <mhal@...x.co>
>>---
>>net/vmw_vsock/virtio_transport_common.c | 13 +++++++++++--
>>1 file changed, 11 insertions(+), 2 deletions(-)
>>
>>diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>index 7f7de6d8809655fe522749fbbc9025df71f071bd..aeb7f3794f7cfc251dde878cb44fdcc54814c89c 100644
>>--- a/net/vmw_vsock/virtio_transport_common.c
>>+++ b/net/vmw_vsock/virtio_transport_common.c
>>@@ -1196,12 +1196,21 @@ static void virtio_transport_wait_close(struct sock *sk, long timeout)
>>{
>> if (timeout) {
>> DEFINE_WAIT_FUNC(wait, woken_wake_function);
>>+ ssize_t (*unsent)(struct vsock_sock *vsk);
>>+ struct vsock_sock *vsk = vsock_sk(sk);
>>+
>>+ /* Some transports (Hyper-V, VMCI) do not implement
>>+ * unsent_bytes. For those, no lingering on close().
>>+ */
>>+ unsent = vsk->transport->unsent_bytes;
>>+ if (!unsent)
>>+ return;
>
>IIUC if `unsent_bytes` is not implemented, virtio_transport_wait_close
>basically does nothing. My concern is that we are breaking userspace
>due to a change in behavior: before this patch, with a VMCI/Hyper-V
>transport, this function would wait for SOCK_DONE to be set, but it no
>longer does.
Wait, we are in virtio_transport_common.c, so why are we talking about
Hyper-V and VMCI?
I asked to check `vsk->transport->unsent_bytes` in v1 because that code
was part of af_vsock.c, but now we are back in virtio code, so I'm
confused...
Stefano
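
(To illustrate the placement question above: a rough sketch of how such
a transport-capability check might look if it lived in the
transport-independent af_vsock.c close path, as discussed for v1. This
is not from either revision of the patch, and vsock_linger() is a
hypothetical helper name:)

	/* Hypothetical helper for af_vsock.c: only transports implementing
	 * the unsent_bytes() callback (e.g. virtio) can report in-flight
	 * data, so transports without it (Hyper-V, VMCI) would not linger.
	 */
	static void vsock_linger(struct sock *sk, long timeout)
	{
		struct vsock_sock *vsk = vsock_sk(sk);

		if (!vsk->transport->unsent_bytes)
			return;

		/* ... wait, via sk_sleep()/sk_wait_event(), until
		 * vsk->transport->unsent_bytes(vsk) drops to 0, the timeout
		 * expires, or a signal is pending ...
		 */
	}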
>
>>
>> add_wait_queue(sk_sleep(sk), &wait);
>>
>> do {
>>- if (sk_wait_event(sk, &timeout,
>>- sock_flag(sk, SOCK_DONE), &wait))
>>+ if (sk_wait_event(sk, &timeout, unsent(vsk) == 0,
>>+ &wait))
>> break;
>> } while (!signal_pending(current) && timeout);
>>
>>
>>--
>>2.49.0
>>
>
>Thanks,
>Luigi
>