Message-Id: <20250706-siocinq-v5-1-8d0b96a87465@antgroup.com>
Date: Sun, 06 Jul 2025 12:36:29 +0800
From: Xuewei Niu <niuxuewei97@...il.com>
To: "K. Y. Srinivasan" <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>, Wei Liu <wei.liu@...nel.org>,
Dexuan Cui <decui@...rosoft.com>, Stefano Garzarella <sgarzare@...hat.com>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>
Cc: linux-hyperv@...r.kernel.org, virtualization@...ts.linux.dev,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Xuewei Niu <niuxuewei.nxw@...group.com>, fupan.lfp@...group.com
Subject: [PATCH net-next v5 1/4] hv_sock: Return the readable bytes in
hvs_stream_has_data()

When hv_sock was originally added, __vsock_stream_recvmsg() and
vsock_stream_has_data() actually only needed to know whether there
is any readable data or not, so hvs_stream_has_data() was written to
return 1 or 0 for simplicity.

However, now hvs_stream_has_data() should return the readable bytes
because vsock_data_ready() -> vsock_stream_has_data() needs to know the
actual bytes rather than a boolean value of 1 or 0.

The SIOCINQ ioctl support also needs hvs_stream_has_data() to return
the readable bytes.
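For reference, a minimal userspace sketch of the SIOCINQ usage this enables,
assuming the rest of this series is applied so AF_VSOCK stream sockets handle
the ioctl; the peer CID and port below are only placeholders:

  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <sys/socket.h>
  #include <unistd.h>
  #include <linux/sockios.h>      /* SIOCINQ (same value as FIONREAD) */
  #include <linux/vm_sockets.h>   /* struct sockaddr_vm, VMADDR_CID_HOST */

  int main(void)
  {
          struct sockaddr_vm addr = {
                  .svm_family = AF_VSOCK,
                  .svm_cid    = VMADDR_CID_HOST, /* placeholder peer CID */
                  .svm_port   = 1234,            /* placeholder port */
          };
          int fd, unread;

          fd = socket(AF_VSOCK, SOCK_STREAM, 0);
          if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                  perror("vsock connect");
                  return 1;
          }

          /* Ask the kernel how many bytes are currently readable. */
          if (ioctl(fd, SIOCINQ, &unread) < 0)
                  perror("SIOCINQ");
          else
                  printf("readable bytes: %d\n", unread);

          close(fd);
          return 0;
  }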
Let hvs_stream_has_data() return the readable bytes of the payload in
the next host-to-guest VMBus hv_sock packet.

Note: there may be multiple incoming hv_sock packets pending in the
VMBus channel's ringbuffer, but so far there is no VMBus API that
allows us to know all the readable bytes in total without reading and
caching the payload of the multiple packets, so let's just return the
readable bytes of the next single packet. In the future, we'll either
add a VMBus API that allows us to know the total readable bytes without
touching the data in the ringbuffer, or the hv_sock driver needs to
understand the VMBus packet format and parse the packets directly.
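For context, the vsock_data_ready() -> vsock_stream_has_data() path mentioned
above consumes this return value roughly as follows (paraphrased from
net/vmw_vsock/af_vsock.c, not a verbatim copy), which is why a plain 1/0
answer is no longer sufficient:

  /* Core helper: simply forwards to the transport, so hv_sock's
   * hvs_stream_has_data() now feeds real byte counts into the core.
   */
  s64 vsock_stream_has_data(struct vsock_sock *vsk)
  {
          return vsk->transport->stream_has_data(vsk);
  }

  /* Data-ready path: compares the readable byte count against
   * SO_RCVLOWAT, which only works if the transport reports actual
   * bytes rather than a boolean.
   */
  void vsock_data_ready(struct sock *sk)
  {
          struct vsock_sock *vsk = vsock_sk(sk);

          if (vsock_stream_has_data(vsk) >= sk->sk_rcvlowat ||
              sock_flag(sk, SOCK_DONE))
                  sk->sk_data_ready(sk);
  }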
Signed-off-by: Dexuan Cui <decui@...rosoft.com>
---
net/vmw_vsock/hyperv_transport.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
index 31342ab502b4fc35feb812d2c94e0e35ded73771..432fcbbd14d4f44bd2550be8376e42ce65122758 100644
--- a/net/vmw_vsock/hyperv_transport.c
+++ b/net/vmw_vsock/hyperv_transport.c
@@ -694,15 +694,26 @@ static ssize_t hvs_stream_enqueue(struct vsock_sock *vsk, struct msghdr *msg,
 static s64 hvs_stream_has_data(struct vsock_sock *vsk)
 {
 	struct hvsock *hvs = vsk->trans;
+	bool need_refill;
 	s64 ret;
 
 	if (hvs->recv_data_len > 0)
-		return 1;
+		return hvs->recv_data_len;
 
 	switch (hvs_channel_readable_payload(hvs->chan)) {
 	case 1:
-		ret = 1;
-		break;
+		need_refill = !hvs->recv_desc;
+		if (!need_refill)
+			return -EIO;
+
+		hvs->recv_desc = hv_pkt_iter_first(hvs->chan);
+		if (!hvs->recv_desc)
+			return -ENOBUFS;
+
+		ret = hvs_update_recv_data(hvs);
+		if (ret)
+			return ret;
+		return hvs->recv_data_len;
 	case 0:
 		vsk->peer_shutdown |= SEND_SHUTDOWN;
 		ret = 0;
--
2.34.1