Message-ID: <aWEnYm6ePitdHPQe@sgarzare-redhat>
Date: Fri, 9 Jan 2026 17:18:19 +0100
From: Stefano Garzarella <sgarzare@...hat.com>
To: Michal Luczaj <mhal@...x.co>
Cc: "Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>, Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Eugenio Pérez <eperezma@...hat.com>, Stefan Hajnoczi <stefanha@...hat.com>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>,
Arseniy Krasnov <avkrasnov@...utedevices.com>, kvm@...r.kernel.org, virtualization@...ts.linux.dev,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] vsock/virtio: Coalesce only linear skb
On Thu, Jan 08, 2026 at 10:54:54AM +0100, Michal Luczaj wrote:
>The common vsock/virtio code tries to coalesce buffers in the rx queue: if
>a linear skb (with spare tailroom) is followed by a small skb (length
>limited by GOOD_COPY_LEN = 128), an attempt is made to join them.
>
>Since the introduction of MSG_ZEROCOPY support, the assumption that a
>small skb will always be linear no longer holds (see the loopback
>transport). In the zerocopy case, data is lost and the linear skb is
>appended with uninitialized kernel memory.
>
>Ensure only linear skbs are coalesced. Note that skb_tailroom(last_skb) > 0
>guarantees last_skb is linear.
>
>Fixes: 581512a6dc93 ("vsock/virtio: MSG_ZEROCOPY flag support")
>Signed-off-by: Michal Luczaj <mhal@...x.co>
>---
> net/vmw_vsock/virtio_transport_common.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
>diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>index dcc8a1d5851e..cf35eb7190cc 100644
>--- a/net/vmw_vsock/virtio_transport_common.c
>+++ b/net/vmw_vsock/virtio_transport_common.c
>@@ -1375,7 +1375,8 @@ virtio_transport_recv_enqueue(struct vsock_sock *vsk,
> * of a new message.
> */
> if (skb->len < skb_tailroom(last_skb) &&
>- !(le32_to_cpu(last_hdr->flags) & VIRTIO_VSOCK_SEQ_EOM)) {
>+ !(le32_to_cpu(last_hdr->flags) & VIRTIO_VSOCK_SEQ_EOM) &&
>+ !skb_is_nonlinear(skb)) {
Why here? I mean, we could do the check even earlier, something like this:
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -1361,7 +1361,8 @@ virtio_transport_recv_enqueue(struct vsock_sock *vsk,
* to avoid wasting memory queueing the entire buffer with a small
* payload.
*/
- if (len <= GOOD_COPY_LEN && !skb_queue_empty(&vvs->rx_queue)) {
+ if (len <= GOOD_COPY_LEN && !skb_queue_empty(&vvs->rx_queue) &&
+ !skb_is_nonlinear(skb)) {
struct virtio_vsock_hdr *last_hdr;
struct sk_buff *last_skb;
I would also add the reason to the comment above that check, to make it clear.
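For instance, just a sketch (feel free to reword), extending the existing
comment to state why nonlinear skbs are skipped:

	/* If the last skb in the rx queue has spare tailroom and the
	 * current skb is small and linear, try to coalesce them to
	 * avoid wasting memory queueing the entire buffer with a small
	 * payload. Nonlinear skbs (e.g. zerocopy ones coming from the
	 * loopback transport) must not be coalesced, since their
	 * payload is not in the linear data area, so copying from
	 * skb->data would append uninitialized memory.
	 */
	if (len <= GOOD_COPY_LEN && !skb_queue_empty(&vvs->rx_queue) &&
	    !skb_is_nonlinear(skb)) {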
Thanks for the fix!
Stefano
> memcpy(skb_put(last_skb, skb->len), skb->data, skb->len);
> free_pkt = true;
> last_hdr->flags |= hdr->flags;
>
>--
>2.52.0
>