[<prev] [next>] [day] [month] [year] [list]
Message-Id: <20250805130945.471732-37-sashal@kernel.org>
Date: Tue,  5 Aug 2025 09:09:12 -0400
From: Sasha Levin <sashal@...nel.org>
To: patches@...ts.linux.dev,
	stable@...r.kernel.org
Cc: Will Deacon <will@...nel.org>,
	Stefano Garzarella <sgarzare@...hat.com>,
	"Michael S . Tsirkin" <mst@...hat.com>,
	Sasha Levin <sashal@...nel.org>,
	stefanha@...hat.com,
	jasowang@...hat.com,
	kvm@...r.kernel.org,
	virtualization@...ts.linux.dev,
	netdev@...r.kernel.org
Subject: [PATCH AUTOSEL 6.16-6.1] vsock/virtio: Resize receive buffers so that each SKB fits in a 4K page

From: Will Deacon <will@...nel.org>

[ Upstream commit 03a92f036a04fed2b00d69f5f46f1a486e70dc5c ]

When allocating receive buffers for the vsock virtio RX virtqueue, an
SKB is allocated with a 4140 data payload (the 44-byte packet header +
VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE). Even when factoring in the SKB
overhead, the resulting 8KiB allocation thanks to the rounding in
kmalloc_reserve() is wasteful (~3700 unusable bytes) and results in a
higher-order page allocation on systems with 4KiB pages just for the
sake of a few hundred bytes of packet data.

Limit the vsock virtio RX buffers to 4KiB per SKB, resulting in much
better memory utilisation and removing the need to allocate higher-order
pages entirely.

Reviewed-by: Stefano Garzarella <sgarzare@...hat.com>
Signed-off-by: Will Deacon <will@...nel.org>
Message-Id: <20250717090116.11987-5-will@...nel.org>
Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---

LLM-generated explanation; may be completely bogus:

Analysis of this commit:

**Backport Status: YES**

This commit is a good candidate for stable backporting based on the
following analysis:

## Reasons for backporting:

1. **Memory efficiency fix**: The commit addresses a significant memory
   waste issue where ~3700 bytes out of 8KiB allocations were unusable.
   This is a ~46% memory waste for every RX buffer allocation.

2. **Avoids higher-order page allocations**: The original code forces
   the kernel to allocate higher-order pages (8KiB) on systems with 4KiB
   pages, which can lead to:
   - Memory fragmentation issues
   - Allocation failures under memory pressure
   - Degraded system performance

3. **Simple, contained fix**: The change is minimal and well-contained:
   - Changes `VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE` from a hardcoded 4096 to
     `SKB_WITH_OVERHEAD(1024 * 4)`
   - Removes the addition of `VIRTIO_VSOCK_SKB_HEADROOM` in
     `virtio_vsock_rx_fill()`
   - The SKB_WITH_OVERHEAD macro (defined as `((X) -
     SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))`) ensures the
     entire SKB fits in a 4K page

4. **No functional changes**: This is purely an optimization that:
   - Doesn't change the protocol behavior
   - Doesn't introduce new features
   - Maintains backward compatibility
   - Only affects memory allocation patterns

5. **Clear performance benefit**: This provides immediate benefits to
   all vsock users by:
   - Reducing memory consumption by ~46% per RX buffer
   - Eliminating pressure on higher-order page allocations
   - Improving overall system memory efficiency

6. **Low risk**: The change is straightforward with minimal risk of
   regression:
   - The logic remains the same
   - Only the buffer sizing calculation changes
   - Has been reviewed by the vsock maintainer (Stefano Garzarella)

The commit meets all the stable tree criteria: it fixes a real problem
(memory waste and higher-order allocations), is minimal in scope,
doesn't introduce new features, and has clear benefits with low risk of
regression.

 include/linux/virtio_vsock.h     | 7 ++++++-
 net/vmw_vsock/virtio_transport.c | 2 +-
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index 36fb3edfa403..6c00687539cf 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -111,7 +111,12 @@ static inline size_t virtio_vsock_skb_len(struct sk_buff *skb)
 	return (size_t)(skb_end_pointer(skb) - skb->head);
 }
 
-#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(1024 * 4)
+/* Dimension the RX SKB so that the entire thing fits exactly into
+ * a single 4KiB page. This avoids wasting memory due to alloc_skb()
+ * rounding up to the next page order and also means that we
+ * don't leave higher-order pages sitting around in the RX queue.
+ */
+#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	SKB_WITH_OVERHEAD(1024 * 4)
 #define VIRTIO_VSOCK_MAX_BUF_SIZE		0xFFFFFFFFUL
 #define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE		(1024 * 64)
 
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index f0e48e6911fc..f01f9e878106 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -307,7 +307,7 @@ virtio_transport_cancel_pkt(struct vsock_sock *vsk)
 
 static void virtio_vsock_rx_fill(struct virtio_vsock *vsock)
 {
-	int total_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE + VIRTIO_VSOCK_SKB_HEADROOM;
+	int total_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
 	struct scatterlist pkt, *p;
 	struct virtqueue *vq;
 	struct sk_buff *skb;
-- 
2.39.5

