Message-ID: <20251022160623.51191-1-minhquangbui99@gmail.com>
Date: Wed, 22 Oct 2025 23:06:23 +0700
From: Bui Quang Minh <minhquangbui99@...il.com>
To: netdev@...r.kernel.org
Cc: "Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Eugenio Pérez <eperezma@...hat.com>,
Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Gavin Li <gavinl@...dia.com>,
Gavi Teitz <gavi@...dia.com>,
Parav Pandit <parav@...dia.com>,
virtualization@...ts.linux.dev,
linux-kernel@...r.kernel.org,
Bui Quang Minh <minhquangbui99@...il.com>,
stable@...r.kernel.org
Subject: [PATCH net v4] virtio-net: fix received length check in big packets

Since commit 4959aebba8c0 ("virtio-net: use mtu size as buffer length
for big packets"), when guest GSO is off, the allocated size for big
packets is no longer MAX_SKB_FRAGS * PAGE_SIZE but depends on the
negotiated MTU. The number of allocated frags for big packets is stored
in vi->big_packets_num_skbfrags.
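
For context, a minimal sketch of how that frag count is derived when
big packets are enabled (an illustration only; the exact upstream code
in virtnet_set_big_packets may differ):

	/* Illustrative sketch, not the exact upstream code: with guest
	 * GSO off, size big-packet buffers to the MTU instead of the
	 * worst case MAX_SKB_FRAGS pages.
	 */
	if (mtu > ETH_DATA_LEN || guest_gso) {
		vi->big_packets = true;
		vi->big_packets_num_skbfrags = guest_gso ?
			MAX_SKB_FRAGS : DIV_ROUND_UP(mtu, PAGE_SIZE);
	}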

Because the buffer length announced by the host can be malicious (e.g.
the host vhost_net driver's get_rx_bufs is modified to announce an
incorrect length), we need a check in the virtio_net receive path.
Currently, the check has not been adapted to this change, which can
lead to a NULL page pointer dereference in the while loop below when
the received length is larger than the allocated one.

This commit fixes the received length check to match the new allocation
scheme.
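
As a worked example (illustrative numbers only, assuming 4 KiB pages,
an MTU of 9000 and guest GSO off):

	vi->big_packets_num_skbfrags = DIV_ROUND_UP(9000, 4096) = 3
	maximum valid length         = (PAGE_SIZE - offset) + 3 * PAGE_SIZE

Anything above that indicates a broken or malicious device, while the
current check still allows lengths up to MAX_SKB_FRAGS * PAGE_SIZE.
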
Fixes: 4959aebba8c0 ("virtio-net: use mtu size as buffer length for big packets")
Cc: stable@...r.kernel.org
Signed-off-by: Bui Quang Minh <minhquangbui99@...il.com>
---
Changes in v4:
- Remove unrelated changes, add more comments
Changes in v3:
- Convert BUG_ON to WARN_ON_ONCE
Changes in v2:
- Remove incorrect give_pages call
---
drivers/net/virtio_net.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index a757cbcab87f..0ffe78b3fd8d 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -852,7 +852,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 {
 	struct sk_buff *skb;
 	struct virtio_net_common_hdr *hdr;
-	unsigned int copy, hdr_len, hdr_padded_len;
+	unsigned int copy, hdr_len, hdr_padded_len, max_remaining_len;
 	struct page *page_to_free = NULL;
 	int tailroom, shinfo_size;
 	char *p, *hdr_p, *buf;
@@ -915,13 +915,23 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 	 * This is here to handle cases when the device erroneously
 	 * tries to receive more than is possible. This is usually
 	 * the case of a broken device.
+	 *
+	 * For big packets, vi->big_packets_num_skbfrags + 1 pages are
+	 * allocated: the start of the first page holds the virtio
+	 * header, the rest is for data. We need to ensure the remaining
+	 * length does not go beyond the allocated pages. Please refer
+	 * to add_recvbuf_big for more details on big packet buffer
+	 * allocation.
 	 */
-	if (unlikely(len > MAX_SKB_FRAGS * PAGE_SIZE)) {
+	BUG_ON(offset >= PAGE_SIZE);
+	max_remaining_len = (unsigned int)PAGE_SIZE - offset;
+	max_remaining_len += vi->big_packets_num_skbfrags * PAGE_SIZE;
+	if (unlikely(len > max_remaining_len)) {
 		net_dbg_ratelimited("%s: too much data\n", skb->dev->name);
 		dev_kfree_skb(skb);
 		return NULL;
 	}
-	BUG_ON(offset >= PAGE_SIZE);
+
 	while (len) {
 		unsigned int frag_size = min((unsigned)PAGE_SIZE - offset, len);
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, offset,
--
2.43.0
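
For readers unfamiliar with the big-packet path, the bound enforced in
the hunk above can be read as a small helper (hypothetical, shown only
to illustrate the layout described in the added comment; it is not part
of the patch):

	/* One page holds the virtio header, with data starting at
	 * `offset` within it, followed by vi->big_packets_num_skbfrags
	 * further pages of data, as set up by add_recvbuf_big().
	 */
	static unsigned int big_packet_max_len(const struct virtnet_info *vi,
					       unsigned int offset)
	{
		return ((unsigned int)PAGE_SIZE - offset) +
		       vi->big_packets_num_skbfrags * PAGE_SIZE;
	}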