Message-ID: <20250924060843.2280499-1-tavip@google.com>
Date: Wed, 24 Sep 2025 06:08:42 +0000
From: Octavian Purdila <tavip@...gle.com>
To: kuba@...nel.org
Cc: davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com,
horms@...nel.org, ast@...nel.org, daniel@...earbox.net, hawk@...nel.org,
john.fastabend@...il.com, sdf@...ichev.me, uniyu@...gle.com,
ahmed.zaki@...el.com, aleksander.lobakin@...el.com, toke@...hat.com,
lorenzo@...nel.org, netdev@...r.kernel.org, bpf@...r.kernel.org,
Octavian Purdila <tavip@...gle.com>, syzbot+ff145014d6b0ce64a173@...kaller.appspotmail.com
Subject: [PATCH net] xdp: use multi-buff only if receive queue supports page pool
When a BPF program loaded with BPF_F_XDP_HAS_FRAGS calls
bpf_xdp_adjust_tail() and a large packet is injected via /dev/net/tun,
a crash occurs due to a bad page state being detected (a page_pool
leak).
This is because xdp_buff does not record the memory type and instead
relies on the xdp info of the netdev receive queue. Since the TUN/TAP
driver uses the MEM_TYPE_PAGE_SHARED memory model, shrinking the
buffer eventually calls page_frag_free(). But with the current
multi-buff support for BPF_F_XDP_HAS_FRAGS programs, the buffers are
allocated via the page pool.
To fix this issue, check that the receive queue memory type is
MEM_TYPE_PAGE_POOL before using multi-buffs.
Reported-by: syzbot+ff145014d6b0ce64a173@...kaller.appspotmail.com
Closes: https://lore.kernel.org/netdev/6756c37b.050a0220.a30f1.019a.GAE@google.com/
Fixes: e6d5dbdd20aa ("xdp: add multi-buff support for xdp running in generic mode")
Signed-off-by: Octavian Purdila <tavip@...gle.com>
---
net/core/dev.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index 8d49b2198d07..b195ee3068c2 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5335,13 +5335,18 @@ static int
 netif_skb_check_for_xdp(struct sk_buff **pskb, const struct bpf_prog *prog)
 {
 	struct sk_buff *skb = *pskb;
+	struct netdev_rx_queue *rxq;
 	int err, hroom, troom;
 
-	local_lock_nested_bh(&system_page_pool.bh_lock);
-	err = skb_cow_data_for_xdp(this_cpu_read(system_page_pool.pool), pskb, prog);
-	local_unlock_nested_bh(&system_page_pool.bh_lock);
-	if (!err)
-		return 0;
+	rxq = netif_get_rxqueue(skb);
+	if (rxq->xdp_rxq.mem.type == MEM_TYPE_PAGE_POOL) {
+		local_lock_nested_bh(&system_page_pool.bh_lock);
+		err = skb_cow_data_for_xdp(this_cpu_read(system_page_pool.pool),
+					   pskb, prog);
+		local_unlock_nested_bh(&system_page_pool.bh_lock);
+		if (!err)
+			return 0;
+	}
 
 	/* In case we have to go down the path and also linearize,
 	 * then lets do the pskb_expand_head() work just once here.
--
2.51.0.534.gc79095c0ca-goog