Message-Id: <20251003140243.2534865-2-maciej.fijalkowski@intel.com>
Date: Fri, 3 Oct 2025 16:02:42 +0200
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
To: bpf@...r.kernel.org,
ast@...nel.org,
daniel@...earbox.net,
hawk@...nel.org,
ilias.apalodimas@...aro.org,
toke@...hat.com,
lorenzo@...nel.org
Cc: netdev@...r.kernel.org,
magnus.karlsson@...el.com,
andrii@...nel.org,
stfomichev@...il.com,
aleksander.lobakin@...el.com,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
syzbot+ff145014d6b0ce64a173@...kaller.appspotmail.com,
Ihor Solodrai <ihor.solodrai@...ux.dev>,
Octavian Purdila <tavip@...gle.com>
Subject: [PATCH bpf 1/2] xdp: update xdp_rxq_info's mem type in XDP generic hook

Currently, the generic XDP hook uses xdp_rxq_info from netstack Rx queues,
which do not have their XDP memory model registered. There is a case where
an XDP program calls the bpf_xdp_adjust_tail() BPF helper in a way that
releases underlying memory: when the XDP buffer has fragments and the
helper shrinks the buffer by enough bytes. For this action, knowledge of
the memory model passed to the XDP program is crucial, so that the core can
call a suitable function for freeing or recycling the page.
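
To illustrate why the mem type matters, below is a simplified sketch of the
kind of dispatch __xdp_return() in net/core/xdp.c performs when a fragment
is released. The real function takes different arguments and handles more
cases (e.g. MEM_TYPE_XSK_BUFF_POOL), so treat this as an approximation, not
the exact upstream code:

  /* Simplified sketch only; the pool is passed in explicitly here,
   * whereas the kernel derives it from the page itself.
   */
  static void xdp_return_sketch(struct page_pool *pool, void *data,
  				enum xdp_mem_type type)
  {
  	struct page *page = virt_to_head_page(data);

  	switch (type) {
  	case MEM_TYPE_PAGE_POOL:
  		/* recycle the page back into its page_pool */
  		page_pool_put_full_page(pool, page, false);
  		break;
  	case MEM_TYPE_PAGE_SHARED:
  		/* plain page fragment, just drop a reference */
  		page_frag_free(data);
  		break;
  	default:
  		put_page(page);
  		break;
  	}
  }

With mem.type left at MEM_TYPE_PAGE_SHARED, a page that actually belongs to
a page_pool takes the page_frag_free() branch and never returns to its
pool.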

For netstack queues the memory type defaults to MEM_TYPE_PAGE_SHARED (0)
due to the lack of memory model registration. The problem fixed here arises
when the kernel has copied the skb into a new buffer backed by the system's
page_pool and the XDP buffer is built around it. When
bpf_xdp_adjust_tail() then calls __xdp_return(), it acts incorrectly
because the mem type is not set to MEM_TYPE_PAGE_POOL, and the page is
leaked.
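
The failing sequence, roughly (function names from net/core, condensed for
readability):

  /*
   * netif_skb_check_for_xdp()
   *   skb_pp_cow_data()          <- skb copied into system page_pool pages
   * bpf_prog_run_generic_xdp()
   *   (rxq->mem.type == 0 == MEM_TYPE_PAGE_SHARED, never registered)
   *   bpf_xdp_adjust_tail()      <- shrinks away a fragment
   *     __xdp_return()           <- frees as PAGE_SHARED instead of
   *                                 recycling to the page_pool: page leak
   */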

To address this, introduce a small helper, xdp_update_mem_type(), that
could also be used at other callsites, such as veth, which are open to the
same problem. Here, call it right before executing the XDP program in the
generic XDP hook.
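
For instance, a follow-up in veth (hypothetical, not part of this patch)
could mirror the hunk below, since veth_xdp_rcv_skb() likewise runs an XDP
program on a buffer built from skb pages of mixed origin:

  /* Hypothetical veth counterpart of the net/core/dev.c hunk */
  xdp_update_mem_type(xdp);
  act = bpf_prog_run_xdp(xdp_prog, xdp);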

This problem was triggered by syzbot as well as by the AF_XDP test suite,
which is about to be integrated into BPF CI.

Reported-by: syzbot+ff145014d6b0ce64a173@...kaller.appspotmail.com
Closes: https://lore.kernel.org/netdev/6756c37b.050a0220.a30f1.019a.GAE@google.com/
Fixes: e6d5dbdd20aa ("xdp: add multi-buff support for xdp running in generic mode")
Tested-by: Ihor Solodrai <ihor.solodrai@...ux.dev>
Co-developed-by: Octavian Purdila <tavip@...gle.com>
Signed-off-by: Octavian Purdila <tavip@...gle.com> # whole analysis, testing, initiating a fix
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com> # commit msg and proposed more robust fix
---
 include/net/xdp.h | 7 +++++++
 net/core/dev.c    | 2 ++
 2 files changed, 9 insertions(+)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index f288c348a6c1..5568e41cc191 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -336,6 +336,13 @@ xdp_update_skb_shared_info(struct sk_buff *skb, u8 nr_frags,
 	skb->pfmemalloc |= pfmemalloc;
 }
 
+static inline void
+xdp_update_mem_type(struct xdp_buff *xdp)
+{
+	xdp->rxq->mem.type = page_pool_page_is_pp(virt_to_page(xdp->data)) ?
+			     MEM_TYPE_PAGE_POOL : MEM_TYPE_PAGE_SHARED;
+}
+
 /* Avoids inlining WARN macro in fast-path */
 void xdp_warn(const char *msg, const char *func, const int line);
 #define XDP_WARN(msg) xdp_warn(msg, __func__, __LINE__)
diff --git a/net/core/dev.c b/net/core/dev.c
index 93a25d87b86b..076cd4a4b73f 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5269,6 +5269,8 @@ u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp,
 	orig_bcast = is_multicast_ether_addr_64bits(eth->h_dest);
 	orig_eth_type = eth->h_proto;
 
+	xdp_update_mem_type(xdp);
+
 	act = bpf_prog_run_xdp(xdp_prog, xdp);
 
 	/* check if bpf_xdp_adjust_head was used */
--
2.43.0