Message-ID: <20250924170914.20aac680@kernel.org>
Date: Wed, 24 Sep 2025 17:09:14 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Octavian Purdila <tavip@...gle.com>
Cc: davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com,
horms@...nel.org, ast@...nel.org, daniel@...earbox.net, hawk@...nel.org,
john.fastabend@...il.com, sdf@...ichev.me, uniyu@...gle.com,
ahmed.zaki@...el.com, aleksander.lobakin@...el.com, toke@...hat.com,
lorenzo@...nel.org, netdev@...r.kernel.org, bpf@...r.kernel.org,
syzbot+ff145014d6b0ce64a173@...kaller.appspotmail.com
Subject: Re: [PATCH net] xdp: use multi-buff only if receive queue supports
page pool
On Wed, 24 Sep 2025 06:08:42 +0000 Octavian Purdila wrote:
> When a BPF program that supports BPF_F_XDP_HAS_FRAGS issues
> bpf_xdp_adjust_tail and a large packet is injected via /dev/net/tun,
> a crash occurs because a bad page state (page_pool leak) is detected.
>
> This is because xdp_buff does not record the memory type and instead
> relies on the netdev receive queue xdp info. Since the TUN/TAP driver
> uses the MEM_TYPE_PAGE_SHARED memory model for its buffers, shrinking
> will eventually call page_frag_free. But with the current multi-buff
> support for BPF_F_XDP_HAS_FRAGS programs, buffers are allocated via
> the page pool.
>
> To fix this issue, check that the receive queue memory model is
> MEM_TYPE_PAGE_POOL before using multi-buffs.
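
A minimal sketch of the kind of gate described above, for illustration
only: the helper name and placement are assumptions, not the actual
patch. It only allows the multi-buff path when the receive queue was
registered with a page_pool memory model:

	#include <net/xdp.h>

	/* Hypothetical helper (not from the patch): multi-buff frags are
	 * page_pool pages, so only take the multi-buff path when the rxq
	 * was registered with MEM_TYPE_PAGE_POOL. Otherwise shrinking via
	 * bpf_xdp_adjust_tail would free frags through the wrong allocator
	 * (e.g. page_frag_free for MEM_TYPE_PAGE_SHARED).
	 */
	static inline bool xdp_rxq_has_page_pool(const struct xdp_rxq_info *rxq)
	{
		return rxq->mem.type == MEM_TYPE_PAGE_POOL;
	}
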
This can also happen on veth, right? And veth re-stamps the Rx queues.