Message-ID: <CAGWr4cQCp4OwF8ESCk4QtEmPUCkhgVXZitp5esDc++rgxUhO8A@mail.gmail.com>
Date: Thu, 25 Sep 2025 00:53:53 -0700
From: Octavian Purdila <tavip@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, edumazet@...gle.com, pabeni@...hat.com,
horms@...nel.org, ast@...nel.org, daniel@...earbox.net, hawk@...nel.org,
john.fastabend@...il.com, sdf@...ichev.me, ahmed.zaki@...el.com,
aleksander.lobakin@...el.com, toke@...hat.com, lorenzo@...nel.org,
netdev@...r.kernel.org, bpf@...r.kernel.org,
syzbot+ff145014d6b0ce64a173@...kaller.appspotmail.com,
Kuniyuki Iwashima <kuniyu@...gle.com>
Subject: Re: [PATCH net] xdp: use multi-buff only if receive queue supports
page pool
On Wed, Sep 24, 2025 at 5:09 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Wed, 24 Sep 2025 06:08:42 +0000 Octavian Purdila wrote:
> > When a BPF program that supports BPF_F_XDP_HAS_FRAGS calls
> > bpf_xdp_adjust_tail and a large packet is injected via /dev/net/tun,
> > a crash occurs due to detecting a bad page state (page_pool leak).
> >
> > This is because xdp_buff does not record the type of memory and
> > instead relies on the netdev receive queue xdp info. Since the TUN/TAP
> > driver uses a MEM_TYPE_PAGE_SHARED memory model for its buffers,
> > shrinking will eventually call page_frag_free. But with the current
> > multi-buff support for BPF_F_XDP_HAS_FRAGS programs, buffers are
> > allocated via the page pool.
> >
> > To fix this issue, check that the receive queue memory model is
> > MEM_TYPE_PAGE_POOL before using multi-buffs.
>
> This can also happen on veth, right? And veth re-stamps the Rx queues.
I am not sure that re-stamping will have ill effects.

The allocation and deallocation involved in this issue happen while
processing the same packet (receive skb -> skb_pp_cow_data ->
page_pool alloc ... __bpf_prog_run -> bpf_xdp_adjust_tail).

IIUC, if veth re-stamps the RX queue to MEM_TYPE_PAGE_POOL,
skb_pp_cow_data will proceed to allocate from the page_pool and
bpf_xdp_adjust_tail will correctly free back to the page_pool.