Message-Id: <e9f13992d679d08d193cb40c15c70fbf4adfe9d94a6235b20858fbe161be58a1@mail.kernel.org>
Date: Wed, 29 Oct 2025 22:59:36 +0000 (UTC)
From: bot+bpf-ci@...nel.org
To: maciej.fijalkowski@...el.com,bpf@...r.kernel.org,ast@...nel.org,daniel@...earbox.net,hawk@...nel.org
Cc: netdev@...r.kernel.org,magnus.karlsson@...el.com,aleksander.lobakin@...el.com,ilias.apalodimas@...aro.org,toke@...hat.com,lorenzo@...nel.org,kuba@...nel.org,maciej.fijalkowski@...el.com,ast@...nel.org,andrii@...nel.org,daniel@...earbox.net,martin.lau@...nel.org,eddyz87@...il.com,yonghong.song@...ux.dev,clm@...a.com,ihor.solodrai@...ux.dev
Subject: Re: [PATCH v5 bpf 2/2] veth: update mem type in xdp_buff
```
commit 96d6e22337a81e8606de33d22173ff80ab961bc8
Author: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
veth: update mem type in xdp_buff
The commit fixes a page_pool leak by properly detecting page_pool memory
and using appropriate reference counting APIs. The xdp_buff mem type is
now dynamically set based on skb->pp_recycle instead of always using
MEM_TYPE_PAGE_SHARED.
Closes: https://lore.kernel.org/bpf/CAADnVQ+bBofJDfieyOYzSmSujSfJwDTQhiz3aJw7hE+4E2_iPA@mail.gmail.com/
> diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> index a3046142c..187f30e2c 100644
> --- a/drivers/net/veth.c
> +++ b/drivers/net/veth.c
[ ... ]
> @@ -733,7 +733,7 @@ static void veth_xdp_rcv_bulk_skb(struct veth_rq *rq, void **frames,
> }
> }
>
> -static void veth_xdp_get(struct xdp_buff *xdp)
> +static void veth_xdp_get_shared(struct xdp_buff *xdp)
> {
> struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
> int i;
> @@ -746,12 +746,33 @@ static void veth_xdp_get(struct xdp_buff *xdp)
> __skb_frag_ref(&sinfo->frags[i]);
> }
>
> +static void veth_xdp_get_pp(struct xdp_buff *xdp)
> +{
> + struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
> + int i;
> +
> + page_pool_ref_page(virt_to_page(xdp->data));
> + if (likely(!xdp_buff_has_frags(xdp)))
> + return;
> +
> + for (i = 0; i < sinfo->nr_frags; i++) {
> + skb_frag_t *frag = &sinfo->frags[i];
> +
> + page_pool_ref_page(netmem_to_page(frag->netmem));
> + }
> +}
> +
> +static void veth_xdp_get(struct xdp_buff *xdp)
> +{
> + xdp->rxq->mem.type == MEM_TYPE_PAGE_POOL ?
> + veth_xdp_get_pp(xdp) : veth_xdp_get_shared(xdp);
> +}
> +
[ ... ]
> @@ -822,24 +831,24 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
> case XDP_TX:
> veth_xdp_get(xdp);
> consume_skb(skb);
> - xdp->rxq->mem = rq->xdp_mem;
> if (unlikely(veth_xdp_tx(rq, xdp, bq) < 0)) {
> trace_xdp_exception(rq->dev, xdp_prog, act);
> stats->rx_drops++;
> goto err_xdp;
> }
Can veth_xdp_rcv_skb() leak page references in the XDP_TX error path?
When veth_xdp_tx() fails, the code jumps to err_xdp without releasing
the page references acquired by veth_xdp_get(). The skb was already
consumed, so the pages still have the extra references from either
veth_xdp_get_pp() or veth_xdp_get_shared().
veth_xdp_tx() can return -EOVERFLOW when xdp_convert_buff_to_frame()
returns NULL, which is a reachable error path.
> stats->xdp_tx++;
> + rq->xdp_rxq.mem = rq->xdp_mem;
> rcu_read_unlock();
> goto xdp_xmit;
> case XDP_REDIRECT:
> veth_xdp_get(xdp);
> consume_skb(skb);
> - xdp->rxq->mem = rq->xdp_mem;
> if (xdp_do_redirect(rq->dev, xdp, xdp_prog)) {
> stats->rx_drops++;
> goto err_xdp;
> }
Similar question for the XDP_REDIRECT case - when xdp_do_redirect()
fails, does this also leak the page references taken by veth_xdp_get()?
> stats->xdp_redirect++;
> + rq->xdp_rxq.mem = rq->xdp_mem;
> rcu_read_unlock();
> goto xdp_xmit;
> default:
```
---
AI reviewed your patch. Please fix the bug, or reply to this email explaining why it is not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
In-Reply-To-Subject: `veth: update mem type in xdp_buff`
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/18923812102