Message-ID: <20260119193626.1255791-1-kuba@kernel.org>
Date: Mon, 19 Jan 2026 11:36:26 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: pvalerio@...hat.com
Cc: Jakub Kicinski <kuba@...nel.org>,
davem@...emloft.net,
andrew+netdev@...n.ch,
pabeni@...hat.com,
lorenzo@...nel.org,
claudiu.beznea@...on.dev,
nicolas.ferre@...rochip.com,
theo.lebrun@...tlin.com,
netdev@...r.kernel.org,
edumazet@...gle.com
Subject: Re: [net-next,8/8] cadence: macb: introduce xmit support
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
cadence: macb: introduce xmit support
This patch adds XDP_TX verdict support and introduces an .ndo_xdp_xmit
callback for XDP redirect. It also updates macb_tx_unmap() to handle both
skbs and XDP frames.
> diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
> index afd8c0f2d895..32f8629bcb25 100644
> --- a/drivers/net/ethernet/cadence/macb_main.c
> +++ b/drivers/net/ethernet/cadence/macb_main.c
[ ... ]
> @@ -983,7 +994,7 @@ static void macb_tx_unmap(struct macb *bp, struct macb_tx_buff *tx_buff,
> }
>
> if (tx_buff->ptr) {
> - napi_consume_skb(tx_buff->ptr, budget);
> + release_buff(tx_buff->ptr, tx_buff->type, budget);
> tx_buff->ptr = NULL;
> }
> }
[ ... ]
> +static int macb_xdp_submit_frame(struct macb *bp, struct xdp_frame *xdpf,
> + struct net_device *dev, bool dma_map,
> + dma_addr_t addr)
> +{
[ ... ]
> + if (dma_map) {
> + mapping = dma_map_single(&bp->pdev->dev,
> + xdpf->data,
> + xdpf->len, DMA_TO_DEVICE);
> + if (unlikely(dma_mapping_error(&bp->pdev->dev, mapping))) {
> + err = -ENOMEM;
> + goto unlock;
> + }
> + } else {
> + mapping = addr;
> + dma_sync_single_for_device(&bp->pdev->dev, mapping,
> + xdpf->len, DMA_BIDIRECTIONAL);
> + }
[ ... ]
> + tx_buff->ptr = xdpf;
> + tx_buff->type = buff_type;
> + tx_buff->mapping = mapping;
^^^^^^^
For XDP_TX (dma_map=false), the mapping here is the page pool's DMA address
passed via the addr parameter. Later, macb_tx_unmap() unconditionally calls
dma_unmap_single() when tx_buff->mapping is non-zero.
Should macb_tx_unmap() skip the DMA unmap for MACB_TYPE_XDP_TX buffers?
Page pool manages its own DMA mappings, and calling dma_unmap_single() on
them appears incorrect. For comparison, stmmac_tx_clean() explicitly
checks "buf_type != STMMAC_TXBUF_T_XDP_TX" before unmapping.
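Something along these lines might work in macb_tx_unmap() (untested sketch;
MACB_TYPE_XDP_TX and tx_buff->size are guessed from the quoted hunks and the
existing driver, and the existing mapped_as_page/dma_unmap_page branch is
elided, so the exact shape may differ):

	if (tx_buff->mapping) {
		/* The page pool owns the DMA mapping of XDP_TX buffers,
		 * leave it alone and only drop our reference to it.
		 */
		if (tx_buff->type != MACB_TYPE_XDP_TX)
			dma_unmap_single(&bp->pdev->dev, tx_buff->mapping,
					 tx_buff->size, DMA_TO_DEVICE);
		tx_buff->mapping = 0;
	}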
[ ... ]
> + case XDP_TX:
> + xdpf = xdp_convert_buff_to_frame(&xdp);
> +
> + if (!xdpf || macb_xdp_submit_frame(queue->bp, xdpf, dev, false,
> + addr))
> + act = XDP_DROP;
> + goto out;
When xdp_convert_buff_to_frame() returns NULL, this sets act = XDP_DROP but
then unconditionally executes "goto out", which skips the page_pool_put_full_page()
call that handles XDP_DROP. Could this leak the page when
xdp_convert_buff_to_frame() fails due to insufficient headroom?
When macb_xdp_submit_frame() fails, it internally calls release_buff(), which
returns the page, so that path seems fine. But the xdpf == NULL case appears
to need a "break" (so the page still reaches the XDP_DROP handling) rather
than the unconditional "goto out".
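One possible shape, based only on the quoted hunk and assuming the XDP_DROP
handling (including page_pool_put_full_page()) is reached via a plain break
out of the switch:

	case XDP_TX:
		xdpf = xdp_convert_buff_to_frame(&xdp);
		if (!xdpf) {
			/* No frame was built, so nothing has freed the page
			 * yet; let the XDP_DROP path return it to the pool.
			 */
			act = XDP_DROP;
			break;
		}
		if (macb_xdp_submit_frame(queue->bp, xdpf, dev, false, addr)) {
			/* submit_frame already released the page via
			 * release_buff(); don't free it again here.
			 */
			act = XDP_DROP;
		}
		goto out;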