Message-ID: <20260125214516.3384252-1-kuba@kernel.org>
Date: Sun, 25 Jan 2026 13:45:16 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: wei.fang@....com
Cc: Jakub Kicinski <kuba@...nel.org>,
john.fastabend@...il.com,
linux-kernel@...r.kernel.org,
shenwei.wang@....com,
daniel@...earbox.net,
frank.li@....com,
netdev@...r.kernel.org,
xiaoning.wang@....com,
ast@...nel.org,
sdf@...ichev.me,
imx@...ts.linux.dev,
davem@...emloft.net,
edumazet@...gle.com,
hawk@...nel.org,
bpf@...r.kernel.org,
pabeni@...hat.com,
andrew+netdev@...n.ch
Subject: Re: [v4,net-next,12/15] net: fec: move xdp_rxq_info* APIs out of fec_enet_create_page_pool()
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
net: fec: add fec_alloc_rxq_buffers_pp() to allocate buffers from page pool
This commit extracts fec_alloc_rxq_buffers_pp() from fec_enet_alloc_rxq_buffers()
to prepare for XDP zero-copy mode support. It also adds a fec_free_rxq_buffers()
helper for cleanup.
> diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
> index c5ba532e00a8..867694d6d54d 100644
> --- a/drivers/net/ethernet/freescale/fec_main.c
> +++ b/drivers/net/ethernet/freescale/fec_main.c
[ ... ]
> +static int
> +fec_enet_alloc_rxq_buffers(struct net_device *ndev, unsigned int queue)
> +{
> +	struct fec_enet_private *fep = netdev_priv(ndev);
> +	struct fec_enet_priv_rx_q *rxq;
> +	int err;
> +
> +	rxq = fep->rx_queue[queue];
> +	err = fec_alloc_rxq_buffers_pp(fep, rxq);
> +	if (err)
> +		return err;
> +
> +	err = fec_xdp_rxq_info_reg(fep, rxq);
> +	if (err) {
> +		fec_free_rxq_buffers(rxq);
> +
> +		return err;
> +	}
> +
> +	return 0;
> +}
Could there be a memory leak here on systems with multiple RX queues? The
old code had:
err_alloc:
	fec_enet_free_buffers(ndev);
	return -ENOMEM;
which freed all queues (both RX and TX). The new code calls
fec_free_rxq_buffers(rxq), which only frees the current queue.
fec_enet_alloc_buffers() calls this function in a loop over the RX queues.
If queue N fails after queues 0 through N-1 succeeded, only queue N gets
cleaned up; the caller just returns -ENOMEM without freeing the previously
allocated queues:
	for (i = 0; i < fep->num_rx_queues; i++)
		if (fec_enet_alloc_rxq_buffers(ndev, i))
			return -ENOMEM;
And fec_enet_open() jumps to err_enet_alloc, which does not call
fec_enet_free_buffers(), leaving the page pools and pages of the
successfully allocated queues leaked.
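One possible shape for the unwind, sketched purely for illustration (the
err_free_rxq label is hypothetical, not code from the patch, and the
xdp_rxq_info already registered for the earlier queues would presumably
need unregistering there as well):

	/* Illustrative sketch: unwind in fec_enet_alloc_buffers() so a
	 * failure at queue i releases queues 0..i-1 instead of leaking
	 * them.
	 */
	for (i = 0; i < fep->num_rx_queues; i++) {
		err = fec_enet_alloc_rxq_buffers(ndev, i);
		if (err)
			goto err_free_rxq;
	}
	return 0;

err_free_rxq:
	while (--i >= 0)
		fec_free_rxq_buffers(fep->rx_queue[i]);
	return err;

Alternatively, keeping the old behaviour of calling fec_enet_free_buffers(ndev)
on any failure would also release every queue.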