Message-ID: <20260125214521.3384305-1-kuba@kernel.org>
Date: Sun, 25 Jan 2026 13:45:21 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: wei.fang@....com
Cc: Jakub Kicinski <kuba@...nel.org>,
john.fastabend@...il.com,
linux-kernel@...r.kernel.org,
shenwei.wang@....com,
daniel@...earbox.net,
frank.li@....com,
netdev@...r.kernel.org,
xiaoning.wang@....com,
ast@...nel.org,
sdf@...ichev.me,
imx@...ts.linux.dev,
davem@...emloft.net,
edumazet@...gle.com,
hawk@...nel.org,
bpf@...r.kernel.org,
pabeni@...hat.com,
andrew+netdev@...n.ch
Subject: Re: [v4,net-next,15/15] net: fec: add AF_XDP zero-copy support
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
net: fec: add AF_XDP zero-copy support
This patch adds AF_XDP zero-copy support for both TX and RX on the FEC
driver, enabling higher performance packet processing through XSK pools.
> diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
> index c19ae60b04d7..d3e54dcf6a4c 100644
> --- a/drivers/net/ethernet/freescale/fec_main.c
> +++ b/drivers/net/ethernet/freescale/fec_main.c
[ ... ]
> +static int fec_setup_xsk_pool(struct net_device *ndev,
> + struct xsk_buff_pool *pool,
> + u16 queue)
> +{
> + struct fec_enet_private *fep = netdev_priv(ndev);
> + bool is_run = netif_running(ndev);
> + struct fec_enet_priv_rx_q *rxq;
> + struct fec_enet_priv_tx_q *txq;
> + bool enable = !!pool;
> + int err;
> +
> + if (queue >= fep->num_rx_queues || queue >= fep->num_tx_queues)
> + return -ERANGE;
> +
> + if (is_run) {
> + napi_disable(&fep->napi);
> + netif_tx_disable(ndev);
> + synchronize_rcu();
> + fec_enet_free_buffers(ndev);
> + }
> +
> + rxq = fep->rx_queue[queue];
> + txq = fep->tx_queue[queue];
> +
> + if (enable) {
> + err = xsk_pool_dma_map(pool, &fep->pdev->dev, 0);
> + if (err) {
> + netdev_err(ndev, "Failed to map xsk pool\n");
> + return err;
^^^^^^^^^^
If xsk_pool_dma_map() fails here when the device was running (is_run ==
true), the function returns without restoring the device state. At this
point NAPI is disabled, TX queues are disabled, and all buffers have been
freed. Should the error path restore the device to a working state similar
to the err_alloc_buffers path below?
The err_alloc_buffers label handles cleanup of the xsk_pool state, but it
looks like this early-return path would also need to call
fec_enet_alloc_buffers(), fec_restart(), napi_enable(), and
netif_tx_start_all_queues() to restore the device when is_run was true.
> + }
> +
> + rxq->xsk_pool = pool;
> + txq->xsk_pool = pool;
> + } else {
> + xsk_pool_dma_unmap(rxq->xsk_pool, 0);
> + rxq->xsk_pool = NULL;
> + txq->xsk_pool = NULL;
> + }
> +
> + if (is_run) {
> + err = fec_enet_alloc_buffers(ndev);
> + if (err) {
> + netdev_err(ndev, "Failed to alloc buffers\n");
> + goto err_alloc_buffers;
> + }
> +
> + fec_restart(ndev);
> + napi_enable(&fep->napi);
> + netif_tx_start_all_queues(ndev);
> + }
> +
> + return 0;
> +
> +err_alloc_buffers:
> + if (enable) {
> + xsk_pool_dma_unmap(pool, 0);
> + rxq->xsk_pool = NULL;
> + txq->xsk_pool = NULL;
> + }
> +
> + return err;
> +}