Message-ID:
<PAXPR04MB85104AD2673A1D58131CEBD48893A@PAXPR04MB8510.eurprd04.prod.outlook.com>
Date: Mon, 26 Jan 2026 02:37:01 +0000
From: Wei Fang <wei.fang@....com>
To: Jakub Kicinski <kuba@...nel.org>
CC: "john.fastabend@...il.com" <john.fastabend@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, Shenwei Wang
<shenwei.wang@....com>, "daniel@...earbox.net" <daniel@...earbox.net>, Frank
Li <frank.li@....com>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Clark Wang <xiaoning.wang@....com>, "ast@...nel.org" <ast@...nel.org>,
"sdf@...ichev.me" <sdf@...ichev.me>, "imx@...ts.linux.dev"
<imx@...ts.linux.dev>, "davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>, "hawk@...nel.org"
<hawk@...nel.org>, "bpf@...r.kernel.org" <bpf@...r.kernel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>, "andrew+netdev@...n.ch"
<andrew+netdev@...n.ch>
Subject: RE: [v4,net-next,13/15] net: fec: add fec_alloc_rxq_buffers_pp() to
allocate buffers from page pool
> net: fec: add AF_XDP zero-copy support
>
> This patch adds AF_XDP zero-copy support for both TX and RX on the FEC
> ethernet driver. RX buffers are allocated from the XSK pool instead of
> the page pool, and TX supports both frames from AF_XDP sockets and
> XDP_TX action.
>
> > diff --git a/drivers/net/ethernet/freescale/fec_main.c
> b/drivers/net/ethernet/freescale/fec_main.c
> > index c19ae60b04d7..d3e54dcf6a4c 100644
>
> [ ... ]
>
> > +static int fec_setup_xsk_pool(struct net_device *ndev,
> > + struct xsk_buff_pool *pool,
> > + u16 queue)
> > +{
> > + struct fec_enet_private *fep = netdev_priv(ndev);
> > + bool is_run = netif_running(ndev);
> > + struct fec_enet_priv_rx_q *rxq;
> > + struct fec_enet_priv_tx_q *txq;
> > + bool enable = !!pool;
> > + int err;
> > +
> > + if (queue >= fep->num_rx_queues || queue >= fep->num_tx_queues)
> > + return -ERANGE;
> > +
> > + if (is_run) {
> > + napi_disable(&fep->napi);
> > + netif_tx_disable(ndev);
> > + synchronize_rcu();
> > + fec_enet_free_buffers(ndev);
> > + }
> > +
> > + rxq = fep->rx_queue[queue];
> > + txq = fep->tx_queue[queue];
> > +
> > + if (enable) {
> > + err = xsk_pool_dma_map(pool, &fep->pdev->dev, 0);
> > + if (err) {
> > + netdev_err(ndev, "Failed to map xsk pool\n");
> > + return err;
> > + }
>
> If xsk_pool_dma_map() fails here when is_run was true, NAPI has already
> been disabled, TX has been disabled, and buffers have been freed. The
> early return leaves the device in a non-functional state with no way to
> recover except manually bringing the interface down and up.
>
> Should this error path restore the device state by calling fec_restart(),
> napi_enable(), and netif_tx_start_all_queues() before returning, similar
> to what is done after fec_enet_alloc_buffers() succeeds?
I will fix this issue, thanks.
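As a rough sketch of the ordering you are suggesting: every stop step taken under is_run (NAPI off, TX off, buffers freed) gets undone on the error path before returning. The model below is plain user-space C with stub names standing in for the real driver calls (napi_disable(), netif_tx_disable(), fec_enet_free_buffers(), fec_restart(), etc.), not the actual kernel API:

```c
#include <stdbool.h>

/* Stand-in state flags; in the driver these are the real NAPI/TX/ring states. */
static bool napi_enabled = true;
static bool tx_enabled = true;
static bool buffers_allocated = true;

static void napi_disable_stub(void)  { napi_enabled = false; }
static void napi_enable_stub(void)   { napi_enabled = true;  }
static void tx_disable_stub(void)    { tx_enabled = false;   }
static void tx_start_stub(void)      { tx_enabled = true;    }
static void free_buffers_stub(void)  { buffers_allocated = false; }
static int  alloc_buffers_stub(void) { buffers_allocated = true; return 0; }
static void restart_stub(void)       { /* placeholder for fec_restart() */ }

/* xsk_pool_dma_map() stand-in; 'fail' lets a caller exercise the error path. */
static int dma_map_stub(bool fail) { return fail ? -12 /* -ENOMEM */ : 0; }

static int setup_xsk_pool(bool is_run, bool dma_map_fails)
{
	int err;

	if (is_run) {
		napi_disable_stub();
		tx_disable_stub();
		free_buffers_stub();
	}

	err = dma_map_stub(dma_map_fails);
	if (err)
		goto err_restore;

	if (is_run) {
		err = alloc_buffers_stub();
		if (err)
			goto err_restore;
		restart_stub();
		napi_enable_stub();
		tx_start_stub();
	}

	return 0;

err_restore:
	/* The point under discussion: instead of returning with NAPI/TX
	 * disabled and no buffers, bring the device back up first. */
	if (is_run) {
		alloc_buffers_stub();
		restart_stub();
		napi_enable_stub();
		tx_start_stub();
	}
	return err;
}
```

After a failed dma map with is_run true, the model returns the error but leaves all three state flags restored, which is the behavior the review asks for. (Note the err_restore path calls the allocator again, which ties into the second question below about re-allocation possibly failing too.)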
>
> > +
> > + rxq->xsk_pool = pool;
> > + txq->xsk_pool = pool;
> > + } else {
> > + xsk_pool_dma_unmap(rxq->xsk_pool, 0);
> > + rxq->xsk_pool = NULL;
> > + txq->xsk_pool = NULL;
> > + }
> > +
> > + if (is_run) {
> > + err = fec_enet_alloc_buffers(ndev);
> > + if (err) {
> > + netdev_err(ndev, "Failed to alloc buffers\n");
> > + goto err_alloc_buffers;
> > + }
> > +
> > + fec_restart(ndev);
> > + napi_enable(&fep->napi);
> > + netif_tx_start_all_queues(ndev);
> > + }
> > +
> > + return 0;
> > +
> > +err_alloc_buffers:
> > + if (enable) {
> > + xsk_pool_dma_unmap(pool, 0);
> > + rxq->xsk_pool = NULL;
> > + txq->xsk_pool = NULL;
> > + }
> > +
> > + return err;
>
> Similarly, the err_alloc_buffers path cleans up the XSK pool state but
> does not restore NAPI/TX or restart the device when is_run was true.
> Does this leave the device in the same non-functional state?
>
The current driver does not support allocating new buffers first and then
swapping out the old ones, so calling fec_enet_alloc_buffers() again in the
err_alloc_buffers path may fail just as the first call did, still leaving the
device unusable. I will think about how to rework this. Thanks.
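For what it is worth, the generic allocate-before-free pattern avoids this failure mode entirely: build the replacement ring first, and only tear down the old one once the new one is ready, so a failed allocation leaves the old (working) buffers in place. The sketch below is a plain user-space model of that pattern with a made-up `struct ring`, not the FEC driver's actual data layout:

```c
#include <stdlib.h>

/* Hypothetical buffer ring; stands in for the driver's RX/TX queue buffers. */
struct ring {
	void **bufs;
	int n;
};

static int ring_alloc(struct ring *r, int n)
{
	r->bufs = calloc(n, sizeof(*r->bufs));
	if (!r->bufs)
		return -1;
	for (int i = 0; i < n; i++) {
		r->bufs[i] = malloc(64);
		if (!r->bufs[i]) {
			/* Unwind the partial allocation. */
			while (i--)
				free(r->bufs[i]);
			free(r->bufs);
			r->bufs = NULL;
			return -1;
		}
	}
	r->n = n;
	return 0;
}

static void ring_free(struct ring *r)
{
	for (int i = 0; i < r->n; i++)
		free(r->bufs[i]);
	free(r->bufs);
	r->bufs = NULL;
	r->n = 0;
}

/* Replace *old with a freshly allocated ring of new_n buffers.
 * On failure the old ring is untouched, so the device keeps working. */
static int ring_swap(struct ring *old, int new_n)
{
	struct ring fresh;

	if (ring_alloc(&fresh, new_n))
		return -1;	/* old buffers still intact */

	ring_free(old);		/* only now release the old ring */
	*old = fresh;
	return 0;
}
```

Applying this to the driver would mean restructuring the queue setup so the new buffers exist before fec_enet_free_buffers() runs, which may or may not be practical given the hardware descriptor handling; it is only offered as a direction.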