Message-ID: <70b71e7bb8a7dff2dacab99b0746e7bf2bee9344.camel@gmail.com>
Date: Tue, 25 Jul 2023 09:51:44 -0700
From: Alexander H Duyck <alexander.duyck@...il.com>
To: Wei Fang <wei.fang@....com>, davem@...emloft.net,
edumazet@...gle.com, kuba@...nel.org, shenwei.wang@....com,
xiaoning.wang@....com, pabeni@...hat.com, netdev@...r.kernel.org
Cc: linux-imx@....com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net] net: fec: tx processing does not call XDP APIs if
budget is 0
On Tue, 2023-07-25 at 15:41 +0800, Wei Fang wrote:
> According to the clarification [1] in the latest napi.rst, the tx
> processing cannot call any XDP (or page pool) APIs if the "budget"
> is 0. A budget of 0 (as used by netpoll) indicates that NAPI may be
> running in IRQ context, and the page pool cannot be used from IRQ
> context.
>
> [1] https://lore.kernel.org/all/20230720161323.2025379-1-kuba@kernel.org/
>
> Fixes: 20f797399035 ("net: fec: recycle pages for transmitted XDP frames")
> Signed-off-by: Wei Fang <wei.fang@....com>
> Suggested-by: Jakub Kicinski <kuba@...nel.org>
> ---
> drivers/net/ethernet/freescale/fec_main.c | 16 ++++++++++++----
> 1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
> index 073d61619336..66b5cbdb43b9 100644
> --- a/drivers/net/ethernet/freescale/fec_main.c
> +++ b/drivers/net/ethernet/freescale/fec_main.c
> @@ -1372,7 +1372,7 @@ fec_enet_hwtstamp(struct fec_enet_private *fep, unsigned ts,
> }
>
> static void
> -fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
> +fec_enet_tx_queue(struct net_device *ndev, u16 queue_id, int budget)
> {
> struct fec_enet_private *fep;
> struct xdp_frame *xdpf;
> @@ -1416,6 +1416,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
> if (!skb)
> goto tx_buf_done;
> } else {
> + /* Tx processing cannot call any XDP (or page pool) APIs if
> + * the "budget" is 0. A budget of 0 (as used by netpoll)
> + * indicates we may be in IRQ context, and the page pool
> + * can't be used from IRQ context.
> + */
> + if (unlikely(!budget))
> + break;
> +
> xdpf = txq->tx_buf[index].xdp;
> if (bdp->cbd_bufaddr)
> dma_unmap_single(&fep->pdev->dev,
This statement isn't correct. There are NAPI-enabled and non-NAPI
versions of these calls. That is the reason for things like the
"allow_direct" parameter in page_pool_put_full_page() and the
"napi_direct" parameter in __xdp_return().
By blocking on these cases you can end up hanging the Tx queue, which
is going to break netpoll: if XDP packets are already sitting in the
queue you will stall the ring instead of completing them.
From what I can tell, your driver is using xdp_return_frame() for XDP
frames, which doesn't make use of the NAPI optimizations when freeing.
The NAPI-optimized version is xdp_return_frame_rx_napi().
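
So rather than skipping the cleanup entirely when budget is 0 you
could pick the freeing function based on context, along these lines
(untested, just going off the tx_buf layout in your diff):

	struct xdp_frame *xdpf = txq->tx_buf[index].xdp;

	/* Netpoll calls us with budget == 0, so only take the
	 * lockless NAPI fast path when we actually have a budget.
	 */
	if (budget)
		xdp_return_frame_rx_napi(xdpf);
	else
		xdp_return_frame(xdpf);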
> @@ -1508,14 +1516,14 @@ fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
> writel(0, txq->bd.reg_desc_active);
> }
>
> -static void fec_enet_tx(struct net_device *ndev)
> +static void fec_enet_tx(struct net_device *ndev, int budget)
> {
> struct fec_enet_private *fep = netdev_priv(ndev);
> int i;
>
> /* Make sure that AVB queues are processed first. */
> for (i = fep->num_tx_queues - 1; i >= 0; i--)
> - fec_enet_tx_queue(ndev, i);
> + fec_enet_tx_queue(ndev, i, budget);
> }
>
> static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
> @@ -1858,7 +1866,7 @@ static int fec_enet_rx_napi(struct napi_struct *napi, int budget)
>
> do {
> done += fec_enet_rx(ndev, budget - done);
> - fec_enet_tx(ndev);
> + fec_enet_tx(ndev, budget);
> } while ((done < budget) && fec_enet_collect_events(fep));
>
> if (done < budget) {
Since you are passing budget, one optimization you could make would be
to use napi_consume_skb() in your Tx path instead of dev_kfree_skb_any().
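
If I remember correctly napi_consume_skb() already handles the
budget == 0 case internally by falling back to dev_consume_skb_any(),
so it stays netpoll-safe while batching the frees in the NAPI case:

	/* Batches frees when budget != 0, falls back to
	 * dev_consume_skb_any() when budget == 0 (netpoll).
	 */
	napi_consume_skb(skb, budget);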