Message-ID: <20240927143308.emkgu7x5ybjnqaty@skbuf>
Date: Fri, 27 Sep 2024 17:33:08 +0300
From: Vladimir Oltean <vladimir.oltean@....com>
To: Wei Fang <wei.fang@....com>
Cc: Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>,
Claudiu Manoil <claudiu.manoil@....com>,
"ast@...nel.org" <ast@...nel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"hawk@...nel.org" <hawk@...nel.org>,
"john.fastabend@...il.com" <john.fastabend@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
"imx@...ts.linux.dev" <imx@...ts.linux.dev>
Subject: Re: [PATCH net 3/3] net: enetc: reset xdp_tx_in_flight when updating bpf program

Hi Wei,

On Mon, Sep 23, 2024 at 04:59:56AM +0300, Wei Fang wrote:
> Okay, I have tested this solution (see changes below), and from what I observed,
> the xdp_tx_in_flight can naturally drop to 0 in every test. So if there are no other
> risks, the next version will use this solution.
>

Sorry for the delay. I have tested this variant and it works. Just one
thing below.

> @@ -2467,10 +2469,6 @@ void enetc_start(struct net_device *ndev)
>  	struct enetc_ndev_priv *priv = netdev_priv(ndev);
>  	int i;
>  
> -	enetc_setup_interrupts(priv);
> -
> -	enetc_enable_tx_bdrs(priv);
> -
>  	for (i = 0; i < priv->bdr_int_num; i++) {
>  		int irq = pci_irq_vector(priv->si->pdev,
>  					 ENETC_BDR_INT_BASE_IDX + i);
> @@ -2479,6 +2477,10 @@ void enetc_start(struct net_device *ndev)
>  		enable_irq(irq);
>  	}
>  
> +	enetc_setup_interrupts(priv);
> +
> +	enetc_enable_tx_bdrs(priv);
> +
>  	enetc_enable_rx_bdrs(priv);
>  
>  	netif_tx_start_all_queues(ndev);
> @@ -2547,6 +2549,12 @@ void enetc_stop(struct net_device *ndev)
>  
>  	enetc_disable_rx_bdrs(priv);
>  
> +	enetc_wait_bdrs(priv);
> +
> +	enetc_disable_tx_bdrs(priv);
> +
> +	enetc_clear_interrupts(priv);

Here, NAPI may still be scheduled, so if you clear the interrupts at
this point, enetc_poll() running on another CPU may still have time to
re-enable them, which makes the call pointless. Please move the
enetc_clear_interrupts() call after the for() loop below (AKA leave it
where it is); see the sketch after the quoted hunk.

> +
>  	for (i = 0; i < priv->bdr_int_num; i++) {
>  		int irq = pci_irq_vector(priv->si->pdev,
>  					 ENETC_BDR_INT_BASE_IDX + i);
> @@ -2555,12 +2563,6 @@ void enetc_stop(struct net_device *ndev)
>  		disable_irq(irq);
>  		napi_synchronize(&priv->int_vector[i]->napi);
>  		napi_disable(&priv->int_vector[i]->napi);
>  	}
> -
> -	enetc_wait_bdrs(priv);
> -
> -	enetc_disable_tx_bdrs(priv);
> -
> -	enetc_clear_interrupts(priv);
>  }
>  EXPORT_SYMBOL_GPL(enetc_stop);
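
In other words, the tail of enetc_stop() would end up looking roughly
like this (untested sketch pieced together from the hunks above, with
the earlier steps elided):

	enetc_disable_rx_bdrs(priv);

	enetc_wait_bdrs(priv);

	enetc_disable_tx_bdrs(priv);

	for (i = 0; i < priv->bdr_int_num; i++) {
		int irq = pci_irq_vector(priv->si->pdev,
					 ENETC_BDR_INT_BASE_IDX + i);

		disable_irq(irq);
		napi_synchronize(&priv->int_vector[i]->napi);
		napi_disable(&priv->int_vector[i]->napi);
	}

	/* No enetc_poll() can be running at this point, so nothing can
	 * re-enable the interrupts behind our back.
	 */
	enetc_clear_interrupts(priv);
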
FWIW, there are at least 2 other valid ways of solving this problem. One
has already been mentioned (reset the counter in enetc_free_rx_ring()):

@@ -2014,6 +2015,8 @@ static void enetc_free_rx_ring(struct enetc_bdr *rx_ring)
 		__free_page(rx_swbd->page);
 		rx_swbd->page = NULL;
 	}
+
+	rx_ring->xdp.xdp_tx_in_flight = 0;
 }
 
 static void enetc_free_rxtx_rings(struct enetc_ndev_priv *priv)

And the other would be to keep rescheduling NAPI until there are no more
pending XDP_TX frames.

diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index 3cff76923ab9..36520f8c49a6 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -1689,6 +1689,7 @@ static int enetc_poll(struct napi_struct *napi, int budget)
 		work_done = enetc_clean_rx_ring_xdp(rx_ring, napi, budget, prog);
 	else
 		work_done = enetc_clean_rx_ring(rx_ring, napi, budget);
-	if (work_done == budget)
+	if (work_done == budget || rx_ring->xdp.xdp_tx_in_flight)
 		complete = false;
 	if (work_done)

But I like your second proposal the best. It doesn't involve adding an
unnecessary extra test in the fast path.