Message-ID: <AM6PR0402MB3607A57923BA5997C79E9082FF6F0@AM6PR0402MB3607.eurprd04.prod.outlook.com>
Date: Tue, 30 Jun 2020 06:28:40 +0000
From: Andy Duan <fugang.duan@....com>
To: Tobias Waldekranz <tobias@...dekranz.com>,
"davem@...emloft.net" <davem@...emloft.net>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [EXT] [PATCH net] net: ethernet: fec: prevent tx starvation under
high rx load
From: Tobias Waldekranz <tobias@...dekranz.com> Sent: Tuesday, June 30, 2020 3:16 AM
> In the ISR, we poll the event register for the queues in need of service and
> then enter polled mode. After this point, the event register will never be read
> again until we exit polled mode.
>
> In a scenario where a UDP flow is routed back out through the same interface,
> i.e. "router-on-a-stick" we'll typically only see an rx queue event initially.
> Once we start to process the incoming flow we'll be locked in polled mode,
> but we'll never clean the tx rings since that event is never caught.
>
> Eventually the netdev watchdog will trip, causing all buffers to be dropped and
> then the process starts over again.
>
> By adding a poll of the active events at each NAPI call, we avoid the
> starvation.
>
> Fixes: 4d494cdc92b3 ("net: fec: change data structure to support multiqueue")
> Signed-off-by: Tobias Waldekranz <tobias@...dekranz.com>
Acked-by: Fugang Duan <fugang.duan@....com>
> ---
> drivers/net/ethernet/freescale/fec_main.c | 22 +++++++++++++---------
> 1 file changed, 13 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
> index 2d0d313ee7c5..f6e25c2d2c8c 100644
> --- a/drivers/net/ethernet/freescale/fec_main.c
> +++ b/drivers/net/ethernet/freescale/fec_main.c
> @@ -1617,8 +1617,17 @@ fec_enet_rx(struct net_device *ndev, int budget)
>  }
>
> static bool
> -fec_enet_collect_events(struct fec_enet_private *fep, uint int_events)
> +fec_enet_collect_events(struct fec_enet_private *fep)
>  {
> +	uint int_events;
> +
> +	int_events = readl(fep->hwp + FEC_IEVENT);
> +
> +	/* Don't clear MDIO events, we poll for those */
> +	int_events &= ~FEC_ENET_MII;
> +
> +	writel(int_events, fep->hwp + FEC_IEVENT);
> +
>  	if (int_events == 0)
>  		return false;
>
> @@ -1644,16 +1653,9 @@ fec_enet_interrupt(int irq, void *dev_id)
>  {
>  	struct net_device *ndev = dev_id;
>  	struct fec_enet_private *fep = netdev_priv(ndev);
> -	uint int_events;
>  	irqreturn_t ret = IRQ_NONE;
>
> -	int_events = readl(fep->hwp + FEC_IEVENT);
> -
> -	/* Don't clear MDIO events, we poll for those */
> -	int_events &= ~FEC_ENET_MII;
> -
> -	writel(int_events, fep->hwp + FEC_IEVENT);
> -	fec_enet_collect_events(fep, int_events);
> +	fec_enet_collect_events(fep);
>
>  	if ((fep->work_tx || fep->work_rx) && fep->link) {
>  		ret = IRQ_HANDLED;
> @@ -1674,6 +1676,8 @@ static int fec_enet_rx_napi(struct napi_struct *napi, int budget)
>  	struct fec_enet_private *fep = netdev_priv(ndev);
>  	int pkts;
>
> +	fec_enet_collect_events(fep);
> +
>  	pkts = fec_enet_rx(ndev, budget);
>
>  	fec_enet_tx(ndev);
> --
> 2.17.1