Message-ID: <alpine.LNX.2.21.1.2001221021590.8@nippy.intranet>
Date: Wed, 22 Jan 2020 10:33:53 +1100 (AEDT)
From: Finn Thain <fthain@...egraphics.com.au>
To: Eric Dumazet <eric.dumazet@...il.com>
cc: "David S. Miller" <davem@...emloft.net>,
Thomas Bogendoerfer <tsbogend@...ha.franken.de>,
Chris Zankel <chris@...kel.net>,
Laurent Vivier <laurent@...ier.eu>,
Geert Uytterhoeven <geert@...ux-m68k.org>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net v2 01/12] net/sonic: Add mutual exclusion for accessing shared state

On Tue, 21 Jan 2020, Eric Dumazet wrote:
> On 1/21/20 1:22 PM, Finn Thain wrote:
> > The netif_stop_queue() call in sonic_send_packet() races with the
> > netif_wake_queue() call in sonic_interrupt(). This causes issues
> > like "NETDEV WATCHDOG: eth0 (macsonic): transmit queue 0 timed out".
> > Fix this by disabling interrupts when accessing tx_skb[] and next_tx.
> > Update a comment to clarify the synchronization properties.
> >
> > Fixes: efcce839360f ("[PATCH] macsonic/jazzsonic network drivers update")
> > Tested-by: Stan Johnson <userm57@...oo.com>
> > Signed-off-by: Finn Thain <fthain@...egraphics.com.au>
>
> > @@ -284,9 +287,16 @@ static irqreturn_t sonic_interrupt(int irq, void *dev_id)
> > struct net_device *dev = dev_id;
> > struct sonic_local *lp = netdev_priv(dev);
> > int status;
> > + unsigned long flags;
> > +
> > + spin_lock_irqsave(&lp->lock, flags);
>
>
> This is a hard irq handler, no need to block hard irqs.
>
> spin_lock() here is enough.
>
Well, yes, assuming we're dealing with SMP [1]. Probably just disabling
pre-emption is all that will ever be needed.

Anyway, the real problem solved by disabling irqs is that macsonic must
avoid re-entry into sonic_interrupt(). [2]
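
To make the locking concrete, here is a minimal sketch of the pattern
being discussed. It is not the driver's actual code: tx_skb[], next_tx
and the new lp->lock follow the patch, but the function bodies are
illustrative placeholders only.

#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/spinlock.h>

#include "sonic.h"      /* struct sonic_local, as extended by this patch */

static netdev_tx_t sonic_send_packet(struct sk_buff *skb, struct net_device *dev)
{
        struct sonic_local *lp = netdev_priv(dev);
        unsigned long flags;

        /*
         * Mask local interrupts while touching tx_skb[] and next_tx so
         * that sonic_interrupt() cannot run against the same state.
         */
        spin_lock_irqsave(&lp->lock, flags);
        lp->tx_skb[lp->next_tx] = skb;
        /* ... load the tx descriptor, advance next_tx, and call
         * netif_stop_queue(dev) if the ring is now full ...
         */
        spin_unlock_irqrestore(&lp->lock, flags);

        return NETDEV_TX_OK;
}

static irqreturn_t sonic_interrupt(int irq, void *dev_id)
{
        struct net_device *dev = dev_id;
        struct sonic_local *lp = netdev_priv(dev);
        unsigned long flags;

        /*
         * spin_lock() would be enough for SMP alone; the irqsave variant
         * also keeps a nested invocation of this handler off the lock on
         * platforms (like macsonic) where the interrupt can be taken again
         * while the handler is still running.
         */
        spin_lock_irqsave(&lp->lock, flags);
        /* ... reap completed descriptors, free tx_skb[] entries, and call
         * netif_wake_queue(dev) once there is room in the ring again ...
         */
        spin_unlock_irqrestore(&lp->lock, flags);

        return IRQ_HANDLED;
}

Whether the xmit path uses irqsave or spin_lock_irq() makes little
practical difference; it's in the handler that disabling irqs does the
extra work described above.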
[1]
https://lore.kernel.org/netdev/alpine.LNX.2.21.1.2001211026190.8@nippy.intranet/T/#m0523c8b2a26a410ed56889d9230c37ba1160d40a
[2]
https://lore.kernel.org/netdev/alpine.LNX.2.21.1.2001211026190.8@nippy.intranet/T/#m1c8ca580d2b45e61a628d17839978d0bd5aaf061