Message-ID: <1306899907.29297.12.camel@pasglop>
Date: Wed, 01 Jun 2011 13:45:07 +1000
From: Benjamin Herrenschmidt <benh@...nel.crashing.org>
To: David Miller <davem@...emloft.net>
Cc: netdev@...r.kernel.org, ruediger.herbst@...glemail.com,
bhamilton04@...il.com
Subject: Re: [RFC/PATCH] sungem: Spring cleaning and GRO support
> And I think I see what the problem is:
>
> > +	if (unlikely(netif_queue_stopped(dev) &&
> > +		     TX_BUFFS_AVAIL(gp) > (MAX_SKB_FRAGS + 1))) {
> > +		netif_tx_lock(dev);
> > +		if (netif_queue_stopped(dev) &&
> > +		    TX_BUFFS_AVAIL(gp) > (MAX_SKB_FRAGS + 1))
> > +			netif_wake_queue(dev);
> > +		netif_tx_unlock(dev);
> > +	}
> > }
>
> Don't use netif_tx_lock(), that has a loop and multiple atomics :-)
>
> It's going to grab a special global TX lock, and then grab a lock for
> TX queue zero, and finally set an atomic state bit in TX queue zero.
>
> Take a look at the implementation in netdevice.h
Ah, good point! I think I stole that from another driver (or I just had
a brain fart); indeed, it's bad.
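For reference, netif_tx_lock() in include/linux/netdevice.h of that era
does roughly the following (a paraphrased sketch, not copied verbatim;
details vary between kernel versions), which is where the global lock,
the per-queue lock and the atomic state bit come from:

	static inline void netif_tx_lock(struct net_device *dev)
	{
		unsigned int i;
		int cpu;

		/* the special global TX lock */
		spin_lock(&dev->tx_global_lock);
		cpu = smp_processor_id();
		for (i = 0; i < dev->num_tx_queues; i++) {
			struct netdev_queue *txq = netdev_get_tx_queue(dev, i);

			/* grab each queue's xmit lock (only queue zero for a
			 * single-queue driver like sungem) and set the frozen
			 * state bit */
			__netif_tx_lock(txq, cpu);
			set_bit(__QUEUE_STATE_FROZEN, &txq->state);
			__netif_tx_unlock(txq);
		}
	}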
> It's a special "lock everything TX", a mechanism for multiqueue
> drivers to shut quiesce all TX queue activity safely in one operation.
>
> Instead, do something like:
>
> struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);
>
> __netif_tx_lock(txq, smp_processor_id());
> ...
> __netif_tx_unlock(txq);
>
> and I bet your TX numbers improve a bit.
Right, I'll give that a go. With the assistance of the other Ben H I've
been able to simplify the driver a lot more, too. The mutex and the
remaining lock are gone; the rtnl lock does the job fine for synchronizing
against the reset task, and I've cleaned up a ton more unused bits and
pieces now that we no longer deal with the link timer when the interface
is down.
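Applied to the snippet above, the completion-path wake-up would then look
roughly like this (untested sketch, just to illustrate the locking change):

	if (unlikely(netif_queue_stopped(dev) &&
		     TX_BUFFS_AVAIL(gp) > (MAX_SKB_FRAGS + 1))) {
		struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

		/* per-queue lock only, instead of netif_tx_lock() */
		__netif_tx_lock(txq, smp_processor_id());
		if (netif_queue_stopped(dev) &&
		    TX_BUFFS_AVAIL(gp) > (MAX_SKB_FRAGS + 1))
			netif_wake_queue(dev);
		__netif_tx_unlock(txq);
	}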
I'll have a new patch later today hopefully with new numbers.
Cheers,
Ben.