Message-ID: <20071009040037.GA15981@gondor.apana.org.au>
Date: Tue, 9 Oct 2007 12:00:38 +0800
From: Herbert Xu <herbert@...dor.apana.org.au>
To: jamal <hadi@...erus.ca>
Cc: David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
kaber@...sh.net, dada1@...mosbay.com, johnpol@....mipt.ru
Subject: Re: [RFC][NET_SCHED] explicitly hold dev tx lock
On Wed, Sep 19, 2007 at 10:43:03PM -0400, jamal wrote:
>
> [NET_SCHED] explicitly hold dev tx lock
>
> For N CPUs, all sending full-throttle traffic funneled to the same
> ethernet device, the device's queue lock is contended by all N CPUs
> constantly, while the TX lock is contended by at most 2 CPUs.
> In the current mode of operation, after all the work of entering the
> dequeue region, we may end up aborting the path if we are unable to
> get the TX lock, and go back to contend for the queue lock. As N goes
> up, this gets worse.
>
> The changes in this patch result in a small increase in performance
> on a 4-CPU (2x dual-core) system with no IRQ binding. Both e1000 and
> tg3 showed similar behavior.
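The abort path jamal describes can be sketched as a user-space pthread analogy. This is purely illustrative (the struct, field, and function names are made up, not the kernel's actual qdisc_restart() code), but it shows the two-lock dance the patch is targeting: trylock the TX lock and requeue on failure.

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical user-space analogy of the pre-patch qdisc_restart()
 * flow; names and structure are illustrative, not kernel code. */
struct fake_dev {
	pthread_mutex_t queue_lock;	/* guards the qdisc; contended by N CPUs */
	pthread_mutex_t tx_lock;	/* guards the hardware; at most 2 contenders */
	int queue_len;
	int sent;
};

/* Pre-patch style: drop the queue lock, *try* the TX lock, and abort
 * (requeue) on failure.  The patch instead takes the TX lock
 * unconditionally, avoiding the wasted requeue round-trip. */
bool try_transmit_one(struct fake_dev *dev)
{
	pthread_mutex_lock(&dev->queue_lock);
	if (dev->queue_len == 0) {
		pthread_mutex_unlock(&dev->queue_lock);
		return false;
	}
	dev->queue_len--;		/* "dequeue" a packet */
	pthread_mutex_unlock(&dev->queue_lock);

	if (pthread_mutex_trylock(&dev->tx_lock) != 0) {
		/* Abort path: put the packet back and go contend for
		 * the queue lock again.  As N grows, this wasted work
		 * happens more often. */
		pthread_mutex_lock(&dev->queue_lock);
		dev->queue_len++;	/* "requeue" */
		pthread_mutex_unlock(&dev->queue_lock);
		return false;
	}
	dev->sent++;			/* "hard_start_xmit" */
	pthread_mutex_unlock(&dev->tx_lock);
	return true;
}
```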
OK, after waking up a bit more I now have another question :)
Both of the drivers you've tested here are special. Firstly,
e1000 is lockless, so there is no contention here at all. On
the other hand, tg3 doesn't take the TX lock on the clean-up
path unless the queue has been stopped.
In other words both drivers only take the TX lock on xmit so
this patch makes very little difference to them.
What I'm worried about is whether we would see worse behaviour
with drivers that do all their TX clean-up with the TX lock held
(which would cause qdisc_restart to spin while that is happening).
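The worrying pattern can be sketched the same way (hypothetical names, user-space analogy): a driver whose clean-up holds the TX lock across the whole reclaim loop. With the patch's unconditional lock, the xmit path now blocks here for the full duration of the loop instead of failing the trylock and backing off.

```c
#include <pthread.h>

/* Hypothetical driver whose clean-up holds the TX lock for the
 * whole reclaim loop -- the case Herbert's question is about. */
struct slow_nic {
	pthread_mutex_t tx_lock;
	int pending;		/* descriptors awaiting reclaim */
	int sent;
};

void locked_tx_cleanup(struct slow_nic *nic)
{
	pthread_mutex_lock(&nic->tx_lock);
	while (nic->pending > 0)	/* long loop, lock held throughout */
		nic->pending--;
	pthread_mutex_unlock(&nic->tx_lock);
}

void xmit_one(struct slow_nic *nic)
{
	/* Post-patch behaviour: wait for the lock unconditionally,
	 * i.e. spin/sleep for as long as the clean-up loop runs. */
	pthread_mutex_lock(&nic->tx_lock);
	nic->sent++;
	pthread_mutex_unlock(&nic->tx_lock);
}
```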
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt