Message-ID: <46688169.8080806@intel.com>
Date: Thu, 07 Jun 2007 15:06:33 -0700
From: "Kok, Auke" <auke-jan.h.kok@...el.com>
To: hadi@...erus.ca
CC: Jeff Garzik <jeff@...zik.org>, David Miller <davem@...emloft.net>,
kaber@...sh.net, peter.p.waskiewicz.jr@...el.com,
netdev@...r.kernel.org,
Jesse Brandeburg <jesse.brandeburg@...el.com>
Subject: Re: [PATCH] NET: Multiqueue network device support.
jamal wrote:
> On Thu, 2007-07-06 at 08:03 -0700, Kok, Auke wrote:
>> To protect against multiple entities bumping head & tail at the same time, as
>> well as overwriting the same entries in the tx ring (contention for
>> next_to_watch/next_to_clean)?
>
> In the current code that lock certainly doesn't protect those specifics.
> I thought at some point that's what it did; somehow that seems to have
> changed - the rx path/tx pruning is protected by tx_queue_lock.
> I have tested the patch on SMP and it works.
>
>> It may be unlikely, but ripping out the tx ring
>> lock might not be a good idea - perhaps after we get rid of LLTX in e1000?
>
> I don't think it matters either way. At the moment, you are _guaranteed_
> that only one CPU can enter the tx path. There may be another CPU, but as
> long as (as in the current code) you don't have any contention between tx
> and rx, it seems to be a non-issue.
>
>> To be honest: I'm open to ideas and I'll give it a try, but stuff like this
>> needs to go through some nasty stress testing (multiple clients, a long time)
>> before I'll consider it seriously - but fortunately that's something I can do.
>
> I empathize, but take a closer look; it seems mostly useless.
> And like I said, I have done a quick test on an SMP machine and it
> seems to work fine; but your tests will probably be more thorough.
The contention isn't between multiple tx attempts, but between e1000_clean and
tx. You'll probably need bidirectional traffic with multiple clients to hit it...
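To make the race concrete, here is a minimal userspace sketch (my own illustration,
not the driver code) of the two paths contending on the ring indices: one thread
plays the xmit side advancing next_to_use, the other plays the e1000_clean
tx-cleanup side advancing next_to_clean. The struct, ring size and mutex are
stand-ins for the driver's descriptor ring and tx locking; with LLTX only one CPU
is in the xmit side at a time, but the cleanup path still races against it unless
both serialize on the same lock.

/* build: cc -O2 -pthread tx_ring_model.c (illustrative model only) */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define TX_RING_SIZE 256

struct tx_ring {
	unsigned int next_to_use;	/* advanced by the xmit path  */
	unsigned int next_to_clean;	/* advanced by the clean path */
	bool in_flight[TX_RING_SIZE];	/* stands in for descriptors  */
	pthread_mutex_t lock;		/* stands in for the tx lock  */
};

/* xmit side: claim the next descriptor and advance next_to_use */
static void *xmit_thread(void *arg)
{
	struct tx_ring *ring = arg;

	for (int i = 0; i < 100000; i++) {
		pthread_mutex_lock(&ring->lock);
		unsigned int idx = ring->next_to_use;
		unsigned int next = (idx + 1) % TX_RING_SIZE;
		if (next != ring->next_to_clean) {	/* ring not full */
			ring->in_flight[idx] = true;
			ring->next_to_use = next;
		}
		pthread_mutex_unlock(&ring->lock);
	}
	return NULL;
}

/* clean side: retire completed descriptors and advance next_to_clean */
static void *clean_thread(void *arg)
{
	struct tx_ring *ring = arg;

	for (int i = 0; i < 100000; i++) {
		pthread_mutex_lock(&ring->lock);
		while (ring->next_to_clean != ring->next_to_use) {
			ring->in_flight[ring->next_to_clean] = false;
			ring->next_to_clean =
				(ring->next_to_clean + 1) % TX_RING_SIZE;
		}
		pthread_mutex_unlock(&ring->lock);
	}
	return NULL;
}

int main(void)
{
	struct tx_ring ring = { .lock = PTHREAD_MUTEX_INITIALIZER };
	pthread_t tx, clean;

	pthread_create(&tx, NULL, xmit_thread, &ring);
	pthread_create(&clean, NULL, clean_thread, &ring);
	pthread_join(tx, NULL);
	pthread_join(clean, NULL);

	printf("next_to_use=%u next_to_clean=%u\n",
	       ring.next_to_use, ring.next_to_clean);
	return 0;
}

Drop the lock/unlock pairs and build with -fsanitize=thread and you can watch the
data race on the two indices; that's the kind of contention bidirectional traffic
with several clients would be needed to expose in the driver itself.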
Auke