Message-ID: <E35F4F4D7F6C9E4E826FEC1F86CEF583042488B2@orsmsx412.amr.corp.intel.com>
Date: Tue, 10 Jul 2007 11:11:44 -0700
From: "Veeraiyan, Ayyappan" <ayyappan.veeraiyan@...el.com>
To: "Jeff Garzik" <jeff@...zik.org>
Cc: <netdev@...r.kernel.org>, <arjan@...ux.intel.com>,
<akpm@...ux-foundation.org>,
"Kok, Auke-jan H" <auke-jan.h.kok@...el.com>, <hch@...radead.org>,
<shemminger@...ux-foundation.org>, <nhorman@...driver.com>,
<inaky@...ux.intel.com>, <mb@...sch.de>
Subject: RE: [PATCH 0/1] ixgbe: Support for Intel(R) 10GbE PCI Express adapters - Take #2
On 7/10/07, Jeff Garzik <jeff@...zik.org> wrote:
> Ayyappan.Veeraiyan@...el.com wrote:
>
> Doing both tends to signal that the author hasn't bothered to measure
> the differences between various approaches, and pick a clear winner.
>
I did pick NAPI in our previous submission, based on various tests. But
to get 10Gig line rate we need to use multiple Rx queues, which in turn
need fake netdevs. Since fake netdevs weren't acceptable, I added
non-NAPI support, which does get 10Gig line rate with multiple Rx
queues. I am OK with removing NAPI support until the work of separating
netdevs from NAPI is done.
> I strongly prefer NAPI combined with hardware interrupt mitigation --
> it helps with multiple net interfaces balance load across the system,
> at times of high load -- but I'm open to other solutions as well.
>
In the majority of tests we ran here, NAPI was better. But for some
specific test cases (especially if we add the SW RSC, i.e. LRO), we saw
better throughput and CPU utilization with non-NAPI.
> So... what are your preferences? What is the setup that gets closest
> to wire speed under Linux? :)
With SW LRO, non-NAPI is better; without LRO, NAPI is better, but NAPI
needs multiple Rx queues. So given the limitations, non-NAPI is my
preference now.
I will post the performance numbers later today.
>
> Jeff
Thanks..
Ayyappan
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html