Message-ID: <4693C951.3040608@garzik.org>
Date: Tue, 10 Jul 2007 14:00:49 -0400
From: Jeff Garzik <jeff@...zik.org>
To: Ayyappan.Veeraiyan@...el.com
CC: netdev@...r.kernel.org, arjan@...ux.intel.com,
akpm@...ux-foundation.org, auke-jan.h.kok@...el.com,
hch@...radead.org, shemminger@...ux-foundation.org,
nhorman@...driver.com, inaky@...ux.intel.com, mb@...sch.de
Subject: Re: [PATCH 0/1] ixgbe: Support for Intel(R) 10GbE PCI Express adapters - Take #2

Ayyappan.Veeraiyan@...el.com wrote:
> 7. NAPI mode uses a single Rx queue, so fake netdev usage is removed.
> 8. Non-NAPI mode is added.

Honestly, I'm not sure about drivers that have both NAPI and non-NAPI paths.
Several existing drivers do this, and in almost every case I feel the driver would benefit from picking one approach rather than doing both.

Doing both tends to signal that the author hasn't bothered to measure the differences between the various approaches and pick a clear winner.
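
For what it's worth, the shape this usually takes is an interrupt handler forked on a config option. CONFIG_MYDRV_NAPI and the mydrv_* helpers below are made-up placeholders, not ixgbe code, just to illustrate the duplication:

#include <linux/netdevice.h>
#include <linux/interrupt.h>

static irqreturn_t mydrv_intr(int irq, void *dev_id)
{
	struct net_device *netdev = dev_id;
	struct mydrv_priv *priv = netdev_priv(netdev);	/* placeholder priv */

#ifdef CONFIG_MYDRV_NAPI
	/* NAPI path: mask the chip's interrupt, defer the work to ->poll() */
	if (netif_rx_schedule_prep(netdev)) {
		mydrv_disable_irq(priv);
		__netif_rx_schedule(netdev);
	}
#else
	/* non-NAPI path: drain the Rx ring right here in hard-IRQ context,
	 * pushing each skb up with netif_rx() */
	mydrv_clean_rx_ring(priv, 64);
#endif
	return IRQ_HANDLED;
}

That is two different behaviours through the same hot path, and both have to be tested and tuned.
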
I strongly prefer NAPI combined with hardware interrupt mitigation -- it helps multiple net interfaces balance load across the system at times of high load -- but I'm open to other solutions as well.
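
The ->poll() side of that sketch then looks roughly like this (same made-up mydrv_* placeholders, not ixgbe code); the interrupt stays masked the whole time the device sits on the poll list, and the hardware's interrupt throttling bounds the interrupt rate once we drop back to interrupt mode:

static int mydrv_poll(struct net_device *netdev, int *budget)
{
	struct mydrv_priv *priv = netdev_priv(netdev);
	int work_to_do = min(*budget, netdev->quota);
	int work_done;

	/* clean up to work_to_do Rx descriptors, feeding netif_receive_skb() */
	work_done = mydrv_clean_rx_ring(priv, work_to_do);

	*budget -= work_done;
	netdev->quota -= work_done;

	if (work_done < work_to_do) {
		/* ring drained: leave the poll list and unmask the interrupt */
		netif_rx_complete(netdev);
		mydrv_enable_irq(priv);
		return 0;
	}
	return 1;	/* more work pending, stay on the poll list */
}

Keeping the interrupt masked while the device is on the poll list is what lets the softirq round-robin several busy interfaces under load, instead of one NIC's interrupts monopolizing a CPU.
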
So... what are your preferences? What is the setup that gets closest to wire speed under Linux? :)

Jeff