Message-ID: <20090528152143.GA4501@neterion.com>
Date: Thu, 28 May 2009 11:21:43 -0400
From: Benjamin LaHaise <ben.lahaise@...erion.com>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: "David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [0/14] GRO: Lots of microoptimisations
On Thu, May 28, 2009 at 09:08:58AM +1000, Herbert Xu wrote:
> On Wed, May 27, 2009 at 01:52:23PM -0400, Benjamin LaHaise wrote:
> >
> > A few questions for you: I've been looking a bit into potential GRO
> > optimisations that are possible with the vxge driver. At least from my
> > existing testing on a P4 Xeon, it seems that doing packet rx via
> > napi_gro_receive() was a bit slower. I'll retest with these changes
>
> Slower compared to LRO or GRO off?
With GRO off I'm getting ~4.7-5Gbps to the receiver, which is CPU bound
running netperf. With GRO on, that drops to ~3.9-4.3Gbps. The only real
difference between the two runs is the entry point into the net code:
napi_gro_receive() versus netif_receive_skb().
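
For reference, the receive path is just a plain NAPI poll loop, roughly
the following (structure and names like vxge_next_rx_skb() are
paraphrased for this mail, not the actual driver source):

        static int vxge_poll(struct napi_struct *napi, int budget)
        {
                /* vxge_ring and its members are stand-ins here */
                struct vxge_ring *ring =
                        container_of(napi, struct vxge_ring, napi);
                struct sk_buff *skb;
                int done = 0;

                /* Pull completed descriptors off the ring, up to budget. */
                while (done < budget && (skb = vxge_next_rx_skb(ring))) {
                        skb->protocol = eth_type_trans(skb, ring->ndev);

                        /* The only difference between the two runs: */
                        if (ring->gro_enabled)          /* ~3.9-4.3Gbps */
                                napi_gro_receive(napi, skb);
                        else                            /* ~4.7-5Gbps  */
                                netif_receive_skb(skb);
                        done++;
                }

                /* Ring drained before budget: reenable interrupts. */
                if (done < budget)
                        napi_complete(napi);

                return done;
        }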
> > of yours. What platform have your tests been run on? Also, do you have
> > any notes/ideas on how best to make use of the GRO functionality within
> > the kernel? I'm hoping it's possible to make use of a few of the hardware
> > hints to improve fast path performance.
>
> What sort of hints do you have?
We have a few bits in the hardware descriptor that indicate whether the
packet is TCP or UDP and IPv4 or IPv6, as well as whether a TCP packet is
fast-path eligible. The hardware can also split headers so that the
ethernet MAC header, the IP header and the payload land in separate
buffers. I plan to run a few tests to see whether dispatching directly
from the driver into the TCP fast path makes much difference.
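
Roughly what I have in mind, with the bit names invented for this mail
rather than taken from the real descriptor layout:

        /* Hypothetical descriptor bits, for illustration only. */
        #define VXGE_RXD_PROTO_TCP      (1ULL << 0)
        #define VXGE_RXD_FAST_PATH      (1ULL << 1)

        static void vxge_rx_dispatch(struct napi_struct *napi,
                                     struct sk_buff *skb, u64 rxd_control)
        {
                u64 fast = VXGE_RXD_PROTO_TCP | VXGE_RXD_FAST_PATH;

                /* Only hand hardware-classified, fast-path-eligible TCP
                 * packets to GRO; everything else skips the aggregation
                 * attempt and its per-packet overhead entirely.
                 */
                if ((rxd_control & fast) == fast)
                        napi_gro_receive(napi, skb);
                else
                        netif_receive_skb(skb);
        }

The idea being that packets the hardware already knows can't be
aggregated never pay the GRO merge cost at all.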
-ben