Message-ID: <36F7E4A28C18BE4DB7C86058E7B607241E622022@MTRDAG01.mtl.com>
Date: Tue, 11 Sep 2012 18:49:19 +0000
From: Shlomo Pongratz <shlomop@...lanox.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: GRO aggregation

From: Eric Dumazet [eric.dumazet@...il.com]
Sent: Tuesday, September 11, 2012 9:33 PM
To: Shlomo Pongratz
Cc: netdev@...r.kernel.org
Subject: Re: GRO aggregation

On Tue, 2012-09-11 at 16:45 +0300, Shlomo Pongratz wrote:
> Hi,
>
> I'm checking GRO aggregation with kernel 3.6.0-rc1+ using the Intel
> ixgbe driver.
> The MTU is 1500, and GRO is on, as are SG and RX checksum.
> I ran iperf with default settings and monitored the receiver with tcpdump.
> The tcpdump shows that the maximal aggregate is 32120 bytes, which is
> 22 segments of MSS 1460.
> On the transmitter side, tcpdump shows that TSO does better (~64K).
> I did a capture with GRO disabled to see whether a flag difference
> between any two consecutive packets was forcing a flush, but didn't
> find anything.
> Can the GRO aggregation be tuned?
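
(For reference, these offload settings can also be read programmatically.
A minimal sketch using the legacy ETHTOOL_G* ioctls, which were the current
interface in 2012; "eth0" is a placeholder interface name, substitute your
NIC. LRO is reported through the ETH_FLAG_LRO bit of ETHTOOL_GFLAGS.)

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* Query one boolean offload setting via the legacy ethtool ioctl. */
static int get_offload(int fd, const char *ifname, unsigned int cmd)
{
	struct ethtool_value eval = { .cmd = cmd };
	struct ifreq ifr;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (void *)&eval;

	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
		return -1;
	return (int)eval.data;
}

int main(void)
{
	const char *ifname = "eth0";	/* placeholder, not from the thread */
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	int flags;

	if (fd < 0)
		return 1;
	printf("GRO:     %d\n", get_offload(fd, ifname, ETHTOOL_GGRO));
	printf("SG:      %d\n", get_offload(fd, ifname, ETHTOOL_GSG));
	printf("RX csum: %d\n", get_offload(fd, ifname, ETHTOOL_GRXCSUM));
	flags = get_offload(fd, ifname, ETHTOOL_GFLAGS);
	printf("LRO:     %d\n", flags < 0 ? flags : !!(flags & ETH_FLAG_LRO));
	close(fd);
	return 0;
}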

It might mean that each time NAPI runs, about 22 frames can be fetched
at once from the NIC.

If the receiver CPU is fast enough, there is no need to aggregate more
segments per skb.

Is LRO off or on?

GRO itself has a 64-Kbyte limit.
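
To make the cap concrete, here is a toy userspace model, not the kernel
code; the real check lives in skb_gro_receive(), which returns -E2BIG and
is applied to skb lengths, headers included. MSS 1460 is assumed for a
1500-byte MTU.

#include <stdio.h>

#define GRO_MAX	65536	/* the 64K limit */
#define MSS	1460	/* typical TCP MSS for a 1500-byte MTU */

int main(void)
{
	int agg = MSS, segs = 1;

	/* keep merging while another MSS still fits under the cap;
	 * this models the point where the kernel refuses the merge
	 * and the aggregate gets flushed up the stack */
	while (agg + MSS < GRO_MAX) {
		agg += MSS;
		segs++;
	}
	printf("64K cap allows %d segments, %d bytes\n", segs, agg);
	return 0;
}

Run, it prints "64K cap allows 44 segments, 64240 bytes", so the cap alone
would permit roughly twice the 22 * 1460 = 32120 bytes seen in the trace;
the smaller number points at how many frames each poll finds on the ring.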

Hi Eric,

I disabled LRO. I actually tried all four on/off combinations and found
that LRO, GRO, and LRO+GRO give the same results for ixgbe w.r.t.
aggregation size (I didn't check throughput or latency).

Is there a timeout that flushes the aggregated skbs before 64K has been
aggregated?
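
For what it's worth, my current reading of the 3.6 sources (please correct
me if I'm wrong) is that napi_complete() calls napi_gro_flush(), i.e. held
skbs are flushed at the end of each NAPI poll rather than on a timer. A toy
model of that flush-at-end-of-poll behaviour; the burst sizes are invented
for illustration:

#include <stdio.h>

#define MSS 1460

/* One simulated NAPI poll: merge everything found on the ring into a
 * single aggregate, then flush it when the poll completes (the moment
 * napi_complete() -> napi_gro_flush() runs in the 3.6 sources). */
static void poll_once(int pkts_on_ring)
{
	int held = 0;
	int i;

	for (i = 0; i < pkts_on_ring; i++)
		held += MSS;			/* napi_gro_receive() merges */

	printf("poll saw %2d pkts -> flushed a %5d-byte aggregate\n",
	       pkts_on_ring, held);		/* flush at napi_complete() */
}

int main(void)
{
	int bursts[] = { 22, 22, 10, 22 };	/* hypothetical ring occupancy */
	int i;

	for (i = 0; i < 4; i++)
		poll_once(bursts[i]);
	return 0;
}

With 22 MSS-sized segments per poll this prints 32120-byte aggregates,
which matches what tcpdump shows here.
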
Shlomo