Open Source and information security mailing list archives
 
Message-ID: <36F7E4A28C18BE4DB7C86058E7B607241E622083@MTRDAG01.mtl.com>
Date:	Tue, 11 Sep 2012 19:24:26 +0000
From:	Shlomo Pongratz <shlomop@...lanox.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: GRO aggregation

From: Eric Dumazet [eric.dumazet@...il.com]
Sent: Tuesday, September 11, 2012 10:02 PM
To: Shlomo Pongratz
Cc: netdev@...r.kernel.org
Subject: RE: GRO aggregation

On Tue, 2012-09-11 at 18:49 +0000, Shlomo Pongratz wrote:

> I disabled LRO. I actually tried all four combinations and found that LRO, GRO, and LRO+GRO give the same results on ixgbe w.r.t. aggregation size (I didn't check throughput or latency).
> Is there a timeout that flushes the aggregated SKBs before 64 KB has been aggregated?

At the end of the NAPI run, we flush the GRO state.

It basically means that an interrupt came, and we fetched 21 frames from
the NIC.

To get more packets per interrupt, you might try to slow down your
cpu ;)

But I don't get the point.


I see that in ixgbe the NAPI weight is 64 (netif_napi_add). So if packets are arriving at a high rate and the CPU is fast enough to collect them as they arrive, assuming packets keep arriving while the NAPI poll runs, it should aggregate more, and we would have fewer passes through the stack.

Shlomo
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

