Date:	Wed, 1 Dec 2010 13:34:45 +0900
From:	Simon Horman <horms@...ge.net.au>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Ben Hutchings <bhutchings@...arflare.com>, netdev@...r.kernel.org
Subject: Re: Bonding, GRO and tcp_reordering

On Tue, Nov 30, 2010 at 05:04:33PM +0100, Eric Dumazet wrote:
> > On Tuesday, 30 November 2010 at 15:42 +0000, Ben Hutchings wrote:
> > On Tue, 2010-11-30 at 22:55 +0900, Simon Horman wrote:
> 
> > > The only other parameter that seemed to have a significant effect was
> > > increasing the MTU.  With MTU=9000, GRO seemed to have a negative
> > > impact on throughput, though a significantly positive effect on CPU
> > > utilisation.
> > [...]
> > 
> > Increasing the MTU also increases the interval between packets on a TCP
> > flow sending maximum-sized segments, so that interval is more likely to
> > exceed the difference in delay between the bonded links.
> > 
> 
> GRO is only really effective _if_ we receive several packets for the same
> flow within a single NAPI run.
> 
> As soon as we exit NAPI mode, GRO packets are flushed.
> 
> A big MTU --> bigger delays between packets, so there is a good chance that
> GRO cannot trigger at all, since each NAPI run handles only one packet.
> 
> One possibility with a big MTU is to tweak the "ethtool -c eth0" parameters
> rx-usecs: 20
> rx-frames: 5
> rx-usecs-irq: 0
> rx-frames-irq: 5
> so that "rx-usecs" is larger than the delay between two full-MTU-sized
> packets.
> 
> Gigabit speed means 1 nanosecond per bit, so MTU=9000 means a 72 us
> delay between packets (9000 bytes * 8 bits/byte = 72,000 ns).
> 
> So try:
> 
> ethtool -C eth0 rx-usecs 100
> 
> to improve the chance that several packets are delivered at once by the NIC.
> 
> Unfortunately, this also adds some latency, so it helps bulk transfers
> but slows down interactive traffic.
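
A minimal sketch of checking the premise above -- whether GRO is enabled at
all -- using standard ethtool flags (eth0 is an assumed interface name):

ethtool -k eth0 | grep generic-receive-offload   # query the offload state
ethtool -K eth0 gro on                           # enable GRO ("off" to disable)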
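
And a sketch of the arithmetic plus the coalescing tweak itself, under the
same assumptions (1 Gbit/s, i.e. 1 ns per bit; eth0 again a placeholder):

mtu=9000
echo "inter-packet gap: $(( mtu * 8 / 1000 )) us"   # 9000 B * 8 = 72,000 ns = 72 us

ethtool -c eth0                      # show current coalescing values
ethtool -C eth0 rx-usecs 100         # hold the IRQ up to 100 us, i.e. > the 72 us gap
ethtool -c eth0 | grep 'rx-usecs:'   # verify the new setting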

Thanks Eric,

I was tweaking those values recently for some latency tuning
but I didn't think of them in relation to last night's tests.

In terms of my measurements, it's just benchmarking at this stage,
so a trade-off between throughput and latency is acceptable, as long
as I remember to measure what it is.
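
One hedged way to capture both sides of that trade-off would be a netperf
pair against the same host (192.168.0.2 is a placeholder netserver address):

netperf -H 192.168.0.2 -t TCP_STREAM -l 30   # bulk throughput
netperf -H 192.168.0.2 -t TCP_RR -l 30       # request/response rate as a latency proxy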

