Message-ID: <1291133073.2904.128.camel@edumazet-laptop>
Date: Tue, 30 Nov 2010 17:04:33 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: Ben Hutchings <bhutchings@...arflare.com>
Cc: Simon Horman <horms@...ge.net.au>, netdev@...r.kernel.org
Subject: Re: Bonding, GRO and tcp_reordering
On Tue, 30 Nov 2010 at 15:42 +0000, Ben Hutchings wrote:
> On Tue, 2010-11-30 at 22:55 +0900, Simon Horman wrote:
> > The only other parameter that seemed to have significant effect was to
> > increase the mtu. In the case of MTU=9000, GRO seemed to have a negative
> > impact on throughput, though a significant positive effect on CPU
> > utilisation.
> [...]
>
> Increasing MTU also increases the interval between packets on a TCP flow
> using maximum segment size so that it is more likely to exceed the
> difference in delay.
>
GRO is only effective _if_ several packets for the same flow arrive in
the same NAPI run.
As soon as we exit NAPI mode, pending GRO packets are flushed.
A big MTU means a bigger delay between packets, so there is a good
chance that GRO never triggers at all, because each NAPI run handles
only one packet.
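
To make the flushing point concrete, here is a minimal sketch of a NAPI
poll routine. The mydrv_* names are hypothetical placeholders, not from
any real driver; napi_gro_receive() and napi_complete() are the real
kernel entry points:

static int mydrv_poll(struct napi_struct *napi, int budget)
{
	struct mydrv_ring *ring = container_of(napi, struct mydrv_ring, napi);
	struct sk_buff *skb;
	int work_done = 0;

	while (work_done < budget && (skb = mydrv_next_rx_skb(ring))) {
		/* Hand each skb to GRO: consecutive segments of the
		 * same TCP flow received in this run are merged here
		 * instead of going up the stack one by one. */
		napi_gro_receive(napi, skb);
		work_done++;
	}

	if (work_done < budget) {
		/* Exiting NAPI mode: napi_complete() flushes any
		 * packets still held on the GRO lists before the RX
		 * interrupt is re-enabled. */
		napi_complete(napi);
		mydrv_enable_rx_irq(ring);
	}

	return work_done;
}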
One possibility with a big MTU is to tweak the "ethtool -c eth0"
coalescing parameters:
rx-usecs: 20
rx-frames: 5
rx-usecs-irq: 0
rx-frames-irq: 5
so that "rx-usecs" is bigger than the delay between two MTU full sized
packets.
Gigabit speed means 1 nanosecond per bit, so an MTU=9000 frame is
72000 bits and takes 72 us on the wire: that is the delay between two
back-to-back full-sized packets.
So try:
ethtool -C eth0 rx-usecs 100
to raise the chance that several packets are delivered at once by the
NIC.
Unfortunately, this also adds some latency, so it helps bulk transfers
but slows down interactive traffic.
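
For completeness, the same tuning can be driven from a program through
the SIOCETHTOOL ioctl. This is only a minimal userspace sketch that
mirrors "ethtool -C eth0 rx-usecs 100" (the "eth0" name is an
assumption; the ioctl and struct ethtool_coalesce are the stock
linux/ethtool.h interface):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
	/* Start from the device's current coalescing parameters. */
	struct ethtool_coalesce ecoal = { .cmd = ETHTOOL_GCOALESCE };
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* assumed device */
	ifr.ifr_data = (char *)&ecoal;

	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		perror("ETHTOOL_GCOALESCE");
		return 1;
	}

	/* Raise rx-usecs above the ~72 us gap between MTU=9000 frames. */
	ecoal.rx_coalesce_usecs = 100;
	ecoal.cmd = ETHTOOL_SCOALESCE;
	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		perror("ETHTOOL_SCOALESCE");
		return 1;
	}

	close(fd);
	return 0;
}

Running "ethtool -c eth0" afterwards should confirm the new rx-usecs
value.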