Message-ID: <47A2105A.9010605@intel.com>
Date: Thu, 31 Jan 2008 10:15:54 -0800
From: "Kok, Auke" <auke-jan.h.kok@...el.com>
To: Carsten Aulbert <carsten.aulbert@....mpg.de>
CC: Andi Kleen <andi@...stfloor.org>,
Bruce Allen <ballen@...vity.phys.uwm.edu>,
"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
netdev@...r.kernel.org,
Henning Fehrmann <henning.fehrmann@....mpg.de>,
Bruce Allen <bruce.allen@....mpg.de>
Subject: Re: e1000 full-duplex TCP performance well below wire speed
Carsten Aulbert wrote:
> Hi Andi,
>
> Andi Kleen wrote:
>> Another issue with full duplex TCP not mentioned yet is that if TSO is
>> used the output will be somewhat bursty and might cause problems with
>> the TCP ACK clock of the other direction because the ACKs would need
>> to squeeze in between full TSO bursts.
>>
>> You could try disabling TSO with ethtool.
>
> I just tried that:
>
> https://n0.aei.uni-hannover.de/wiki/index.php/NetworkTestNetperf3
>
> It seems that the numbers do get better (sweet-spot seems to be MTU6000
> with 914 MBit/s and 927 MBit/s), however for other settings the results
> vary a lot so I'm not sure how large the statistical fluctuations are.
>
> Next I'll test whether it makes sense to enlarge the ring buffers.
Sometimes it may help if the system (CPU) is laggy or busy a lot, so that the card
has more buffers available (and can thus go longer without being serviced).

Usually (if your system responds quickly) it's better to use *smaller* ring sizes,
as this reduces cache usage. Hence the small default value.

So, unless the ethtool -S ethX output indicates that your system is too busy
(rx_no_buffer_count increasing), I would not recommend increasing the ring size.
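A minimal sketch of that check (the interface name and the idea of comparing two
snapshots are assumptions; the rx_no_buffer_count statistic is the one named above,
and the parsing here is fed canned ethtool -S-style text purely for illustration):

```shell
#!/bin/sh
# Extract a named counter from `ethtool -S`-style "  name: value" output on stdin.
stat_value() {
    grep "^ *$1:" | awk -F': *' '{print $2}'
}

# In practice you would take two real snapshots some seconds apart, e.g.:
#   ethtool -S eth0 | grep rx_no_buffer_count
# Here two hypothetical snapshots stand in for those readings.
before=$(printf ' rx_no_buffer_count: 12\n' | stat_value rx_no_buffer_count)
after=$(printf ' rx_no_buffer_count: 40\n' | stat_value rx_no_buffer_count)

# Only a counter that keeps growing suggests the RX ring is running dry
# and that a larger ring (ethtool -G) might actually help.
if [ "$after" -gt "$before" ]; then
    echo "rx_no_buffer_count increasing: consider a larger RX ring"
fi
```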
Auke
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html