Message-ID: <Pine.LNX.4.63.0801301629420.19938@trinity.phys.uwm.edu>
Date: Wed, 30 Jan 2008 16:33:26 -0600 (CST)
From: Bruce Allen <ballen@...vity.phys.uwm.edu>
To: Ben Greear <greearb@...delatech.com>
cc: netdev@...r.kernel.org,
Carsten Aulbert <carsten.aulbert@....mpg.de>,
Henning Fehrmann <henning.fehrmann@....mpg.de>,
Bruce Allen <bruce.allen@....mpg.de>
Subject: Re: e1000 full-duplex TCP performance well below wire speed
Hi Ben,
Thank you for the suggestions and questions.
>> We've connected a pair of modern high-performance boxes with integrated
>> copper Gb/s Intel NICs, with an Ethernet crossover cable, and have run some
>> netperf full-duplex TCP tests. The transfer rates are well below wire
>> speed. We're reporting this as a kernel bug, because we expect a vanilla
>> kernel with default settings to give wire-speed (or close to wire-speed)
>> performance in this case. We DO see wire speed in simplex transfers. The
>> behavior has been verified on multiple machines with identical hardware.
>
> Try using NICs in the pci-e slots. We have better luck there, as you
> usually have more lanes and/or higher quality NIC chipsets available in
> this case.
It's a good idea. We can try this, though it will take a little time to
organize.
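In the meantime we can at least check how the on-board NICs are attached
to the bus. A quick sketch of what we'd look at (the PCI address below is
only an example, not taken from our boxes):

    lspci | grep -i ethernet
    # for a PCI-E device, LnkSta shows the negotiated speed and lane count
    lspci -vv -s 06:00.0 | grep -E 'LnkCap|LnkSta'

If the on-board NIC turned out to sit on a legacy 32-bit/33 MHz PCI
segment, its roughly 1 Gbit/s of shared bandwidth would by itself limit
full-duplex GigE; the lspci output should settle that.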
> Try a UDP test to make sure the NIC can actually handle the throughput.
I should have mentioned this in my original post -- we already did this.
We can run UDP at wire speed, full duplex (over 900 Mb/s in each direction
at the same time). So the problem either stems from TCP or is aggravated by
it; it's not a hardware limitation.
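For reference, the tests look roughly like the following; the address and
durations are placeholders, and the exact options may differ slightly from
what we ran:

    # netserver is running on the far box (192.168.0.2 is an example)
    netperf -H 192.168.0.2 -t TCP_STREAM -l 60 &   # this box -> far box
    netperf -H 192.168.0.2 -t TCP_MAERTS -l 60 &   # far box -> this box
    wait

    # UDP check: -m 1472 keeps each message within a single Ethernet frame
    netperf -H 192.168.0.2 -t UDP_STREAM -l 60 -- -m 1472

Run that way, the UDP pair reaches >900 Mb/s in each direction while the
TCP pair does not.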
> Look at the actual link usage as reported by the ethernet driver so that
> you take all of the ACKS and other overhead into account.
OK. We'll report on this as soon as possible.
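The plan is to sample the interface counters before and after a run and
take the difference, so that ACKs and header overhead show up in the byte
counts. Something like (eth0 standing in for whichever interface the
crossover cable is on):

    cat /proc/net/dev    # kernel per-interface byte/packet counters
    ethtool -S eth0      # driver/NIC statistics (names vary by driver)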
> Try the same test using 10G hardware (CX4 NICs are quite affordable
> these days, and we drove a 2-port 10G NIC based on the Intel ixgbe
> chipset at around 4 Gbps on two ports, full duplex, using pktgen -- around
> 16 Gbps of aggregate throughput across the buses). That may also give you
> an idea of whether the bottleneck is hardware- or software-related.
OK. That will take more time to organize.
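For what it's worth, my understanding is that the pktgen setup you
describe is driven through /proc/net/pktgen. A minimal single-threaded
sketch, with the device name and addresses purely illustrative:

    modprobe pktgen
    echo "rem_device_all"            > /proc/net/pktgen/kpktgend_0
    echo "add_device eth1"           > /proc/net/pktgen/kpktgend_0
    echo "count 1000000"             > /proc/net/pktgen/eth1
    echo "pkt_size 1500"             > /proc/net/pktgen/eth1
    echo "dst 192.168.0.2"           > /proc/net/pktgen/eth1
    echo "dst_mac 00:11:22:33:44:55" > /proc/net/pktgen/eth1
    echo "start"                     > /proc/net/pktgen/pgctrl
    cat /proc/net/pktgen/eth1        # reports the achieved packet/bit rate

Since pktgen transmits from inside the kernel, it exercises the NIC and
bus without involving TCP at all, which is what makes it useful for
separating hardware limits from protocol behavior.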
Cheers,
Bruce