Message-ID: <Pine.LNX.4.63.0801311340270.14403@trinity.phys.uwm.edu>
Date: Thu, 31 Jan 2008 13:48:37 -0600 (CST)
From: Bruce Allen <ballen@...vity.phys.uwm.edu>
To: "Kok, Auke" <auke-jan.h.kok@...el.com>
cc: "Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
netdev@...r.kernel.org,
Carsten Aulbert <carsten.aulbert@....mpg.de>,
Henning Fehrmann <henning.fehrmann@....mpg.de>,
Bruce Allen <bruce.allen@....mpg.de>
Subject: Re: e1000 full-duplex TCP performance well below wire speed
Hi Auke,
>> Based on the discussion in this thread, I am inclined to believe that
>> lack of PCI-e bus bandwidth is NOT the issue. The theory is that the
>> extra packet handling associated with TCP acknowledgements is pushing
>> the PCI-e x1 bus past its limits. However the evidence seems to show
>> otherwise:
>>
>> (1) Bill Fink has reported the same problem on a NIC with a 133 MHz
>> 64-bit PCI connection. That connection can transfer data at 8Gb/s.
>
> That was even a PCI-X connection, which is known to have extremely good latency
> numbers, IIRC better than PCI-e(?), which could account for a lot of the
> latency-induced lower performance...
>
> also, 82573's are _not_ a server part and were not designed for this
> usage. 82546's are, and that really does make a difference.
I'm confused. It DOESN'T make a difference! Using 'server grade' 82546's
on a PCI-X bus, Bill Fink reports the SAME loss of throughput with TCP
full duplex that we see on a 'consumer grade' 82573 attached to a PCI-e x1
bus.
Just like us, when Bill goes from TCP to UDP, he gets wire speed back.
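To put rough numbers on the bus-bandwidth point: a back-of-the-envelope sketch, assuming PCI-X at 64 bits / 133 MHz and first-generation PCI-e x1 at 2.5 GT/s with 8b/10b encoding (both are assumptions about the specific hardware in this thread, not confirmed details):

```python
# Back-of-the-envelope bus bandwidth check for the figures quoted above.
# Assumptions: 64-bit/133 MHz PCI-X, gen-1 PCI-e x1 (2.5 GT/s, 8b/10b).

def pci_x_gbps(bus_width_bits=64, clock_mhz=133):
    """Raw PCI-X throughput: bus width times clock rate, in Gb/s."""
    return bus_width_bits * clock_mhz * 1e6 / 1e9

def pcie_x1_gbps(gt_per_s=2.5, encoding_efficiency=0.8):
    """Gen-1 PCI-e x1: 2.5 GT/s per lane; 8b/10b encoding leaves 80%."""
    return gt_per_s * encoding_efficiency

# Full-duplex gigabit Ethernet needs roughly 1 Gb/s each way,
# i.e. about 2 Gb/s aggregate, before counting ACK/descriptor traffic.
print(pci_x_gbps())    # ~8.5 Gb/s, consistent with the ~8 Gb/s figure above
print(pcie_x1_gbps())  # ~2.0 Gb/s, only just enough for full-duplex GigE
```

The point being: even if the x1 link were marginal, the PCI-X setup has roughly 4x the headroom and still shows the same TCP throughput loss.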
Cheers,
Bruce