Message-ID: <Pine.LNX.4.63.0801300324000.6391@trinity.phys.uwm.edu>
Date:	Wed, 30 Jan 2008 03:51:51 -0600 (CST)
From:	Bruce Allen <ballen@...vity.phys.uwm.edu>
To:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
cc:	Henning Fehrmann <henning.fehrmann@....mpg.de>,
	Carsten Aulbert <carsten.aulbert@....mpg.de>,
	Bruce Allen <bruce.allen@....mpg.de>
Subject: e1000 full-duplex TCP performance well below wire speed

Dear LKML,

We've connected a pair of modern high-performance boxes, each with an 
integrated copper Gb/s Intel NIC, using an ethernet crossover cable, and 
have run some netperf full-duplex TCP tests.  The transfer rates are well 
below wire speed.  We're reporting this as a kernel bug, because we expect 
a vanilla kernel with default settings to give wire-speed (or close to 
wire-speed) performance in this case.  We DO see wire speed in simplex 
transfers.  The behavior has been verified on multiple machines with 
identical hardware.

Details:
Kernel version: 2.6.23.12
ethernet NIC: Intel 82573L
ethernet driver: e1000 version 7.3.20-k2
motherboard: Supermicro PDSML-LN2+ (one quad core Intel Xeon X3220, Intel 
3000 chipset, 8GB memory)

The tests were done with various MTU sizes ranging from 1500 to 9000, 
with ethernet flow control switched on and off, and with reno and cubic 
as the TCP congestion control algorithm.
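
For reference, the sweep looks roughly like the sketch below. The 
interface name (eth0), the peer address, and the netperf arguments are 
assumptions for illustration, not the exact commands we ran:

#!/usr/bin/env python
# Hypothetical sketch of the parameter sweep described above; the
# interface name, peer address, and netperf arguments are assumptions.
import subprocess

IFACE = "eth0"          # assumed interface name
PEER = "192.168.0.2"    # assumed address of the other test box

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

for mtu in (1500, 4500, 9000):
    for flow in ("on", "off"):
        for cc in ("reno", "cubic"):
            run(["ip", "link", "set", IFACE, "mtu", str(mtu)])
            run(["ethtool", "-A", IFACE, "rx", flow, "tx", flow])
            run(["sysctl", "-w",
                 "net.ipv4.tcp_congestion_control=" + cc])
            # Full duplex: run the send and receive streams at once.
            tx = subprocess.Popen(["netperf", "-H", PEER,
                                   "-t", "TCP_STREAM", "-l", "60"])
            rx = subprocess.Popen(["netperf", "-H", PEER,
                                   "-t", "TCP_MAERTS", "-l", "60"])
            tx.wait()
            rx.wait()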

The behavior depends on the setup. In one test we used cubic congestion 
control with flow control off. The transfer rate in one direction was 
above 0.9 Gb/s, while in the other direction it was 0.6 to 0.8 Gb/s. 
After 15-20s the rates flipped. Perhaps the two streams are fighting for 
resources. (The performance of a full-duplex stream should be close to 
1 Gb/s in both directions.)  A graph of the transfer speed as a function 
of time is here: 
https://n0.aei.uni-hannover.de/networktest/node19-new20-noflow.jpg
Red shows transmit and green shows receive (please ignore the other plots).
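
A rate-vs-time plot like that can be reproduced by sampling the interface 
byte counters once per second; a rough sketch (interface name assumed) is:

import time

IFACE = "eth0"   # assumed interface name

def read_counters():
    # In /proc/net/dev, bytes received is the first field after the
    # colon and bytes transmitted is the ninth.
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(IFACE + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise RuntimeError(IFACE + " not found")

prev_rx, prev_tx = read_counters()
while True:
    time.sleep(1)
    rx, tx = read_counters()
    # Convert the one-second byte deltas to Gb/s.
    print("rx %.3f Gb/s  tx %.3f Gb/s" %
          ((rx - prev_rx) * 8 / 1e9, (tx - prev_tx) * 8 / 1e9))
    prev_rx, prev_tx = rx, tx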

We're happy to do additional testing, if that would help, and very 
grateful for any advice!

Bruce Allen
Carsten Aulbert
Henning Fehrmann
