Message-ID: <47A0CD3B.3050502@candelatech.com>
Date:	Wed, 30 Jan 2008 11:17:15 -0800
From:	Ben Greear <greearb@...delatech.com>
To:	Bruce Allen <ballen@...vity.phys.uwm.edu>
CC:	netdev@...r.kernel.org,
	Carsten Aulbert <carsten.aulbert@....mpg.de>,
	Henning Fehrmann <henning.fehrmann@....mpg.de>,
	Bruce Allen <bruce.allen@....mpg.de>
Subject: Re: e1000 full-duplex TCP performance well below wire speed

Bruce Allen wrote:
> (Pádraig Brady has suggested that I post this to Netdev.  It was 
> originally posted to LKML here: http://lkml.org/lkml/2008/1/30/141 )
> 
> 
> Dear NetDev,
> 
> We've connected a pair of modern high-performance boxes with integrated 
> copper Gb/s Intel NICs, with an Ethernet crossover cable, and have run 
> some netperf full-duplex TCP tests.  The transfer rates are well below 
> wire speed.  We're reporting this as a kernel bug, because we expect a 
> vanilla kernel with default settings to give wire-speed (or close to 
> wire-speed) performance in this case.  We DO see wire speed in simplex 
> transfers.  The behavior has been verified on multiple machines with 
> identical hardware.

Try using NICs in the PCIe slots.  We have better luck there,
since you usually get more lanes and/or higher-quality NIC
chipsets that way.

Try a UDP test to make sure the NIC can actually handle the throughput.
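
netperf's UDP_STREAM test will do that directly.  If you'd rather have a
self-contained check, a rough sketch along these lines works (port, payload
size and duration below are arbitrary placeholders, and a Python sender can
easily be the bottleneck before the NIC is, so treat the result as a lower
bound):

#!/usr/bin/env python
# Rough stand-in for a netperf UDP_STREAM run: blast UDP one way and see
# what actually arrives.  Port, payload size and duration are arbitrary
# placeholders.  Run "udp_check.py recv" on one box first, then
# "udp_check.py send <receiver-ip>" on the other.
import socket, sys, time

PORT = 12345
PAYLOAD = 1472        # largest UDP payload that fits a 1500-byte frame
DURATION = 10.0       # seconds of sending

def send(dst):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    buf = b"\x00" * PAYLOAD
    sent = 0
    deadline = time.time() + DURATION
    while time.time() < deadline:
        sent += s.sendto(buf, (dst, PORT))
    print("sent     %7.1f Mbit/s" % (sent * 8 / DURATION / 1e6))

def recv():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    s.settimeout(2.0)                 # stop 2s after the sender goes quiet
    got, first, last = 0, None, None
    while True:
        try:
            data, _ = s.recvfrom(65535)
        except socket.timeout:
            break
        last = time.time()
        if first is None:
            first = last
        got += len(data)
    if first and last > first:
        print("received %7.1f Mbit/s" % (got * 8 / (last - first) / 1e6))

if __name__ == "__main__":
    send(sys.argv[2]) if sys.argv[1] == "send" else recv()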

Look at the actual link usage as reported by the Ethernet driver, so that
you take all of the ACKs and other overhead into account.
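
The per-interface counters under /sys/class/net/<iface>/statistics/ (or the
driver stats from "ethtool -S") show this.  Here is a rough sketch of
sampling them over a window, with the interface name and window length as
placeholders; note the byte counters typically exclude preamble, inter-frame
gap and FCS, so real wire utilization is a bit higher than what gets
printed:

#!/usr/bin/env python
# Sample the interface byte/packet counters over a fixed window and print
# the implied load in each direction.  "eth0" and the 10s window are
# placeholders for your own setup.
import time

IFACE = "eth0"
WINDOW = 10.0

def stat(name):
    with open("/sys/class/net/%s/statistics/%s" % (IFACE, name)) as f:
        return int(f.read())

names = ("rx_bytes", "tx_bytes", "rx_packets", "tx_packets")
before = dict((n, stat(n)) for n in names)
time.sleep(WINDOW)
after = dict((n, stat(n)) for n in names)

for d in ("rx", "tx"):
    nbytes = after[d + "_bytes"] - before[d + "_bytes"]
    npkts = after[d + "_packets"] - before[d + "_packets"]
    print("%s: %8.1f Mbit/s  %8d pkts/s  avg %4d bytes/pkt" % (
        d, nbytes * 8 / WINDOW / 1e6, npkts / WINDOW,
        nbytes // npkts if npkts else 0))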

Try the same test using 10G hardware (CX4 NICs are quite affordable
these days, and we drove a 2-port 10G NIC based on the Intel ixgbe
chipset at around 4Gbps per port, full duplex, using pktgen).  That
works out to roughly 16Gbps of traffic across the buses (2 ports x
~4Gbps x both directions).  It may also give you an idea of whether
the bottleneck is hardware or software related.
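
pktgen is driven entirely through /proc/net/pktgen, so a run like that can
be scripted in a few lines.  A minimal sketch, assuming the pktgen module is
already loaded and with the interface name, destination IP and MAC as
placeholders:

#!/usr/bin/env python
# Minimal pktgen run, driven through /proc/net/pktgen (assumes the pktgen
# module is loaded).  Interface name, destination IP and MAC below are
# placeholders for your own setup.
THREAD = "/proc/net/pktgen/kpktgend_0"   # first pktgen kernel thread
DEVICE = "/proc/net/pktgen/eth1"
PGCTRL = "/proc/net/pktgen/pgctrl"

def pg(path, cmd):
    with open(path, "w") as f:
        f.write(cmd + "\n")

pg(THREAD, "rem_device_all")
pg(THREAD, "add_device eth1")
pg(DEVICE, "count 10000000")      # frames to send (0 = run until stopped)
pg(DEVICE, "clone_skb 100")       # reuse each skb to cut allocation cost
pg(DEVICE, "pkt_size 1500")
pg(DEVICE, "delay 0")
pg(DEVICE, "dst 10.0.0.2")
pg(DEVICE, "dst_mac 00:11:22:33:44:55")
pg(PGCTRL, "start")               # blocks for the duration of the run

with open(DEVICE) as f:
    print(f.read())               # per-device results: pps, Mb/sec, errors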

Ben

-- 
Ben Greear <greearb@...delatech.com>
Candela Technologies Inc  http://www.candelatech.com

