Message-ID: <47A20BA1.8070206@hp.com>
Date:	Thu, 31 Jan 2008 09:55:45 -0800
From:	Rick Jones <rick.jones2@...com>
To:	Carsten Aulbert <carsten.aulbert@....mpg.de>
CC:	"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
	Bruce Allen <ballen@...vity.phys.uwm.edu>,
	netdev@...r.kernel.org,
	Henning Fehrmann <henning.fehrmann@....mpg.de>,
	Bruce Allen <bruce.allen@....mpg.de>
Subject: Re: e1000 full-duplex TCP performance well below wire speed

> netperf was used without any special tuning parameters. Usually we start 
> two processes on two hosts (almost) simultaneously; they run for 20-60 
> seconds and simply use UDP_STREAM (works well) and TCP_STREAM, i.e.
> 
> on 192.168.0.202: netperf -H 192.168.2.203 -t TCP_STREAM -l 20
> on 192.168.0.203: netperf -H 192.168.2.202 -t TCP_STREAM -l 20
> 
> 192.168.0.20[23] here is on eth0, which cannot do jumbo frames, so we 
> use the .2. subnet on eth1 for a range of MTUs.
> 
> The server is started on both nodes with start-stop-daemon, with no 
> special parameters that I'm aware of.


So long as you are relying on external (netperf-relative) means to 
report the throughput, those command lines would be fine.  I wouldn't be 
comfortable relying on the sum of the netperf-reported throughputs with 
those command lines, though.  Netperf2 has no test synchronization, so 
two separate commands, particularly those initiated on different 
systems, are subject to skew errors.  99 times out of ten they might be 
epsilon, but I get a _little_ paranoid there.

There are three alternatives:

1) Use netperf4.  It is not as convenient for "quick" testing at 
present, but it has explicit test synchronization, so you "know" that 
the numbers presented are from when all connections were actively 
transferring data.

2) Use the aforementioned "burst" TCP_RR test.  This is then a single 
netperf with data flowing both ways on a single connection, so there is 
no issue of skew, but perhaps an issue of being one connection, and so 
one process, on each end.  A sketch of such an invocation follows.
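
For concreteness, here is roughly what such a burst-mode invocation 
might look like, assuming netperf was built with --enable-burst; the 
burst size of 32 and the 65536-byte request/response sizes are 
illustrative values, not tuned recommendations:

  # run on 192.168.2.202 against a netserver already running on .203;
  # -b 32 keeps up to 32 transactions in flight at once, and
  # -r 65536,65536 makes each request and response 64KB, so the single
  # connection approximates bulk transfer in both directions
  netperf -H 192.168.2.203 -t TCP_RR -l 60 -- -b 32 -r 65536,65536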

3) Start both tests from the same system and follow the suggestions 
contained in:

<http://www.netperf.org/svn/netperf2/tags/netperf-2.4.4/doc/netperf.html>

particularly:

<http://www.netperf.org/svn/netperf2/tags/netperf-2.4.4/doc/netperf.html#Using-Netperf-to-Measure-Aggregate-Performance>

and use a combination of TCP_STREAM and TCP_MAERTS (STREAM backwards) tests.
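
As a sketch of that approach, with both commands launched from 
192.168.2.202 against a netserver on .203 (the 60-second run length is 
illustrative; longer runs shrink the relative error from the slightly 
staggered starts):

  # TCP_STREAM sends from the local system to the remote one, while
  # TCP_MAERTS pulls data in the opposite direction, so together they
  # load both directions of the link without coordinating two hosts
  netperf -H 192.168.2.203 -t TCP_STREAM -l 60 &
  netperf -H 192.168.2.203 -t TCP_MAERTS -l 60 &
  wait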

happy benchmarking,

rick jones