Message-ID: <51951A8B.8080801@hp.com>
Date:	Thu, 16 May 2013 10:42:35 -0700
From:	Rick Jones <rick.jones2@...com>
To:	christoph.paasch@...ouvain.be
CC:	Eric Dumazet <eric.dumazet@...il.com>,
	netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next] tcp: speedup tcp_fixup_rcvbuf()

On 05/16/2013 12:06 AM, Christoph Paasch wrote:
> just out of curiosity, how do you run 200 concurrent netperfs?
> Is there an option as in iperf (-P) ?
> I did not find anything like this in the netperf-code.

There is nothing like that in the netperf2 code.  Concurrent netperfs are 
handled outside of netperf itself via scripting.  The netperf manual 
discusses several mechanisms that can be used in conjunction with that 
external scripting to mitigate skew error:

http://www.netperf.org/svn/netperf2/trunk/doc/netperf.html#Using-Netperf-to-Measure-Aggregate-Performance
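
For what it's worth, the external scripting can be as simple as 
backgrounding individual netperfs from a shell loop.  A minimal sketch, 
not taken from the netperf docs -- N, DEST, and the output file naming 
are just illustrative placeholders:

```shell
#!/bin/sh
# Illustrative sketch: launch N concurrent netperf TCP_CRR tests from
# plain shell, since netperf2 has no built-in -P-style option.
# N, DEST, and the output naming are assumptions, not netperf defaults.
N=200
DEST=192.168.1.10
for i in $(seq 1 "$N"); do
    netperf -t TCP_CRR -H "$DEST" -l 60 > "netperf_${i}.out" 2>&1 &
done
wait  # let every backgrounded netperf finish before post-processing
```

This simple form suffers the skew error the manual talks about, which is 
exactly what the mechanisms below help mitigate.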

My favorite these days is to use the interim results emitted when 
netperf is ./configure'd with --enable-demo, together with reasonably 
synchronized clocks on the different systems running netperf, and then 
post-process them.  A single-system example of that being done is in 
doc/examples/runemomniaggdemo.sh, the results of which can be 
post-processed with doc/examples/post_proc.py.
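
If memory serves, the runtime knob for a --enable-demo build is the 
global -D option; a hedged sketch (the hostname, interval, and output 
file are placeholders):

```shell
# Assumed usage: with a --enable-demo build of netperf, the global -D
# option emits interim results at (roughly) the given interval in
# seconds.  "remotehost", the 0.5s interval, and the output file name
# are illustrative, not prescribed values.
netperf -t TCP_STREAM -H remotehost -l 120 -D 0.5 > interim_1.out
```

One such output file per concurrent netperf, with timestamps from the 
synchronized clocks, is what the post-processing script then aligns.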

I have used the interim results plus post processing mechanism as far 
out as 512ish concurrent netperfs running on 512ish systems targeting 
512ish other systems.  Apart from my innate lack of patience :) I don't 
believe there is much there to limit that mechanism scaling further. 
Perhaps others have already gone farther.

In this specific situation where Eric was running 200 netperf TCP_CRR 
tests over loopback, if the difference from removing the loop was 
sufficiently large (and I'm guessing so based on the perf top output) 
then I would expect the difference to appear in service demand even for 
a single stream of TCP_CRR tests.

something like:

netperf -t TCP_CRR -c -i 30,3

before and after the change.  Perhaps use the -I option to request a 
narrower confidence interval than the default 5%, and use a longish 
per-iteration runtime (-l option) to help ensure hitting the confidence 
intervals.
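
Putting those options together, one plausible invocation -- the 
specific -I and -l values here are just examples, not recommendations:

```shell
# Illustrative before/after comparison run: -c reports local CPU
# utilization and service demand, -i 30,3 allows up to 30 (at least 3)
# iterations, -I 99,2 requests a 99% confidence level with a 2%-wide
# (+/-1%) interval, and -l 60 lengthens each iteration to 60 seconds.
netperf -t TCP_CRR -c -i 30,3 -I 99,2 -l 60
```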

happy benchmarking,

rick jones
--
