Date:	Thu, 08 Sep 2011 17:18:05 -0700
From:	Rick Jones <rick.jones2@...com>
To:	Glauber Costa <glommer@...allels.com>
CC:	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	containers@...ts.osdl.org, netdev@...r.kernel.org,
	xemul@...allels.com, "David S. Miller" <davem@...emloft.net>,
	Hiroyuki Kamezawa <kamezawa.hiroyu@...fujitsu.com>,
	"Eric W. Biederman" <ebiederm@...ssion.com>
Subject: Re: [PATCH] per-cgroup tcp buffer limitation

On 09/05/2011 07:35 PM, Glauber Costa wrote:
> To test for any performance impacts of this patch, I used netperf's
> TCP_RR benchmark on localhost, so we can have both recv and snd in action.
>
> Command line used was ./src/netperf -t TCP_RR -H localhost, and the
> results:
>
> Without the patch
> =================
>
> Local /Remote
> Socket Size   Request  Resp.   Elapsed  Trans.
> Send   Recv   Size     Size    Time     Rate
> bytes  Bytes  bytes    bytes   secs.    per sec
>
> 16384  87380  1        1       10.00    26996.35
> 16384  87380
>
> With the patch
> ===============
>
> Local /Remote
> Socket Size   Request  Resp.   Elapsed  Trans.
> Send   Recv   Size     Size    Time     Rate
> bytes  Bytes  bytes    bytes   secs.    per sec
>
> 16384  87380  1        1       10.00    27291.86
> 16384  87380

A comment about netperf TCP_RR - it can often show > 1% run-to-run 
variability, so it would be a Good Idea (tm) to either run it several 
times in a row, or rely on the confidence-intervals functionality.  Here, 
for example, is an invocation of netperf using confidence intervals and 
the recently added, related output selectors.  The options ask netperf to 
be 99% confident that the width of the confidence interval is 1%, running 
at least 3 and at most 30 iterations of the test.


raj@...dy:~/netperf2_trunk$ src/netperf -t TCP_RR -i 30,3 -I 99,1 -- \
    -k throughput,confidence_level,confidence_interval,confidence_iteration,throughput_confid
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET 
to localhost.localdomain (127.0.0.1) port 0 AF_INET : +/-0.500% @ 99% 
conf.  : histogram : first burst 0
THROUGHPUT=55555.94
CONFIDENCE_LEVEL=99
CONFIDENCE_INTERVAL=1.000000
CONFIDENCE_ITERATION=26
THROUGHPUT_CONFID=0.984

It took 26 iterations for netperf to be 99% confident that the interval 
width was < 1%.  Here is a "several times in a row" run for the sake of 
completeness:

raj@...dy:~/netperf2_trunk$ HDR="-P 1"; for i in `seq 1 10`; do \
    netperf -t TCP_RR $HDR -B "iteration $i" -- -o result_brand,throughput; \
    HDR="-P 0"; done
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET 
to localhost.localdomain (127.0.0.1) port 0 AF_INET : first burst 0
Result Tag,Throughput
"iteration 1",55768.37
"iteration 2",55949.97
"iteration 3",55653.36
"iteration 4",55994.65
"iteration 5",54712.42
"iteration 6",55285.27
"iteration 7",55638.65
"iteration 8",55135.56
"iteration 9",56275.87
"iteration 10",55607.66

That way one can have greater confidence that one isn't accidentally 
comparing the trough of one configuration with the peak of another.

happy benchmarking,

rick jones

PS - while it may not matter much for loopback testing, where presumably 
99 times out of 10 a single core will be running at saturation, when 
running TCP_RR over a "real" network it is another Good Idea (tm) to 
include CPU utilization so one can compare service demands - particularly 
in the face of interrupt coalescing.
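For example, something along these lines (a sketch only - the -c/-C 
options enable local and remote CPU measurement, the service-demand 
selectors are assumed from the same output-selection support shown above, 
and remotehost is just a placeholder):

netperf -t TCP_RR -H remotehost -c -C -i 30,3 -I 99,1 -- \
    -o throughput,local_cpu_util,local_sd,remote_cpu_util,remote_sd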