Message-ID: <46699192.6010404@hp.com>
Date:	Fri, 08 Jun 2007 10:27:46 -0700
From:	Rick Jones <rick.jones2@...com>
To:	hadi@...erus.ca
Cc:	Krishna Kumar2 <krkumar2@...ibm.com>,
	Gagan Arneja <gaagaan@...il.com>,
	Evgeniy Polyakov <johnpol@....mipt.ru>, netdev@...r.kernel.org,
	Sridhar Samudrala <sri@...ibm.com>,
	David Miller <davem@...emloft.net>,
	Robert Olsson <Robert.Olsson@...a.slu.se>
Subject: Re: [WIP][PATCHES] Network xmit batching

>>These results are based on the test script that I sent earlier today. I
>>removed the results for the UDP 32-process 512 and 4096 buffer cases since
>>the BW was coming in above line speed (in fact it was showing 1500Mb/s and
>>4900Mb/s respectively for both the ORG and these bits).
> 
> 
> I expect UDP to overwhelm the receiver. So the receiver needs a lot more
> tuning (like increased rcv socket buffer sizes to keep up, IMO).
> 
> But yes, the above is an odd result - Rick, any insight into this?

Indeed, there is no flow control provided by netperf for the UDP_STREAM 
test, so it is quite common for a receiver to be overwhelmed.  One can 
tweak the SO_RCVBUF size a bit to try to help with transients, but if 
the sender is sustainably faster than the receiver, you have to 
configure netperf with --enable-intervals and then provide a send burst 
size (number of sends) and an inter-burst interval (constrained by "HZ" 
on the platform) to pace the netperf UDP sender.  You can get finer 
grained control with --enable-spin, but that shoots your netperf-side 
CPU util to hell.
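
To make the burst arithmetic concrete (my own Python sketch; the names 
here are illustrative, not netperf options), the burst size falls out 
of the target send rate once the interval is pinned to the platform's 
timer tick:

    # Sketch: picking a send-burst size for a paced UDP sender.
    # Assumes the inter-burst interval cannot be shorter than one
    # timer tick (1/HZ); names are illustrative, not netperf flags.
    def burst_size(target_msgs_per_sec, hz):
        interval = 1.0 / hz  # shortest usable inter-burst interval
        return max(1, round(target_msgs_per_sec * interval))

    # Example: HZ=250 -> 4 ms minimum interval, so pacing to
    # 10,000 datagrams/sec means bursts of 40 sends per interval.
    print(burst_size(10_000, 250))  # -> 40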

And with UDP datagram sizes > MTU there is (in the abstract; not sure 
about current Linux code) the concern of filling a transmit queue with 
some but not all of the fragments of a datagram while the others are 
tossed, so one ends up sending unreassemblable datagram fragments.


>>Summary : Average BW (whatever meaning that has) improved 0.65%, while
>>          Service Demand deteriorated 11.86%
> 
> 
> Sorry, been many moons since I last played with netperf; what does "service
> demand" mean?

Service demand is a measure of efficiency.  It is a 
normalization/reconciliation of the "throughput" and the CPU utilization 
to arrive at a CPU-consumed-per-unit-of-work figure.  Lower is better.
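
As a rough sketch of that arithmetic (my illustration, not netperf 
source), with CPU utilization taken as a fraction across N CPUs:

    # Sketch of the service demand arithmetic (illustrative, not
    # netperf code).  util: fraction of total CPU busy (0.0-1.0);
    # throughput_kbps: measured throughput in KB/s.
    def service_demand_usec_per_kb(util, n_cpus, throughput_kbps):
        cpu_usec_per_sec = util * n_cpus * 1_000_000  # CPU time burned per wall second
        return cpu_usec_per_sec / throughput_kbps     # CPU usec per KB moved

    # Example: 25% util on 4 CPUs moving 100,000 KB/s
    # -> 1,000,000 usec of CPU per second / 100,000 KB/s = 10 usec/KB.
    print(service_demand_usec_per_kb(0.25, 4, 100_000))  # -> 10.0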

Now, when running aggregate tests with netperf2 using the "launch a 
bunch in the background with confidence intervals enabled to get 
iterations to minimize skew error" method :)

<http://www.netperf.org/svn/netperf2/tags/netperf-2.4.3/doc/netperf.html#Using-Netperf-to-Measure-Aggregate-Performance>

you cannot take the netperf service demand directly - each netperf 
instance calculates its figure assuming that it is the only thing 
running on the system.  It then ass-u-me-s that all of the CPU util it 
measured was for its own work.  This means the reported service demand 
figure will be quite a bit higher than it really is.

So, for aggregate tests using netperf2, one has to calculate service 
demand by hand.  Sum the throughput as KB/s, convert the CPU util and 
number of CPUs to microseconds of CPU consumed per second, and divide 
the two to get microseconds per KB for the aggregate.
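
Something like this, with made-up numbers (a sketch of the hand 
calculation, not actual netperf output):

    # Sketch: aggregate service demand computed by hand from several
    # concurrent netperf instances (illustrative values, not real output).
    throughputs_kbps = [45_000, 44_200, 46_100, 45_700]  # one entry per netperf
    util = 0.60      # system-wide CPU util fraction during the run
    n_cpus = 4

    aggregate_kbps = sum(throughputs_kbps)
    cpu_usec_per_sec = util * n_cpus * 1_000_000  # total CPU usec burned per second

    # Aggregate service demand: CPU usec per KB across all streams together.
    print(cpu_usec_per_sec / aggregate_kbps)  # ~13.26 usec/KB here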

rick jones
