Message-ID: <4821D9D5.9000103@hp.com>
Date:	Wed, 07 May 2008 09:33:25 -0700
From:	Rick Jones <rick.jones2@...com>
To:	avorontsov@...mvista.com
CC:	netdev@...r.kernel.org, linuxppc-dev@...abs.org,
	Andy Fleming <afleming@...escale.com>
Subject: Re: [RFC] gianfar: low gigabit throughput

>>I have _got_ to make CPU utilization enabled by default one of these  
>>days :)  At least for mechanisms which don't require calibration.
> 
> Heh, I skipped the calibration chapter in the netperf manual. :-D
> I should go back to it.

Under Linux, the CPU utilization mechanism in netperf does not require 
calibration, so you can add a -c (and a -C if the remote is also Linux) to 
the global command-line options.  Netperf will then report CPU utilization 
and calculate the "service demand", which is the quantity of CPU 
consumed per unit of work.
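
For example, something along these lines (the address is just a 
placeholder for your receiver):

  netperf -H 192.168.1.2 -c -C -t TCP_STREAM

With -c and -C given, the usual throughput report gains local and 
remote CPU utilization columns plus the service demand figures.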

> So things become much better when the message size increases
> (I think netperf is then eating less CPU, and gives some processing
> time to the kernel?).

Unless the compiler is doing a poor job, or perhaps if you've 
enabled histograms (./configure --enable-histogram) and set verbosity to 
2 or more (not the case here), netperf itself shouldn't be consuming 
very much CPU at all up in user space.  Now, as the size of the buffers 
passed to the transport increases, there are fewer system calls per 
KB transferred, which should indeed be more efficient.
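
You can see that directly by varying the send size (placeholder 
address again; -m is the test-specific send-size option, given after 
the "--" separator):

  netperf -H 192.168.1.2 -c -C -t TCP_STREAM -- -m 4096
  netperf -H 192.168.1.2 -c -C -t TCP_STREAM -- -m 65536

Comparing the service demand between the two runs shows the per-KB 
cost of the extra system calls with the smaller sends.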

>>What is the nature of the DMA stream between the two tests?  I find it  
>>interesting that the TCP Mb/s went up by more than the CPU MHz and  
>>wonder how much the Bus MHz came into play there - perhaps there were  
>>more DMAs to set up, or DMAs across a broader memory footprint, for 
>>TCP than for UDP.
> 
> The gianfar indeed does a lot of DMA on the "buffer descriptors", so
> the bus speed probably matters a lot, and the combination of CPU and
> bus speed gives the final result.

Do you have any way to measure bus utilization - a logic analyzer, or 
perhaps some performance counters in the hardware?

If you have an HP-UX or Solaris system handy to act as a receiver, you 
might give that a try - it would be interesting to see the effect of 
their ACK avoidance heuristics on TCP throughput.  One non-trivial 
difference I keep forgetting to mention is that TCP will have that 
returning ACK stream that UDP will not.
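
For a back-to-back comparison of the two (placeholder address again; 
1472-byte sends keep each UDP datagram within a 1500-byte Ethernet 
MTU and so avoid IP fragmentation):

  netperf -H 192.168.1.2 -c -C -t UDP_STREAM -- -m 1472
  netperf -H 192.168.1.2 -c -C -t TCP_STREAM -- -m 1472

Any difference in throughput or service demand then comes down 
largely to TCP's returning ACK stream and congestion machinery 
rather than to the send sizes.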

rick jones
