Date:	Tue, 18 Jan 2011 10:23:44 -0800
From:	Rick Jones <rick.jones2@...com>
To:	Ben Hutchings <bhutchings@...arflare.com>
CC:	mi wake <wakemi.wake@...il.com>, netdev@...r.kernel.org
Subject: Re: rps testing questions

Ben Hutchings wrote:
> On Mon, 2011-01-17 at 17:43 +0800, mi wake wrote:
> 
>>I am doing RPS (Receive Packet Steering) testing on CentOS 5.5 with kernel 2.6.37.
>>CPU: 8-core Intel.
>>Ethernet adapter: bnx2x
>>
>>Problem statement:
>>I enable RPS with:
>>echo "ff" > /sys/class/net/eth2/queues/rx-0/rps_cpus
>>
>>Running one instance of netperf TCP_RR: netperf -t TCP_RR -H 192.168.0.1 -c -C
>>without RPS: 9963.48 (transactions/sec)
>>with RPS: 9387.59 (transactions/sec)
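
For what it's worth, rps_cpus is a hexadecimal bitmask of the CPUs allowed to
process packets for that receive queue, so "ff" spreads rx-0's work across
CPUs 0-7.  A quick illustrative check (assuming the same eth2/rx-0 setup; the
narrower mask is just an example for comparison runs):

  # show the active RPS mask for the queue
  cat /sys/class/net/eth2/queues/rx-0/rps_cpus
  # narrow RPS to CPUs 2 and 3 (mask 0x0c) for a comparison run
  echo 0c > /sys/class/net/eth2/queues/rx-0/rps_cpus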

Presumably there was an increase in service demand corresponding with the drop 
in transactions per second.

Also, an unsolicited benchmarking tip or two.  I find it helpful either to do 
several discrete runs or to use confidence intervals (the global -i and -I 
options) with the TCP_RR tests when I am comparing two settings.  I see a bit 
more "variability" in the _RR tests than in the _STREAM tests.

http://www.netperf.org/svn/netperf2/trunk/doc/netperf.html#index-g_t_002dI_002c-Global-26
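
For instance, something like this (the iteration and confidence parameters 
here are only illustrative):

  # run up to 30 iterations (minimum 3) until netperf is 99% confident the
  # reported mean is within a 5% interval (+/- 2.5%) of the true mean
  netperf -t TCP_RR -H 192.168.0.1 -c -C -i 30,3 -I 99,5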

Pinning netperf/netserver is also something I tend to do, but combining that 
with confidence intervals under RPS is tricky: the successive data connections 
made across the confidence-interval iterations will have different port numbers 
and so hash differently.  RPS will then steer the connections to different 
cores in turn, which, with netperf/netserver pinned to a core, changes the 
relationship between where netperf runs and where netserver runs from iteration 
to iteration.  That will likely cause cache-to-cache (processor cache) 
transfers, which will definitely raise the service demand and lower the 
single-stream transactions per second.
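
When I do pin, it is usually with netperf's global -T option; something like 
this (the CPU numbers are arbitrary):

  # bind netperf to CPU 2 on the local system and netserver to CPU 2 remotely
  netperf -t TCP_RR -H 192.168.0.1 -c -C -T 2,2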

In theory :) with RFS that should not be an issue, since where netperf/netserver 
are pinned controls where the inbound processing takes place.
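
For anyone wanting to try that, a minimal RFS setup sketch, assuming a kernel 
with RFS (2.6.35 or later) and the same eth2/rx-0 queue; the table sizes here 
are only illustrative:

  # size the global socket flow table
  echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
  # set the per-queue flow count
  echo 32768 > /sys/class/net/eth2/queues/rx-0/rps_flow_cnt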

We are in a maze of twisty heuristics... :)

>>I ran ab and tbench tests as well and likewise saw lower TPS with RPS
>>enabled, but higher CPU usage.  With RPS enabled, softirqs are balanced
>>across the CPUs.
>>
>>Is there something wrong with my test?
> 
> 
> In addition to what Eric said, check the interrupt moderation settings
> (ethtool -c/-C options).  One-way latency for a single request/response
> test will be at least the interrupt moderation value.
> 
> I haven't tested RPS by itself (Solarflare NICs have plenty of hardware
> queues) so I don't know whether it can improve latency.  However, RFS
> certainly does when there are many flows.
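
On the ethtool point, for example (illustrative; which coalescing parameters a 
given driver such as bnx2x actually exposes may differ):

  # show the current interrupt coalescing settings
  ethtool -c eth2
  # reduce rx interrupt moderation for a latency-oriented run
  ethtool -C eth2 rx-usecs 1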

Is there actually an expectation that either RPS or RFS would improve *latency*? 
Multiple-stream throughput, certainly, but with the additional work done to 
spread things around, I wouldn't expect either to improve latency.

happy benchmarking,

rick jones
