Date:	Tue, 18 Jan 2011 11:10:36 -0800
From:	Rick Jones <rick.jones2@...com>
To:	Ben Hutchings <bhutchings@...arflare.com>
CC:	mi wake <wakemi.wake@...il.com>, netdev@...r.kernel.org
Subject: Re: rps testing questions

Ben Hutchings wrote:
> On Tue, 2011-01-18 at 10:23 -0800, Rick Jones wrote:
> 
>>Ben Hutchings wrote:
>>
>>>On Mon, 2011-01-17 at 17:43 +0800, mi wake wrote:
> 
> [...]
> 
>>>>I also ran ab and tbench tests and likewise saw lower tps with RPS
>>>>enabled, but higher CPU usage.  With RPS enabled, softirqs are
>>>>balanced across the CPUs.
>>>>
>>>>Is there something wrong with my test?
>>>
>>>
>>>In addition to what Eric said, check the interrupt moderation settings
>>>(ethtool -c/-C options).  One-way latency for a single request/response
>>>test will be at least the interrupt moderation value.
>>>
>>>I haven't tested RPS by itself (Solarflare NICs have plenty of hardware
>>>queues) so I don't know whether it can improve latency.  However, RFS
>>>certainly does when there are many flows.
>>
>>Is there actually an expectation that either RPS or RFS would improve *latency*? 
>>  Multiple-stream throughput certainly, but with the additional work done to 
>>spread things around, I wouldn't expect either to improve latency.
> 
> 
> Yes, it seems to make a big improvement to latency when many flows are
> active. 
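[Editor's note: Ben's pointer to interrupt moderation can be checked with commands like the following. This is a sketch: `eth0` and the coalescing values are placeholders, not from the thread, and the commands need root and real hardware.]

```shell
# Show the current interrupt coalescing settings (hypothetical NIC eth0):
ethtool -c eth0

# For a latency-sensitive request/response test, reduce rx coalescing so
# the NIC interrupts as soon as a frame arrives (example values only;
# this trades CPU efficiency for lower per-transaction latency):
ethtool -C eth0 rx-usecs 0 rx-frames 1
```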

OK, you and I were using different definitions.  I was speaking to single-stream 
latency, but didn't say it explicitly (I may have subconsciously thought it was 
implicit given the OP used a single instance of netperf :).
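[Editor's note: for readers reproducing this, RPS and RFS are enabled per rx queue through sysfs, as described in the kernel's Documentation/networking/scaling.rst. A sketch follows; the interface name, CPU mask, and table sizes are assumptions.]

```shell
# RPS: spread receive packet processing for eth0's queue 0 across
# CPUs 0-3 (hex bitmask f); requires root and CONFIG_RPS:
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus

# RFS: additionally steer each flow to the CPU where its consuming
# application runs; size the global socket flow table, then the
# per-queue flow count:
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo 32768 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
```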

happy benchmarking,

rick jones

> Tom told me that one of his benchmarks was 200 * netperf TCP_RR
> in parallel, and I've seen over 40% reduction in latency for that. That
> said, allocating more RX queues might also help (sfc currently defaults
> to one per processor package rather than one per processor thread, due
> to concerns about CPU efficiency).
> 
> Ben.
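[Editor's note: the many-flow test Ben describes could be run along these lines. A sketch only: the server address, flow count, and run length are assumptions, and a `netserver` must already be running on the target host.]

```shell
# Launch 200 concurrent single-byte TCP request/response streams;
# each netperf instance reports its own transaction rate:
SERVER=192.0.2.1   # hypothetical netserver host
for i in $(seq 1 200); do
    netperf -H "$SERVER" -t TCP_RR -l 30 -- -r 1,1 &
done
wait
```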
> 
