Message-ID: <4BB622F6.10606@hp.com>
Date:	Fri, 02 Apr 2010 10:01:42 -0700
From:	Rick Jones <rick.jones2@...com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	Changli Gao <xiaosuo@...il.com>, Tom Herbert <therbert@...gle.com>,
	davem@...emloft.net, netdev@...r.kernel.org
Subject: Re: [PATCH] rfs: Receive Flow Steering

Eric Dumazet wrote:
> 
> Your claim that RPS is not good for applications is wrong; our test
> results show an improvement as is. Maybe your applications don't
> scale because of bad habits or colliding heuristics, I don't know.

The progression in HP-UX was IPS in 10.20 (analogous to RPS), followed by TOPS 
in 11.0 (analogous to RFS). We found that IPS was great for 
single-flow-per-thread-of-execution workloads and that TOPS was better for 
multiple-flow-per-thread-of-execution workloads.  It was long enough ago that I 
can safely say that on one system-level benchmark, one not known as a 
"networking" benchmark and without a massive kernel component, TOPS was a 10% 
win.  Not too shabby.

It wasn't that IPS wasn't good in its context - just that TOPS was even better.
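
For anyone not steeped in the distinction, here is a rough sketch in C of the
two steering policies as I understand them; every name in it (rps_select_cpu,
rfs_table, and so on) is illustrative only and not taken from Tom's patch:

/*
 * Illustrative only: a toy model of the difference between
 * hash-based steering (IPS/RPS) and consumer-aware steering
 * (TOPS/RFS).  None of these names come from the actual patch.
 */
#include <stdint.h>

#define NR_CPUS  16
#define TABLE_SZ 4096

/* Last CPU observed consuming each flow; -1 means "no record yet". */
static int rfs_table[TABLE_SZ];

static void rfs_table_init(void)
{
	for (int i = 0; i < TABLE_SZ; i++)
		rfs_table[i] = -1;
}

/* IPS/RPS: choose a CPU from a hash of the flow tuple alone.
 * Works well when one thread owns one flow and stays put. */
static int rps_select_cpu(uint32_t flow_hash)
{
	return flow_hash % NR_CPUS;
}

/* TOPS/RFS: prefer the CPU where the application last consumed
 * this flow, falling back to the hash when nothing is recorded. */
static int rfs_select_cpu(uint32_t flow_hash)
{
	int cpu = rfs_table[flow_hash % TABLE_SZ];
	return cpu >= 0 ? cpu : rps_select_cpu(flow_hash);
}

The multiple-flow-per-thread case is exactly where the hash alone hurts: flows 
belonging to one thread hash to different CPUs, so that thread's data ends up 
scattered across the machine.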

We also preferred the concept of the scheduler giving networking clues as to 
where to process an application's packets, rather than networking trying to 
tell the scheduler.  There was some discussion of out-of-order worries, but we 
were willing to trust the basic soundness of the scheduler - if it was moving 
threads around willy-nilly at a rate able to cause big packet reordering, it 
had fundamental problems that would have to be addressed anyway.  And while it 
may be incendiary to point this out :)  I suspect (without concrete data :) 
that bonding mode 0 (round-robin) is a much, Much, MUCH larger source of 
out-of-order traffic than any plausible scheduler thrashing.
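
To make the "scheduler informs networking" direction concrete, here is a
hypothetical sketch of the write side of the same toy table from above:
whenever a thread reads from a socket, record the CPU the scheduler put it
on.  Only sched_getcpu() is a real interface (glibc); rfs_table and
note_consumer_cpu are made up for illustration:

/*
 * Hypothetical sketch: record, at socket-read time, which CPU the
 * scheduler has the consuming thread on.  Later packets for this
 * flow can then be steered to that CPU.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdint.h>

#define TABLE_SZ 4096
extern int rfs_table[TABLE_SZ];	/* flow hash -> last consuming CPU */

static void note_consumer_cpu(uint32_t flow_hash)
{
	/* The scheduler is the authority on placement; networking
	 * merely records its answer and follows it for subsequent
	 * packets on this flow. */
	rfs_table[flow_hash % TABLE_SZ] = sched_getcpu();
}

If the scheduler really were migrating threads fast enough for that record to 
go stale between packets, the churn itself, not the steering, would be the bug 
to fix - which is the point above.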

happy benchmarking,

rick jones
