Message-ID: <1429131712.7346.144.camel@edumazet-glaptop2.roam.corp.google.com>
Date:	Wed, 15 Apr 2015 14:01:52 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	rapier <rapier@....edu>
Cc:	netdev@...r.kernel.org
Subject: Re: [Question] TCP stack performance decrease since 3.14

On Wed, 2015-04-15 at 15:31 -0400, rapier wrote:
> All,
> 
> First, my apologies if this came up previously, but I couldn't find 
> anything using a keyword search of the mailing list archive.
> 
> As part of the ongoing work on web10g I need to come up with baseline 
> TCP stack performance for various kernel revisions. Using netperf and 
> super_netperf* I've found that performance for TCP_CC, TCP_RR, and 
> TCP_CRR has decreased since 3.14.
> 
>            3.14     3.18     4.0      decrease (3.14 -> 4.0)
> TCP_CC     183945   179222   175793   4.4%
> TCP_RR     594495   585484   561365   5.6%
> TCP_CRR    98677    96726    93026    5.7%
> 
> Stream tests have remained the same from 3.14 through 4.0.
> 
> All tests were conducted on the same platform from clean boot with stock 
> kernels.
> 
> So my questions are:
> 
> Has anyone else seen this, or is it a result of some weirdness on my 
> system or an artifact of my tests?
> 
> If others have seen this, or it is simply to be expected (from new 
> features and the like), is it due to the TCP stack itself or to other 
> changes in the kernel?
> 
> If so, is there any way to mitigate the effect via stack tuning, 
> kernel configuration, etc.?
> 
> Thanks!
> 
> Chris
> 
> 
> * The above results are the average of 10 iterations of super_netperf 
> for each test. I can run more iterations to verify the results, but 
> they seem consistent. The number of parallel processes for each test 
> was tuned to produce the maximum result: enough to push things, but 
> not so many as to cause performance hits from being CPU/memory/etc. 
> bound. If anyone wants the full results and test scripts, just let me 
> know.
> --
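A minimal harness along these lines (hypothetical, not the poster's
actual script: it assumes netperf is in PATH, that "-P 0 -v 0" trims
the output to the single figure of merit, and the host argument and
iteration count are illustrative; adjust the parsing for your netperf
build) might look like:

/* Hypothetical harness: run netperf TCP_RR a fixed number of times
 * against one host and report the mean transaction rate.  Assumes
 * "-P 0 -v 0" reduces netperf's output to the single figure of
 * merit; adjust the parsing if your build prints more. */
#include <stdio.h>

#define ITERATIONS 10	/* matches the 10 runs averaged above */

int main(int argc, char **argv)
{
	const char *host = argc > 1 ? argv[1] : "127.0.0.1";
	char cmd[256];
	double sum = 0.0;
	int i;

	snprintf(cmd, sizeof(cmd),
		 "netperf -t TCP_RR -H %s -l 30 -P 0 -v 0", host);

	for (i = 0; i < ITERATIONS; i++) {
		FILE *p = popen(cmd, "r");
		double rate;

		if (!p) {
			perror("popen");
			return 1;
		}
		if (fscanf(p, "%lf", &rate) != 1) {
			fprintf(stderr, "run %d: could not parse netperf output\n", i);
			pclose(p);
			return 1;
		}
		pclose(p);
		sum += rate;
	}
	printf("mean TCP_RR rate over %d runs: %.0f trans/s\n",
	       ITERATIONS, sum / ITERATIONS);
	return 0;
}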

Make sure you do not hit a c-state issue.

I've seen improvements in the stack translate into longer idle times
between events, and the CPU then takes longer to exit deep c-states.
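
One way to rule that out is to pin the PM QoS latency target for the
duration of the run and see whether the numbers recover (turbostat can
confirm the change in c-state residency). A minimal sketch against the
kernel's /dev/cpu_dma_latency interface (typically needs root); note
the constraint only holds while the file descriptor stays open:

/* Sketch using the standard PM QoS interface: hold the
 * cpu_dma_latency target at 0 us so CPUs stay out of deep c-states.
 * The constraint lasts only while the fd is open, so keep this
 * running for the duration of the benchmark. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int32_t latency_us = 0;	/* 0 = disallow deep c-states */
	int fd = open("/dev/cpu_dma_latency", O_WRONLY);

	if (fd < 0) {
		perror("open /dev/cpu_dma_latency");
		return 1;
	}
	if (write(fd, &latency_us, sizeof(latency_us)) !=
	    sizeof(latency_us)) {
		perror("write");
		close(fd);
		return 1;
	}
	pause();	/* keep fd open; exiting drops the constraint */
	return 0;
}

Booting with processor.max_cstate=1 (or intel_idle.max_cstate=1 with
the intel_idle driver) is a coarser alternative for a quick check.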


