Date:	Wed, 15 Apr 2015 14:31:49 -0700
From:	Rick Jones <rick.jones2@...com>
To:	rapier <rapier@....edu>, netdev@...r.kernel.org
Subject: Re: [Question] TCP stack performance decrease since 3.14

On 04/15/2015 12:31 PM, rapier wrote:
> All,
>
> First, my apologies if this came up previously but I couldn't find
> anything using a keyword search of the mailing list archive.
>
> As part of the ongoing work with web10g I need to come up with baseline
> TCP stack performance for various kernel revisions. Using netperf and
> super_netperf* I've found that performance for TCP_CC, TCP_RR, and
> TCP_CRR has decreased since 3.14.
>
>               3.14      3.18       4.0    decrease %
> TCP_CC      183945    179222    175793          4.4%
> TCP_RR      594495    585484    561365          5.6%
> TCP_CRR      98677     96726     93026          5.7%
>
> Stream tests have remained the same from 3.14 through 4.0.

Have the service demands (usec of CPU consumed per KB) remained the same 
on the stream tests?  Even then, stateless offloads can help hide a 
multitude of path-length sins.
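
If you aren't already, it is worth running the stream tests with the -c
and -C options so netperf reports local and remote CPU utilization and
the service demand directly; a minimal sketch, where the remote host and
test length are just placeholders:

  # report CPU utilization and service demand (usec of CPU per KB)
  netperf -H <remote_host> -t TCP_STREAM -l 60 -c -C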

> All tests were conducted on the same platform from a clean boot with
> stock kernels.
>
> So my questions are:
>
> Has anyone else seen this, or is this a result of some weirdness on my
> system or an artifact of my tests?

I've wondered if such a thing might be taking place but never had a 
chance to check.

One thing you might consider is "perf" profiling to see how the CPU
consumption breakdown has changed.
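
A rough sketch of how that comparison might be done, assuming you can
run the same test under each kernel (the host name, output file names
and test length below are just placeholders):

  # system-wide profile with call graphs while the benchmark runs
  perf record -a -g -o perf.data.3.14 -- netperf -H <remote_host> -t TCP_RR -l 60
  perf record -a -g -o perf.data.4.0  -- netperf -H <remote_host> -t TCP_RR -l 60

  # per-kernel breakdown, then a side-by-side comparison of the two profiles
  perf report -i perf.data.4.0
  perf diff perf.data.3.14 perf.data.4.0

That should at least point at where the extra path length is showing up.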

happy benchmarking,

rick jones

>
> If others have seen this and it is simply to be expected (from new
> features and the like), is it due to the TCP stack itself or to other
> changes in the kernel?
>
> If so, is there any way to mitigate the effect of this via stack tuning,
> kernel configuration, etc.?
>
> Thanks!
>
> Chris
>
>
> * The above results are the average of 10 iterations of super_netperf
> for each test. I can run more iterations to verify the results, but they
> seem consistent. The number of parallel processes for each test was
> tuned to produce the maximum test result; in other words, enough to push
> things but not enough to cause performance hits from being
> cpu/memory/etc. bound. If anyone wants the full results and test scripts,
> just let me know.
