Message-ID: <5258395A.3030704@hp.com>
Date:	Fri, 11 Oct 2013 10:46:02 -0700
From:	Rick Jones <rick.jones2@...com>
To:	Kyle Hubert <khubert@...il.com>,
	Eric Dumazet <eric.dumazet@...il.com>
CC:	netdev@...r.kernel.org
Subject: Re: Peak TCP performance

On 10/10/2013 09:21 PM, Kyle Hubert wrote:
> On Thu, Oct 10, 2013 at 11:44 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
>>> Also, my copy of ethtool does not recognize tx-nocache-copy. However,
>>> I do have control over the net device. Is there something there I can
>>> set, or is tx-nocache-copy also a new feature? I'll start digging.
>>>
>>
>> nocache-copy was added in 3.0, but I find it's not a gain for current
>> CPUs.
>>
>> You could get a fresh copy of ethtool sources :
>>
>> git clone git://git.kernel.org/pub/scm/network/ethtool/ethtool.git
>> cd ethtool
>> ./autogen.sh  ...
>
> That did the trick. Thanks for the help! Is there somewhere I can read
> up on this feature? A lot of the netdev features are opaque to me.
> Also, I can set NETIF_F_NOCACHE_COPY in the netdev->features to set
> this by default, yes?
>
> This at least mirrors the performance improvement that I see when
> forwarding, however I still see reserved CPU time. Is there a way to
> push it even farther?
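
As an aside, with a new-enough ethtool the feature can be inspected and
toggled from userspace rather than hard-wired into netdev->features in the
driver. A hedged sketch (eth0 is a placeholder interface name, not one from
this thread):

```shell
# Query the current state of the tx-nocache-copy offload flag
ethtool -k eth0 | grep tx-nocache-copy

# Flip it at runtime; this is the userspace equivalent of setting
# NETIF_F_NOCACHE_COPY in netdev->features from the driver
ethtool -K eth0 tx-nocache-copy on
```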

Thought I would point out that unless you take concrete steps to make it 
behave otherwise, netperf will constantly present the same set of 
cache-clean buffers to the transport.  The size of those buffers is 
determined by some heuristics and will depend on the socket buffer size 
at the time the data socket is created, which itself will depend on 
whether or not you have used a test-specific -s option.  The 
test-specific -m option will also come into play.

If I am recalling correctly, the number of buffers will be one more than:

initial SO_SNDBUF / send_size

though you can control that with the global -W option.
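
That heuristic can be sanity-checked with a little shell arithmetic (the
values below are hypothetical, not taken from this thread):

```shell
# Hedged sketch of the ring sizing Rick describes: one buffer more than
# the initial SO_SNDBUF divided by the send size.
sndbuf=16384      # hypothetical initial SO_SNDBUF (test-specific -s)
send_size=4096    # hypothetical send size (test-specific -m)
ring=$(( 1 + sndbuf / send_size ))
echo "netperf would cycle through $ring send buffers"
# -> netperf would cycle through 5 send buffers

# To override the ring width explicitly, use netperf's global -W option,
# e.g.: netperf -H <host> -W 32 -- -s 16384 -m 4096
```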

happy benchmarking,

rick jones

