Message-ID: <54DAD71D.1060400@gmail.com>
Date:	Tue, 10 Feb 2015 21:14:21 -0700
From:	David Ahern <dsahern@...il.com>
To:	"Eric W. Biederman" <ebiederm@...ssion.com>
CC:	Stephen Hemminger <stephen@...workplumber.org>,
	netdev@...r.kernel.org,
	Nicolas Dichtel <nicolas.dichtel@...nd.com>,
	roopa <roopa@...ulusnetworks.com>, hannes@...essinduktion.org,
	Dinesh Dutt <ddutt@...ulusnetworks.com>,
	Vipin Kumar <vipin@...ulusnetworks.com>
Subject: Re: [RFC PATCH 00/29] net: VRF support

On 2/6/15 2:22 PM, Eric W. Biederman wrote:
> I think you have also introduced a second layer of indirection and thus
> an extra cache-line miss with net_ctx.  At 60ns-100ns per cache line
> miss that could be significant.
>
> Overall the standard should be that there is no measurable performance
> overhead with something like this enabled.  At least at 1Gbps we were
> able to demonstrate there was no measurable performance impact with
> network namespaces before they were merged.
>
> Eric
>

Here's a quick look at performance impacts of this patch set.

Host:
     Fedora 21
     Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
     1 socket, 4 cores, 8 threads

VM:
     Fedora 21
     2 vcpus, cpu model is 'host,x2apic'
     1G RAM
     network: virtio + vhost with a tap device connected to a bridge
        (roughly the setup sketched below)
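
For concreteness, that network setup corresponds roughly to a qemu
invocation along these lines (a sketch only; interface names, image path
and exact flags are illustrative, not the actual command used):

     # host side: tap device enslaved to a bridge (names hypothetical)
     ip tuntap add dev tap0 mode tap
     ip link set dev tap0 master br0 up

     # VM: 2 vcpus, 1G RAM, virtio-net backed by vhost
     qemu-system-x86_64 -enable-kvm -smp 2 -m 1024 -cpu host,+x2apic \
         -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on \
         -device virtio-net-pci,netdev=net0 \
         -drive file=fedora21.img,if=virtio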

Tests are between the host OS and the VM (RX: netperf runs in the host; 
TX: netperf runs in the guest; netserver runs on the opposite side in 
each case).

No tweaks have been made to the default Fedora settings. In particular, 
all of the offloads that default to enabled on tap and virtio devices 
are left enabled; these offloads are what push the stream tests into the 
40,000 Mbps range and hence really stress any overhead added by the 
patches. No CPU pinning or other optimization attempts were made either.
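
For reference, the offload state on the tap and virtio devices can be
inspected (and toggled) with ethtool; tap0 below is a placeholder for
the actual device name:

     # show current offload settings
     ethtool -k tap0

     # how one would turn the big ones off (NOT done for these tests)
     ethtool -K tap0 tso off gso off gro off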

Commands:
     netperf -l 10 -t TCP_STREAM -H <ip>
     netperf -l 10 -t TCP_RR -H <ip>
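
A full data point thus looks roughly like this (a sketch; <ip> is the
peer's address as above):

     # on the side being measured against
     netserver

     # on the measuring side, three runs per data point
     for i in 1 2 3; do netperf -l 10 -t TCP_STREAM -H <ip>; done
     for i in 1 2 3; do netperf -l 10 -t TCP_RR -H <ip>; done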

Results are the average of 3 runs:

                       pre-VRF          with VRF
                       TX      RX       TX     RX
TCP Stream (Mbps)   39503    40325    39856  38211
TCP RR (trans/sec)  46047    46512    47619  43032

* pre-VRF = commit 7e8acbb69ee2, the last commit before this patch set
* with VRF = the patches posted in this thread

The VM setup definitely pushes some limits and represents an extreme in 
performance comparisons. While the VRF patches do show a degradation in 
RX performance, the delta is fairly small: roughly 5% on the RX stream 
test (40325 -> 38211 Mbps) and 7.5% on RX TCP_RR (46512 -> 43032 
trans/sec). As I mentioned before, I can remove the VRF tagging of skbs, 
which should help. Overall I have focused more on the concept than on 
performance; I'm sure that delta can be reduced.

David
