Message-ID: <CA+mtBx9yM+C8GeEHOGTHPVxNB3fJd7LQG0RaC3jywrO9_tQ58A@mail.gmail.com>
Date:	Wed, 21 Jan 2015 08:51:53 -0800
From:	Tom Herbert <therbert@...gle.com>
To:	Pravin Shelar <pshelar@...ira.com>
Cc:	David Miller <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 0/3] openvswitch: Add STT support.

On Wed, Jan 21, 2015 at 1:08 AM, Pravin Shelar <pshelar@...ira.com> wrote:
> On Tue, Jan 20, 2015 at 3:06 PM, Tom Herbert <therbert@...gle.com> wrote:
>> On Tue, Jan 20, 2015 at 12:25 PM, Pravin B Shelar <pshelar@...ira.com> wrote:
>>> The following patch series adds support for the Stateless Transport
>>> Tunneling (STT) protocol.
>>> STT uses the TCP segmentation offload available in most NICs. On
>>> packet transmit, the STT driver adds an STT header along with a TCP
>>> header to the packet. For a GSO packet, the GSO parameters are set
>>> according to the tunnel configuration and the packet is handed over
>>> to the networking stack. This lets tunneled traffic use the
>>> segmentation offload available in NICs (sketched just below).
>>>
>>> A netperf unidirectional test gives ~9.4 Gbit/s on a 10 Gbit NIC with
>>> a 1500-byte MTU and two TCP streams.
>>>
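The transmit path described above amounts to tagging the encapsulated
skb with TCP GSO metadata before handing it to the stack. A minimal
sketch of that idea (illustrative only; the helper name and exact field
choices are assumptions, not the posted code):

#include <linux/skbuff.h>
#include <linux/tcp.h>
#include <linux/stddef.h>

/* Mark an encapsulated packet so a TSO-capable NIC segments it as TCP;
 * GSO falls back to software segmentation if the NIC lacks TSO. */
static void stt_like_set_gso(struct sk_buff *skb, unsigned int mss)
{
	skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV4;
	skb_shinfo(skb)->gso_size = mss;

	/* Leave the outer TCP checksum for the NIC to complete. */
	skb->ip_summed   = CHECKSUM_PARTIAL;
	skb->csum_start  = skb_transport_header(skb) - skb->head;
	skb->csum_offset = offsetof(struct tcphdr, check);
}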
>> Having packets marked as TCP which really aren't TCP is a rather scary
>> prospect to deploy in a real data center (TCP is kind of an important
>> protocol ;-) ). Can you give some more motivation for this, and more
>> data that shows what the benefits are and how it compares to equivalent
>> encapsulation protocols that implement GRO and GSO?
>>
> There are multi-year deployments of STT, so it is already running in real
> data centers. The biggest advantage is that STT does not need a new NIC
> with tunnel offload: any NIC that supports TCP segmentation offload can
> be used to achieve better performance.
>
> Following are the numbers you asked for.
> Setup: net-next branch on both server and client.
> netperf: TCP unidirectional tests with 5 streams. Numbers are averaged
> over 3 runs of 50 seconds each.
>
Please provide more details on your configuration so that others can
reproduce your results. Also, it would be quite helpful if you could
implement STT as a normal network interface, as VXLAN does, so that we
can isolate the performance of the protocol itself. For instance, in my
testing I have no problem getting line rate with VXLAN using 5 streams,
with or without remote checksum offload (RCO). I assume you tested with
OVS and maybe VMs, which may have a significant impact beyond the
protocol changes.
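
For reference, exposing a tunnel as an ordinary link type is mostly a
matter of registering rtnl_link_ops, the way VXLAN registers its "vxlan"
kind. A bare-bones, hypothetical skeleton (the "stt" kind and the setup
helper below are assumptions, not the code under review):

#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <net/rtnetlink.h>

static void stt_dev_setup(struct net_device *dev)
{
	ether_setup(dev);	/* Ethernet-like defaults */
	/* A real driver would also set dev->netdev_ops, MTU limits,
	 * offload features, etc. */
}

static struct rtnl_link_ops stt_link_ops __read_mostly = {
	.kind  = "stt",
	.setup = stt_dev_setup,
};

static int __init stt_mod_init(void)
{
	/* Enables "ip link add ... type stt", so the tunnel can be
	 * benchmarked like any other netdevice, independent of OVS. */
	return rtnl_link_register(&stt_link_ops);
}

static void __exit stt_mod_exit(void)
{
	rtnl_link_unregister(&stt_link_ops);
}

module_init(stt_mod_init);
module_exit(stt_mod_exit);
MODULE_LICENSE("GPL");

With a netdevice like that, netperf run over the tunnel interface would
isolate the protocol overhead from the OVS datapath and VM setup.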

Another thing to consider in your analysis is performance with flows of
small packets. STT should demonstrate better performance with bulk
flows, since LSO and LRO perform better than their software
counterparts, GSO and GRO. But for flows with small packets, I don't
see how there could be any performance advantage. We already have ways
to leverage simple UDP checksum offload with UDP encapsulations, so in
those cases STT may just represent unnecessary header overhead.
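
For comparison, the "simple UDP checksum offload" path needs nothing
more than the stock helper on the outer header; a sketch (the wrapper
function and its arguments are assumptions, while udp_set_csum() is the
existing kernel helper):

#include <linux/skbuff.h>
#include <net/udp.h>

/* Set up the outer UDP checksum of an encapsulated IPv4 packet.
 * udp_set_csum() uses plain UDP checksum offload (CHECKSUM_PARTIAL)
 * when the underlying device supports it, and software checksumming
 * otherwise. saddr/daddr/len describe the outer headers. */
static void encap_outer_udp_csum(struct sk_buff *skb,
				 __be32 saddr, __be32 daddr, int len)
{
	udp_set_csum(false, skb, saddr, daddr, len);
}

So for small packets the per-packet checksum work is already offloaded
with a UDP encapsulation, which is why STT's extra header can look like
pure overhead in that regime.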

>                     Client CPU   Server CPU   Throughput
> VXLAN                  1.6          14.2      5.6 Gbit/s
> VXLAN with rcsum       0.89         12.4      5.8 Gbit/s
> STT                    1.28          4.0      9.5 Gbit/s
>
9.5 Gbps? Is that a rounding error, or is this on a 40 Gbps NIC or with
an MTU larger than 1500 bytes?

Thanks,
Tom

> Thanks,
> Pravin.
