Date:	Sun, 8 Dec 2013 17:21:43 +0200
From:	Or Gerlitz <ogerlitz@...lanox.com>
To:	Joseph Gasparakis <joseph.gasparakis@...el.com>
CC:	Pravin B Shelar <pshelar@...ira.com>,
	Or Gerlitz <or.gerlitz@...il.com>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Jerry Chu <hkchu@...gle.com>,
	Eric Dumazet <edumazet@...gle.com>,
	Alexei Starovoitov <ast@...mgrid.com>,
	David Miller <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>, <jeffrey.t.kirsher@...el.com>,
	John Fastabend <john.fastabend@...il.com>
Subject: Re: vxlan/veth performance issues on net.git + latest kernels

On 06/12/2013 12:30, Joseph Gasparakis wrote:
> On Fri, 6 Dec 2013, Or Gerlitz wrote:
>
>
>> 1. On which kernel did you manage to get good vxlan performance with
>> this hack?
>>
> I was running 3.10.6.
>
>> 2. Did the hack help for both veth host traffic and PV VM traffic, or
>> only the latter?
>>
> No, just VM. I haven't tried veth.
>
> If you leave the DODGY bit set, does your traffic get dropped on Tx after
> it leaves vxlan and before it hits your driver? That is what I had seen.
> Is that right?
>
> If you unset it, do you recover?
>
> What is the output of your ethtool -k on the interface you are
> transmitting from?
>
>> Currently it doesn't converge on 3.12.x or net.git: with veth/vxlan the
>> DODGY bit isn't set on the skb at vxlan xmit time, so there's nothing for
>> me to hack there. For VMs, things don't really work without unsetting the
>> bit, but unsetting it by itself hasn't gotten me far performance-wise.
>>
>> BTW guys, I saw the issues with both bridge and openvswitch configurations.
>> It seems we might have fairly large breakage of the system w.r.t. vxlan
>> traffic at rates above a few Gb/s, so I would love to get feedback of any
>> kind from the people who were involved with vxlan over the last months/year.
>>
>>

OK!! So finally I managed to get some hacked but stable ground to step
on... indeed, with 3.10.x (I tried 3.10.19), if you:

1. reduce the VM PV NIC MTU to account for the vxlan tunneling overhead
(e.g. to 1450 instead of 1500)
2. unset the DODGY bit for GSO packets in the vxlan driver's
handle_offloads() function (see the sketch below)

--> you get sane vxlan performance when the VM xmits: without HW
offloads I got up to 4-5 Gb/s for a single VM, and with HW offloads
over 30 Gb/s for a single VM sending to a peer hypervisor.
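
For step 1, that translates to something like "ip link set dev eth0 mtu 1450"
inside the guest (eth0 standing in for whatever the PV NIC is called there).
For step 2, here is a minimal sketch against the 3.10-era handle_offloads()
in drivers/net/vxlan.c; the &= ~SKB_GSO_DODGY line is the experimental
change, the rest is the stock function, and this is a workaround, not a fix:

    /* drivers/net/vxlan.c (3.10-era), hacked for the experiment */
    static int handle_offloads(struct sk_buff *skb)
    {
        if (skb_is_gso(skb)) {
            int err = skb_unclone(skb, GFP_ATOMIC);

            if (unlikely(err))
                return err;

            /* HACK: tun/virtio-net marks guest GSO skbs as DODGY,
             * which forces them through software segmentation
             * further down the xmit path; clearing the bit lets
             * them reach the NIC as GSO packets.
             */
            skb_shinfo(skb)->gso_type &= ~SKB_GSO_DODGY;

            skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL;
        } else if (skb->ip_summed != CHECKSUM_PARTIAL)
            skb->ip_summed = CHECKSUM_NONE;

        return 0;
    }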

On the VM RX side it doesn't go much higher, e.g. it stays on the order of
3-4 Gb/s for a single receiving VM. I am pretty sure this relates to the
lack of GRO for vxlan, which is pretty much terrible for VM traffic.

So it seems the TODO here is the following:

1. manage to get the hack for vm-vxlan traffic to work on the net tree
2. fix the bug that makes the hack necessary
3. find the problem with veth-vxlan traffic on the net tree
4. add GRO support for encapsulated/vxlan traffic
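
For context on item 2, the reason the DODGY bit bites is the GSO feature
check on the xmit path: SKB_GSO_DODGY corresponds to NETIF_F_GSO_ROBUST,
so on a device that doesn't advertise that feature, DODGY skbs (guest GSO
packets marked by tun/virtio-net) fail the check and get segmented in
software. Roughly as in 3.10's include/linux/netdevice.h (paraphrased,
double-check against your tree):

    /* a GSO skb may be passed to the device only if the device
     * advertises every feature corresponding to the gso_type bits
     * set on it; SKB_GSO_DODGY maps to NETIF_F_GSO_ROBUST here */
    static inline bool net_gso_ok(netdev_features_t features, int gso_type)
    {
        netdev_features_t feature = gso_type << NETIF_F_GSO_SHIFT;
        return (features & feature) == feature;
    }

    /* used on the xmit path (skb_gso_ok() wraps net_gso_ok() plus a
     * frag-list check): true means the stack must segment the skb in
     * software before handing it to the driver */
    static inline bool netif_needs_gso(struct sk_buff *skb,
                                       netdev_features_t features)
    {
        return skb_is_gso(skb) && (!skb_gso_ok(skb, features) ||
            unlikely((skb->ip_summed != CHECKSUM_PARTIAL) &&
                     (skb->ip_summed != CHECKSUM_UNNECESSARY)));
    }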


Or.
