Date:	Sun, 8 Dec 2013 15:07:54 +0200
From:	Or Gerlitz <ogerlitz@...lanox.com>
To:	Mike Rapoport <mike.rapoport@...ellosystems.com>,
	Or Gerlitz <or.gerlitz@...il.com>
CC:	Joseph Gasparakis <joseph.gasparakis@...el.com>,
	Pravin B Shelar <pshelar@...ira.com>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Jerry Chu <hkchu@...gle.com>,
	Eric Dumazet <edumazet@...gle.com>,
	Alexei Starovoitov <ast@...mgrid.com>,
	David Miller <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
	John Fastabend <john.fastabend@...il.com>
Subject: Re: vxlan/veth performance issues on net.git + latest kernels

On 08/12/2013 14:43, Mike Rapoport wrote:
> On Fri, Dec 06, 2013 at 11:30:37AM +0200, Or Gerlitz wrote:
>>> On 04/12/2013 11:41, Or Gerlitz wrote:
>> BTW guys, I saw the issues with both the bridge and openvswitch
>> configurations - it seems we might have a fairly large breakage of the
>> system w.r.t vxlan traffic at rates above a few Gb/s -- so I would love
>> to get feedback of any kind from the people who have been involved with
>> vxlan over the last months/year.
> I've seen similar problems with vxlan traffic. In our scenario I had two VMs running on the same host, with both VMs using the { veth --> bridge --> vxlan --> IP stack --> NIC } chain.

How were the VMs connected to the veth NICs? What kernel were you using?


> Running iperf over veth showed a rate ~6 times slower than direct NIC <-> NIC. With a hack that forces a large gso_size in vxlan's handle_offloads, I got veth performing only slightly slower than the NICs ... The explanation I thought of is that performing the split of the packet as late as possible reduces processing overhead and allows more data to be processed.

Thanks for the tip! A few quick clarifications -- so you artificially
enlarged the gso_size of the skb? Can you provide the line you added here
(my rough guess at what it might look like follows below the function):

static int handle_offloads(struct sk_buff *skb)
{
        if (skb_is_gso(skb)) {
                /* make the shared skb info writable before modifying it */
                int err = skb_unclone(skb, GFP_ATOMIC);
                if (unlikely(err))
                        return err;

                /* mark the skb so later GSO produces UDP tunnel segments */
                skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL;
        } else if (skb->ip_summed != CHECKSUM_PARTIAL)
                /* non-GSO and no checksum offload pending */
                skb->ip_summed = CHECKSUM_NONE;

        return 0;
}
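
To make sure I follow, my naive guess is that the hack amounts to something
in the spirit of the sketch below -- the added line, its placement, and the
example value 16000 are purely my assumptions, not your actual patch:

static int handle_offloads(struct sk_buff *skb)
{
        if (skb_is_gso(skb)) {
                int err = skb_unclone(skb, GFP_ATOMIC);
                if (unlikely(err))
                        return err;

                /* guess: force a larger advertised segment size so that when
                 * the stack eventually performs GSO it emits fewer, larger
                 * segments, reducing per-packet processing along the
                 * veth/bridge/vxlan path (16000 is an arbitrary illustrative
                 * value, not taken from the original report)
                 */
                skb_shinfo(skb)->gso_size = 16000;

                skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL;
        } else if (skb->ip_summed != CHECKSUM_PARTIAL)
                skb->ip_summed = CHECKSUM_NONE;

        return 0;
}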

Also, why does enlarging the gso_size of the skbs cause the actual
segmentation to come into play lower in the stack? My current understanding
of where the split happens is sketched below.
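
For context, here is an annotated, heavily abridged sketch of that decision
point as I understand it in kernels of this era -- not exact upstream code:

        /* Simplified view of the bottom of the transmit path
         * (dev_hard_start_xmit() in net/core/dev.c, ~3.12-era): a GSO skb
         * travels through veth/bridge/vxlan/IP as one large packet and is
         * only split into gso_size-sized segments here, right before the
         * driver, and only if the egress device cannot segment it itself.
         */
        struct sk_buff *segs;

        if (netif_needs_gso(skb, features))
                segs = skb_gso_segment(skb, features);  /* software GSO, just before the driver */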

Or.

