Date:	Sun, 8 Dec 2013 16:30:55 +0200
From:	Mike Rapoport <mike.rapoport@...ellosystems.com>
To:	Or Gerlitz <ogerlitz@...lanox.com>
Cc:	Or Gerlitz <or.gerlitz@...il.com>,
	Joseph Gasparakis <joseph.gasparakis@...el.com>,
	Pravin B Shelar <pshelar@...ira.com>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Jerry Chu <hkchu@...gle.com>,
	Eric Dumazet <edumazet@...gle.com>,
	Alexei Starovoitov <ast@...mgrid.com>,
	David Miller <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
	John Fastabend <john.fastabend@...il.com>
Subject: Re: vxlan/veth performance issues on net.git + latest kernels

On Sun, Dec 08, 2013 at 03:07:54PM +0200, Or Gerlitz wrote:
> On 08/12/2013 14:43, Mike Rapoport wrote:
> > On Fri, Dec 06, 2013 at 11:30:37AM +0200, Or Gerlitz wrote:
> >>> On 04/12/2013 11:41, Or Gerlitz wrote:
> >> BTW guys, I saw the issues with both the bridge and openvswitch
> >> configurations - it seems that we might have a fairly large breakage
> >> of the system w.r.t. vxlan traffic at rates above a few Gb/s -- so I
> >> would love to get feedback of any kind from the people who were
> >> involved with vxlan over the last months/year.
> > I've seen similar problems with vxlan traffic. In our scenario I had two VMs running on the same host, with both VMs using the { veth --> bridge --> vxlan --> IP stack --> NIC } chain.
> 
> How were the VMs connected to the veth NICs? What kernel were you using?
> 
> 
> > Running iperf on veth showed a rate ~6 times slower than direct NIC <-> NIC. With a hack that forces a large gso_size in vxlan's handle_offloads, I got veth performing only slightly slower than the NICs ... The explanation I thought of is that performing the split of the packet as late as possible reduces processing overhead and allows more data to be processed.
> 
> Thanks for the tip! A few quick clarifications -- so you artificially 
> enlarged the gso_size of the skb? Can you provide the line you added here?
 
It was something *very* hacky:

static int handle_offloads(struct sk_buff *skb)
{
	if (skb_is_gso(skb)) {
		/* make the shared skb info writable before touching it */
		int err = skb_unclone(skb, GFP_ATOMIC);
		if (unlikely(err))
			return err;

		skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL;

		/* the hack: force a large gso_size (up to 64000 bytes) so
		 * that the packet is split as late as possible in the stack
		 */
		if (skb->len < 64000)
			skb_shinfo(skb)->gso_size = skb->len;
		else
			skb_shinfo(skb)->gso_size = 64000;

	} else if (skb->ip_summed != CHECKSUM_PARTIAL)
		skb->ip_summed = CHECKSUM_NONE;

	return 0;
}
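
For reference, and assuming the rest of the function is stock, the
unmodified handle_offloads() in the vxlan driver should be the same minus
the gso_size block -- that if/else is the only part the hack adds:

static int handle_offloads(struct sk_buff *skb)
{
	if (skb_is_gso(skb)) {
		int err = skb_unclone(skb, GFP_ATOMIC);
		if (unlikely(err))
			return err;

		/* only mark the skb as tunneled GSO; gso_size is left alone */
		skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL;

	} else if (skb->ip_summed != CHECKSUM_PARTIAL)
		skb->ip_summed = CHECKSUM_NONE;

	return 0;
}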
 
> Also, why does enlarging the gso_size of the skbs cause the actual
> segmentation to come into play lower in the stack?
> 
> Or.
> 
