Message-ID: <20131208124352.GA7935@zed.ravello.local>
Date: Sun, 8 Dec 2013 14:43:52 +0200
From: Mike Rapoport <mike.rapoport@...ellosystems.com>
To: Or Gerlitz <or.gerlitz@...il.com>
Cc: Or Gerlitz <ogerlitz@...lanox.com>,
Joseph Gasparakis <joseph.gasparakis@...el.com>,
Pravin B Shelar <pshelar@...ira.com>,
Eric Dumazet <eric.dumazet@...il.com>,
Jerry Chu <hkchu@...gle.com>,
Eric Dumazet <edumazet@...gle.com>,
Alexei Starovoitov <ast@...mgrid.com>,
David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
John Fastabend <john.fastabend@...il.com>
Subject: Re: vxlan/veth performance issues on net.git + latest kernels
On Fri, Dec 06, 2013 at 11:30:37AM +0200, Or Gerlitz wrote:
> > On 04/12/2013 11:41, Or Gerlitz wrote:
>
> BTW guys, I saw the issues with both the bridge and the openvswitch
> configurations - it seems we might have a fairly large breakage of
> the system w.r.t. vxlan traffic at rates above a few Gb/s -- so I
> would love to get feedback of any kind from the people who were
> involved with vxlan over the last months/year.
I've seen similar problems with vxlan traffic. In our scenario, two VMs
ran on the same host, each behind the { veth --> bridge --> vxlan -->
IP stack --> NIC } chain.
Running iperf over the veth devices showed a rate ~6 times slower than
a direct NIC <-> NIC run.
With a hack that forces a large gso_size in vxlan's handle_offloads(),
I got veth performing only slightly slower than the NICs (sketched
below) ...
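Roughly, the hack was along these lines (a sketch from memory against
drivers/net/vxlan.c, not the exact diff I ran; the forced segment size
is just an illustrative value):

	static int handle_offloads(struct sk_buff *skb)
	{
		if (skb_is_gso(skb)) {
			int err = skb_unclone(skb, GFP_ATOMIC);

			if (unlikely(err))
				return err;

			/* HACK: force a large gso_size so the inner packet
			 * stays one big GSO chunk through the bridge/vxlan
			 * path and gets segmented as late as possible --
			 * in the NIC if it can do it, otherwise in the
			 * software GSO fallback right before transmission.
			 * 16000 is illustrative, not what I actually used.
			 */
			skb_shinfo(skb)->gso_size = 16000;

			skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL;
		} else if (skb->ip_summed != CHECKSUM_PARTIAL)
			skb->ip_summed = CHECKSUM_NONE;

		return 0;
	}

Obviously this only shows the idea -- it breaks the on-wire MTU unless
something further down the stack actually re-segments the frames.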
The explanation I came up with is that performing the split of the
packet as late as possible reduces the per-packet processing overhead
and allows more data to be pushed through the stack in each pass.
My $0.02
>
> Or.
>
--
Sincerely yours,
Mike.