Message-ID: <CAEP_g=9bAN2X+V6gHXk5uLN0tHt2LyA15Dqr3+C=kQQBZv7mZQ@mail.gmail.com>
Date: Mon, 29 Jun 2015 18:06:27 -0700
From: Jesse Gross <jesse@...ira.com>
To: Rick Jones <rick.jones2@...com>
Cc: Ramu Ramamurthy <sramamur@...ux.vnet.ibm.com>,
Tom Herbert <tom@...bertland.com>,
David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH RFC net-next] vxlan: GRO support at tunnel layer
On Mon, Jun 29, 2015 at 1:04 PM, Rick Jones <rick.jones2@...com> wrote:
> PS FWIW, if I shift from using just the native Linux vxlan to a
> "mostly full" set of OpenStack compute-node plumbing - two OVS
> bridges plus a Linux bridge and associated plumbing, with a vxlan
> tunnel defined in OVS but nothing above the Linux bridge (and no
> VMs) - I see more like 4.9 Gbit/s. The veth pair connecting the
> Linux bridge to the top OVS bridge shows rx checksum and GRO
> enabled. The Linux bridge itself shows GRO but rx checksum off
> (fixed). I'm not sure how to go about checking the OVS constructs.
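On checking the OVS constructs: the internal ports OVS creates are
ordinary net devices as far as ethtool is concerned, so
"ethtool -k <port>" should report the same offload flags there too.
If you want to poke at it programmatically, here is a minimal,
untested sketch using the ethtool ioctl (just the GRO and rx-checksum
flags; the helper names are mine):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* Read one boolean ethtool flag (e.g. ETHTOOL_GGRO) for a device. */
static int get_flag(int fd, const char *dev, __u32 cmd, __u32 *val)
{
	struct ethtool_value ev = { .cmd = cmd };
	struct ifreq ifr;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, dev, IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&ev;
	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
		return -1;
	*val = ev.data;
	return 0;
}

int main(int argc, char **argv)
{
	__u32 gro, rxcsum;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <ifname>\n", argv[0]);
		return 1;
	}
	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0 || get_flag(fd, argv[1], ETHTOOL_GGRO, &gro) < 0 ||
	    get_flag(fd, argv[1], ETHTOOL_GRXCSUM, &rxcsum) < 0) {
		perror(argv[1]);
		return 1;
	}
	printf("%s: gro %s, rx-checksumming %s\n", argv[1],
	       gro ? "on" : "off", rxcsum ? "on" : "off");
	return 0;
}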
This is because the OVS path won't go through the VXLAN device
receive routines, so the code from this patch never executes. Your
results make sense then: the OVS case behaves essentially like the
original no-GRO case.

This should hopefully be resolved soon - there are patches in
progress that will make OVS use the normal tunnel device receive
paths. Once those are in, performance should be equal in both cases.
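To make the mechanism concrete: the patch does its coalescing in the
vxlan netdevice receive path. A rough, untested sketch of that
general pattern - using the kernel's existing gro_cells helpers and
made-up names, not the literal patch code (whether the RFC uses
gro_cells specifically is my assumption) - would be:

#include <linux/netdevice.h>
#include <net/gro_cells.h>

/* Hypothetical per-device private data, not the real vxlan_dev. */
struct my_tunnel_priv {
	struct net_device *dev;
	struct gro_cells gro_cells;	/* per-CPU GRO contexts */
};

static int my_tunnel_init(struct net_device *dev)
{
	struct my_tunnel_priv *priv = netdev_priv(dev);

	priv->dev = dev;
	/* Tie per-CPU GRO cells to this device. */
	return gro_cells_init(&priv->gro_cells, dev);
}

/* Called from the tunnel's receive routine after decapsulation.
 * Handing the inner skb to gro_cells_receive() rather than
 * netif_rx() gives the stack a chance to coalesce inner TCP
 * segments. The OVS vport code terminates the tunnel before any
 * such device routine runs, which is why the patch has no effect
 * on the OVS path today. */
static void my_tunnel_deliver(struct my_tunnel_priv *priv,
			      struct sk_buff *skb)
{
	gro_cells_receive(&priv->gro_cells, skb);
}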