Date:	Tue, 1 Dec 2015 11:08:42 -0500
From:	"John W. Linville" <linville@...driver.com>
To:	Hannes Frederic Sowa <hannes@...essinduktion.org>
Cc:	Tom Herbert <tom@...bertland.com>, Jesse Gross <jesse@...nel.org>,
	David Miller <davem@...emloft.net>,
	Anjali Singhai Jain <anjali.singhai@...el.com>,
	Linux Kernel Network Developers <netdev@...r.kernel.org>,
	Kiran Patil <kiran.patil@...el.com>
Subject: Re: [PATCH v1 1/6] net: Generalize udp based tunnel offload

On Tue, Dec 01, 2015 at 04:49:28PM +0100, Hannes Frederic Sowa wrote:
> On Tue, Dec 1, 2015, at 16:44, John W. Linville wrote:
> > On Mon, Nov 30, 2015 at 09:26:51PM -0800, Tom Herbert wrote:
> > > On Mon, Nov 30, 2015 at 5:28 PM, Jesse Gross <jesse@...nel.org> wrote:
> > 
> > > > Based on what we can do today, I see only two real choices: do some
> > > > refactoring to clean up the stack a bit or remove the existing VXLAN
> > > > offloading altogether. I think this series is trying to do the former
> > > > and the result is that the stack is cleaner after than before. That
> > > > seems like a good thing.
> > > 
> > > There is a third choice which is to do nothing. Creating an
> > > infrastructure that claims to "Generalize udp based tunnel offload"
> > > but actually doesn't generalize the mechanism is nothing more than
> > > window dressing-- this does nothing to help with the VXLAN to
> > > VXLAN-GPE transition, for instance. If geneve-specific offload is
> > > really needed now then that can be done with another ndo function,
> > > or alternatively an ntuple filter with a device-specific action would
> > > at least get the stack out of needing to be concerned with that.
> > > Regardless, we will work to optimize the rest of the stack for
> > > devices that implement protocol-agnostic mechanisms.
> > 
> > Is there no concern about NDO proliferation? Does the size of the
> > netdev_ops structure matter? Beyond that, I can see how a single
> > entry point with an enum specifying the offload type isn't really any
> > different in the grand scheme of things from having multiple NDOs,
> > one per offload.
> > 
> > Given the need to live with existing hardware offloads, I would lean
> > toward a consolidated NDO. But if a different NDO per tunnel type is
> > preferred, I can be satisfied with that.
> 
> Having per-offload NDOs helps the stack gather information about what
> kinds of offloads the driver supports, maybe even without calling down
> into that layer (just by comparing the pointer to NULL). Checking this
> inside the driver's offload function clearly does not offer that. So we
> can finally have an "ip tunnel please-recommend-type" feature. :)

That is a valuable insight! Maybe the per-offload NDO isn't such a
bad idea after all... :-)
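
To make the trade-off concrete, here is a rough sketch of the two
shapes under discussion. Every name below is made up for illustration
and is not from this series:

#include <stddef.h>

/* Hypothetical, illustrative only. */
struct net_device;			/* opaque here */
typedef unsigned short be16;		/* stand-in for __be16 */

enum udp_tunnel_type {
	UDP_TUNNEL_TYPE_VXLAN,
	UDP_TUNNEL_TYPE_GENEVE,
};

struct example_ndo_ops {
	/* Consolidated form: one hook, tunnel type as an argument.
	 * The stack cannot tell which types a driver supports
	 * without calling into it. */
	void (*ndo_add_udp_tunnel_port)(struct net_device *dev,
					enum udp_tunnel_type type,
					be16 port);

	/* Per-offload form: support is visible from outside. */
	void (*ndo_add_vxlan_port)(struct net_device *dev, be16 port);
	void (*ndo_add_geneve_port)(struct net_device *dev, be16 port);
};

/* With per-offload NDOs, "can this device offload geneve?" reduces
 * to a pointer comparison, with no call into the driver. */
static inline int dev_offloads_geneve(const struct example_ndo_ops *ops)
{
	return ops->ndo_add_geneve_port != NULL;
}

Either way the driver-side work is about the same; the difference is
whether the capability is discoverable before making the call.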

John
-- 
John W. Linville		Someday the world will need a hero, and you
linville@...driver.com			might be all we have.  Be ready.