Message-ID: <20150301172055.GB32246@neilslaptop.think-freely.org>
Date: Sun, 1 Mar 2015 12:20:55 -0500
From: Neil Horman <nhorman@...driver.com>
To: "Arad, Ronen" <ronen.arad@...el.com>
Cc: Thomas Graf <tgraf@...g.ch>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Simon Horman <simon.horman@...ronome.com>,
"Fastabend, John R" <john.r.fastabend@...el.com>,
Jiri Pirko <jiri@...nulli.us>,
"davem@...emloft.net" <davem@...emloft.net>,
"andy@...yhouse.net" <andy@...yhouse.net>,
"dborkman@...hat.com" <dborkman@...hat.com>,
"ogerlitz@...lanox.com" <ogerlitz@...lanox.com>,
"jesse@...ira.com" <jesse@...ira.com>,
"jpettit@...ira.com" <jpettit@...ira.com>,
"joestringer@...ira.com" <joestringer@...ira.com>,
"jhs@...atatu.com" <jhs@...atatu.com>,
"sfeldma@...il.com" <sfeldma@...il.com>,
"f.fainelli@...il.com" <f.fainelli@...il.com>,
"roopa@...ulusnetworks.com" <roopa@...ulusnetworks.com>,
"linville@...driver.com" <linville@...driver.com>,
"shrijeet@...il.com" <shrijeet@...il.com>,
"gospo@...ulusnetworks.com" <gospo@...ulusnetworks.com>,
"bcrl@...ck.org" <bcrl@...ck.org>
Subject: Re: Flows! Offload them.
On Sun, Mar 01, 2015 at 09:47:46AM +0000, Arad, Ronen wrote:
>
>
> >-----Original Message-----
> >From: netdev-owner@...r.kernel.org [mailto:netdev-owner@...r.kernel.org] On
> >Behalf Of Thomas Graf
> >Sent: Friday, February 27, 2015 12:42 AM
> >To: Neil Horman
> >Cc: Simon Horman; Fastabend, John R; Jiri Pirko; netdev@...r.kernel.org;
> >davem@...emloft.net; andy@...yhouse.net; dborkman@...hat.com;
> >ogerlitz@...lanox.com; jesse@...ira.com; jpettit@...ira.com;
> >joestringer@...ira.com; jhs@...atatu.com; sfeldma@...il.com;
> >f.fainelli@...il.com; roopa@...ulusnetworks.com; linville@...driver.com;
> >shrijeet@...il.com; gospo@...ulusnetworks.com; bcrl@...ck.org
> >Subject: Re: Flows! Offload them.
> >
> >
> >Maybe I'm misunderstanding your statement here, but I think it's essential
> >that the kernel is able to handle whatever we program in hardware, even
> >if the hardware tables look different than the software tables, no matter
> >whether the configuration occurs through OVS or not. A punt to software
> >should always work, even if it never actually happens. So while I believe
> >that OVS needs more control over the hardware than is available through
> >the datapath cache, it must program both the hardware and software in
> >parallel, even though the building blocks for doing so might look different.
> >
>
> I believe that having an equivalent punt path should be optional and
> controlled by application policy. Some applications might give up on the
> punt path due to its throughput implications and prefer to just drop in HW,
> possibly leaking some packets to software for exception processing and
> logging only.
>
That's only one use case. Having a software fallback path implemented by, and
gated by, application policy is fine for some newer applications, but there is
a huge legacy set of applications available today that relies on kernel
functionality for dataplane forwarding. For this use case, what we want/need
is for the in-kernel dataplane to be opportunistically offloaded to whatever
degree possible, based on administratively assigned policy, and for that to
happen transparently to user functionality. The trade-off there is that we
don't always have control over what exactly gets offloaded.
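
To make that concrete, here's a rough sketch of the transparent model, written
as a standalone C toy rather than real kernel code -- every struct and helper
name below is invented for illustration, not an existing kernel API. The point
is only the shape: software state is authoritative and installed first, the
hardware is offered the entry afterwards, and an offload failure is logged and
swallowed so the application never notices.

#include <stdio.h>
#include <errno.h>

struct fib_entry {
	unsigned int dst;	/* destination, host byte order */
	int prefixlen;
	int out_port;
};

/* Software dataplane insert -- authoritative, always done first. */
static int sw_fib_insert(const struct fib_entry *fe)
{
	printf("sw: %#x/%d -> port %d\n", fe->dst, fe->prefixlen, fe->out_port);
	return 0;
}

/* Hardware offload attempt; -EOPNOTSUPP models "device can't do it". */
static int hw_fib_offload(const struct fib_entry *fe)
{
	(void)fe;
	return -EOPNOTSUPP;
}

/*
 * Opportunistic offload: a failed hardware offload is never propagated
 * to the caller, so legacy applications see the same functional
 * behavior whether or not the device accepted the entry.
 */
static int fib_install(const struct fib_entry *fe)
{
	int err = sw_fib_insert(fe);

	if (err)
		return err;

	err = hw_fib_offload(fe);
	if (err)
		printf("hw offload unavailable (%d), staying on sw path\n", err);
	return 0;
}

int main(void)
{
	struct fib_entry fe = { .dst = 0x0a000001, .prefixlen = 32, .out_port = 2 };

	return fib_install(&fe);
}
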
That's why we need both APIs: something like the flow API, which can offer
fine-grained control for applications that are new and willing to understand
more about the hardware they're using, and something at a kernel-functional
granularity that provides legacy acceleration with all the trade-offs that
entails.
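
For contrast, a similarly rough sketch of the fine-grained model, where the
application names a specific hardware table and supplies an explicit
match/action pair. Again, every identifier here is invented for illustration
and this is not the actual flow API proposal; note there is no software
fallback in sight -- that policy decision belongs to the application.

#include <stdio.h>

struct flow_match  { unsigned int in_port; unsigned int dst_ip; };
struct flow_action { enum { FWD, DROP, PUNT } op; unsigned int arg; };

/* Insert an explicit match/action entry into one named hardware table.
 * A real device would reject matches its pipeline can't express. */
static int hw_table_insert(int table_id, const struct flow_match *m,
			   const struct flow_action *a)
{
	printf("table %d: match(port=%u, dst=%#x) -> action %d\n",
	       table_id, m->in_port, m->dst_ip, a->op);
	return 0;
}

int main(void)
{
	struct flow_match m  = { .in_port = 1, .dst_ip = 0x0a000001 };
	struct flow_action a = { .op = FWD, .arg = 2 };

	/* The app picked table 4 because it knows this device's pipeline. */
	return hw_table_insert(4, &m, &a);
}
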
Neil