Message-ID: <CAEP_g=9rtisxsDg0TJzZh3THUd_PeNo9d0MHSip5nzLw1P0WSQ@mail.gmail.com>
Date:	Fri, 17 Jul 2015 16:33:59 -0700
From:	Jesse Gross <jesse@...ira.com>
To:	John Fastabend <john.fastabend@...il.com>
Cc:	Jiri Pirko <jiri@...nulli.us>, Scott Feldman <sfeldma@...il.com>,
	Simon Horman <simon.horman@...ronome.com>,
	David Miller <davem@...emloft.net>,
	Netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next] rocker: forward packets to CPU when port is
 joined to openvswitch

On Thu, Jul 16, 2015 at 7:41 AM, John Fastabend
<john.fastabend@...il.com> wrote:
> On 15-07-16 01:14 AM, Jiri Pirko wrote:
>> Thu, Jul 16, 2015 at 09:09:39AM CEST, sfeldma@...il.com wrote:
>>> On Wed, Jul 15, 2015 at 11:58 PM, Jiri Pirko <jiri@...nulli.us> wrote:
>>>> Thu, Jul 16, 2015 at 08:40:31AM CEST, sfeldma@...il.com wrote:
>>>>> On Wed, Jul 15, 2015 at 6:39 PM, Simon Horman
>>>>> <simon.horman@...ronome.com> wrote:
>>>>>> Teach rocker to forward packets to CPU when a port is joined to Open vSwitch.
>>>>>> There is scope to later refine what is passed up as per Open vSwitch flows
>>>>>> on a port.
>>>>>>
>>>>>> This does not change the behaviour of rocker ports that are
>>>>>> not joined to Open vSwitch.
>>>>>>
>>>>>> Signed-off-by: Simon Horman <simon.horman@...ronome.com>
>>>>>
>>>>> Acked-by: Scott Feldman <sfeldma@...il.com>
>>>>>
>>>>> Now, OVS flows on a port.  Strangely enough, that was the first RFC
>>>>> implementation for switchdev/rocker, where we hooked into the ovs-kernel
>>>>> module and programmed flows into hw.  We pulled all of that code
>>>>> because, IIRC, the ovs folks didn't want us hooking into the kernel
>>>>> module directly.  We dropped the ovs hooks and focused on hooking the
>>>>> kernel's L2/L3.  The device (rocker) didn't really change: the OF-DPA
>>>>> pipeline was used for both.  Might be interesting to try hooking it
>>>>> again.
>>>>
>>>>
>>>> I think we now have the infrastructure prepared for that. I mean,
>>>> what we need to do is introduce another generic switchdev object,
>>>> call it "ntupleflow", hook up again into the ovs datapath and
>>>> cls_flower, and insert/remove the object from both code paths.
>>>> Should be pretty easy to do.
>>>
>>> That sounds right.  Is the ovs datapath hooking still happening in the
>>> ovs-kernel module?  Remind me again, what was the objection the last
>>> time we tried that?
>>
>> Yep, we need to hook there. Otherwise it won't be transparent.
>>
>> Last time the objection was that this would be ovs-specific. But that
>> no longer holds today: we have the switchdev infra with objects, and we
>> have cls_flower, which would use the same object. I say let's do this
>> now.
>>
>
> My objection wasn't that it was OVS-specific but was based on two
> observations. First, the user-kernel interface for OVS would need to
> be changed to use hardware optimally, and then userspace would need
> to be changed to pack rules optimally for the hardware. The reason is
> that hardware typically has wildcards _and_ priority fields. This is a
> different structure than we would want to use in software. Maybe
> there is value in having a sub-optimal 'transparent' implementation,
> though. Note I can't see how you could possibly reverse engineer an
> optimal solution from what the kernel gets from userspace today.

Yes, this was the main concern. Furthermore, things are likely to get
worse rather than better on this front (i.e. if/when OVS starts using
a more general BPF engine rather than its own flow processor).
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
