Message-ID: <20180525102803.GA30627@apalos>
Date: Fri, 25 May 2018 13:28:04 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Andrew Lunn <andrew@...n.ch>
Cc: Ivan Vecera <ivecera@...hat.com>, Jiri Pirko <jiri@...nulli.us>,
netdev@...r.kernel.org, grygorii.strashko@...com,
ivan.khoronzhuk@...aro.org, nsekhar@...com,
francois.ozog@...aro.org, yogeshs@...com, spatton@...com
Subject: Re: [PATCH 0/4] RFC CPSW switchdev mode
On Fri, May 25, 2018 at 09:29:02AM +0300, Ilias Apalodimas wrote:
> On Thu, May 24, 2018 at 06:33:10PM +0200, Andrew Lunn wrote:
> > On Thu, May 24, 2018 at 07:02:54PM +0300, Ilias Apalodimas wrote:
> > > On Thu, May 24, 2018 at 05:25:59PM +0200, Andrew Lunn wrote:
> > > > OK, back to the basic idea. Switch ports are just normal Linux
> > > > interfaces.
> > > >
> > > > How would you configure this with two e1000e put in a bridge? I want
> > > > multicast to be bridged between the two e1000e, but the host stack
> > > > should not see the packets.
> > > I am not sure I am following; I might be missing something. In your case
> > > you have two Ethernet PCI/PCIe interfaces bridged in software, and you can
> > > filter that traffic if needed. In the case we are trying to cover, the
> > > hardware offers that capability itself. Since not all switches are PCIe
> > > based, shouldn't we be able to allow this?
> >
> > switchdev is about offloading what Linux can already do onto hardware
> > to accelerate it. The switch is a block of accelerator hardware, like a
> > GPU is for accelerating graphics. Linux can render OpenGL, but it is
> > better to hand it over to the GPU accelerator.
> >
> > Same applies here. The Linux bridge can bridge multicast. Using the
> > switchdev API, you can push that down to the accelerator, and let it
> > do it.
> >
> > So you need to think about how you make the Linux bridge not pass
> > multicast traffic to the host stack, and then how you extend the
> > switchdev API so you can push this down to the accelerator.
> >
> > To really get switchdev, you often need to pivot your point of view a
> > bit. People often think switchdev is about writing drivers for
> > switches. It's not; it's about how you offload networking that Linux
> > can do down to a switch. And if the switch cannot accelerate it, you
> > leave Linux to do it.
> >
> > When you get into the details, I think you will find the switchdev API
> > already has what you need for this use case. What you need to figure
> > out is how you make the Linux bridge not pass multicast to the
> > host. Well, more precisely, not pass multicast the host has not asked
> > for. Then accelerate it.
> >
> Understood. If we missed anything in the handling of multicast for the
> CPU port we'll go back and fix it (I am assuming snooping is the answer
> here). Multicast is only one part of the equation though. What about
> the need for VLANs/FDBs on that port?
>
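As a quick sketch of what I mean by snooping being the answer (standard
iproute2 bridge options; sw0p1/sw0p2 are just example port names):

  ip link add br0 type bridge
  ip link set sw0p1 master br0
  ip link set sw0p2 master br0
  # Let the bridge snoop IGMP/MLD instead of flooding everything
  ip link set br0 type bridge mcast_snooping 1 mcast_querier 1
  # Inspect the multicast groups the bridge has learned
  bridge mdb show dev br0

The idea would be that the mdb entries the bridge builds up (plus any
added by hand with "bridge mdb add") are what gets pushed down to the
hardware, so the CPU port only receives groups the host has actually
joined.
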
I just noticed this: https://www.spinics.net/lists/netdev/msg504760.html
I tried "bridge vlan add vid 2 dev br0 self" in my initial attempts but
did not get a notification to program the CPU port (with the self
argument). This obviously solves our VLAN configuration issue, so it is
only static FDBs that are left.
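For the static FDB side, what I would expect to end up supporting is
roughly the following (MAC addresses and port names below are just
placeholders):

  # Static entry pointing at a switch port; should end up offloaded to
  # the hardware FDB through a switchdev notification
  bridge fdb add 00:11:22:33:44:55 dev sw0p1 master static
  # Entry meant for the host itself, i.e. traffic that has to be
  # trapped to the CPU port
  bridge fdb add 00:aa:bb:cc:dd:ee dev br0 self
  bridge fdb show br br0

The open question is the same as with the VLANs: making sure entries
destined for the host generate a notification so the CPU port can be
programmed.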
Regards
Ilias