Message-ID: <20180618164602.GA26411@apalos>
Date: Mon, 18 Jun 2018 19:46:02 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Andrew Lunn <andrew@...n.ch>
Cc: netdev@...r.kernel.org, grygorii.strashko@...com,
ivan.khoronzhuk@...aro.org, nsekhar@...com, jiri@...nulli.us,
ivecera@...hat.com, f.fainelli@...il.com, francois.ozog@...aro.org,
yogeshs@...com, spatton@...com, Jose.Abreu@...opsys.com
Subject: Re: [RFC v2, net-next, PATCH 0/4] Add switchdev on TI-CPSW
On Mon, Jun 18, 2018 at 06:28:36PM +0200, Andrew Lunn wrote:
> > Yes, if the CPU port is added on the VLAN then unregistered multicast traffic
> > (and thus IGMP joins) will reach the CPU port and everything will work as
> > expected. I think we should not consider this as a "problem" as long as it's
> > described properly in Documentation. This switch is expected to support this.
>
> Back to the two e1000e. What would you expect to happen with them?
> Either IGMP snooping needs to work, or you don't do snooping at
> all.
That's a different use case: there is no CPU port on an e1000e, so snooping
"just works". You can't configure the card to drop those packets. The
equivalent example would be "I filter and drop IGMP messages on one of the
two e1000e's on the bridge, but I still expect IGMP to work". The hardware
here is totally different: this behaviour is a configuration option, and for
the TI use cases a valid one.
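To make the comparison concrete, this is roughly what that hypothetical looks
like with two e1000e ports (eth2/eth3 are just placeholder names); the only
way to break snooping there is to add a filter yourself:

  ip link add name br0 type bridge
  ip link set dev br0 type bridge mcast_snooping 1
  ip link set dev eth2 master br0
  ip link set dev eth3 master br0
  # deliberately drop IGMP arriving on eth2 before the bridge sees it,
  # e.g. (untested) with ebtables in the nat PREROUTING chain:
  ebtables -t nat -A PREROUTING -i eth2 -p IPv4 --ip-protocol igmp -j DROP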
>
> > What you describe is essentially what happens on "example 2."
> > Enabling the unregistered multicast traffic to be directed to the CPU will cover
> > all use cases that require no user intervention for everything to work. MDBs
> > will automatically be added to/removed from the hardware and traffic will be
> > offloaded.
> > With the current code the user has that possibility, so it's up to him to
> > decide what mode of configuration he prefers.
>
> So by default, it just needs to work. You can give the user the option
> to shoot themselves in the foot, but they need to actively pull the
> trigger to blow their own foot off.
Yes, it does work by default. I don't consider it "foot shooting", though.
If we stop thinking about switches connected to user environments and think of
industrial ones, my understanding is that this is a common scenario that needs
to be supported.
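To illustrate the two modes from the user's side (eth0 here stands for one of
the switch ports): with snooping the MDB entries come and go on their own and
can just be inspected, while in the industrial case the user can pin groups
statically, something like:

  # snooping mode: entries appear/disappear as hosts join/leave
  bridge mdb show dev br0
  # static mode: pin a group to a port explicitly
  bridge mdb add dev br0 port eth0 grp 239.1.1.1 permanent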
>
> > > > Setting IFF_MULTICAST on/off (on eth0/eth1/br0) will affect the registered
> > > > multicast masks programmed in the switch (for port1, port2 and the CPU port
> > > > respectively).
> > >
> > > > This must occur before adding VLANs on the interfaces. If you change the
> > > > flag after the VLAN configuration you need to re-issue the VLAN config
> > > > commands.
> > >
> > > This you should fix. You should be able to get the stack to tell you
> > > about all the configured VLANs, so you can re-program the switch.
> > I was planning to fix these via bridge link commands, which would get propagated
> > to the driver for port1/port2 and the CPU port. I just didn't find anything
> > relevant to multicast in the bridge commands apart from flooding (which is used
> > properly). I think the proper way to do this is to follow the logic that was
> > introduced for VLANs, i.e.:
> > bridge vlan add dev br0 vid 100 pvid untagged self <---- destined for CPU port
> > and apply it to multicast/flooding etc.
> > This requires iproute2 changes first though.
>
> No, I think you can do this in the driver. The driver just needs to
> ask the stack to tell it about all the VLANs again. Or you walk the
> VLAN tables in the hardware and do the programming based on that. I
> don't see why user space should be involved at all. What would you
> expect with two e1000e's?
We are pretty much describing the same thing; I just thought having a bridge
command for it was more appropriate.
After the user removes the multicast flag on an interface I'll just walk the
VLANs and adjust accordingly. It's doable and I'll change it in the patch.
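For reference, the user-visible sequence I have in mind (eth0 and vid 100 are
just an example) would then be:

  bridge vlan add dev eth0 vid 100
  bridge vlan add dev br0 vid 100 pvid untagged self    # CPU port
  # toggling the flag later should now be enough on its own; the driver
  # re-walks the VLAN table, so the vlan commands above do not need to be
  # re-issued:
  ip link set dev eth0 multicast off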
Thanks
Ilias