Message-ID: <20180618160418.GA25068@apalos>
Date: Mon, 18 Jun 2018 19:04:19 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Andrew Lunn <andrew@...n.ch>
Cc: netdev@...r.kernel.org, grygorii.strashko@...com,
ivan.khoronzhuk@...aro.org, nsekhar@...com, jiri@...nulli.us,
ivecera@...hat.com, f.fainelli@...il.com, francois.ozog@...aro.org,
yogeshs@...com, spatton@...com, Jose.Abreu@...opsys.com
Subject: Re: [RFC v2, net-next, PATCH 0/4] Add switchdev on TI-CPSW
On Mon, Jun 18, 2018 at 05:04:24PM +0200, Andrew Lunn wrote:
> Hi Ilias
>
> Thanks for removing the CPU port. That helps a lot moving forward.
>
> > - Multicast testing: client on port1 (tagged on VLAN 100), server on port2
> > switch-config is provided by TI (https://git.ti.com/switch-config)
> > and is used to verify correct switch configuration.
> > 1. switch-config output
> > - type: vlan , vid = 100, untag_force = 0x4, reg_mcast = 0x6,
> > unreg_mcast = 0x0, member_list = 0x6
> > Server running on sw0p2: iperf -s -u -B 239.1.1.1 -i 1
> > Client running on sw0p1: iperf -c 239.1.1.1 -u -b 990m -f m -i 5 -t 3600
> > No IGMP reaches the CPU port to add MDBs (since the CPU does not receive
> > unregistered multicast, as programmed).
>
> Is this something you can work around? Is there a TCAM you can program
> to detect IGMP and pass the packets to the CPU? Without receiving
> IGMP, multicast is pretty broken.
Yes, it's described in example 2 (I'll explain in detail further down).
>
> If i understand right, a multicast listener running on the CPU should
> work, since you can add an MDB to receive multicast traffic from the
> two ports. Multicast traffic sent from the CPU also works. What does
> not work is IGMP snooping of traffic between the two switch ports. You
> have no access to the IGMP frames, so cannot snoop. So unless you can
> work around that with a TCAM, i think you just have to blindly pass
> all multicast between the two ports.
Yes, if the CPU port is added to the VLAN then unregistered multicast traffic
(and thus IGMP joins) will reach the CPU port and everything will work as
expected. I think we should not consider this a "problem" as long as it's
described properly in the Documentation; the switch is expected to support this.
What you describe is essentially what happens in "example 2".
Directing unregistered multicast traffic to the CPU port covers all the use
cases that require no user intervention for everything to work: MDBs will
automatically be added to/removed from the hardware and the traffic will be
offloaded.
With the current code the user has that option, so it's up to them to decide
which mode of configuration they prefer.
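For reference, a rough sketch of what the "example 2" configuration looks like
from userspace (interface names assumed to be sw0p1/sw0p2/br0, matching the
iperf commands above):

  ip link add name br0 type bridge
  ip link set dev sw0p1 master br0
  ip link set dev sw0p2 master br0
  # port1 tagged, port2 untagged on VLAN 100
  bridge vlan add dev sw0p1 vid 100
  bridge vlan add dev sw0p2 vid 100 pvid untagged
  # add the CPU port to the VLAN so IGMP/unregistered multicast reaches it
  bridge vlan add dev br0 vid 100 self

If the CPU port is kept out of the VLAN instead, a static MDB entry can still
be added by hand so registered multicast is forwarded to the server port, e.g.
(group/vid just illustrative):

  bridge mdb add dev br0 port sw0p2 grp 239.1.1.1 permanent vid 100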
>
> > Setting IFF_MULTICAST on/off (on eth0/eth1/br0) will affect the registered
> > multicast masks programmed in the switch (for port1, port2 and the CPU port
> > respectively).
>
> > This must occur before adding VLANs on the interfaces. If you change the
> > flag after the VLAN configuration you need to re-issue the VLAN config
> > commands.
>
> This you should fix. You should be able to get the stack to tell you
> about all the configured VLANs, so you can re-program the switch.
I was planning to fix these via bridge link commands, which would get propagated
to the driver for port1/port2 and the CPU port. I just didn't find anything
relevant to multicast in the bridge commands apart from flooding (which is
already handled properly). I think the proper way to do this is to follow the
logic that was introduced for VLANs, i.e.:
bridge vlan add dev br0 vid 100 pvid untagged self <---- destined for CPU port
and apply it to multicast/flooding etc.
This requires iproute2 changes first though.
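To be clear about what exists today vs. what is proposed (interface names
assumed), per-port behaviour can already be controlled with:

  # toggle IFF_MULTICAST on a port; the VLAN config has to be re-issued after
  ip link set dev sw0p1 multicast off
  bridge vlan add dev sw0p1 vid 100

  # per-port control of unregistered multicast flooding
  bridge link set dev sw0p1 mcast_flood off

whereas something along the lines of

  bridge link set dev br0 mcast_flood off self   <---- destined for CPU port

would be the multicast equivalent of the VLAN 'self' command above. That last
one is only a sketch of the proposed extension, not something that works today.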
>
> > - NFS:
> > The only way for NFS to work is by chrooting to a minimal environment when
> > switch configuration that will affect connectivity is needed.
>
> You might want to look at the commit history for DSA. Florian added a
> patch which makes NFS root work with DSA. It might give you clues as
> to what you need to add to make it just work.
I'll have a look. Chrooting is rarely needed in our case anyway. It's only
needed when "loss of connectivity" is bound to take place.
>
> Andrew
Thanks
Ilias