Message-ID: <20230621102527.f47kmwminkhe7ttt@skbuf>
Date: Wed, 21 Jun 2023 13:25:27 +0300
From: Vladimir Oltean <olteanv@...il.com>
To: Christian Marangi <ansuelsmth@...il.com>
Cc: Andrew Lunn <andrew@...n.ch>,
Florian Fainelli <f.fainelli@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [net-next PATCH] net: dsa: qca8k: add support for
port_change_master
On Tue, Jun 20, 2023 at 03:04:28PM +0200, Christian Marangi wrote:
> > > +		if (dsa_port_is_cpu(dp))
> > > +			cpu_port_mask |= BIT(dp->index);
> > > +	} else {
> > > +		dp = dsa_port_from_netdev(master);
> >
> > dsa_port_from_netdev() is implemented by calling:
> >
> > static inline struct dsa_port *dsa_slave_to_port(const struct net_device *dev)
> > {
> > 	struct dsa_slave_priv *p = netdev_priv(dev);
> >
> > 	return p->dp;
> > }
> >
> > The "struct net_device *master" does not have a netdev_priv() of the
> > type "struct dsa_slave_priv *". So, this function does not do what you
> > want, but instead it messes through the guts of an unrelated private
> > structure, treating whatever it finds at offset 16 as a pointer, and
> > dereferencing that as a struct dsa_port *. I'm surprised it didn't
> > crash, to be frank.
> >
> > To find the cpu_dp behind the master, you need to dereference
> > master->dsa_ptr (for which we don't have a helper).
> >
>
> I was searching for a helper but no luck. Is it safe to access
> master->dsa_ptr? In theory the caller of port_change_master should
> already check that the passed master is a dsa port?
*that the passed network interface is a master - netdev_uses_dsa()
What is attached to the DSA master through dev->dsa_ptr is the CPU port.
What makes a net_device be a DSA master is dsa_master_setup(), and what
makes it stop being that is dsa_master_teardown(). Both are called under
rtnl_lock(), so as long as you are in a calling context where that lock
is held, you can be sure that the value of netdev_uses_dsa() does not
change for a device - and thus the value of dev->dsa_ptr.
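To make it concrete, here is a minimal sketch of what I mean (completely
untested, and not the patch under review - the LAG handling, the final
register update and the dsa_cpu_ports() helper are written from memory as
illustration only, assuming the usual qca8k driver context and that
rtnl_lock() is held, as explained above):

static int qca8k_port_change_master(struct dsa_switch *ds, int port,
				    struct net_device *master,
				    struct netlink_ext_ack *extack)
{
	struct qca8k_priv *priv = ds->priv;
	u32 cpu_port_mask = 0, val;
	int ret;

	if (netif_is_lag_master(master)) {
		struct net_device *lower;
		struct list_head *iter;

		/* LAG DSA master: collect the CPU ports behind every
		 * physical DSA master that is currently a lower of the LAG.
		 */
		netdev_for_each_lower_dev(master, lower, iter)
			if (netdev_uses_dsa(lower))
				cpu_port_mask |= BIT(lower->dsa_ptr->index);
	} else {
		/* Physical DSA master: the CPU port is what
		 * dsa_master_setup() attached to master->dsa_ptr, and it is
		 * safe to dereference for as long as rtnl_lock() is held.
		 */
		cpu_port_mask |= BIT(master->dsa_ptr->index);
	}

	/* Rewrite only the CPU port bits of LOOKUP_MEMBER, keeping the
	 * bits which describe the port's bridging domain untouched.
	 */
	ret = qca8k_read(priv, QCA8K_PORT_LOOKUP_CTRL(port), &val);
	if (ret)
		return ret;

	val &= ~dsa_cpu_ports(ds);
	val |= cpu_port_mask;

	return qca8k_rmw(priv, QCA8K_PORT_LOOKUP_CTRL(port),
			 QCA8K_PORT_LOOKUP_MEMBER, val);
}

The point is simply that master->dsa_ptr (or the dsa_ptr of each physical
lower, when the master is a LAG) is the only thing you need to dereference.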
> I see in other context that master->dsa_ptr is checked if not NULL.
> Should I do the same check here?
Nope. DSA takes care of passing a fully set up DSA master as the
"master" argument, and the calling convention is that rtnl_lock() is held.
> > > +	/* Assign the new CPU port in LOOKUP MEMBER */
> > > +	val |= cpu_port_mask;
> > > +
> > > +	ret = qca8k_rmw(priv, QCA8K_PORT_LOOKUP_CTRL(port),
> > > +			QCA8K_PORT_LOOKUP_MEMBER,
> > > +			val);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	/* Fast Age the port to flush FDB table */
> > > +	qca8k_port_fast_age(ds, port);
> >
> > Why do you have to fast age the (user) port?
> >
>
> The 2 CPU ports have different MAC addresses, is that a problem?
But fast ageing the user port (which is what "port" is, here) gets rid
of the FDB entries learned on that port as part of the bridging service,
and which have it as a *destination*. So I'm not sure how that operation
would help. The MAC address of the DSA masters, if learned at all, would
not point towards any user port but towards CPU ports.
FWIW, dsa_port_change_master() takes care of migrating/replaying a lot of
configuration, including the MAC addresses for local address filtering -
dsa_slave_unsync_ha() and dsa_slave_sync_ha().
That being said, those 2 functions are dead code for your switch,
because dsa_switch_supports_uc_filtering() and dsa_switch_supports_mc_filtering()
both return false.
It would be good to hear from you how you plan for the qca8k driver to
send and receive packets. From looking at the code (learning on the CPU
port isn't enabled), I guess that the MAC addresses of the ports are
never programmed in the FDB and thus, they reach the CPU by flooding,
with the usual drawbacks that come with that - packets destined for
local termination will also be flooded to other stations in the bridging
domain. Getting rid of the reliance on flooding will have its own
challenges. You can't enable automatic address learning [ on the CPU
ports ] with multiple active CPU ports, because one FDB entry could ping
pong from one CPU port to the other, leading to packet loss from certain
user ports when the FDB entry points to the CPU port that isn't affine
to the inbound port. So you'd probably need to program some sort of
"multicast" FDB entries that target all CPU ports, and rely on the
PORT_VID_MEMBER field to restrict forwarding to only one of those CPU
ports at a time.
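Roughly like this (untested, and I'm going from memory regarding the
qca8k_port_fdb_insert() helper and the dsa_cpu_ports() mask, so treat the
names as assumptions rather than a recipe):

/* Install one static FDB entry for a port's MAC address whose destination
 * mask covers all CPU ports. Which CPU port actually receives the packet
 * is then decided per ingress port, via PORT_VID_MEMBER / LOOKUP_MEMBER,
 * which only ever contain one CPU port at a time.
 */
static int qca8k_fdb_add_local_addr(struct dsa_switch *ds,
				    const unsigned char *addr, u16 vid)
{
	struct qca8k_priv *priv = ds->priv;
	u16 cpu_ports = dsa_cpu_ports(ds);

	return qca8k_port_fdb_insert(priv, addr, cpu_ports, vid);
}

That way, local termination no longer relies on flooding, while the
affinity between a user port and a single CPU port is still enforced by
the port masks.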
> > > +
> > > +	/* Reenable port */
> > > +	qca8k_port_set_status(priv, port, 1);
> >
> > or disable/enable it, for that matter?
> >
>
> The idea is to stop any traffic flowing from one CPU port to the other
> before doing the change.
Both DSA masters are prepared to handle traffic when port_change_master()
is called, so unless there's some limitation in the qca8k driver, there
shouldn't be any in DSA.
> > From my notes in commit eca70102cfb1 ("net: dsa: felix: add support for
> > changing DSA master"), I recall this:
> >
> > When we change the DSA master to a LAG device, DSA guarantees us that
> > the LAG has at least one lower interface as a physical DSA master.
> > But DSA masters can come and go as lowers of that LAG, and
> > ds->ops->port_change_master() will not get called, because the DSA
> > master is still the same (the LAG). So we need to hook into the
> > ds->ops->port_lag_{join,leave} calls on the CPU ports and update the
> > logical port ID of the LAG that user ports are assigned to.
> >
> > Otherwise said:
> >
> > $ ip link add bond0 type bond mode balance-xor && ip link set bond0 up
> > $ ip link set eth0 down && ip link set eth0 master bond0 # .port_change_master() gets called
> > $ ip link set eth1 down && ip link set eth1 master bond0 # .port_change_master() does not get called
> > $ ip link set eth0 nomaster # .port_change_master() does not get called
> >
> > Unless something has changed, I believe that you need to handle these as well,
> > and update the QCA8K_PORT_LOOKUP_MEMBER field. In the case above, your
> > CPU port association would remain towards eth0, but the bond's lower interface
> > is eth1.
> >
>
> Can you better describe this case?
>
> In theory, from the switch's point of view, with a LAG we just set that
> a user port can receive packets from both CPU ports.
>
> Or are you saying that when an additional member is added to the LAG,
> port_change_master is not called and we could face a scenario where:
>
> - the DSA master is the LAG
> - the LAG has the 2 CPU ports
> - the user port has the LAG as master, but QCA8K_PORT_LOOKUP_MEMBER
>   contains only one CPU port?
>
> If I got this right, then I understand what you mean: I should update
> the lag_join/leave implementations and refresh the configuration from
> there.
In Documentation/networking/dsa/configuration.rst I gave 2 examples of
changing the DSA master to be a LAG.
In the list of 4 commands I posted in the previous reply, I assumed that
eth0 is the original DSA master, and eth1 is the second (initially inactive)
DSA master.
When eth0 joins a LAG, DSA notices that and implicitly migrates all user
ports affine to eth0 towards bond0 as the new DSA master. At that time,
.port_change_master() will be called for all user ports under eth0, to
be notified that the new DSA master is bond0.
Once all user ports have bond0 as a DSA master, .port_change_master()
will no longer be called as long as bond0 remains their DSA master.
But the lower port configuration of bond0 can still change.
During the command where eth1 also becomes a lower port of bond0, DSA
just calls .port_lag_join() for the CPU port attached to eth1, and you
need to handle that and update the CPU port mask. Same thing when eth0
leaves bond0.
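Sketched out (again untested; the helper name, the "joining" parameter and
the dsa_port_to_master() lookup are my assumptions about how you could
structure this, not a requirement), the CPU port side could look like:

/* Called from the existing qca8k_port_lag_join()/qca8k_port_lag_leave()
 * when dsa_is_cpu_port(ds, port) is true: refresh QCA8K_PORT_LOOKUP_MEMBER
 * for every user port whose DSA master is this LAG.
 */
static int qca8k_port_lag_refresh_user_ports(struct dsa_switch *ds,
					     int cpu_port,
					     struct net_device *lag_dev,
					     bool joining)
{
	struct qca8k_priv *priv = ds->priv;
	struct dsa_port *dp;
	int ret;

	dsa_switch_for_each_user_port(dp, ds) {
		/* Skip user ports which are not affine to this LAG */
		if (dsa_port_to_master(dp) != lag_dev)
			continue;

		if (joining)
			ret = regmap_set_bits(priv->regmap,
					      QCA8K_PORT_LOOKUP_CTRL(dp->index),
					      BIT(cpu_port));
		else
			ret = regmap_clear_bits(priv->regmap,
						QCA8K_PORT_LOOKUP_CTRL(dp->index),
						BIT(cpu_port));
		if (ret)
			return ret;
	}

	return 0;
}

In .port_lag_join()/.port_lag_leave(), "port" identifies the CPU port, and
lag.dev is what I called lag_dev above.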