Message-ID: <20211216233812.rpalegklcrd4ifzs@skbuf>
Date: Fri, 17 Dec 2021 01:38:12 +0200
From: Vladimir Oltean <olteanv@...il.com>
To: Ansuel Smith <ansuelsmth@...il.com>
Cc: Andrew Lunn <andrew@...n.ch>,
Vivien Didelot <vivien.didelot@...il.com>,
Florian Fainelli <f.fainelli@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [net-next PATCH RFC v6 00/16] Add support for qca8k mdio rw in
Ethernet packet

On Wed, Dec 15, 2021 at 05:34:45PM +0100, Ansuel Smith wrote:
> > > I tested this with multiple CPU ports and with port 6 set as the only
> > > CPU port, and it's sad.
> > > It seems they implemented this feature in a bad way, and it is only
> > > supported on CPU port 0. When CPU port 6 is the only CPU port, the
> > > switch doesn't send ack packets. With multiple CPU ports, ack packets
> > > are not duplicated and only CPU port 0 sends them. The same goes for
> > > the MIB counters.
> > > For this reason this feature is enabled only when CPU port 0 is
> > > enabled and operational.
> >
> > Let's discuss this a bit (not the hardware limitation, that one is what
> > it is). When DSA has multiple CPU ports, right now both host-side
> > Ethernet ports are set up as DSA masters. By being a DSA master, I mean
> > that dev->dsa_ptr is a non-NULL pointer, so these interfaces expect to
> > receive packets that are trapped by the DSA packet_type handlers.
> > But due to the way in which dsa_tree_setup_default_cpu() is written,
> > by default only the first CPU port will be used. So the host port
> > attached to the second CPU port will be a DSA master technically, but it
> > will be an inactive one and won't be anyone's master (no dp->cpu_dp will
> > point to this master's dev->dsa_ptr). My idea for DSA support for
> > multiple CPU ports is to allow changing the dp->cpu_dp mapping
> > through rtnetlink, on a per-user-port basis (yes, this implies we
> > don't have a solution for DSA ports).
>
> I have a similar implementation that was proposed as an RFC a while ago.
Yes, well, how to assign a user port to a CPU port seems not to be the
biggest problem that needs to be solved before support for multiple CPU
ports can fully go in.
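
For reference, here is a paraphrased sketch (not a verbatim copy) of
what dsa_tree_setup_default_cpu() in net/dsa/dsa2.c does today: every
user and DSA port gets its dp->cpu_dp pointed at the first CPU port
found in the tree, and any further CPU port ends up being nobody's
master.

static int dsa_tree_setup_default_cpu(struct dsa_switch_tree *dst)
{
        struct dsa_port *cpu_dp, *dp;

        /* Pick the first CPU port declared in the device tree */
        cpu_dp = dsa_tree_find_first_cpu(dst);
        if (!cpu_dp) {
                pr_err("DSA: tree %d has no CPU port\n", dst->index);
                return -EINVAL;
        }

        /* Make it the default CPU port of every user and DSA port of
         * the fabric; any other CPU port is left unused.
         */
        list_for_each_entry(dp, &dst->ports, list)
                if (dsa_port_is_user(dp) || dsa_port_is_dsa(dp))
                        dp->cpu_dp = cpu_dp;

        return 0;
}
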
> > My second observation is based on the fact that some switches support a
> > single CPU port, yet they are wired using two Ethernet ports towards the
> > host. The Felix and Seville switches are structured this way. I think
> > some Broadcom switches too.
> > Using the rtnetlink user API, a user would be able to migrate all user
> > ports between one CPU port and the other, and as long as the
> > configuration is valid, the switch driver should accept this (we perform
> > DSA master changing while all ports are down, and we could refuse going
> > up if e.g. some user ports are assigned to CPU port A and some user
> > ports to CPU port B). Nonetheless, the key point is that when a single
> > CPU port is used, the other CPU port kinda sits there doing nothing. So
> > I also have some patches that make the host port attached to this other
> > CPU port be a normal interface (not a DSA master).
> > The switch side of things is still a CPU port (not a user port, since
> > there still isn't any net device registered for it), but nonetheless, it
> > is a CPU port with no DSA tagging over it, hence the reason why the host
> > port isn't a DSA master. The patch itself that changes this behavior
> > sounds something like "only set up a host port as a DSA master if some
> > user ports are assigned to it".
> > As to why I'm doing it this way: the device tree should be fixed, and I
> > do need to describe the connection between the switch CPU ports and the
> > DSA masters via the 'ethernet = <&phandle>;' property. From a hardware
> > perspective, both switch ports A and B are CPU ports, equally. But this
> > means that DSA won't create a user port for the CPU port B, which would
> > be the more natural way to use it.
> > Now why this pertains to you: Vivien's initial stab at management over
> > Ethernet wanted to decouple a bit the concept of a DSA master (used for
> > the network stack) from the concept of a host port used for in-band
> > management (used for register access). Whereas our approach here is to
> > keep the two coupled, due to us saying "hey, if there's a direct
> > connection to the switch, this is a DSA master anyway, is it not?".
> > Well, here's one thing which you wouldn't be able to do if I pursue my
> > idea with lazy DSA master setup: if you decide to move all your user
> > ports using rtnetlink to CPU port 6, then the DSA master of CPU port 0
> > will cease to be a DSA master. So that will also prevent the management
> > protocol from working.
>
> About the migration problem, I wonder if we can just use a refcount
> representing the users of the master port. The port would no longer be
> a DSA master if no users are connected. A switch could increase this
> ref if the port is mandatory for some operation (qca8k would increase
> the ref when the master becomes operational and decrease it when it
> stops being operational, at which point the port can stop being a DSA
> master). That should handle all the other switches and still permit a
> driver to "bypass" this behaviour.
Maybe. Although not quite in the way you propose. Remember that the idea
is for a DSA master to be a regular interface until it gains a user. So
there's a chicken-and-egg problem if you want to become a user from
->master_state_change(): that callback only runs for ports which already
are masters, so it can never run for the port it is supposed to turn
into one. You'd have to specify the requirement upfront.
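
To illustrate the circularity with a hypothetical sketch (the helper
below does not exist, it only marks where the upfront declaration would
live): the reference would have to be taken from switch setup, where
the driver states its requirement before any master exists, not from
the master state notifier.

/* Hypothetical: pin CPU port 0's host interface so that DSA sets it up
 * as (and keeps it) a DSA master even with no user ports assigned to
 * it. Taking this reference from ->master_state_change() instead would
 * be circular: that notifier only fires for interfaces which already
 * are DSA masters.
 */
static int qca8k_pin_mgmt_master(struct dsa_switch *ds)
{
        struct dsa_port *cpu_dp = dsa_to_port(ds, 0); /* only port 0 works */

        return dsa_port_request_master(cpu_dp); /* hypothetical API */
}
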
> > I don't want to break your use case, but then again, I'm wondering what
> > we could do to support the second CPU port working without DSA tagging,
> > without changing the device trees to declare it as a user port (which in
> > itself isn't bad, it's just that we need to support all use cases with a
> > single, unified device tree).
>
> Just some info about the secondary CPU port.
> According to the documentation, the second CPU port in SGMII mode can
> also be used for other tasks (base-x, mac and phy), so yes, we should
> understand how to handle this. The mode is set based on the PHY mode
> and on whether the DSA port is a CPU port. Device tree changes can be
> accepted since, AFAIK, due to DSA not supporting multiple CPU ports,
> CPU port 6 was never used/defined. (But I'm not sure that is the case
> for all the devices we have on OpenWrt.)
What do you mean exactly by "other tasks"?
> Considering that introducing multi-CPU port support would require a
> bit of rework, I wonder if we should introduce some new bindings/node
> and fall back to the legacy behaviour (i.e. force the first CPU port
> as the unique CPU port and ignore the others) in the absence of this
> new implementation. (Hoping I didn't get the main problem here all
> wrong.)
The defaults would stay the same. (I have no idea why we would introduce
new device tree bindings? The only device tree change IMO would be to
declare the link between the second CPU port and its DSA master, if you
haven't done that already.) But my key point was that, to some extent,
some change to the current behavior will still be required. Right now, a
5.15 kernel that sees a device tree with 2 CPU ports will set up 2 DSA
masters. Maybe kernel 5.17 will only start off with the first port as a
DSA master, and the other just as a candidate. I'm hoping this won't
change observable behavior for the worse for anyone, because device
trees are supposed to be there to stay, not change on a whim. My hope is
based on the fact that, as far as I can see, that second DSA master is
effectively useless.

Which in fact creates the second problem: exactly because the second
host port is useless with the current code structure, I can see people
describing it as a user port rather than a CPU port in current device
trees, just to make some use out of it. But that restricts the potential
user base (and therefore the appeal) of my change of behavior, or of
multi-CPU port support in general, and this is a bit sad. I think we
should all be spending a bit more time with current-generation kernels
on "updated" device trees with multiple CPU ports defined, and see
what's broken and what could be improved, because otherwise we could be
leaving behind a huge mess when the device trees do get updated and we
need to run the occasional old kernel on them.
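
For context, the 'ethernet' phandle is the only thing that makes a
switch port a CPU port today; here is a trimmed paraphrase (not
verbatim) of dsa_port_parse_of() from net/dsa/dsa2.c:

static int dsa_port_parse_of(struct dsa_port *dp, struct device_node *dn)
{
        struct device_node *ethernet = of_parse_phandle(dn, "ethernet", 0);
        const char *name = of_get_property(dn, "label", NULL);
        bool link = of_property_read_bool(dn, "link");

        dp->dn = dn;

        if (ethernet) {
                /* An 'ethernet' phandle makes this a CPU port, wired
                 * to the DSA master behind that phandle.
                 */
                struct net_device *master;

                master = of_find_net_device_by_node(ethernet);
                of_node_put(ethernet);
                if (!master)
                        return -EPROBE_DEFER;

                return dsa_port_parse_cpu(dp, master);
        }

        if (link)
                /* A 'link' property makes this a DSA (cascade) port */
                return dsa_port_parse_dsa(dp);

        return dsa_port_parse_user(dp, name);
}

So describing the second CPU port is just a matter of adding that
phandle to its port node, not of inventing new bindings.
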
> But if I'm not wrong, some years ago there was an idea of a node used
> to declare the master port separately from the generic port nodes, but
> it was rejected?
I don't know anything about this.