Date:   Sat, 24 Dec 2022 10:53:10 -0800
From:   Colin Foster <colin.foster@...advantage.com>
To:     Vladimir Oltean <olteanv@...il.com>
Cc:     Andrew Lunn <andrew@...n.ch>,
        Florian Fainelli <f.fainelli@...il.com>,
        Alexandre Belloni <alexandre.belloni@...tlin.com>,
        netdev@...r.kernel.org
Subject: Re: Crosschip bridge functionality

On Sat, Dec 24, 2022 at 02:59:34AM +0200, Vladimir Oltean wrote:
> Hi Colin,
> 
> On Fri, Dec 23, 2022 at 12:54:29PM -0800, Colin Foster wrote:
> > On Fri, Dec 23, 2022 at 09:05:27PM +0100, Andrew Lunn wrote:
> > I'm starting to understand why there's only one user of
> > crosschip_bridge_* functions. So this sounds to me like a "don't go down
> > this path - you're in for trouble" scenario.
> 
> Trying to build on top of what Andrew has already replied.
> 
> Back when I was new to DSA and completely unqualified to be a DSA reviewer/
> maintainer (it's debatable whether I am now), I actually had some of the
> same questions about what's possible in terms of software support, given
> the Vitesse architectural limitations for cross-chip bridging a la Marvell,
> in this email thread:
> https://patchwork.kernel.org/project/linux-arm-kernel/patch/1561131532-14860-5-git-send-email-claudiu.manoil@nxp.com/

Thank you for this link. I'll look it over. As usual, I'll need some
time to absorb all this information :-)

> 
> That being said, you need to broaden your detection criteria for cross-chip
> bridging; sja1105 (and tag_8021q in general) supports this too, except
> it's a bit hidden from the ds->ops->crosschip_bridge_join() operation.
> It all relies on the concept of cross-chip notifier chain from switch.c.
> dsa_tag_8021q_bridge_join() will emit a DSA_NOTIFIER_TAG_8021Q_VLAN_ADD
> event, which the other tag_8021q capable switches in the system will see
> and react to.
> 
> Because felix and sja1105 each support a tagger based on tag_8021q for
> different needs, there is an important difference in their implementations.
> The comment in dsa_tag_8021q_bridge_join() - called by sja1105 but not
> by felix - summarizes the essence of the difference.

Hmm... So the Marvell switches and the sja1105 both support the
"Distributed" part, but in slightly different ways?
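
To check my own understanding of the notifier part: below is a toy
userspace model of the flow as I currently picture it. It is not the
kernel code - the structs and function names are simplified stand-ins
for what I believe lives in net/dsa/switch.c and net/dsa/tag_8021q.c -
but it captures the "one port joins, every switch in the system reacts"
idea:

/* Toy model only: one switch's bridge join is broadcast as a
 * DSA_NOTIFIER_TAG_8021Q_VLAN_ADD event and every tag_8021q-capable
 * switch in the system reacts by installing the VLAN locally.
 */
#include <stdbool.h>
#include <stdio.h>

enum toy_notifier_event {
	DSA_NOTIFIER_TAG_8021Q_VLAN_ADD,
};

struct toy_switch {
	const char *name;
	bool tag_8021q_capable;
};

/* One switch reacting to the event: install the bridging VLAN locally. */
static void toy_switch_event(struct toy_switch *ds,
			     enum toy_notifier_event ev, int vid)
{
	if (ev == DSA_NOTIFIER_TAG_8021Q_VLAN_ADD && ds->tag_8021q_capable)
		printf("%s: install tag_8021q VLAN %d\n", ds->name, vid);
}

/* Every switch in the system sees the event, not just the one whose
 * port joined the bridge. */
static void toy_broadcast(struct toy_switch *sw, int n,
			  enum toy_notifier_event ev, int vid)
{
	for (int i = 0; i < n; i++)
		toy_switch_event(&sw[i], ev, vid);
}

int main(void)
{
	struct toy_switch tree[] = {
		{ "sja1105-a", true },
		{ "sja1105-b", true },
		/* felix uses its tag_8021q tagger for a different purpose
		 * and doesn't call dsa_tag_8021q_bridge_join() today. */
		{ "felix", false },
	};
	int vid = 100;	/* arbitrary VID, just for the example */

	/* Roughly the effect of dsa_tag_8021q_bridge_join() emitting
	 * the event. */
	toy_broadcast(tree, 3, DSA_NOTIFIER_TAG_8021Q_VLAN_ADD, vid);
	return 0;
}

Feel free to correct me if that mental model is off.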

> 
> If Felix were to gain support for tag_8021q cross-chip bridging*, the
> driver would need to look at the switch's position within the PCB topology.
> On the user ports, tag_8021q would have to be implemented using the VCAP
> TCAM rules, to retain support for VLAN-aware bridging and just push/pop the
> VLAN that serves as make-shift tag. On the DSA "cascade" ports, tag_8021q
> would have to be implemented using the VLAN table, in order to make the
> switch understand the tag that's already in the packet and route based
> on it, rather than push yet another one. The proper combination of VCAP
> rules and VLAN table entries needs much more consideration to cover all
> scenarios (CPU RX over a daisy chain; CPU TX over a daisy chain;
> autonomous forwarding over 2 switches; autonomous forwarding over 3
> switches; autonomous forwarding between sja1105 and felix; forwarding
> done by felix for traffic originated by one sja1105 and destined to
> another sja1105; forwarding done by felix for traffic originated by a
> sja1105 and destined to a felix user port with no other downstream switch).

^ This paragraph is what I need! Although I'm leaning very much toward
the "run away" solution (and buying some fun hardware in the process),
this is something I'll keep revisiting as I learn. If it isn't
fall-off-a-log easy for you, I probably don't stand a chance.
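
For whenever I do circle back to it, here is a purely hypothetical
sketch of how I picture the user-port vs. cascade-port split you
describe. The helper names are made up for illustration - they are not
ocelot/felix APIs - and the real VCAP/VLAN-table programming would
obviously be far more involved:

/* Hypothetical sketch only: per-port setup branching between user
 * ports (VCAP TCAM rule to push/pop the make-shift tag_8021q VLAN) and
 * cascade ports (plain VLAN table entry so the switch understands the
 * tag already in the packet and forwards on it).
 */
#include <stdio.h>

enum port_role { USER_PORT, CASCADE_PORT, CPU_PORT };

static void program_vcap_push_pop(int port, int vid)
{
	/* User port: keep VLAN-aware bridging working, only push/pop
	 * the tag_8021q VLAN at the edge. */
	printf("port %d: VCAP rule push/pop VLAN %d\n", port, vid);
}

static void program_vlan_table(int port, int vid)
{
	/* Cascade port: route on the tag that is already there instead
	 * of pushing yet another one. */
	printf("port %d: VLAN table entry for VLAN %d\n", port, vid);
}

static void hypothetical_port_setup(int port, enum port_role role, int vid)
{
	switch (role) {
	case USER_PORT:
		program_vcap_push_pop(port, vid);
		break;
	case CASCADE_PORT:
		program_vlan_table(port, vid);
		break;
	case CPU_PORT:
		/* CPU/NPI port handling left out; it depends on the
		 * tagger in use. */
		break;
	}
}

int main(void)
{
	int vid = 100;	/* arbitrary VID, just for the example */

	hypothetical_port_setup(0, USER_PORT, vid);
	hypothetical_port_setup(4, CASCADE_PORT, vid);
	return 0;
}

All the interesting cases you list (daisy-chained CPU RX/TX, mixed
sja1105/felix forwarding, and so on) would sit on top of something like
that, which is exactly where I expect the trouble to start.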

> 
> You might find some of my thoughts on this topic interesting, in the
> "Switch topology changes" chapter of this PDF:
> https://lpc.events/event/11/contributions/949/attachments/823/1555/paper.pdf

I'm well aware of this paper :-) I'll give it another re-read, as I
always find new things.

> 
> With that development summary in mind, you'll probably be prepared to
> use "git log" to better understand some of the stages that tag_8021q
> cross-chip bridging has been through.

Yes, a couple key terms and a little background can go a very long way!
Thanks.
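
I'll probably start with something along the lines of
"git log --oneline -- net/dsa/tag_8021q.c net/dsa/switch.c" (assuming
those are still the right paths) and work outward from there.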

> 
> In principle, when comparing tag_8021q cross-chip bridging to something
> proprietary like Marvell, I consider it to be somewhat analogous to
> Russian/USSR engineering: it's placed on the "good" side of the diminishing
> returns curve, or i.o.w., it works stupidly well for how simplistic it is.
> I could be interested in helping if you come up with a sound proposal that
> addresses your needs and is generic enough that pieces of it are useful
> to others too.

Great to know. I'm in a very early "theory only" stage of this. First
things first - I need to button up full switch functionality and add
another year to the Copyright notice.

> 
> *I seriously doubt that any hw manufacturer would be crazy enough to
> use Vitesse switches for an application for which they are essentially
> out of spec and out of their intended use. Yet that is more or less the
> one-sentence description of what we, at NXP, are doing with them, so I
> know what it's like and I don't necessarily discourage it ;) Generally
> I'd say they take a bit of pushing quite well (while failing at some
> arguably reasonable and basic use cases, like flow control on NPI port -
> go figure).
