Date:   Fri, 23 Dec 2022 12:54:29 -0800
From:   Colin Foster <colin.foster@...advantage.com>
To:     Andrew Lunn <andrew@...n.ch>
Cc:     Florian Fainelli <f.fainelli@...il.com>,
        Vladimir Oltean <olteanv@...il.com>,
        Alexandre Belloni <alexandre.belloni@...tlin.com>,
        netdev@...r.kernel.org
Subject: Re: Crosschip bridge functionality

On Fri, Dec 23, 2022 at 09:05:27PM +0100, Andrew Lunn wrote:
> On Fri, Dec 23, 2022 at 11:37:47AM -0800, Colin Foster wrote:
> > Hello,
> > 
> > I've been looking into what it would take to add the Distributed
> > aspect to the Felix driver, and I have some general questions about
> > the theory of operation and about any limitations I might not be
> > foreseeing. It might be a fair bit of work for me even to get
> > hardware to test, so avoiding dead ends early would be really nice!
> > 
> > Also, it seems like all the existing Felix-like hardware is
> > integrated into an SoC, so there are really no other potential users
> > at this time.
> > 
> > For a distributed setup, it looks like I'd just need to create
> > felix_crosschip_bridge_{join,leave} routines, using the mv88e6xxx
> > driver as a template. These routines would create internal VLANs so
> > that, assuming a tagging protocol the switch can offload (your
> > documentation specifically mentions Marvell-tagged frames, seemingly
> > for this reason), everything would be fully offloaded to the
> > switches.
> > 
> > What's the catch?
> 
> I actually think you need silicon support for this. Earlier versions
> of the Marvell switches are missing some functionality, which results
> in VLANs leaking in distributed setups. I think the switches also
> share information between themselves over the DSA ports, i.e. the
> ports between switches.
> 
> I've no idea if you can replicate the Marvell DSA concept with VLANs.
> The Marvell header has the D in DSA as a core concept. The SoC can
> request that a frame be sent out a specific port of a specific switch,
> and each switch has a routing table which indicates what egress port
> to use to reach a specific switch. Frames received at the SoC indicate
> both the ingress port and the ingress switch, etc.

"It might not work at all" is definitely a catch :-)

I haven't looked into the Marvell documentation about this, so maybe
that's where I should go next. It seems Ocelot chips support
double-tagging, which would lend itself to the SoC being able to
determine the ingress and egress port and switch... though that might
imply it could only work with DSA ports on the first chip, which would
be an understandable limitation.
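
For my own notes, the metadata Andrew describes seems to boil down to
something like the below. This is a toy model only: the field names,
widths, and MAX_SWITCHES constant are made up, and the real tag format
lives in net/dsa/tag_dsa.c, which I haven't read yet.

#include <linux/types.h>

#define MAX_SWITCHES 32	/* arbitrary, for illustration only */

/* Toy model of the per-frame metadata in a Marvell-style DSA fabric. */
struct dsa_frame_meta {
	u8 src_switch;	/* TO_CPU: switch the frame entered the fabric on */
	u8 src_port;	/* TO_CPU: port of that switch it arrived on */
	u8 dst_switch;	/* FROM_CPU: switch that must transmit the frame */
	u8 dst_port;	/* FROM_CPU: port of that switch to transmit on */
};

/* Plus, per switch, a routing table: "to reach switch N, forward out
 * DSA port egress_dsa_port[N]".  A double-VLAN scheme would somehow
 * have to encode all of the above in the outer/inner tags.
 */
struct dsa_fabric_route {
	u8 egress_dsa_port[MAX_SWITCHES];	/* indexed by destination switch */
};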

> 
> > In the Marvell case, is there any gotcha where "under these scenarios,
> > the controlling CPU needs to process packets at line rate"?
> 
> None that I know of. But I'm sure Marvell put a reasonable amount of
> thought into how to make a distributed switch. There is at least one
> patent covering the concept. It could be that a VLAN-based
> re-implementation could have such problems.

I'm starting to understand why there's only one user of the
crosschip_bridge_* functions. This sounds to me like a "don't go down
this path - you're in for trouble" scenario.
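
For the archives, in case someone picks this up later: the shape of
what I had in mind was roughly the below, modeled on
mv88e6xxx_crosschip_bridge_{join,leave}. The prototypes follow my
reading of dsa_switch_ops in current kernels; the bodies are pure
hand-waving, and the TODOs are exactly the part that may not be
workable without silicon support.

#include <net/dsa.h>

static int felix_crosschip_bridge_join(struct dsa_switch *ds, int tree_index,
				       int sw_index, int port,
				       struct dsa_bridge bridge,
				       struct netlink_ext_ack *extack)
{
	/* Ignore events for other switch trees (mv88e6xxx does the
	 * same, if I'm reading it right).
	 */
	if (tree_index != ds->dst->index)
		return 0;

	/* TODO: install an internal VLAN carrying this bridge's
	 * traffic across the DSA link toward switch sw_index.
	 */
	return -EOPNOTSUPP;
}

static void felix_crosschip_bridge_leave(struct dsa_switch *ds, int tree_index,
					 int sw_index, int port,
					 struct dsa_bridge bridge)
{
	/* TODO: tear down whatever join installed. */
}

/* ... wired into felix_switch_ops:
 *	.crosschip_bridge_join	= felix_crosschip_bridge_join,
 *	.crosschip_bridge_leave	= felix_crosschip_bridge_leave,
 */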


Thanks for the info!

> 
> 	Andrew
