Date:   Thu, 3 Mar 2022 14:35:30 +0000
From:   Vladimir Oltean <vladimir.oltean@....com>
To:     Alvin Šipraga <ALSI@...g-olufsen.dk>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Jakub Kicinski <kuba@...nel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Florian Fainelli <f.fainelli@...il.com>,
        Andrew Lunn <andrew@...n.ch>,
        Vivien Didelot <vivien.didelot@...il.com>,
        Vladimir Oltean <olteanv@...il.com>,
        Ido Schimmel <idosch@...dia.com>,
        Tobias Waldekranz <tobias@...dekranz.com>,
        Claudiu Manoil <claudiu.manoil@....com>,
        Alexandre Belloni <alexandre.belloni@...tlin.com>,
        "UNGLinuxDriver@...rochip.com" <UNGLinuxDriver@...rochip.com>
Subject: Re: [PATCH net-next 00/10] DSA unicast filtering

On Thu, Mar 03, 2022 at 02:20:29PM +0000, Alvin Šipraga wrote:
> Vladimir Oltean <vladimir.oltean@....com> writes:
> 
> > Hi Alvin,
> >
> > On Thu, Mar 03, 2022 at 12:16:26PM +0000, Alvin Šipraga wrote:
> >> Hi Vladimir,
> >> 
> >> Vladimir Oltean <vladimir.oltean@....com> writes:
> >> 
> >> > This series doesn't attempt anything extremely brave, it just changes
> >> > the way in which standalone ports which support FDB isolation work.
> >> >
> >> > Up until now, DSA has recommended that switch drivers configure
> >> > standalone ports in a separate VID/FID with learning disabled, and with
> >> > the CPU port as the only destination, reached trivially via flooding.
> >> > That works, except that standalone ports will deliver all packets to the
> >> > CPU. We can leverage the hardware FDB as a MAC DA filter, and disable
> >> > flooding towards the CPU port, to force the dropping of packets with
> >> > unknown MAC DA.
> >> >
> >> > We handle port promiscuity by re-enabling flooding towards the CPU port.
> >> > This is relevant because the bridge puts its automatic (learning +
> >> > flooding) ports in promiscuous mode, and this makes some things work
> >> > automagically, like for example bridging with a foreign interface.
> >> > We don't delve yet into the territory of managing CPU flooding more
> >> > aggressively while under a bridge.
> >> >
> >> > The only switch driver that benefits from this work right now is the
> >> > NXP LS1028A switch (felix). The others need to implement FDB isolation
> >> > first, before DSA is going to install entries to the port's standalone
> >> > database. Otherwise, these entries might collide with bridge FDB/MDB
> >> > entries.
> >> >
> >> > This work was done mainly to have all the required features in place
> >> > before somebody starts seriously architecting DSA support for multiple
> >> > CPU ports. Otherwise it is much more difficult to bolt these features on
> >> > top of multiple CPU ports.
> >> 
> >> So, previously FDB entries were only installed on bridged ports. Now you
> >> also want to install FDB entries on standalone ports so that flooding
> >> can be disabled on standalone ports for the reasons stated in your cover
> >> letter.
> >
> > Not "on standalone ports", but on the CPU ports, for the standalone
> > ports' addresses. Otherwise, yes.
> >
> >> To implement FDB isolation in a DSA driver, a typical approach might be
> >> to use a filter ID (FID) for the FDB entries that is unique per
> >> bridge. That is, since FDB entries were only added on bridged ports
> >> (through learning or static entries added by software), the DSA driver
> >> could readily use the bridge_num of the bridge that is being offloaded
> >> to select the FID. The same bridge_num/FID would be used by the hardware
> >> for lookup/learning on the given port.
> >
> > Yes and no. "FID" is a double-edged sword; it actually depends on the
> > hardware implementation. FID in general is a mechanism for FDB
> > partitioning. If the VID->FID translation is keyed only by VID, then the
> > only intended use case for that is to support shared VLAN learning,
> > where all VIDs use the same FID (=> the same database for addresses).
> > This isn't very interesting for us.
> > If the VID->FID translation is keyed by {port, VID}, it is then possible
> > to make VIDs on different ports (part of different bridges) use
> > different FIDs, and that is what is interesting.
> >
> >> If the above general statements are correct-ish, then my question here
> >> is: what should be the FID - or other equivalent unique identifier used
> >> by the hardware for FDB isolation - when the port is not offloading a
> >> bridge, but is standalone? If FDB isolation is implemented in hardware
> >> with something like FIDs, then do all standalone ports need to have a
> >> unique FID?
> >
> > Not necessarily, although as far as the DSA core is concerned, we treat
> > drivers supporting FDB isolation as though each port has its own database.
> > For example, the sja1105 driver really has unique VIDs (=> FIDs) per
> > standalone port, so if we didn't treat it that way, it wouldn't work.
> 
> I think I'm not quite grokking what it means for a port to have its own
> database. Above you corrected me by stating that this patchset does not
> install FDB entries "on standalone ports", but on the CPU ports. The way
> I parse this is: you are adding an FDB entry to the database of the
> CPU port.

No. I am adding an FDB entry _towards_ the CPU port.

Perhaps I need to clarify what I mean by a "database". It is a construct
used by DSA to denote "which subset of ports should match this entry".
The entries from the database of a port should only be matched by that
port when it operates as standalone. Not by other ports, not by bridged
ports.
The entries from the database of a bridge should only be matched by the
ports in that bridge. Not by ports in other bridges, not by standalone
ports.
The entries from the database of a LAG should only be matched by the
ports in that LAG.

The FDB entries added here belong to the database of a standalone port.
This is necessary because classifying the packet to an FID is done on
the ingress (=> standalone) port. But the destination is the CPU port.

All shared (DSA and CPU) ports serve multiple databases.

> But how would that help with the "line side to CPU side"
> direction? When a packet from the "line side" enters one of the
> (standalone) user ports, the switch would appeal to the forwarding
> database of that port, not the CPU port, no? So how does it help to
> install FDB entries on the CPU port(s)?

See above. This is exactly why we pass the struct dsa_db as argument to
.port_fdb_add(), and why you can't just deduce the database from the
provided "port" argument. The CPU port must serve the databases of
standalone ports.

> Basically for a switch with two ports, swp0 and swp1, with MAC addresses
> foo and bar respectively, we want the following FDB entries to be
> installed when both ports are in standalone mode:
> 
>     MAC | VID | PORT
>     ----+-----+-------------
>     foo |  0  | cpu_port_num
>     bar |  0  | cpu_port_num
> 
> Such that, when swp0 receives a packet with MAC DA 'foo', it knows to
> forward that packet to cpu_port_num, i.e. towards the CPU port, instead
> of flooding/dropping/trapping it. And ditto for swp1 and 'bar'. Right?
> 
> Now DSA assumes that each port has its own forwarding database, so let
> me annotate how I see that by adding another column "PORT DB":
> 
>     MAC | VID | PORT         | PORT DB
>     ----+-----+--------------+--------------
>     foo |  0  | cpu_port_num | swp0_port_num  [*]
>     bar |  0  | cpu_port_num | swp1_port_num  [**]
> 
> This codifies better that the entry for 'foo' should, ideally, only be
> used by the switch when it receives a packet from 'foo' on swp0, but not
> if it's received on swp1, etc. Internally this might be weakened to the
> same underlying FID, but as you already pointed out, this is acceptable.
> 
> Overall this seems logical and OK to me, but I can't reconcile it with
> your statement that the FDB entries in question are installed "on the
> CPU port". Wouldn't that look like this?
> 
>     MAC | VID | PORT         | PORT DB
>     ----+-----+--------------+-------------
>     foo |  0  | cpu_port_num | cpu_port_num
>     bar |  0  | cpu_port_num | cpu_port_num
> 
> ... which seems wrong.

See above. Basically, when I say "install a FDB entry on port X", I mean
"install it and make it point towards port X". Maybe I should be more
specific.

> I'm suspecting/hoping that my above misunderstanding is just a matter
> of terminology. For the sake of emphasis, allow me to describe in plain
> English the FDB entries [*] and [**] above:
> 
>  [*] add an FDB entry on swp0 where [the host/station? with] MAC address
>      'foo' is behind the CPU port on VID 0
> [**] add an FDB entry on swp1 where [the host/station? with] MAC address
>      'bar' is behind the CPU port on VID 0
> 
> Merely an attempt, and perhaps not idiomatic, but now you can tell me
> what's wrong in describing it that way :-)

Perhaps:

[*]  add an FDB entry to the switch, with MAC address 'foo', which can
     only be matched for packets ingressing swp0 in standalone mode
     (i.e. is in swp0's port database), whose destination is the CPU
     port

[**] same thing, just replace 'foo' with 'bar', and 'swp0' with 'swp1'
