Date:   Fri, 18 Jan 2019 11:41:08 +0000
From:   Ido Schimmel <idosch@...lanox.com>
To:     Florian Fainelli <f.fainelli@...il.com>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "andrew@...n.ch" <andrew@...n.ch>,
        "vivien.didelot@...il.com" <vivien.didelot@...il.com>,
        "davem@...emloft.net" <davem@...emloft.net>,
        Jiri Pirko <jiri@...lanox.com>,
        "ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
        "ivan.khoronzhuk@...aro.org" <ivan.khoronzhuk@...aro.org>,
        "roopa@...ulusnetworks.com" <roopa@...ulusnetworks.com>,
        "nikolay@...ulusnetworks.com" <nikolay@...ulusnetworks.com>
Subject: Re: [PATCH net-next 09/14] net: bridge: Propagate MC addresses with
 VID through switchdev

On Thu, Jan 17, 2019 at 11:17:57AM -0800, Florian Fainelli wrote:
> On 1/17/19 6:05 AM, Ido Schimmel wrote:
> > On Wed, Jan 16, 2019 at 12:00:57PM -0800, Florian Fainelli wrote:
> >> In order for bridge port members to get a chance to implement unicast
> >> and multicast address filtering correctly, which matters for e.g.
> >> switch network devices, synchronize the UC and MC lists down to the
> >> individual bridge port members using switchdev HOST_MDB objects. This
> >> does not impact drivers that already have a ndo_set_rx_mode()
> >> operation, which likely operate in promiscuous mode already.
> >>
> >> When the bridge has multicast snooping enabled, proper HOST_MDB
> >> notifications will be sent through br_mdb_notify() already.
> > 
> > I don't understand the change. HOST_MDB is used to notify underlying
> > drivers about MDB entries that should be configured to locally receive
> > packets. This is triggered by the transmission of an IGMP report through
> > the bridge, for example.
> > 
> > It seems that you're trying to overload HOST_MDB with multicast address
> > filtering on bridge ports?
> 
> I don't really think this is an abuse of HOST_MDB: if the bridge has
> multicast_snooping enabled and there is e.g. a multicast application
> bound to the bridge master device, we would already get those
> notifications through HOST_MDB. This is the same use case that I am
> addressing here: ndo_set_rx_mode() learns about local multicast
> addresses that should be programmed, which means a multicast
> application is listening on the bridge master device itself.
> 
> The problem that I want to solve is that with Broadcom b53/bcm_sf2
> switches, we cannot easily filter/flood multicast for the CPU/management
> port.
> 
> We have per-port controls for MC/IPMC flooding, and we also have a
> separate control for the CPU/management port receiving multicast. If
> either of these two bits/settings is configured, then the CPU port will
> always receive multicast, even when we should be filtering it in HW.
> The only way to perform selective reception of multicast on the CPU
> port is to program a corresponding MDB entry.
> 
> > Why are you performing this filtering?
> 
> If I do not filter, then non-bridged ports to which no multicast
> application is bound would be passing up multicast traffic all the way
> to the CPU port, where it then has to be dropped in software. This is
> not acceptable IMHO, because it is a deviation from how a standalone
> NIC supporting multicast filtering would operate.
> 
> > Shouldn't you allow all MAC addresses to ingress?
> 
> I do allow all MC addresses to ingress on the front-panel switch ports
> (while honoring the multicast_snooping setting), but we have no control
> over what the CPU/management port should be doing.
> 
> As I wrote earlier, if we flood to the CPU/management port because at
> least one switch device port is in the bridge and that bridge has
> multicast_snooping disabled, then this could break filtering for
> other, non-bridged ports. That is really not acceptable IMHO.
> 
> The reasons why I chose the switchdev HOST_MDB notification here are twofold:
> 
> - this is the same use case as with multicast_snooping=1, where we
> target the CPU port within DSA to resolve that use case, so from the
> switch driver's perspective there is no difference in the context
> 
> - this does not impact network device drivers that have an
> ndo_set_rx_mode() and somehow decide to support things through that
> API, since those would typically have a switchdev_port_attr_set()
> callback

HOST_MDB was added for a very specific use case: to allow the bridge
driver to notify underlying switch drivers about MDB entries that should
be programmed to locally receive packets when multicast is enabled.
Andrew described it very nicely in merge commit
5d37636abd15ace8686a54167b488364ee79e88d.

Ingress filtering is something completely different and not applicable
to bridged ports that should allow every address to ingress.

switchdev allows offloading the bridge datapath to capable devices, but
you're abusing it to allow non-bridged ports to perform address
filtering. That is completely unrelated.

Therefore, it seems completely inappropriate to me to use HOST_MDB for
this purpose. This applies to patch #10 as well.

It really sounds like the HW you're working with is not designed to work
in this mixed state where some ports are bridged and some are expected to
act as standalone NICs.

If you're still determined to support this use case, I suggest the
following. In your driver, program the bridge's address list as MDB
entries when the first port is enslaved to it. Then, add a new netdev
event whenever an address is added / removed from this list (in
__dev_set_rx_mode() ?). Have your driver listen to it and program MDB
entries accordingly.
