Message-ID: <20190728191558.zuopgfqza2iz5d5b@lx-anielsen.microsemi.net>
Date: Sun, 28 Jul 2019 21:15:59 +0200
From: "Allan W. Nielsen" <allan.nielsen@...rochip.com>
To: Andrew Lunn <andrew@...n.ch>
CC: Horatiu Vultur <horatiu.vultur@...rochip.com>,
Nikolay Aleksandrov <nikolay@...ulusnetworks.com>,
<roopa@...ulusnetworks.com>, <davem@...emloft.net>,
<bridge@...ts.linux-foundation.org>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] net: bridge: Allow bridge to joing multicast groups
The 07/27/2019 05:02, Andrew Lunn wrote:
> > As you properly guessed, this model is quite different from what we are used to.
>
> Yes, it takes a while to get the idea that the hardware is just an
> accelerator for what the Linux stack can already do. And if the switch
> cannot do some feature, pass the frame to Linux so it can handle it.
This is understood, and not that different from what we are used to.
The surprise was that all multicast traffic has to go to the CPU.
> You need to keep in mind that there could be other ports in the bridge
> than switch ports, and those ports might be interested in the
> multicast traffic. Hence the CPU needs to see the traffic.
This is a good argument, but I was under the impression that not all HW/drivers
support foreign interfaces (see ocelot_netdevice_dev_check and
mlxsw_sp_port_dev_check).
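(For readers not familiar with those checks, they are small helpers along the
lines of the sketch below; the symbol names are made up, but the pattern is how
ocelot and mlxsw decide whether a netdev is one of their own ports or a foreign
interface.)

  #include <linux/netdevice.h>

  /* Stand-in for the driver's real net_device_ops (made-up symbol). */
  extern const struct net_device_ops example_port_netdev_ops;

  /* True only for the driver's own ports; switchdev/bridge events for
   * foreign interfaces are typically ignored when this returns false. */
  static bool example_port_dev_check(const struct net_device *dev)
  {
          return dev->netdev_ops == &example_port_netdev_ops;
  }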
> But IGMP snooping can be used to optimise this.
Yes, IGMP snooping can limit the flooding of IP multicast traffic, but not of
non-IP L2 multicast traffic.
We could really use something similar for non-IP multicast MAC addresses.
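To make that a bit more concrete: the existing SWITCHDEV_OBJ_ID_PORT_MDB object
already carries a MAC address and a vid, so in principle it could describe a
non-IP group as well. A rough sketch only (the helper and the values are made
up, and this is not a proposal for the actual API; the MAC is the DLR beacon
address from the use-case below):

  #include <linux/etherdevice.h>
  #include <net/switchdev.h>

  /* DLR beacon group MAC, see the use-case below. */
  static const unsigned char dlr_beacon_mac[ETH_ALEN] = {
          0x01, 0x21, 0x6c, 0x00, 0x00, 0x01
  };

  /* Made-up helper: fill the same object the bridge already uses for
   * snooped IP groups, but with a non-IP L2 group address. */
  static void example_fill_l2_mdb(struct switchdev_obj_port_mdb *mdb)
  {
          mdb->obj.id = SWITCHDEV_OBJ_ID_PORT_MDB;
          ether_addr_copy(mdb->addr, dlr_beacon_mac);
          mdb->vid = 1;
  }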
Trying to get back to the original problem:
We have a network which implements the ODVA/DLR ring protocol. This protocol
sends out a beacon frame as often as every 3 us (as far as I recall; the
default, I believe, is 400 us) to this MAC address: 01:21:6C:00:00:01.
Take a quick look at slide 10 in [1].
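For a sense of scale (my arithmetic, based on the numbers above):

  1 frame / 3 us   ~= 333,000 frames/s
  1 frame / 400 us ~=   2,500 frames/s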
If we assume that the SwitchDev driver is implemented such that all multicast
traffic goes to the CPU, then we should really have a way to install a HW
offload path in the silicon, such that these packets do not go to the CPU (as
they are known not to be useful, and a frame every 3 us is a significant load
on small DMA connections and CPU resources).
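Just to illustrate what I mean by an offload path, a purely hypothetical
driver-level sketch (every symbol below is made up; the only point is that the
CPU port is left out of the destination mask):

  #include <linux/bits.h>
  #include <linux/if_ether.h>
  #include <linux/types.h>

  struct example_switch {
          int ring_port_a;
          int ring_port_b;
          /* ... */
  };

  /* Made-up MAC table write: mac/vid -> destination port mask. */
  int example_mact_write(struct example_switch *sw, const unsigned char *mac,
                         u16 vid, unsigned long dst_mask);

  static int example_install_dlr_beacon_entry(struct example_switch *sw)
  {
          const unsigned char mac[ETH_ALEN] = {
                  0x01, 0x21, 0x6c, 0x00, 0x00, 0x01
          };
          unsigned long dst_mask = BIT(sw->ring_port_a) |
                                   BIT(sw->ring_port_b);

          /* The CPU port is deliberately not part of dst_mask, so the
           * beacons are forwarded around the ring in hardware only. */
          return example_mact_write(sw, mac, 1, dst_mask);
  }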
If we assume that the SwitchDev driver is implemented such that only "needed"
multicast packets go to the CPU, then we need a way to get these packets in
case we want to implement the DLR protocol.
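In that second model, a user-space DLR implementation would presumably do
something like the following on the switch port (a minimal sketch, no error
handling, binding to the interface omitted; whether the underlying switchdev
driver then actually gets the group to the CPU port is exactly the open
question):

  #include <arpa/inet.h>          /* htons */
  #include <linux/if_ether.h>     /* ETH_P_ALL, ETH_ALEN */
  #include <linux/if_packet.h>    /* packet_mreq, PACKET_ADD_MEMBERSHIP */
  #include <net/if.h>             /* if_nametoindex */
  #include <string.h>
  #include <sys/socket.h>

  int open_dlr_socket(const char *ifname)
  {
          const unsigned char dlr_beacon_mac[ETH_ALEN] = {
                  0x01, 0x21, 0x6c, 0x00, 0x00, 0x01
          };
          struct packet_mreq mreq = { 0 };
          int fd;

          fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

          /* Ask the kernel to accept this multicast MAC on the port; on a
           * switchdev port this is where the driver would have to make
           * sure the frames are actually copied to the CPU. */
          mreq.mr_ifindex = if_nametoindex(ifname);
          mreq.mr_type    = PACKET_MR_MULTICAST;
          mreq.mr_alen    = ETH_ALEN;
          memcpy(mreq.mr_address, dlr_beacon_mac, ETH_ALEN);
          setsockopt(fd, SOL_PACKET, PACKET_ADD_MEMBERSHIP,
                     &mreq, sizeof(mreq));

          return fd;
  }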
I'm sure that both models can work, and I do not think that this is the main
issue here.
Our initial attempt was to allow installing static L2 MAC entries and appending
multiple ports to such an entry in the MAC table. This was rejected, for what
seem to be several good reasons. But I'm not sure it was clear what we wanted
to achieve, and why we find it important. Hopefully that is clearer with a
real-world use-case.
Any hints or ideas on a better way to solve this problem would be much
appreciated.
/Allan
[1] https://www.odva.org/Portals/0/Library/Conference/2017-ODVA-Conference_Woods_High%20Availability_Guidelines%20for%20Use%20of%20DLR%20in%20EtherNetIP%20Networks_FINAL%20PPT.pdf