Message-ID: <13ac391c-61f5-cb77-69a0-416b0390f50d@televic.com>
Date: Mon, 27 Jan 2020 12:29:14 +0100
From: Jürgen Lambrecht <j.lambrecht@...evic.com>
To: Andrew Lunn <andrew@...n.ch>
Cc: Horatiu Vultur <horatiu.vultur@...rochip.com>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
bridge@...ts.linux-foundation.org, jiri@...nulli.us,
ivecera@...hat.com, davem@...emloft.net, roopa@...ulusnetworks.com,
nikolay@...ulusnetworks.com, anirudh.venkataramanan@...el.com,
olteanv@...il.com, jeffrey.t.kirsher@...el.com,
UNGLinuxDriver@...rochip.com
Subject: Re: [RFC net-next v3 06/10] net: bridge: mrp: switchdev: Extend
switchdev API to offload MRP
On 1/26/20 4:59 PM, Andrew Lunn wrote:
> Given the design of the protocol, if the hardware decides the OS etc
> is dead, it should stop sending MRP_TEST frames and unblock the ports.
> If then becomes a 'dumb switch', and for a short time there will be a
> broadcast storm. Hopefully one of the other nodes will then take over
> the role and block a port.
In my experience a closed loop should never be allowed to happen: the resulting storm can crash software and cause other problems.
Another node should first take over before the ring ports are unblocked. (If that is possible - I only follow this discussion partially.)
What is your opinion?
(FYI:
I made that mistake once in a proof-of-concept ring design. During testing, when a "broken" Ethernet cable was "fixed", there was a loop for a short time, and afterwards that port of the (Marvell 88E6063) switch often ended up blocked. (The only way to unblock it was to bring the port down and up again, after which all the "lost" packets came out in a burst.)
That problem was caused by flow control (pause frames); disabling flow control fixed it, but flow control is on by default as far as I know.
)
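For reference, on Linux pause-frame flow control can usually be disabled per interface with ethtool. A sketch, assuming an interface name of eth0 (whether the NIC/switch port honours this depends on the driver):

```shell
# Show current pause-frame settings for eth0
ethtool -a eth0

# Disable pause-frame flow control in both directions,
# and stop autonegotiating it
ethtool -A eth0 autoneg off rx off tx off
```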
Kind regards,
Jürgen