Message-ID: <20200125163504.GF18311@lunn.ch>
Date: Sat, 25 Jan 2020 17:35:04 +0100
From: Andrew Lunn <andrew@...n.ch>
To: Horatiu Vultur <horatiu.vultur@...rochip.com>
Cc: linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
bridge@...ts.linux-foundation.org, jiri@...nulli.us,
ivecera@...hat.com, davem@...emloft.net, roopa@...ulusnetworks.com,
nikolay@...ulusnetworks.com, anirudh.venkataramanan@...el.com,
olteanv@...il.com, jeffrey.t.kirsher@...el.com,
UNGLinuxDriver@...rochip.com
Subject: Re: [RFC net-next v3 06/10] net: bridge: mrp: switchdev: Extend
switchdev API to offload MRP
> SWITCHDEV_OBJ_ID_RING_TEST_MRP: This is used to start/stop sending
> MRP_Test frames on the MRP ring ports. This is called only on nodes
> that have the Media Redundancy Manager role.
How do you handle the 'headless chicken' scenario? User space tells
the port to start sending MRP_Test frames. It then dies. The hardware
continues sending these frames, and the neighbours think everything
is OK, but in reality the state machine is dead, and when the ring
breaks, the daemon is not there to fix it.
And it is not just the daemon that could die. The kernel could oops or
deadlock, etc.
For a robust design, it seems like SWITCHDEV_OBJ_ID_RING_TEST_MRP
should mean: start sending MRP_Test frames for the next X seconds, and
then stop. The daemon then repeats the request every X-1 seconds, so
the hardware stops advertising a healthy manager soon after the state
machine goes away.
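
A minimal sketch of how that watchdog-style semantic could look on the
driver side. The field and function names below (the 'lifetime' member,
my_mrp_ring, my_hw_start_mrp_test, my_hw_stop_mrp_test, the obj_add
handler) are hypothetical, not the API proposed in this series:

#include <linux/timer.h>
#include <linux/jiffies.h>
#include <net/switchdev.h>

/* Hypothetical per-ring driver state. test_timer would be set up at
 * ring creation with timer_setup(&ring->test_timer,
 * my_mrp_test_watchdog, 0).
 */
struct my_mrp_ring {
	struct timer_list test_timer;
	/* hardware handle, ring ports, etc. */
};

/* Hypothetical shape of the object: "generate MRP_Test frames every
 * 'interval' ms for the next 'lifetime' seconds, then stop unless
 * the request is refreshed".
 */
struct switchdev_obj_ring_test_mrp {
	struct switchdev_obj obj;
	u32 ring_id;
	u32 interval;	/* gap between MRP_Test frames, in ms */
	u32 lifetime;	/* watchdog: stop after this many seconds */
};

/* Hypothetical hardware hooks. */
void my_hw_start_mrp_test(struct my_mrp_ring *ring, u32 interval_ms);
void my_hw_stop_mrp_test(struct my_mrp_ring *ring);

static void my_mrp_test_watchdog(struct timer_list *t)
{
	struct my_mrp_ring *ring = from_timer(ring, t, test_timer);

	/* No refresh arrived in time: the daemon (or the kernel) is
	 * presumed dead, so stop telling the neighbours that the
	 * manager is healthy.
	 */
	my_hw_stop_mrp_test(ring);
}

static int my_port_obj_add_ring_test(struct my_mrp_ring *ring,
				     const struct switchdev_obj_ring_test_mrp *test)
{
	my_hw_start_mrp_test(ring, test->interval);

	/* (Re)arm the watchdog. User space is expected to repeat the
	 * request every lifetime - 1 seconds to keep it from firing.
	 */
	mod_timer(&ring->test_timer, jiffies + test->lifetime * HZ);
	return 0;
}

The user space side stays simple: the daemon just re-issues the same
object before the previous lifetime expires, so either a daemon crash
or a kernel oops eventually stops the MRP_Test stream and the ring
nodes can react.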
Andrew