Message-Id: <20240218-lynx28g-infinite-loop-v1-1-59cc5cef8367@solid-run.com>
Date: Sun, 18 Feb 2024 15:09:10 +0100
From: Josua Mayer <josua@...id-run.com>
To: Ioana Ciornei <ioana.ciornei@....com>, Vinod Koul <vkoul@...nel.org>,
Kishon Vijay Abraham I <kishon@...nel.org>
Cc: Jon Nettleton <jon@...id-run.com>, Rabeeh Khoury <rabeeh@...id-run.com>,
Yazan Shhady <yazan.shhady@...id-run.com>, netdev@...r.kernel.org,
linux-phy@...ts.infradead.org, linux-kernel@...r.kernel.org,
Josua Mayer <josua@...id-run.com>
Subject: [PATCH] phy: lynx-28g: use read_poll_timeout when waiting for
halt/reset bits
The power-on and power-off functions set and then wait for self-clearing
reset and halt bits in the serdes registers.
Under certain operating conditions these bits may never clear.
Replace the infinite while-loops with read_poll_timeout to ensure that
power_on and power_off always return, even when the bits do not clear
within a full second.
On timeout, print an error message and return an error code.
In this situation the serdes lane is in an invalid state,
with no known recovery procedure.
This fixes an infinite loop / hang during boot observed on a
misconfigured lx2160a system where serdes #1 PLLS (161MHz) and PLLF
(100MHz) were mistakenly swapped.
After reset the PLLs did lock (to the wrong rates), but the serdes locked
up and never cleared either the lane-specific or global halt and reset bits.
Signed-off-by: Josua Mayer <josua@...id-run.com>
---
drivers/phy/freescale/phy-fsl-lynx-28g.c | 42 ++++++++++++++++++++++++--------
1 file changed, 32 insertions(+), 10 deletions(-)
diff --git a/drivers/phy/freescale/phy-fsl-lynx-28g.c b/drivers/phy/freescale/phy-fsl-lynx-28g.c
index e2187767ce00..5ae5b47619f8 100644
--- a/drivers/phy/freescale/phy-fsl-lynx-28g.c
+++ b/drivers/phy/freescale/phy-fsl-lynx-28g.c
@@ -335,6 +335,7 @@ static int lynx_28g_power_off(struct phy *phy)
{
struct lynx_28g_lane *lane = phy_get_drvdata(phy);
u32 trstctl, rrstctl;
+ int err;
if (!lane->powered_up)
return 0;
@@ -344,11 +345,21 @@ static int lynx_28g_power_off(struct phy *phy)
lynx_28g_lane_rmw(lane, LNaRRSTCTL, HLT_REQ, HLT_REQ);
/* Wait until the halting process is complete */
- do {
- trstctl = lynx_28g_lane_read(lane, LNaTRSTCTL);
- rrstctl = lynx_28g_lane_read(lane, LNaRRSTCTL);
- } while ((trstctl & LYNX_28G_LNaTRSTCTL_HLT_REQ) ||
- (rrstctl & LYNX_28G_LNaRRSTCTL_HLT_REQ));
+ err = read_poll_timeout(lynx_28g_lane_read, trstctl,
+ !(trstctl & LYNX_28G_LNaTRSTCTL_HLT_REQ),
+ 10000, 1000000, false, lane, LNaTRSTCTL);
+ if (err) {
+ dev_err(&phy->dev, "tx lane halt failed: %d\n", err);
+ return err;
+ }
+
+ err = read_poll_timeout(lynx_28g_lane_read, rrstctl,
+ !(rrstctl & LYNX_28G_LNaRRSTCTL_HLT_REQ),
+ 10000, 1000000, false, lane, LNaRRSTCTL);
+ if (err) {
+		dev_err(&phy->dev, "rx lane halt failed: %d\n", err);
+ return err;
+ }
lane->powered_up = false;
@@ -359,6 +370,7 @@ static int lynx_28g_power_on(struct phy *phy)
{
struct lynx_28g_lane *lane = phy_get_drvdata(phy);
u32 trstctl, rrstctl;
+ int err;
if (lane->powered_up)
return 0;
@@ -368,11 +380,21 @@ static int lynx_28g_power_on(struct phy *phy)
lynx_28g_lane_rmw(lane, LNaRRSTCTL, RST_REQ, RST_REQ);
/* Wait until the reset sequence is completed */
- do {
- trstctl = lynx_28g_lane_read(lane, LNaTRSTCTL);
- rrstctl = lynx_28g_lane_read(lane, LNaRRSTCTL);
- } while (!(trstctl & LYNX_28G_LNaTRSTCTL_RST_DONE) ||
- !(rrstctl & LYNX_28G_LNaRRSTCTL_RST_DONE));
+ err = read_poll_timeout(lynx_28g_lane_read, trstctl,
+ trstctl & LYNX_28G_LNaTRSTCTL_RST_DONE,
+ 10000, 1000000, false, lane, LNaTRSTCTL);
+ if (err) {
+ dev_err(&phy->dev, "lane tx reset failed: %d\n", err);
+ return err;
+ }
+
+ err = read_poll_timeout(lynx_28g_lane_read, rrstctl,
+ rrstctl & LYNX_28G_LNaRRSTCTL_RST_DONE,
+ 10000, 1000000, false, lane, LNaRRSTCTL);
+ if (err) {
+ dev_err(&phy->dev, "lane rx reset failed: %d\n", err);
+ return err;
+ }
lane->powered_up = true;
---
base-commit: 6613476e225e090cc9aad49be7fa504e290dd33d
change-id: 20240218-lynx28g-infinite-loop-5f090b4237c7
Best regards,
--
Josua Mayer <josua@...id-run.com>