Message-ID: <Z77myuNCoe_la7e4@shell.armlinux.org.uk>
Date: Wed, 26 Feb 2025 10:02:50 +0000
From: "Russell King (Oracle)" <linux@...linux.org.uk>
To: Jon Hunter <jonathanh@...dia.com>
Cc: Andrew Lunn <andrew@...n.ch>, Heiner Kallweit <hkallweit1@...il.com>,
Alexandre Torgue <alexandre.torgue@...s.st.com>,
Andrew Lunn <andrew+netdev@...n.ch>,
Bryan Whitehead <bryan.whitehead@...rochip.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
linux-arm-kernel@...ts.infradead.org,
linux-stm32@...md-mailman.stormreply.com,
Marcin Wojtas <marcin.s.wojtas@...il.com>,
Maxime Coquelin <mcoquelin.stm32@...il.com>, netdev@...r.kernel.org,
Paolo Abeni <pabeni@...hat.com>, UNGLinuxDriver@...rochip.com,
"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>
Subject: Re: [PATCH net-next 9/9] net: stmmac: convert to phylink managed EEE
support
On Tue, Feb 25, 2025 at 02:21:01PM +0000, Jon Hunter wrote:
> Hi Russell,
>
> On 19/02/2025 20:57, Russell King (Oracle) wrote:
> > So, let's try something (I haven't tested this, and it's likely you
> > will need to work it into your other change.)
> >
> > Essentially, this disables the receive clock stop around the reset,
> > something the stmmac driver has never done in the past.
> >
> > diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> > index 1cbea627b216..8e975863a2e3 100644
> > --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> > +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> > @@ -7926,6 +7926,8 @@ int stmmac_resume(struct device *dev)
> >  	rtnl_lock();
> >  	mutex_lock(&priv->lock);
> > 
> > +	phy_eee_rx_clock_stop(priv->dev->phydev, false);
> > +
> >  	stmmac_reset_queues_param(priv);
> >  	stmmac_free_tx_skbufs(priv);
> > 
> > @@ -7937,6 +7939,9 @@ int stmmac_resume(struct device *dev)
> >  	stmmac_restore_hw_vlan_rx_fltr(priv, ndev, priv->hw);
> > 
> > +	phy_eee_rx_clock_stop(priv->dev->phydev,
> > +			      priv->phylink_config.eee_rx_clk_stop_enable);
> > +
> >  	stmmac_enable_all_queues(priv);
> >  	stmmac_enable_all_dma_irq(priv);
>
>
> Sorry for the delay, I have been testing various issues recently and needed
> a bit more time to test this.
>
> It turns out that what I had proposed last week does not work. I believe
> that with all the various debug/instrumentation I had added, I was again
> getting lucky. So when I tested again this week on top of vanilla v6.14-rc2,
> it did not work :-(
>
> However, what you are suggesting above, all by itself, is working. I have
> tested this on top of vanilla v6.14-rc2 and v6.14-rc4 and it is working
> reliably. I have also tested on some other boards that use the same stmmac
> driver (but use the Aquantia PHY) and I have not seen any issues. So this
> does fix the issue I am seeing.
>
> I know we are getting quite late in the rc cycle for v6.14, but I'm not
> sure if we could add this as a fix?
The patch above was something of a hack, bypassing the layering, so I
would like to consider how this should be done properly.
I'm still wondering whether the early call to phylink_resume() is
symptomatic of this same issue, or whether there is some PHY we don't
know about that needs phy_start() to be called before it will output
its clock even with the link down.
The phylink_resume() call is relevant to this because I'd like to put:
	phy_eee_rx_clock_stop(priv->dev->phydev,
			      priv->phylink_config.eee_rx_clk_stop_enable);
in there to ensure that the PHY is correctly configured for clock-stop,
but given stmmac's placement that wouldn't work.
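Roughly, the change I have in mind for phylink_resume() would look
something like the below. This is a completely untested sketch, and the
phylink field names (pl->phydev, pl->config->eee_rx_clk_stop_enable) are
from memory, so may not be exact:

void phylink_resume(struct phylink *pl)
{
	ASSERT_RTNL();

	/* Re-apply the MAC's clock-stop capability to the PHY so the
	 * PHY is correctly configured before the link comes back up.
	 */
	if (pl->phydev)
		phy_eee_rx_clock_stop(pl->phydev,
				      pl->config->eee_rx_clk_stop_enable);

	/* ... existing resume handling ... */
}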
I'm then thinking of phylink_pre_resume() to disable the EEE clock-stop
at the PHY.
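Something along these lines is what I'm imagining. Again, this is only a
sketch and phylink_pre_resume() does not exist today; the idea is that the
MAC driver would call it from its resume path before resetting the
hardware:

/* Hypothetical new phylink helper; nothing like this exists yet.
 * The MAC driver would call it early in its resume path so that the
 * PHY keeps its receive clock running while the MAC is reset.
 */
void phylink_pre_resume(struct phylink *pl)
{
	ASSERT_RTNL();

	if (pl->phydev)
		phy_eee_rx_clock_stop(pl->phydev, false);
}
EXPORT_SYMBOL_GPL(phylink_pre_resume);

phylink_resume() would then re-enable clock-stop (where the MAC supports
it) once the MAC has been brought back up.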
I think the only thing we could do is try solving this problem as per
above and see what the fall-out from it is. I don't get the impression
that stmmac users are particularly active at testing patches though, so
it may take months to get breakage reports.
--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!