Message-ID: <YdQ6x2Mz2lOJOQdp@shell.armlinux.org.uk>
Date: Tue, 4 Jan 2022 12:17:11 +0000
From: "Russell King (Oracle)" <linux@...linux.org.uk>
To: Corentin Labbe <clabbe.montjoie@...il.com>
Cc: linus.walleij@...aro.org, ulli.kroll@...glemail.com,
kuba@...nel.org, davem@...emloft.net, andrew@...n.ch,
hkallweit1@...il.com, linux-arm-kernel@...ts.infradead.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: net: phy: marvell: network working with generic PHY and not with
marvell PHY
On Tue, Jan 04, 2022 at 01:09:13PM +0100, Corentin Labbe wrote:
> On Tue, Jan 04, 2022 at 11:41:40AM +0000, Russell King (Oracle) wrote:
> > On Tue, Jan 04, 2022 at 12:33:15PM +0100, Corentin Labbe wrote:
> > > On Tue, Jan 04, 2022 at 11:14:46AM +0000, Russell King (Oracle) wrote:
> > > > On Tue, Jan 04, 2022 at 11:58:01AM +0100, Corentin Labbe wrote:
> > > > > Hello
> > > > >
> > > > > I have a gemini SSI 1328 box which has a Cortina ethernet MAC with a Marvell 88E1118 PHY, as reported by:
> > > > > Marvell 88E1118 gpio-0:01: attached PHY driver (mii_bus:phy_addr=gpio-0:01, irq=POLL)
> > > > > So booting with CONFIG_MARVELL_PHY=y leads to a non-working network with the link set at 1Gbit.
> > > > > Setting 'max-speed = <100>;' (the current state in the mainline dtb) leads to a working network.
> > > > > By "not working", I mean a kernel started with ip=dhcp cannot get an IP.
> > > >
> > > > How is the PHY connected to the host (which interface mode)? If it's
> > > > RGMII, it could be that the wrong RGMII interface mode is specified in
> > > > DT.
> > > >
> > >
> > > The PHY is set as RGMII in DT (arch/arm/boot/dts/gemini-ssi1328.dts)
> > > The only change to the mainline dtb is removing the max-speed.
> >
> > So, it's using "rgmii" with no delay configured at the PHY, and with
> > the speed limited to 100Mbps. You then remove the speed limitation and
> > it doesn't work at 1Gbps.
> >
> > I think I've seen this on other platforms (imx6 + ar8035) when the
> > RGMII delay is not correctly configured - it will work at slower
> > speeds but not 1G.
> >
> > The RGMII spec specifies that there will be a delay - and the delay can
> > be introduced by the MAC, by the PHY, or by the PCB track routing. It
> > sounds to me like your boot environment configures the PHY to introduce
> > the necessary delay, but then, because the DT "rgmii" mode means "no
> > delay at the PHY", the Marvell driver (which respects that) configures
> > the PHY for no delay, resulting in a non-working link at 1G.
> >
> > I would suggest checking how the boot environment configures the PHY,
> > and changing the "rgmii" mode in DT to match. There is a description of
> > the four RGMII modes in Documentation/networking/phy.rst that may help
> > you understand what each one means.
> >
>
> So if I understand correctly, the generic PHY driver does not touch the delays, and so the values set by the bootloader are kept.
Correct - the RGMII delays are not part of the standard 802.3 clause 22
register set, so the generic driver has no knowledge of how to change
them.
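The Marvell driver does know how to program them, and it derives the
delay settings purely from the phy-mode in DT. Purely as an illustration
of the semantics described in phy.rst - this is a made-up sketch, not
the actual marvell.c code; only the PHY_INTERFACE_MODE_* constants and
phydev->interface are real, the function and the rx/tx flags are
invented for the example:

#include <linux/phy.h>

/*
 * Sketch of how a PHY driver typically maps the DT "rgmii*" phy-mode
 * onto the delays it programs into the PHY.
 */
static void example_rgmii_delays(struct phy_device *phydev,
				 bool *rx_delay, bool *tx_delay)
{
	switch (phydev->interface) {
	case PHY_INTERFACE_MODE_RGMII:		/* "rgmii": PHY adds no delay */
		*rx_delay = false;
		*tx_delay = false;
		break;
	case PHY_INTERFACE_MODE_RGMII_ID:	/* "rgmii-id": PHY adds RX and TX delay */
		*rx_delay = true;
		*tx_delay = true;
		break;
	case PHY_INTERFACE_MODE_RGMII_RXID:	/* "rgmii-rxid": PHY adds RX delay only */
		*rx_delay = true;
		*tx_delay = false;
		break;
	case PHY_INTERFACE_MODE_RGMII_TXID:	/* "rgmii-txid": PHY adds TX delay only */
		*rx_delay = false;
		*tx_delay = true;
		break;
	default:
		/* non-RGMII modes: the delays are not touched */
		break;
	}
}

So with plain "rgmii" in the gemini-ssi1328 dts, the Marvell driver will
clear whatever delay the boot firmware may have set up; if the board
relies on the PHY adding the delays, "rgmii-id" is probably what the dts
should say.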
> The boot environment gives no clue about how the PHY is set.
> The only debug output shown is:
> PHY 0 Addr 1 Vendor ID: 0x01410e11
> mii_write: phy_addr=0x1 reg_addr=0x4 value=0x5e1
> mii_write: phy_addr=0x1 reg_addr=0x9 value=0x300
> mii_write: phy_addr=0x1 reg_addr=0x0 value=0x1200
> mii_write: phy_addr=0x1 reg_addr=0x0 value=0x9200
> mii_write: phy_addr=0x1 reg_addr=0x0 value=0x1200
Hmm, it doesn't. The first two register writes just set the advertisement
(the 10/100 and then the 1000BASE-T abilities), and the last three are
just the PHY soft reset with autoneg enabled - none of them touch the
RGMII delay configuration.
> Is it possible to dump the PHY registers when using the generic PHY and
> find the delay values? For example with ethtool -d eth0?
Even if that were possible, Marvell PHYs use a paged scheme to access
configuration registers, so merely reading the 32 registers would
probably not help. However, see my follow-up to my previous reply for
some further thoughts.
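
If you do want to poke at the paged registers by hand, it can be done
from userspace with the MII ioctls, provided the MAC driver implements
them. A rough, untested sketch - it assumes "eth0", that register 22 is
the page select (as on most Marvell PHYs), and that register 21 on page
2 is the MAC-specific control register where the delay bits live (please
check the 88E1118 datasheet rather than trust my memory):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/sockios.h>
#include <linux/mii.h>

int main(void)
{
	struct ifreq ifr;
	struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&ifr.ifr_data;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

	/* Ask the driver for the PHY address; fills in mii->phy_id */
	if (ioctl(fd, SIOCGMIIPHY, &ifr) < 0) {
		perror("SIOCGMIIPHY");
		return 1;
	}

	/* Select page 2 via the page-select register (needs CAP_NET_ADMIN) */
	mii->reg_num = 22;
	mii->val_in = 2;
	if (ioctl(fd, SIOCSMIIREG, &ifr) < 0) {
		perror("SIOCSMIIREG");
		return 1;
	}

	/* Read register 21 on the selected page */
	mii->reg_num = 21;
	if (ioctl(fd, SIOCGMIIREG, &ifr) == 0)
		printf("page 2, reg 21 = 0x%04x\n", mii->val_out);

	/* Switch back to page 0 so the PHY is left as we found it */
	mii->reg_num = 22;
	mii->val_in = 0;
	ioctl(fd, SIOCSMIIREG, &ifr);

	close(fd);
	return 0;
}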
--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!