Message-ID: <76b04a20-2e63-4ebf-841c-303371883094@lunn.ch>
Date: Wed, 19 Nov 2025 19:30:14 +0100
From: Andrew Lunn <andrew@...n.ch>
To: Alexander Duyck <alexander.duyck@...il.com>
Cc: Lee Trager <lee@...ger.us>,
Maxime Chevallier <maxime.chevallier@...tlin.com>,
Susheela Doddagoudar <susheelavin@...il.com>,
netdev@...r.kernel.org, mkubecek@...e.cz,
Hariprasad Kelam <hkelam@...vell.com>,
Alexander Duyck <alexanderduyck@...com>
Subject: Re: Ethtool: advance phy debug support

> I think part of the issue is that the distinction between a PMA/PMD
> and a PHY gets blurred, because a phylib driver will automatically
> bind to said PMA/PMD.

You have some control over that. When you create the MDIO bus, you can
set mii_bus->phy_mask to make it ignore addresses on the bus. Or,
since you don't have a real MDIO bus, set the ID registers to 0xffff
and phylib will decide there is no device there.
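
Totally untested sketch of both options, all the example_* names made
up. The read stub returning 0xffff in the ID registers is what makes
phylib conclude there is no device:

#include <linux/device.h>
#include <linux/phy.h>

/* Option 2: stub accessors for a bus with no real MDIO wires.
 * 0xffff in MII_PHYSID1/2 means "nothing here" to phylib.
 */
static int example_mdio_read(struct mii_bus *bus, int addr, int regnum)
{
        if (regnum == MII_PHYSID1 || regnum == MII_PHYSID2)
                return 0xffff;

        return 0;
}

static int example_mdio_write(struct mii_bus *bus, int addr, int regnum,
                              u16 val)
{
        return 0;
}

static int example_mdio_init(struct device *dev)
{
        struct mii_bus *bus;

        bus = devm_mdiobus_alloc(dev);
        if (!bus)
                return -ENOMEM;

        bus->name = "example-mdio";
        snprintf(bus->id, MII_BUS_ID_SIZE, "example-%s", dev_name(dev));
        bus->parent = dev;
        bus->read = example_mdio_read;
        bus->write = example_mdio_write;

        /* Option 1: bit n set means address n is never probed for a
         * PHY. Here everything except address 0 is ignored.
         */
        bus->phy_mask = ~BIT(0);

        return devm_mdiobus_register(dev, bus);
}
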
If you were using device tree, you could also bind an mdio_device to
it, rather than a phy_device. This is how we handle Ethernet switches
on MDIO busses.
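
Without DT you can do much the same by hand: mask the address off in
phy_mask, then explicitly put an mdio_device there, which is roughly
what of_mdiobus_register() does for switch nodes. Untested, the
address is made up:

#include <linux/mdio.h>

/* Sketch: claim address 4 with a plain mdio_device, so no
 * phy_device ever gets bound there. Assumes the bus was
 * registered with BIT(4) set in phy_mask.
 */
static int example_claim_addr(struct mii_bus *bus)
{
        struct mdio_device *mdiodev;

        mdiodev = mdio_device_create(bus, 4);
        if (IS_ERR(mdiodev))
                return PTR_ERR(mdiodev);

        return mdio_device_register(mdiodev);
}
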
> A good metaphor for something like this would be taking a car for a
> test drive versus balancing the tires. In the case of the PRBS test we
> may want to take the individual lanes and test them one at a time and
> at various frequencies to deal with potential crosstalk and such. Only
> once we have verified everything is good there would we want to take
> the combination of lanes, add FEC and a PCS, and try sending encoded
> traffic over it. That said, maybe I am arguing the generic phy version
> of this testing versus the Ethernet phy version of it.

I think you need to decide what your real use cases are.

> True. In our case we have both PCS capability for PRBS and generic phy
> capability for that. Being able to control those at either level would
> be useful. In my mind I was thinking it might be best for us to go
> after PCS first in the case of fbnic, because the PMD is managed by
> the firmware.

And hopefully the PCS code is a lot more reusable since it should work
for any PCS conforming to 802.3, once you have straightened out the
odd register mapping your hardware has.
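
For example, a pcs_get_state() for a conforming PCS only needs the
standard registers. Rough sketch, untested; the example_pcs structure
and the bus/addr plumbing are made up, and in your case the read would
go through whatever fixes up the register mapping:

#include <linux/mdio.h>
#include <linux/phylink.h>

struct example_pcs {
        struct phylink_pcs pcs;
        struct mii_bus *bus;    /* made-up access path */
        int addr;
};

static void example_pcs_get_state(struct phylink_pcs *pcs,
                                  unsigned int neg_mode,
                                  struct phylink_link_state *state)
{
        struct example_pcs *epcs = container_of(pcs, struct example_pcs,
                                                pcs);
        int stat;

        /* MDIO_STAT1 and its link bit come straight from 802.3
         * Clause 45, so nothing here is device specific.
         */
        stat = mdiobus_c45_read(epcs->bus, epcs->addr, MDIO_MMD_PCS,
                                MDIO_STAT1);
        if (stat < 0) {
                state->link = false;
                return;
        }

        state->link = !!(stat & MDIO_STAT1_LSTATUS);
}

static const struct phylink_pcs_ops example_pcs_ops = {
        .pcs_get_state  = example_pcs_get_state,
        /* .pcs_config, .pcs_an_restart, etc. elided */
};
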
> Really this gets at the more fundamental problem. We still don't have
> a good way to break out all the components within the link setup.
> Things like lanes are still an abstract concept in the network setup
> and aren't really represented at all in the phylink/phylib code. Part
> of the reason for me breaking out the generic PHY as a PMD in fbnic
> was that we needed a way to somehow include the training state for it
> in the total link state.
>
> I suspect to some extent we would need to look at something similar
> for all the PRBS testing and such, to provide a way for the PCS, FEC,
> etc. to all play with a generic phy in the setup and have it make
> sense as a network device.

I think we probably do need to represent the lanes somehow. But there
are lots of open questions. Do we have one phylink_pcs per lane? Or
one phylink_pcs which can handle multiple lanes, all being configured
the same? What also needs to be considered here is splitting: do you
create two netdev instances each with two lanes, or four netdev
instances each with one lane? That is probably easier when there is a
phylink_pcs per lane, or at least some structure which represents a
lane within a PCS.
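
Purely a strawman, nothing like this exists today:

#include <linux/phylink.h>

/* Hypothetical: a lane object hanging off the PCS, so a 4 lane
 * PCS could be handed out as 4 x 1 lane or 2 x 2 lane netdevs by
 * assigning subsets of lanes.
 */
struct pcs_lane {
        struct phylink_pcs *parent;     /* owning PCS */
        u8 index;                       /* lane number within the PCS */
        bool prbs_enabled;              /* per-lane test state */
};

struct example_multi_pcs {
        struct phylink_pcs pcs;
        struct pcs_lane lane[4];
        unsigned long lane_mask;        /* lanes owned by this netdev */
};
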
Andrew