Message-ID: <20250715080532.07883d74@fedora.home>
Date: Tue, 15 Jul 2025 08:05:32 +0200
From: Maxime Chevallier <maxime.chevallier@...tlin.com>
To: Andrew Lunn <andrew@...n.ch>
Cc: davem@...emloft.net, netdev@...r.kernel.org,
 linux-kernel@...r.kernel.org, thomas.petazzoni@...tlin.com, Jakub Kicinski
 <kuba@...nel.org>, Eric Dumazet <edumazet@...gle.com>, Paolo Abeni
 <pabeni@...hat.com>, Russell King <linux@...linux.org.uk>, Florian Fainelli
 <f.fainelli@...il.com>, Heiner Kallweit <hkallweit1@...il.com>, Vladimir
 Oltean <vladimir.oltean@....com>, Köry Maincent
 <kory.maincent@...tlin.com>, Oleksij Rempel <o.rempel@...gutronix.de>,
 Simon Horman <horms@...nel.org>, Shuah Khan <shuah@...nel.org>,
 linux-kselftest@...r.kernel.org
Subject: Re: [PATCH net-next v3 1/3] net: netdevsim: Add PHY support in
 netdevsim

On Sat, 12 Jul 2025 18:54:38 +0200
Andrew Lunn <andrew@...n.ch> wrote:

> > +static int nsim_mdio_read(struct mii_bus *bus, int phy_addr, int reg_num)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int nsim_mdio_write(struct mii_bus *bus, int phy_addr, int reg_num,
> > +			   u16 val)
> > +{
> > +	return 0;
> > +}  
> 
> If I'm reading the code correctly, each PHY has its own MDIO bus? And
> the PHY is always at address 0?

Yes indeed. 

> Maybe for address != 0, these should return -ENODEV?

That could be done, yes, but I don't think this will ever happen, as
this driver is only ever going to be used in netdevsim, which also
controls the PHY instantiation. I'm OK with adding the -ENODEV though :)
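
Something like this minimal sketch is what I have in mind (assuming,
as today, one PHY per bus, always at address 0):

static int nsim_mdio_read(struct mii_bus *bus, int phy_addr, int reg_num)
{
	/* Each netdevsim PHY gets its own bus, with the PHY at address 0 */
	if (phy_addr != 0)
		return -ENODEV;

	return 0;
}

(and the same check in nsim_mdio_write())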

For the record, the first draft implementation I had locally used a
single MDIO bus on which we could register up to 32 netdevsim PHYs, but
that wasn't enough. At some point Jakub pointed me to the case of
netlink DUMP requests that are too large to fit in a single netlink
message (the default threshold for that being messages larger than 4K),
so to test that with the phy_link_topology stuff, I had to add around
150 PHYs...

> I'm guessing the PHY core is going to perform reads/writes for things
> like EEE? And if the MAC driver has an IOCTL handler, it could also do
> reads/writes. So something is needed here, but I do wonder if a
> hard-coded 0 is going to work out OK? Have you looked at what accesses
> the core actually does?

I don't see that driver being useful outside of netdevsim, so at
least we have a good idea of what the MAC driver will do.

There's no ioctl, but we can be on the safe side and stub it a bit more.

From the tests I've been running, I didn't encounter any side effects,
but I only tested very simple cases (setting the link up, running
ethtool, etc.)

I've considered re-using swphy, for example, and using an emulated MDIO
bus for the netdevsim PHY, but my fear was that this would in the end
get too complex. Let's say we want to add a new netdevsim debugfs file
that allows us to control the speed reported by the PHY: if we want to
report, say, 10G speeds, we also need to emulate C45, etc.
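
To make the trade-off concrete, here is a rough sketch of what the
swphy route could look like (hypothetical: it assumes swphy_read_reg()
from drivers/net/phy/swphy.c and a fixed link state, and it only
covers C22 registers):

static int nsim_mdio_read(struct mii_bus *bus, int phy_addr, int reg_num)
{
	/* Report a fixed 1G full-duplex link through swphy's C22
	 * emulation. Reporting 10G would need C45 emulation, which
	 * swphy doesn't provide.
	 */
	static const struct fixed_phy_status state = {
		.link = 1,
		.speed = SPEED_1000,
		.duplex = DUPLEX_FULL,
	};

	if (phy_addr != 0)
		return -ENODEV;

	return swphy_read_reg(reg_num, &state);
}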

I guess at some point we will run into scenarios where phylib is going
to call some genphy helpers; it really depends on how deep we want to
go with this netdevsim PHY driver.

The use-cases that I saw were:
 - Being able to test the netlink stuff, now that we have the ability
   to know which PHY handles which command
 - In the future, writing some tests for the linkmode reporting (which
   I broke once with phy_caps)
 - When PHY muxes eventually get there, I'd also like to netdevsim
   that.

However, that's just my view of things; we could also consider having
this driver rely purely on MDIO emulation, where the MDIO registers
would behave exactly as specified in C22/C37/C45, but that's quite a
bit more complex :)
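
Just to sketch where that full emulation would start (hypothetical,
using the read_c45() callback of struct mii_bus; the register model
itself is the hard part):

static int nsim_mdio_read_c45(struct mii_bus *bus, int phy_addr,
			      int devad, int reg_num)
{
	if (phy_addr != 0)
		return -ENODEV;

	switch (devad) {
	case MDIO_MMD_PMAPMD:
		/* we would have to model the clause 45 PMA/PMD
		 * registers here, and similarly for the other MMDs
		 */
		return 0;
	default:
		return 0;
	}
}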

I'm open to any improvements or ideas on that driver :)

>      Andrew

Thanks for the review,

Maxime
