Message-Id: <C42S3ZZ1E08V.CJ83S0R5R8CY@wkz-x280>
Date: Fri, 10 Jul 2020 09:51:44 +0200
From: "Tobias Waldekranz" <tobias@...dekranz.com>
To: "Vladimir Oltean" <olteanv@...il.com>
Cc: <netdev@...r.kernel.org>, <andrew@...n.ch>, <f.fainelli@...il.com>,
<hkallweit1@...il.com>
Subject: Re: MDIO Debug Interface
On Fri Jul 10, 2020 at 3:18 AM CEST, Vladimir Oltean wrote:
> I will let the PHY library maintainers comment about implementation
> design choices made by mdio-netlink. However, I want to add a big "+1"
> from my side for identifying the correct issues in the existing PHY
> ioctls and doing something about it. I think the mainline kernel needs
> this.
>
> Please be aware that, if your mdio-netlink module, or something
> equivalent to it, lands in mainline, QEMU/KVM is going to be one of its
> users (for virtualizing an MDIO bus). So this is going to be more than
> just for debugging.
>
> And, while we're at it: context switches from a VM to a host are
> expensive. And the PHY library polls around 5 MDIO registers per PHY
> every second. It would be nice if your mdio-netlink module had some sort
> of concept of "poll offload": just do the polling in the kernel side and
> notify the user space only of a change.

The current flow is:
1. User: Send program to kernel.
2. Kernel: Verify program.
3. Kernel: Lock bus.
4. Kernel: Execute program.
5. Kernel: Unlock bus.
6. User: Read back status, including the output buffer.
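Roughly, steps (3)-(5) boil down to something like this on the kernel
side. This is only a sketch to show the shape of it; the instruction
format (struct mdio_prog_insn, enum mdio_prog_op) and the function
names are made up for illustration, not the actual mdio-netlink code:

/*
 * Illustrative only. The real interesting parts are the bus lock
 * around the whole program and the timeout inside the loop.
 */
#include <linux/jiffies.h>
#include <linux/mdio.h>
#include <linux/phy.h>

enum mdio_prog_op {
        MDIO_PROG_READ,
        MDIO_PROG_WRITE,
};

struct mdio_prog_insn {
        enum mdio_prog_op op;
        u8  addr;       /* PHY address */
        u8  reg;        /* register number */
        u16 val;        /* value to write (WRITE only) */
};

static int mdio_prog_exec(struct mii_bus *bus,
                          const struct mdio_prog_insn *prog, int len,
                          u16 *out)
{
        unsigned long timeout = jiffies + msecs_to_jiffies(100);
        int i, ret = 0;

        mutex_lock(&bus->mdio_lock);            /* (3) lock bus */

        for (i = 0; i < len; i++) {             /* (4) execute program */
                if (time_after(jiffies, timeout)) {
                        /* don't let userspace monopolize the bus */
                        ret = -ETIMEDOUT;
                        break;
                }

                switch (prog[i].op) {
                case MDIO_PROG_READ:
                        ret = __mdiobus_read(bus, prog[i].addr, prog[i].reg);
                        if (ret < 0)
                                goto unlock;
                        out[i] = ret;
                        ret = 0;
                        break;
                case MDIO_PROG_WRITE:
                        ret = __mdiobus_write(bus, prog[i].addr, prog[i].reg,
                                              prog[i].val);
                        if (ret)
                                goto unlock;
                        break;
                }
        }
unlock:
        mutex_unlock(&bus->mdio_lock);          /* (5) unlock bus */
        return ret;
}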
Steps (3) and (5) are what allow complex operations to be performed in
a race-free manner. (4) is capped with a timeout to make sure that
userspace can't monopolize the bus. A "poll offload" would have to
yield (i.e. unlock) the bus in between poll cycles, along the lines of
the sketch below. Certainly doable, but it complicates the model a bit.
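If a poll offload did get added, one way to yield the bus between
cycles would be a delayed work item that only touches the bus once per
cycle. Again just a sketch with made-up names (struct mdio_poll,
mdio_poll_notify()); it uses mdiobus_read(), which takes and drops
bus->mdio_lock per access, so nothing is held across the sleep:

#include <linux/phy.h>
#include <linux/printk.h>
#include <linux/workqueue.h>

#define MDIO_POLL_NREGS 5

struct mdio_poll {
        struct delayed_work work;
        struct mii_bus *bus;
        int addr;                               /* PHY address */
        u8  regs[MDIO_POLL_NREGS];              /* registers to watch */
        u16 last[MDIO_POLL_NREGS];              /* last observed values */
};

/* Hypothetical notification hook; a real module would send a netlink
 * event to the listening socket instead of just logging. */
static void mdio_poll_notify(struct mdio_poll *mp, u8 reg, u16 val)
{
        pr_info("mdio poll: phy %d reg 0x%02x changed to 0x%04x\n",
                mp->addr, reg, val);
}

static void mdio_poll_work(struct work_struct *work)
{
        struct mdio_poll *mp = container_of(work, struct mdio_poll,
                                            work.work);
        int i, val;

        for (i = 0; i < MDIO_POLL_NREGS; i++) {
                /* mdio_lock is only held inside mdiobus_read(). */
                val = mdiobus_read(mp->bus, mp->addr, mp->regs[i]);
                if (val < 0)
                        continue;

                if (val != mp->last[i]) {
                        mp->last[i] = val;
                        mdio_poll_notify(mp, mp->regs[i], val);
                }
        }

        /* Bus is free until the next cycle, one second from now. */
        schedule_delayed_work(&mp->work, HZ);
}

static void mdio_poll_start(struct mdio_poll *mp)
{
        INIT_DELAYED_WORK(&mp->work, mdio_poll_work);
        schedule_delayed_work(&mp->work, HZ);
}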
> Thanks,
> -Vladimir