Message-ID: <636d2189-f27a-49d9-a2c4-ea980a8cd63d@lunn.ch>
Date: Fri, 19 May 2023 17:18:13 +0200
From: Andrew Lunn <andrew@...n.ch>
To: Richard Cochran <richardcochran@...il.com>
Cc: Jacob Keller <jacob.e.keller@...el.com>,
Jakub Kicinski <kuba@...nel.org>,
Köry Maincent <kory.maincent@...tlin.com>,
netdev@...r.kernel.org, glipus@...il.com,
maxime.chevallier@...tlin.com, vladimir.oltean@....com,
vadim.fedorenko@...ux.dev, gerhard@...leder-embedded.com,
thomas.petazzoni@...tlin.com, krzysztof.kozlowski+dt@...aro.org,
robh+dt@...nel.org, linux@...linux.org.uk
Subject: Re: [PATCH net-next RFC v4 2/5] net: Expose available time stamping
layers to user space.
On Fri, May 19, 2023 at 06:50:19AM -0700, Richard Cochran wrote:
> On Fri, May 19, 2023 at 02:50:42PM +0200, Andrew Lunn wrote:
>
> > I would actually say there is nothing fundamentally blocking using
> > NETWORK_PHY_TIMESTAMPING with something other than DT. It just needs
> > somebody to lead the way.
>
> +1
>
> > For ACPI in the scope of networking, everybody just seems to accept DT
> > won, and stuffs DT properties into ACPI tables.
>
> Is that stuff mainline?
There is some support, but it is somewhat limited.

ACPI and networking is an odd area. ACPI has historically been x86, and few x86 SoCs have integrated networking. Those that do seem to me to be PCI devices internally glued onto the PCIe bus, with firmware driving the hardware, not Linux.
Integrated networking is much more popular in SoCs for other architectures: ARM, MIPS, PowerPC. These are all DT, and in general Linux is controlling the hardware, which is why we have good standardised DT bindings for MDIO busses, PHYs, SFPs, etc.
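As an illustration of how a driver consumes those bindings, a MAC driver typically registers its MDIO bus from the standard "mdio" subnode. A minimal sketch, with hypothetical foo_* names, not taken from any particular driver:

    #include <linux/of.h>
    #include <linux/of_mdio.h>
    #include <linux/platform_device.h>

    static int foo_mdio_init(struct platform_device *pdev, struct mii_bus *bus)
    {
            struct device_node *np;
            int ret;

            /* Standard binding: the MDIO bus is described in an "mdio"
             * child node of the MAC. */
            np = of_get_child_by_name(pdev->dev.of_node, "mdio");
            if (!np)
                    return -ENODEV;

            /* Registers the bus and probes the PHYs listed under it. */
            ret = of_mdiobus_register(bus, np);
            of_node_put(np);
            return ret;
    }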
Then ARM pushed ACPI for server class ARM systems. Now server class systems generally don't have integrated Ethernet. They have lots of PCIe lanes, and it seems normal to put one or more NICs on PCIe. That also gives you the flexibility to use a high performance DPU from a network specialist, or just a plain boring 10G PCIe device. As a result, ACPI saying nothing about networking is not really an issue for server class machines.
The little interest I've seen in ACPI networking has come from 'hobbyists' trying to use ACPI on ARM systems which are not intended to be servers. Generally there is a fully working DT description of the hardware, and Linux is happy to control the hardware using that DT description. Getting ACPI to work is mostly straightforward, because most of the building blocks are standard: xhci for USB, ata for block devices, etc. But they then run into the complete lack of standardisation for networking, and the fact that the ACPI standard says nothing at all about networking. These people tend not to be ACPI gurus who could extend ACPI to cover the complexity of networking hardware. So they just stuff the existing DT properties into ACPI tables and call it done. And I have to push back on this, because they try to stuff everything in, including properties we have deprecated; DT has a long history and we got things wrong along the way.
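The mechanics of why this "just works" are worth spelling out: the kernel's fwnode layer backs the device_property_*() helpers with either DT properties or ACPI _DSD entries, so a driver written against that API transparently picks up DT-style properties placed in an ACPI table. A minimal sketch, with a hypothetical foo_* driver:

    #include <linux/device.h>
    #include <linux/property.h>

    static int foo_get_phy_mode(struct device *dev)
    {
            const char *mode;
            int err;

            /* Works whether "phy-mode" comes from a .dts file or from
             * an ACPI _DSD package; the fwnode layer hides the source. */
            err = device_property_read_string(dev, "phy-mode", &mode);
            if (err)
                    return err;

            dev_info(dev, "phy-mode: %s\n", mode);
            return 0;
    }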
> > For PCI devices, there
> > has been some good work being done by Trustnetic using software nodes,
> > for gluing together GPIO controllers, I2C controller, SFP and
> > PHYLINK.
>
> mainline also?
On the way. Trustnetic got thrown in at the deep end. They are new to mainline. They brought a typical "vendor crap driver" and tried to get it mainlined. It reinvented everything rather than reusing what already exists in Linux. So they are effectively writing a new driver under our guidance. It is an unusual device, because it is a PCIe device, but without firmware: Linux controls everything. So they have the double trouble of being mainline newbies, plus having to do things with mainline that nobody else has done before in order to support their hardware. So it is moving slowly. But they are sticking at it, so I think in the end they will get it working, and it could become a reference for others to follow.
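For reference, the software node glue pattern looks roughly like this. It is only a sketch: the foo_* names are hypothetical and the property is borrowed from the phylink/SFP DT bindings, not from the actual Trustnetic patches.

    #include <linux/property.h>

    /* Properties mirroring the DT phylink binding, synthesized at
     * runtime, since a firmware-less PCIe device has no DT or ACPI
     * description to read them from. */
    static const struct property_entry foo_link_props[] = {
            PROPERTY_ENTRY_STRING("managed", "in-band-status"),
            { }
    };

    static int foo_add_swnode(struct device *dev)
    {
            /* Attach the properties to the device; the node is torn
             * down automatically when the device is unbound. */
            return device_create_managed_software_node(dev, foo_link_props,
                                                       NULL);
    }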
Andrew