Message-ID: <CAL_Jsq+jnz3SAc+m5RRN9cOs+5k=CC4Fud9gsmquVjv2zVv6pQ@mail.gmail.com>
Date: Mon, 9 May 2022 13:36:04 -0500
From: Rob Herring <robh+dt@...nel.org>
To: Frank Rowand <frowand.list@...il.com>
Cc: Clément Léger <clement.leger@...tlin.com>,
Pantelis Antoniou <pantelis.antoniou@...sulko.com>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Allan Nielsen <allan.nielsen@...rochip.com>,
Horatiu Vultur <horatiu.vultur@...rochip.com>,
Steen Hegelund <steen.hegelund@...rochip.com>,
Thomas Petazzoni <thomas.petazonni@...tlin.com>,
Alexandre Belloni <alexandre.belloni@...tlin.com>,
Mark Brown <broonie@...nel.org>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Jakub Kicinski <kuba@...nel.org>,
Hans de Goede <hdegoede@...hat.com>,
Andrew Lunn <andrew@...n.ch>, devicetree@...r.kernel.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
PCI <linux-pci@...r.kernel.org>
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay
On Mon, May 9, 2022 at 10:56 AM Frank Rowand <frowand.list@...il.com> wrote:
>
> On 5/9/22 07:16, Clément Léger wrote:
> > Le Fri, 6 May 2022 13:33:22 -0500,
> > Frank Rowand <frowand.list@...il.com> a écrit :
> >
> >> On 4/27/22 04:44, Clément Léger wrote:
> >>> This series adds foundation work to support the lan9662 PCIe card.
> >>> This card is meant to be used as an Ethernet switch with 2 x RJ45
> >>> ports and 2 x 2.5G SFPs. The lan966x SoCs can be used in two
> >>> different ways:
> >>>
> >>> - It can run Linux by itself, on ARM64 cores included in the SoC.
> >>> This use-case of the lan966x is currently being upstreamed, using a
> >>> traditional Device Tree representation of the lan966x HW blocks
> >>> [1]. A number of drivers for the different IPs of the SoC have
> >>> already been merged in upstream Linux.
> >>>
> >>> - It can be used as a PCIe endpoint, connected to a separate
> >>> platform that acts as the PCIe root complex. In this case, all the
> >>> devices that are embedded on this SoC are exposed through PCIe BARs
> >>> and the ARM64 cores of the SoC are not used. Since this is a PCIe
> >>> card, it can be plugged into any platform of any architecture
> >>> supporting PCIe.
> >>>
> >>> The problem that arose is that we want to reuse all the existing
> >>> OF-compatible drivers, which are used in SoC mode, to instantiate
> >>> the PCI device when in PCIe endpoint mode.
> >>>
> >>> A previous attempt to tackle this problem was made using fwnode [1].
> >>> However, this proved to be way too invasive, requiring
> >>> modifications in both subsystems and drivers to support fwnode.
> >>> The first series did not lead to a consensus, and multiple ideas to
> >>> support this use-case were mentioned (ACPI overlay, fwnode,
> >>> device-tree overlay). Since fwnode seemed to be the only idea that
> >>> was not totally silly, we continued down that path.
> >>>
> >>> However, on the series that added fwnode support to the reset
> >>> subsystem, Rob Herring mentioned that an OF overlay might
> >>> actually be the best way to probe PCI devices and populate platform
> >>> drivers from such an overlay. He also provided a branch containing
> >>> some commits that helped
> >>
> >> I need to go look at the various email threads mentioned above before
> >> I continue reading this patch series.
> >>
> >> I do have serious concerns with this approach. I need to investigate
> >> more fully before I can determine whether the concerns are addressed
> >> sufficiently.
> >>
> >> To give some background to my longstanding response to similar
> >> proposals, here is my old statement from
> >> https://elinux.org/Device_Tree_Reference:
> >>
> >> Overlays
> >> Mainline Linux Support
> >> Run time overlay apply and run time overlay remove from user space
> >> are not supported in the mainline kernel. There are out of tree
> >> patches to implement this feature via an overlay manager. The overlay
> >> manager is used successfully by many users for specific overlays on
> >> specific boards with specific environments and use cases. However,
> >> there are many issues with the Linux kernel overlay implementation
> >> due to incomplete and incorrect code. The overlay manager has not
> >> been accepted in mainline due to these issues. Once these issues are
> >> resolved, it is expected that some method of run time overlay apply
> >> and overlay removal from user space will be supported by the Linux
> >> kernel.
> >>
> >> There is a possibility that overlay apply and overlay remove
> >> support could be phased in slowly, feature by feature, as specific
> >> issues are resolved.
> >
> > Hi Frank,
> >
> > This work uses the kernel space interface (of_overlay_fdt_apply())
> > and the device tree overlay is built into the driver. This interface
> > was used until recently by the rcar-du driver. While it was the only
> > user, this seems to work pretty well and I was able to use it
> > successfully.
>
> Yes, of_overlay_fdt_apply() was used by one driver. But that driver
> was explicitly recognized as a grandfathered exception, and not an
> example for other users. It was finally removed in 5.18-rc1.
What API are folks supposed to use exactly? That's the only API to
apply an overlay. I thought the FPGA mgr code was using it too, but
it's not. It doesn't look to me like the upstream code there even
works, as nothing applies the overlays AFAICT. If there are no
in-kernel users applying overlays, then let's remove the overlay code.
I hear it has lots of problems.
I am *way* more comfortable with driver-specific applying of overlays
than with any generic mechanism. I don't think we'll ever have a
generic mechanism. At least not one that doesn't end up with the same
usage constraints the driver-specific cases would have.
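For reference, here is a minimal sketch of the driver-specific flow
being discussed: a driver that carries its overlay as a built-in FDT
blob and applies it at probe time. This is an illustration only, not
code from this thread; the __dtbo_* symbol names are made up, and the
three-argument of_overlay_fdt_apply() signature is the one exported
around v5.18, before any target-redirect extension:

  #include <linux/of.h>

  /* Symbols the kernel build emits for a built-in overlay blob
   * (names assumed for this example). */
  extern char __dtbo_my_pci_card_begin[];
  extern char __dtbo_my_pci_card_end[];

  static int my_pci_card_apply_overlay(int *ovcs_id)
  {
          u32 size = __dtbo_my_pci_card_end - __dtbo_my_pci_card_begin;

          /* Apply the overlay to the live tree; *ovcs_id identifies the
           * changeset so it can later be undone with
           * of_overlay_remove(). */
          return of_overlay_fdt_apply(__dtbo_my_pci_card_begin, size,
                                      ovcs_id);
  }
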
> You may have used of_overlay_fdt_apply() in a specific use case at
> a specific kernel version, but if you read through the references
> I provided you will find that applying overlays after the kernel
> boots is a fragile endeavor, with the expectation that bugs and
> problems will be exposed as usage changes (a simple example: my
> adding some overlay notifier unittests exposed yet another memory
> leak).
The exception is specific drivers that only apply overlays isolated
to their device, as is the case here. The usecase here is entirely
self-contained. The base tree is only what's needed to represent the
PCI device.
> The reference that I provided also shows how the overlay code is
> being improved over time. Even with improvements, it will remain
> fragile.
>
> >
> > Moreover, this support targets using this with PCI devices. These
> > devices are really well contained and do not interfere with other
> > devices. This actually consists of adding a complete subtree into the
> > existing device-tree, and thus it limits the interactions between
> > potentially platform-provided devices and PCI ones.
>
> Yes, it is very important that you have described this fact, both
> here and in other emails. Thank you for that information; it does
> help in understanding the alternatives.
>
> I've hesitated in recommending a specific solution before better
> understanding the architecture of your pcie board and drivers, but
> I've delayed too long, so I am going to go ahead and mention one
> possibility at the risk of not yet fully understanding the situation.
>
> On the surface, it appears that your need might be well met by having
> a base devicetree that describes all of the pcie nodes, but with each
> node having a status of "disabled" so that they will not be used.
> Have a devicetree overlay describing the pcie card (as you proposed),
> where the overlay also includes a status of "okay" for the pcie node.
> Applying the overlay, with a method of redirecting the target to a
> specific pcie node, would change the status of the pcie node to enable
> its use. (You have already proposed a patch to modify of_overlay_fdt_apply()
> to allow a modified target, so not a new concept from me.) My suggestion
> is to apply the overlay devicetree to the base devicetree before the
> combined FDT devicetree is passed to the kernel at boot. The overlay
> apply could be done by several different entities. It could be before
> the bootloader executes, it could be done by the bootloader, it could
> be done by a shim between the bootloader and the kernel. This method
> avoids all of the issues of applying an overlay to a running system
> that I find problematic. It is also a method used by the U-Boot
> bootloader, as an example.
Adding a layer, the solution to all problems...
I don't think that's a workable solution unless all the components are
in one party's control. Given the desire to work on both ACPI- and
DT-based systems, that doesn't sound like the case here.
> The other big issue is mixing ACPI and devicetree on a single system.
> Historically, the Linux devicetree community has not been receptive
> to the idea of that mixture. Your example might be a specific case
> where the two can be isolated from each other, or maybe not. (For
> disclosure, I am essentially ACPI ignorant.) I suspect that mixing
> ACPI and devicetree is a recipe for disaster in the general case.
The idea here is that what is described by ACPI and what is described
by DT are disjoint, which I think we can enforce. Enforcement comes
from fwnode assuming it has either an ACPI or a DT handle, but not
both.
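As a rough illustration of that disjointness (my sketch, not an
existing helper), code that is handed a fwnode can dispatch on whether
the handle is backed by an OF node or an ACPI node, and should never
see both tests succeed:

  #include <linux/acpi.h>
  #include <linux/of.h>

  /* Hypothetical helper: report which firmware interface backs a given
   * fwnode handle. Under the disjointness assumption it is either OF
   * or ACPI, never both. */
  static const char *fwnode_backing(const struct fwnode_handle *fwnode)
  {
          if (is_of_node(fwnode))
                  return "devicetree";
          if (is_acpi_node(fwnode))
                  return "acpi";
          return "none";
  }
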
> More to come later as I finish reading through the various threads.
There are also the Xilinx folks, who want to support their PCI FPGA
card with DT for the FPGA contents on both ACPI and DT systems.
Rob