Message-ID: <Ynp6IO9UREcX0k9C@robh.at.kernel.org>
Date: Tue, 10 May 2022 09:43:44 -0500
From: Rob Herring <robh@...nel.org>
To: Frank Rowand <frowand.list@...il.com>
Cc: Clément Léger <clement.leger@...tlin.com>,
Pantelis Antoniou <pantelis.antoniou@...sulko.com>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Allan Nielsen <allan.nielsen@...rochip.com>,
Horatiu Vultur <horatiu.vultur@...rochip.com>,
Steen Hegelund <steen.hegelund@...rochip.com>,
Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
Alexandre Belloni <alexandre.belloni@...tlin.com>,
Mark Brown <broonie@...nel.org>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Jakub Kicinski <kuba@...nel.org>,
Hans de Goede <hdegoede@...hat.com>,
Andrew Lunn <andrew@...n.ch>, devicetree@...r.kernel.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
PCI <linux-pci@...r.kernel.org>
Subject: Re: [PATCH 0/3] add dynamic PCI device of_node creation for overlay
On Mon, May 09, 2022 at 03:35:45PM -0500, Frank Rowand wrote:
> On 5/9/22 13:36, Rob Herring wrote:
> > On Mon, May 9, 2022 at 10:56 AM Frank Rowand <frowand.list@...il.com> wrote:
> >>
> >> On 5/9/22 07:16, Clément Léger wrote:
> >>> Le Fri, 6 May 2022 13:33:22 -0500,
> >>> Frank Rowand <frowand.list@...il.com> a écrit :
> >>>
> >>>> On 4/27/22 04:44, Clément Léger wrote:
> >>>>> This series adds foundation work to support the lan9662 PCIe card.
> >>>>> This card is meant to be used as an ethernet switch with 2 x RJ45
> >>>>> ports and 2 x 2.5G SFPs. The lan966x SoCs can be used in two
> >>>>> different ways:
[...]
> >>> This work uses the kernel space interface (of_overlay_fdt_apply())
> >>> and the device tree overlay is built into the driver. This interface
> >>> was used until recently by the rcar-du driver. While it was the only
> >>> user, this seems to work pretty well and I was able to use it
> >>> successfully.
> >>
> >> Yes, of_overlay_fdt_apply() was used by one driver. But that driver
> >> was explicitly recognized as a grandfathered exception, and not an
> >> example for other users. It was finally removed in 5.18-rc1.
> >
> > What API are folks supposed to use exactly? That's the only API to
> > apply an overlay.
>
> Yes, that is the API designed to be used if overlays are applied
> after the kernel has booted.
>
> > I thought the FPGA mgr code was using it too, but
>
> That was my understanding too.
>
> > it's not. It doesn't look to me like the upstream code there even
> > works as nothing applies the overlays AFAICT. If there are no in
> > kernel users applying overlays, then let's remove the overlay code. I
> > hear it has lots of problems.
>
> I would not object to doing that. But I suspect there are other people
> who might. I much prefer that overlays be applied before boot.
Strictly speaking, we don't keep APIs with no users. Perhaps removing or
threatening to remove would spur on some work in your overlay issues
list. :)
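
(As an aside for anyone following along: a minimal sketch of what a kernel-side
caller of that API looks like, assuming an overlay FDT blob linked into the
driver. The symbol and function names here are illustrative, not from any
in-tree user; the three-argument signature is the one current at the time of
this thread, and Clément's series proposes extending it with a redirectable
target.)

```c
#include <linux/of.h>

/* Hypothetical overlay blob embedded in the driver; these linker
 * symbols are illustrative. */
extern char __dtbo_my_overlay_begin[];
extern char __dtbo_my_overlay_end[];

static int ovcs_id;

static int my_driver_apply_overlay(void)
{
	u32 size = __dtbo_my_overlay_end - __dtbo_my_overlay_begin;

	/* Parses the FDT, builds a changeset, and applies it to the
	 * live tree; ovcs_id identifies the changeset for removal. */
	return of_overlay_fdt_apply(__dtbo_my_overlay_begin, size, &ovcs_id);
}

static void my_driver_remove_overlay(void)
{
	of_overlay_remove(&ovcs_id);
}
```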
> >>> Moreover, this support targets PCI devices. These devices are
> >>> really well contained and do not interfere with other devices. It
> >>> actually consists of adding a complete subtree to the existing
> >>> device tree, which limits the interactions between potentially
> >>> platform-provided devices and PCI ones.
> >>
> >> Yes, it is very important that you have described this fact, both
> >> here and in other emails. Thank you for that information; it does help
> >> in understanding the alternatives.
> >>
> >> I've hesitated in recommending a specific solution before better
> >> understanding the architecture of your pcie board and drivers, but
> >> I've delayed too long, so I am going to go ahead and mention one
> >> possibility at the risk of not yet fully understanding the situation.
> >>
> >> On the surface, it appears that your need might be well met by having
> >> a base devicetree that describes all of the pcie nodes, but with each
> >> node having a status of "disabled" so that they will not be used.
> >> Have a devicetree overlay describing the pcie card (as you proposed),
> >> where the overlay also includes a status of "ok" for the pcie node.
> >> Applying the overlay, with a method of redirecting the target to a
> >> specific pcie node, would change the status of the pcie node to enable
> >> its use. (You have already proposed a patch to modify of_overlay_fdt_apply()
> >> to allow a modified target, so this is not a new concept from me.) My suggestion
> >> is to apply the overlay devicetree to the base devicetree before the
> >> combined FDT devicetree is passed to the kernel at boot. The overlay
> >> apply could be done by several different entities. It could be before
> >> the bootloader executes, it could be done by the bootloader, it could
> >> be done by a shim between the bootloader and the kernel. This method
> >> avoids all of the issues of applying an overlay to a running system
> >> that I find problematic. It is also a method used by the U-boot
> >> bootloader, as an example.
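
(To make that scheme concrete for readers following along: the status-toggle
approach might look roughly like this. The node labels, compatible string, and
child nodes are purely illustrative, not taken from the actual lan9662 binding.)

```dts
/* Base devicetree: the slot's device node exists but is disabled. */
&pcie0 {
	card_slot: ethernet@0,0 {
		compatible = "pci1055,9662";	/* illustrative */
		status = "disabled";
	};
};
```

```dts
/dts-v1/;
/plugin/;

/* Overlay shipped for the card: flips the node to "okay" and fills in
 * the card's internals; applied with the target redirected to whichever
 * slot the card was actually found in. */
&card_slot {
	status = "okay";

	ethernet-switch {
		compatible = "microchip,lan9662-switch";	/* illustrative */
		/* ports, SFP references, etc. */
	};
};
```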
> >
> > Adding a layer, the solution to all problems...
>
> < insert xkcd reference here > :-)
>
> >
> > I don't think that's a workable solution unless all the components are
> > in one party's control. Given the desire to work on ACPI and DT based
> > systems, that doesn't sound like the case here.
>
> That is the motivation behind my questions of how generic or targeted
> the use of this one specific card is.
>
> A pre-boot shim that discovers the card and applies the overlay could
> be somewhat generic to ACPI systems.
>
> Is the overlay approach also being proposed for DT based systems?
Yes, the intent is it would work for either.
Another use case I have in mind is USB-to-serial chips with downstream
GPIOs, I2C, SPI, etc., where you want to describe downstream devices. And
then you plug in more than one...
> >> The other big issue is mixing ACPI and devicetree on a single system.
> >> Historically, the Linux devicetree community has not been receptive
> >> to the idea of that mixture. Your example might be a specific case
> >> where the two can be isolated from each other, or maybe not. (For
> >> disclosure, I am essentially ACPI ignorant.) I suspect that mixing
> >> ACPI and devicetree is a recipe for disaster in the general case.
> >
> > The idea here is what is described by ACPI and DT are disjoint which I
> > think we can enforce. Enforcement comes from fwnode assuming it has
> > either an ACPI or a DT handle, but not both.
>
> I thought the intent was to use DT API drivers, not fwnode API drivers.
Yes.
> And I thought the card was not to be described by ACPI, so there would
> not be any ACPI fwnode info for the card _and_ fwnode is somewhat
> irrelevant since the drivers are using the DT API.
The card would not be described, but the PCI host bridge would likely
be and that may be where the conflict is. Though I think PCI hosts are
described differently from devices that end up with a fwnode handle in
ACPI.
My current thinking is the whole PCI bus structure from the card
device up the tree needs to be created in the base DT. Maybe we could
create just a fake parent for the device and avoid the above issue, but
I'd guess that would just create other issues.
> So the enforcement
> you mention could be implemented by the overlay apply code verifying
> that there is no ACPI fwnode for any DT node in the overlay. So not
> an _assumption_, but a testable rule.
I think we'd need something wherever device fwnode ptrs are getting
set. Not entirely sure yet.
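
(Sketching the idea, with the caveat that where exactly this would hook in is
the open question above: a device should carry a DT node or an ACPI companion,
but never both. The helper name is hypothetical; dev->of_node and
ACPI_COMPANION() are the real kernel accessors.)

```c
#include <linux/acpi.h>
#include <linux/device.h>
#include <linux/of.h>

/* Hypothetical check, to be called wherever a device's fwnode pointer
 * is assigned: firmware descriptions must be disjoint per device. */
static bool dev_fw_desc_is_disjoint(struct device *dev)
{
	return !(dev->of_node && ACPI_COMPANION(dev));
}
```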
> That is a long way of essentially agreeing with you.
>
> >
> >> More to come later as I finish reading through the various threads.
> >
> > There is also the Xilinx folks wanting to support their PCI FPGA card
> > with DT for the FPGA contents on both ACPI and DT systems.
>
> Is that the XRT Alveo driver infrastructure thread? I was not cc:ed
> on that thread, and just recently stumbled upon it. Yet another
> thread that is on my to-read list.
Yes, and I've had a call about it in the 'system DT' call.
Rob