Message-ID: <67f386ad-62d6-4ad7-8906-face362b5ef0@tuxon.dev>
Date: Fri, 24 Oct 2025 09:18:54 +0300
From: Claudiu Beznea <claudiu.beznea@...on.dev>
To: Bjorn Helgaas <helgaas@...nel.org>
Cc: lpieralisi@...nel.org, kwilczynski@...nel.org, mani@...nel.org,
robh@...nel.org, bhelgaas@...gle.com, krzk+dt@...nel.org,
conor+dt@...nel.org, geert+renesas@...der.be, magnus.damm@...il.com,
p.zabel@...gutronix.de, linux-pci@...r.kernel.org,
linux-renesas-soc@...r.kernel.org, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org,
Claudiu Beznea <claudiu.beznea.uj@...renesas.com>,
Wolfram Sang <wsa+renesas@...g-engineering.com>
Subject: Re: [PATCH v5 2/6] PCI: rzg3s-host: Add Renesas RZ/G3S SoC host
driver
Hi, Bjorn,
On 10/23/25 18:55, Bjorn Helgaas wrote:
> On Thu, Oct 23, 2025 at 08:11:17AM +0300, Claudiu Beznea wrote:
>> On 10/22/25 22:49, Bjorn Helgaas wrote:
>>> On Tue, Oct 07, 2025 at 04:36:53PM +0300, Claudiu wrote:
>>>> From: Claudiu Beznea <claudiu.beznea.uj@...renesas.com>
>>>>
>>>> The Renesas RZ/G3S features a PCIe IP that complies with the PCI Express
>>>> Base Specification 4.0 and supports speeds of up to 5 GT/s. It functions
>>>> only as a root complex, with a single-lane (x1) configuration. The
>>>> controller includes Type 1 configuration registers, as well as
>>>> IP-specific registers (called AXI registers) required for various
>>>> adjustments.
>
>>>> +++ b/drivers/pci/controller/pcie-rzg3s-host.c
>>>
>>>> +#define RZG3S_PCI_MSIRCVWMSKL 0x108
>>>> +#define RZG3S_PCI_MSIRCVWMSKL_MASK GENMASK(31, 2)
>>>
>>> Unfortunate to have to add _MASK here when none of the other GENMASKs
>>> need it. Can't think of a better name though.
>>
>> Most of the register offset and field defines try to use the naming
>> from the HW manual. ...
>
> It's OK as-is.
>
>>>> +static int rzg3s_pcie_msi_enable(struct rzg3s_pcie_host *host)
>>>> +{
>>>> +	struct platform_device *pdev = to_platform_device(host->dev);
>>>> +	struct rzg3s_pcie_msi *msi = &host->msi;
>>>> +	struct device *dev = host->dev;
>>>> +	const char *devname;
>>>> +	int ret;
>>>> +
>>>> +	ret = devm_mutex_init(dev, &msi->map_lock);
>>>> +	if (ret)
>>>> +		return ret;
>>>> +
>>>> +	msi->irq = platform_get_irq_byname(pdev, "msi");
>>>> +	if (msi->irq < 0)
>>>> +		return dev_err_probe(dev, msi->irq,
>>>> +				     "Failed to get MSI IRQ!\n");
>>>> +
>>>> +	devname = devm_kasprintf(dev, GFP_KERNEL, "%s-msi", dev_name(dev));
>>>> +	if (!devname)
>>>> +		return -ENOMEM;
>>>> +
>>>> +	ret = rzg3s_pcie_msi_allocate_domains(msi);
>>>> +	if (ret)
>>>> +		return ret;
>>>> +
>>>> +	ret = request_irq(msi->irq, rzg3s_pcie_msi_irq, 0, devname, host);
>>>
>>> Should this be devm_request_irq()? Most drivers use it, although
>>> pci-tegra.c and pcie-apple.c do not. Maybe there's some special
>>> rule about using request_irq() even though the driver uses devm in
>>> general? I dunno.
>>
>> In general it is not good to mix devm cleanups with driver-specific
>> ones.
>>
>> As it was requested to drop the devm cleanups from this driver
>> (especially devm_pm_runtime_enable(), which also enables the clocks),
>> I switched the initial devm_request_irq() to request_irq(). This
>> avoids keeping the interrupt requested on the error path, after probe
>> has returned, until the devm cleanups are called, where it could
>> potentially fire without the hardware resources (e.g. clocks) being
>> enabled and end up reading HW registers.
>
> I couldn't find that request to drop devm,
In v3 and v4 it was requested to drop the devm_add_action_or_reset()
whose purpose was to allow using the already generalized devm helpers
(like devm_request_threaded_irq()) and not to mix devm cleanup with
driver-specific cleanup:
v4 - https://lore.kernel.org/all/pnph54wv3736lemzren64ig4karlulffkvmc3dzgrhgyv2cpwu@2mcgvlqdr6wu/
v3 - https://lore.kernel.org/all/ddxayjj5wcuuish4kvyluzrujkes5seo7zlusmomyjfjcgzcyj@xe3zzzmy2zaj/
> and I don't see where
> devm_pm_runtime_enable() enables clocks.
Sorry, I meant to refer to pm_runtime_resume_and_get() and its devm
cleanup helper as implemented in v3 and v4:
+	/*
+	 * Controller clocks are part of a clock power domain. Enable them
+	 * through runtime PM.
+	 */
+	ret = pm_runtime_resume_and_get(dev);
+	if (ret)
+		return ret;
+
+	ret = devm_add_action_or_reset(dev, rzg3s_pcie_pm_runtime_put, dev);
+	if (ret)
+		return ret;
>
>> E.g., on the SoC I'm working with, accessing the HW registers while
>> the clocks are disabled leads to synchronous aborts.
>>
>> So, I only kept the devm helpers for memory allocations, reset
>> assert/de-assert and the mutex initialization.
>
> I'm OK with request_irq() here since you have a good reason. This
> problem of accessing registers while clocks are disabled sounds
> familiar, so I think other hardware has a similar issue. It would be
> nice if everybody handled it the same way.
>
> I don't know enough to identify other similar hardware and see how
> they handled it (or identify drivers that *don't* handle it). It
> might be worth a one-line comment here to help future code readers.
OK, I'll add a comment on top of the request_irq() call.
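E.g. something along these lines (the comment wording is only a sketch):

	/*
	 * Use plain request_irq() instead of devm_request_irq(): with devm
	 * the IRQ would only be freed after the rest of the cleanup ran, so
	 * the handler could still fire and access HW registers after the
	 * controller clocks were disabled, which leads to synchronous
	 * aborts on this SoC.
	 */
	ret = request_irq(msi->irq, rzg3s_pcie_msi_irq, 0, devname, host);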
>
>>>> +static int rzg3s_pcie_intx_setup(struct rzg3s_pcie_host *host)
>>>> +{
>>>> +	struct device *dev = host->dev;
>>>> +
>>>> +	for (int i = 0; i < PCI_NUM_INTX; i++) {
>>>> +		struct platform_device *pdev = to_platform_device(dev);
>>>
>>> Looks like this should be outside the loop.
>>
>> OK, I kept it here as it is used only inside this block.
>
> Ah, I see the motivation. I suppose the compiler is smarter than I am
> and hoists it out of the loop anyway, but I think it is easier for
> humans to read if the loop only contains things that change for each
> iteration.
OK, I'll move pdev out of this block.
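I.e. (sketch of the reworked loop, with the unchanged parts elided):

	struct platform_device *pdev = to_platform_device(dev);

	for (int i = 0; i < PCI_NUM_INTX; i++) {
		char irq_name[5] = {0};
		int irq;

		scnprintf(irq_name, ARRAY_SIZE(irq_name), "int%c", 'a' + i);
		...
	}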
>
>>>> +		char irq_name[5] = {0};
>>>> +		int irq;
>>>> +
>>>> +		scnprintf(irq_name, ARRAY_SIZE(irq_name), "int%c", 'a' + i);
>>>> +
>>>> +		irq = platform_get_irq_byname(pdev, irq_name);
>>>> +		if (irq < 0)
>>>> +			return dev_err_probe(dev, irq,
>>>> +					     "Failed to parse and map INT%c IRQ\n",
>>>> +					     'A' + i);
>>>> +
>>>> +		host->intx_irqs[i] = irq;
>>>> +		irq_set_chained_handler_and_data(irq,
>>>> +						 rzg3s_pcie_intx_irq_handler,
>>>> +						 host);
>>>> +	}
>
>> +	host->intx_domain = irq_domain_create_linear(of_fwnode_handle(dev->of_node),
>> +						     PCI_NUM_INTX,
>> +						     &rzg3s_pcie_intx_domain_ops,
>> +						     host);
>> ...
>> +	irq_domain_update_bus_token(host->intx_domain, DOMAIN_BUS_WIRED);
>> +
>
> Can we use dev_fwnode(dev) here instead of of_fwnode_handle() so it
> matches the one in rzg3s_pcie_msi_allocate_domains()?
Sure, I'll update it next time.
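I.e. (sketch):

	host->intx_domain = irq_domain_create_linear(dev_fwnode(dev),
						     PCI_NUM_INTX,
						     &rzg3s_pcie_intx_domain_ops,
						     host);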
>
> I think irq_domain_update_bus_token() is needed here because
> host->intx_domain and msi->domain are identified by the same fwnode,
> and this will be easier to see if we get the fwnode the same way.
>
> (See 61d0a000b774 ("genirq/irqdomain: Add irq_domain_update_bus_token
> helper"). I wish irq_domain_update_bus_token() had a function comment
> to this effect. Maybe even a mention in Documentation/.)
>
> It would also help code readers if we could use function names similar
> to other drivers. For instance, rzg3s_pcie_intx_setup() creates the
> INTx IRQ domain, but no other driver uses a name like *_intx_setup().
> The general consensus seems to be *_pcie_init_irq_domain().
I initially tried to align all the function names with the patterns I
found throughout the tree, as you suggested.
With the pattern for the MSI/INTx configuration I wanted to re-use the
same probe code on the resume path: the idea was to re-use
rzg3s_pcie_host_setup() on resume and, on resume, to configure only the
HW registers for MSI and INTx (as the rest is not necessary) by passing
different MSI/INTx setup functions along with their corresponding
cleanup functions for the failure paths.
For the function names rzg3s_pcie_intx_setup() and rzg3s_pcie_msi_setup()
I took (probably) tegra_pcie_msi_setup() as a reference.
I'll try to switch to the *_pcie_init_irq_domain() pattern if that is
the preferred one.
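E.g. (sketch of the rename only; the MSI side would follow the same
pattern):

	static int rzg3s_pcie_init_irq_domain(struct rzg3s_pcie_host *host)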
Thank you for your review,
Claudiu