Message-ID: <20220211182137.GA2492@lpieralisi>
Date: Fri, 11 Feb 2022 18:21:37 +0000
From: Lorenzo Pieralisi <lorenzo.pieralisi@....com>
To: Pali Rohár <pali@...nel.org>
Cc: robh+dt@...nel.org, Bjorn Helgaas <bhelgaas@...gle.com>,
Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
Krzysztof Wilczyński <kw@...ux.com>,
Marek Behún <kabel@...nel.org>,
Russell King <rmk+kernel@...linux.org.uk>,
Marc Zyngier <maz@...nel.org>, linux-pci@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v2 10/11] PCI: mvebu: Implement support for legacy INTx
interrupts
On Fri, Feb 11, 2022 at 06:52:02PM +0100, Pali Rohár wrote:
[...]
> > > @@ -1121,6 +1247,21 @@ static int mvebu_pcie_parse_port(struct mvebu_pcie *pcie,
> > > port->io_attr = -1;
> > > }
> > >
> > > + /*
> > > +	 * Old DT bindings do not contain the "intx" interrupt,
> > > +	 * so do not fail the driver probe when it is missing.
> > > + */
> > > + port->intx_irq = of_irq_get_byname(child, "intx");
> > > + if (port->intx_irq == -EPROBE_DEFER) {
> > > + ret = port->intx_irq;
> > > + goto err;
> > > + }
> > > + if (port->intx_irq <= 0) {
> > > + dev_warn(dev, "%s: legacy INTx interrupts cannot be masked individually, "
> > > + "%pOF does not contain intx interrupt\n",
> > > + port->name, child);
> >
> > Here you end up with a new warning on existing firmware. Is it
> > legitimate? I would remove the dev_warn().
>
> I added this warning in v2 because Marc wanted it.
>
> Should I (again) remove it in v3?
No, I asked a question and gave an opinion. I appreciate Marc's concern,
so leave it (i.e. not everyone running a new kernel that prints new
warnings on existing firmware would be happy - maybe it is a good way of
forcing a firmware upgrade, you will tell me).
Lorenzo
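
As an aside, the pattern under discussion treats the "intx" interrupt
as optional: only probe deferral is fatal, and anything else falls back
to the pre-existing behaviour. A minimal sketch of that pattern, using
a hypothetical helper name that is not from the patch:

	/* needs <linux/of_irq.h> */
	static int example_get_optional_intx(struct device *dev,
					     struct device_node *np)
	{
		int irq = of_irq_get_byname(np, "intx");

		if (irq == -EPROBE_DEFER)
			return irq;	/* provider not ready, retry probe later */
		if (irq <= 0) {
			/* Old DT: keep probing, INTx stays unmaskable. */
			dev_warn(dev, "%pOF: no \"intx\" interrupt\n", np);
			return 0;
		}
		return irq;		/* valid Linux IRQ number */
	}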
> > Rob certainly has more insightful advice on this.
> >
> > Thanks,
> > Lorenzo
> >
> > > + }
> > > +
> > > reset_gpio = of_get_named_gpio_flags(child, "reset-gpios", 0, &flags);
> > > if (reset_gpio == -EPROBE_DEFER) {
> > > ret = reset_gpio;
> > > @@ -1317,6 +1458,7 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
> > >
> > > for (i = 0; i < pcie->nports; i++) {
> > > struct mvebu_pcie_port *port = &pcie->ports[i];
> > > + int irq = port->intx_irq;
> > >
> > > child = port->dn;
> > > if (!child)
> > > @@ -1344,6 +1486,22 @@ static int mvebu_pcie_probe(struct platform_device *pdev)
> > > continue;
> > > }
> > >
> > > + if (irq > 0) {
> > > + ret = mvebu_pcie_init_irq_domain(port);
> > > + if (ret) {
> > > + dev_err(dev, "%s: cannot init irq domain\n",
> > > + port->name);
> > > + pci_bridge_emul_cleanup(&port->bridge);
> > > + devm_iounmap(dev, port->base);
> > > + port->base = NULL;
> > > + mvebu_pcie_powerdown(port);
> > > + continue;
> > > + }
> > > + irq_set_chained_handler_and_data(irq,
> > > + mvebu_pcie_irq_handler,
> > > + port);
> > > + }
> > > +
> > > /*
> > > * PCIe topology exported by mvebu hw is quite complicated. In
> > > * reality has something like N fully independent host bridges
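
mvebu_pcie_irq_handler itself is not visible in this hunk. For readers
following along, here is a sketch of the usual chained-handler shape
such code takes; the PCIE_INT_INTX() bit macro is an assumption for
illustration, not taken from the quoted patch:

	/* needs <linux/irqchip/chained_irq.h> and <linux/pci.h> */
	static void example_irq_handler(struct irq_desc *desc)
	{
		struct mvebu_pcie_port *port = irq_desc_get_handler_data(desc);
		struct irq_chip *chip = irq_desc_get_chip(desc);
		unsigned long status;
		int i;

		chained_irq_enter(chip, desc);

		/* Read the pending INTx causes for this port. */
		status = mvebu_readl(port, PCIE_INT_CAUSE_OFF);

		for (i = 0; i < PCI_NUM_INTX; i++)
			if (status & PCIE_INT_INTX(i))	/* assumed macro */
				generic_handle_domain_irq(port->intx_irq_domain, i);

		chained_irq_exit(chip, desc);
	}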
> > > @@ -1448,6 +1606,7 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > >
> > > for (i = 0; i < pcie->nports; i++) {
> > > struct mvebu_pcie_port *port = &pcie->ports[i];
> > > + int irq = port->intx_irq;
> > >
> > > if (!port->base)
> > > continue;
> > > @@ -1458,7 +1617,17 @@ static int mvebu_pcie_remove(struct platform_device *pdev)
> > > mvebu_writel(port, cmd, PCIE_CMD_OFF);
> > >
> > > /* Mask all interrupt sources. */
> > > - mvebu_writel(port, 0, PCIE_MASK_OFF);
> > > + mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_UNMASK_OFF);
> > > +
> > > + /* Clear all interrupt causes. */
> > > + mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);
> > > +
> > > + if (irq > 0)
> > > + irq_set_chained_handler_and_data(irq, NULL, NULL);
> > > +
> > > + /* Remove IRQ domains. */
> > > + if (port->intx_irq_domain)
> > > + irq_domain_remove(port->intx_irq_domain);
> > >
> > > /* Free config space for emulated root bridge. */
> > > pci_bridge_emul_cleanup(&port->bridge);
> > > --
> > > 2.20.1
> > >
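A design point worth noting in the remove path above: the chained
handler is detached before the IRQ domain is removed, so a late
interrupt cannot be delivered into a freed domain. Condensed into a
hypothetical helper, assuming the port fields used in the patch:

	static void example_intx_teardown(struct mvebu_pcie_port *port)
	{
		/* Mask, then ack, all interrupt sources in hardware. */
		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_UNMASK_OFF);
		mvebu_writel(port, ~PCIE_INT_ALL_MASK, PCIE_INT_CAUSE_OFF);

		/* Detach the chained handler before removing its domain. */
		if (port->intx_irq > 0)
			irq_set_chained_handler_and_data(port->intx_irq,
							 NULL, NULL);
		if (port->intx_irq_domain)
			irq_domain_remove(port->intx_irq_domain);
	}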