Message-ID: <89c13962f5502a89d48f1efb7a6203d155a7e18d.camel@redhat.com>
Date:   Tue, 21 Nov 2023 16:57:05 -0500
From:   Radu Rendec <rrendec@...hat.com>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     Jingoo Han <jingoohan1@...il.com>,
        Gustavo Pimentel <gustavo.pimentel@...opsys.com>,
        Lorenzo Pieralisi <lpieralisi@...nel.org>,
        Krzysztof Wilczynski <kw@...ux.com>,
        Rob Herring <robh@...nel.org>,
        Bjorn Helgaas <bhelgaas@...gle.com>,
        Marc Zyngier <maz@...nel.org>, linux-pci@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] PCI: dwc: Use regular interrupt instead of chained

Hello Thomas,

On Thu, 2023-06-29 at 14:30 -0400, Radu Rendec wrote:
> The DesignWare PCIe host driver uses a chained interrupt to demultiplex
> the downstream MSI interrupts. On Qualcomm SA8540P Ride, enabling both
> pcie2a and pcie3a at the same time can create an interrupt storm where
> the parent interrupt fires continuously, even though reading the PCIe
> host registers doesn't identify any child MSI interrupt source. This
> effectively locks up CPU0, which spends all its time servicing these
> interrupts.
> 
> This is a clear example of how bypassing the interrupt core by using
> chained interrupts can be very dangerous if the hardware misbehaves.
> 
> Convert the driver to use a regular interrupt for the demultiplex
> handler. This allows the interrupt storm detector to detect the faulty
> interrupt and disable it, allowing the system to run normally.

I'm bringing the chained-interrupt issue back to your attention. Last
week at Plumbers, Brian Masney spoke to you about it, and you agreed
to have me send another reply to this thread to bump it to the top of
your inbox. Here it is.
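
To recap the mechanics for anyone joining the thread: a chained
handler returns void, so it has no way to tell the interrupt core that
nothing was actually pending, and a misbehaving parent line just
spins. A regular handler returns an irqreturn_t, and IRQ_NONE is what
feeds the core's spurious-interrupt detector (note_interrupt() in
kernel/irq/spurious.c), which disables the line after enough unhandled
occurrences. In sketch form (the dw_* names are from the patch quoted
below):

  /* Chained: returns void -- the core never learns whether anything
   * was pending, so a stuck parent line fires forever. */
  static void dw_chained_msi_isr(struct irq_desc *desc);

  /* Regular: dw_handle_msi_irq() returns IRQ_HANDLED if at least one
   * child MSI status bit was set and IRQ_NONE otherwise, so the
   * spurious-interrupt detector can catch a storm. */
  static irqreturn_t dw_pcie_msi_isr(int irq, void *dev_id)
  {
          return dw_handle_msi_irq(dev_id);
  }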

Essentially, you made it very clear in a different thread [1] that you
want to get rid of chained interrupts altogether. However, doing so
would have the side effect of exposing the parent interrupt's affinity
control in procfs, which in Marc's opinion [2] would break the
userspace ABI.
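
For reference, the user-visible difference comes down to how the
parent line is registered. A minimal sketch, with parent_irq standing
in for pp->msi_irq[ctrl]:

  /* Chained: the parent IRQ is consumed inside the kernel and
   * userspace gets no affinity control over it. */
  irq_set_chained_handler_and_data(parent_irq, dw_chained_msi_isr, pp);

  /* Regular: the parent becomes an ordinary requested interrupt, so
   * its affinity knob (/proc/irq/<parent_irq>/smp_affinity) shows up
   * as user-writable -- the ABI change Marc objects to in [2]. */
  ret = request_irq(parent_irq, dw_pcie_msi_isr, 0, dev_name(dev), pp);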

I drafted a proposal [3] that tries to solve both problems, but we
really need some maintainer guidance here, at least to agree on a
high-level description of what the solution should look like. Working
on patches that end up rejected because the approach is conceptually
wrong is neither practical nor productive.
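
To make the discussion concrete, here is one hypothetical shape such a
solution could take. This is only my illustration of the constraint,
not necessarily what [3] proposes, and it assumes IRQF_NO_BALANCING
still makes irq_can_set_affinity_usr() reject userspace affinity
writes:

  /* Hypothetical middle ground (illustration only): a normal
   * request_irq() so the storm detector works, combined with
   * IRQF_NO_BALANCING so writes to the procfs affinity file are
   * rejected and the current userspace-visible behavior is kept. */
  ret = request_irq(pp->msi_irq[ctrl], dw_pcie_msi_isr,
                    IRQF_NO_BALANCING, dev_name(dev), pp);

Thanks in advance!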

Best regards,
Radu

[1] https://lore.kernel.org/all/877csohcll.ffs@tglx/
[2] https://lore.kernel.org/all/874k0bf7f7.wl-maz@kernel.org/
[3] https://lore.kernel.org/all/365dbb61921ff37862c91862d31d75fec2a51185.camel@redhat.com/

> Signed-off-by: Radu Rendec <rrendec@...hat.com>
> ---
>  .../pci/controller/dwc/pcie-designware-host.c | 35 +++++++++----------
>  1 file changed, 17 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index 9952057c8819c..b603796d415d7 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -83,18 +83,9 @@ irqreturn_t dw_handle_msi_irq(struct dw_pcie_rp *pp)
>         return ret;
>  }
>  
> -/* Chained MSI interrupt service routine */
> -static void dw_chained_msi_isr(struct irq_desc *desc)
> +static irqreturn_t dw_pcie_msi_isr(int irq, void *dev_id)
>  {
> -       struct irq_chip *chip = irq_desc_get_chip(desc);
> -       struct dw_pcie_rp *pp;
> -
> -       chained_irq_enter(chip, desc);
> -
> -       pp = irq_desc_get_handler_data(desc);
> -       dw_handle_msi_irq(pp);
> -
> -       chained_irq_exit(chip, desc);
> +       return dw_handle_msi_irq(dev_id);
>  }
>  
>  static void dw_pci_setup_msi_msg(struct irq_data *d, struct msi_msg *msg)
> @@ -254,20 +245,21 @@ int dw_pcie_allocate_domains(struct dw_pcie_rp *pp)
>         return 0;
>  }
>  
> -static void dw_pcie_free_msi(struct dw_pcie_rp *pp)
> +static void __dw_pcie_free_msi(struct dw_pcie_rp *pp, u32 num_ctrls)
>  {
>         u32 ctrl;
>  
> -       for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++) {
> +       for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
>                 if (pp->msi_irq[ctrl] > 0)
> -                       irq_set_chained_handler_and_data(pp->msi_irq[ctrl],
> -                                                        NULL, NULL);
> +                       free_irq(pp->msi_irq[ctrl], pp);
>         }
>  
>         irq_domain_remove(pp->msi_domain);
>         irq_domain_remove(pp->irq_domain);
>  }
>  
> +#define dw_pcie_free_msi(pp) __dw_pcie_free_msi(pp, MAX_MSI_CTRLS)
> +
>  static void dw_pcie_msi_init(struct dw_pcie_rp *pp)
>  {
>         struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
> @@ -361,9 +353,16 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
>                 return ret;
>  
>         for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
> -               if (pp->msi_irq[ctrl] > 0)
> -                       irq_set_chained_handler_and_data(pp->msi_irq[ctrl],
> -                                                   dw_chained_msi_isr, pp);
> +               if (pp->msi_irq[ctrl] > 0) {
> +                       ret = request_irq(pp->msi_irq[ctrl], dw_pcie_msi_isr, 0,
> +                                         dev_name(dev), pp);
> +                       if (ret) {
> +                               dev_err(dev, "Failed to request irq %d: %d\n",
> +                                       pp->msi_irq[ctrl], ret);
> +                               __dw_pcie_free_msi(pp, ctrl);
> +                               return ret;
> +                       }
> +               }
>         }
>  
>         /*
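
One more note on the error path in the last hunk, in case it looks
asymmetric: on failure at index ctrl, __dw_pcie_free_msi(pp, ctrl)
frees only the IRQs that were already requested, while the
dw_pcie_free_msi() wrapper passes MAX_MSI_CTRLS for a full teardown.
The same unwind idiom in isolation (a generic, runnable sketch;
acquire()/release() are invented stand-ins):

  #include <stdio.h>

  static int acquire(int i)  { return i == 3 ? -1 : 0; }  /* fail at 3 */
  static void release(int i) { printf("release %d\n", i); }

  /* Acquire resources 0..n-1; on failure at index i, release exactly
   * the 0..i-1 already held -- the same shape as passing ctrl to
   * __dw_pcie_free_msi() above. */
  static int acquire_all(int n)
  {
          for (int i = 0; i < n; i++) {
                  if (acquire(i)) {
                          while (i--)
                                  release(i);
                          return -1;
                  }
          }
          return 0;
  }

  int main(void) { return acquire_all(5); }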
