Message-ID: <87ed0btpfj.wl-maz@kernel.org>
Date: Thu, 06 Feb 2025 09:04:00 +0000
From: Marc Zyngier <maz@...nel.org>
To: Brian Norris <briannorris@...omium.org>
Cc: Jingoo Han <jingoohan1@...il.com>,	Manivannan Sadhasivam
 <manivannan.sadhasivam@...aro.org>,	linux-pci@...r.kernel.org,	Rob Herring
 <robh@...nel.org>,	Lorenzo Pieralisi <lpieralisi@...nel.org>,	Krzysztof
 Wilczyński <kw@...ux.com>,	Bjorn Helgaas
 <bhelgaas@...gle.com>,	linux-kernel@...r.kernel.org,	Brian Norris
 <briannorris@...gle.com>
Subject: Re: [PATCH v2] PCI: dwc: Use level-triggered handler for MSI IRQs

On Wed, 05 Feb 2025 23:16:36 +0000,
Brian Norris <briannorris@...omium.org> wrote:
> 
> From: Brian Norris <briannorris@...gle.com>
> 
> Per Synopsis's documentation [1], the msi_ctrl_int signal is
> level-triggered, not edge-triggered.
> 
> The use of handle_edge_irq() was chosen in commit 7c5925afbc58
> ("PCI: dwc: Move MSI IRQs allocation to IRQ domains hierarchical API"),
> which actually dropped preexisting use of handle_level_irq().
> Looking at the patch history, this change was only made by request:
> 
>   Subject: Re: [PATCH v6 1/9] PCI: dwc: Add IRQ chained API support
>   https://lore.kernel.org/all/04d3d5b6-9199-218d-476f-c77d04b8d2e7@arm.com/
> 
>   "Are you sure about this "handle_level_irq"? MSIs are definitely edge
>    triggered, not level."
> 
> However, while the underlying MSI protocol is edge-triggered in a sense,
> the DesignWare IP is actually level-triggered.

You are confusing two things:

- MSIs are edge triggered. No ifs, no buts. That's because you can't
  "unwrite" something. Even the so-called level-triggered MSIs are
  built on a pair of edges (one up, one down).

- The DisgustWare IP multiplexes MSIs onto a single interrupt, and
  *latches* them, presenting a level sensitive signal *for the
  latch*. Not for the MSIs themselves.

>
> So, let's switch back to level-triggered.
> 
> In many cases, the distinction doesn't really matter here, because this
> signal is hidden behind another (chained) top-level IRQ which can be
> masked on its own. But it's still wise to manipulate this interrupt line
> according to its actual specification -- specifically, to mask it while
> we handle it.

The distinction absolutely matters, because you are not dealing with
the actual MSIs, but with a latch.

> 
> [1] From:
> 
>   DesignWare Cores PCI Express RP Controller Reference Manual
>   Version 6.00a, June 2022
>   Section 2.89 MSI Interface Signals
> 
> msi_ctrl_int is described as:
> 
>   "Asserted when an MSI interrupt is pending. De-asserted when there is
>   no MSI interrupt pending.
>   ...
>   Active State: High (level)"
> 
> It also points at the databook for more info. One relevant excerpt from
> the databook:
> 
>   DesignWare Cores PCI Express Controller Databook
>   Version 6.00a, June 2022
>   Section 3.9.2.3 iMSI-RX: Integrated MSI Receiver [AXI Bridge]
> 
>   "When any status bit remains set, then msi_ctrl_int remains asserted.
>   The interrupt status register provides a status bit for up to 32
>   interrupt vectors per Endpoint. When the decoded interrupt vector is
>   enabled but is masked, then the controller sets the corresponding bit
>   in interrupt status register but the it [sic] does not assert the
>   top-level controller output msi_ctrl_int.

Key word: *output*. That is the level-triggered line. Not the MSIs,
which are *input* signals to the mux.

> 
> Signed-off-by: Brian Norris <briannorris@...gle.com>
> Signed-off-by: Brian Norris <briannorris@...omium.org>
> ---
> 
> Changes in v2:
>  * add documentation references
> 
>  drivers/pci/controller/dwc/pcie-designware-host.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index ffaded8f2df7..89a1207754d3 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -198,7 +198,7 @@ static int dw_pcie_irq_domain_alloc(struct irq_domain *domain,
>  	for (i = 0; i < nr_irqs; i++)
>  		irq_domain_set_info(domain, virq + i, bit + i,
>  				    pp->msi_irq_chip,
> -				    pp, handle_edge_irq,
> +				    pp, handle_level_irq,
>  				    NULL, NULL);
>  
>  	return 0;

I don't buy this, at least not without further justification based on
the programming model of the mux. It also breaks the semantics of
interrupts being made pending while we are handling them (retrigger
being one example).

	M.

-- 
Without deviation from the norm, progress is not possible.
