Message-ID: <33d2feb221c2ca89a4d09876a00c40ed0a893118.camel@linux.ibm.com>
Date: Tue, 02 Dec 2025 19:14:45 +0100
From: Gerd Bayer <gbayer@...ux.ibm.com>
To: Tobias Schumacher <ts@...ux.ibm.com>, Heiko Carstens <hca@...ux.ibm.com>,
	Vasily Gorbik <gor@...ux.ibm.com>, Alexander Gordeev <agordeev@...ux.ibm.com>,
	Christian Borntraeger <borntraeger@...ux.ibm.com>,
	Sven Schnelle <svens@...ux.ibm.com>, Niklas Schnelle <schnelle@...ux.ibm.com>,
	Gerald Schaefer <gerald.schaefer@...ux.ibm.com>, Halil Pasic <pasic@...ux.ibm.com>,
	Matthew Rosato <mjrosato@...ux.ibm.com>, Thomas Gleixner <tglx@...utronix.de>
Cc: linux-kernel@...r.kernel.org, linux-s390@...r.kernel.org
Subject: Re: [PATCH v7 2/2] s390/pci: Migrate s390 IRQ logic to IRQ domain API

On Thu, 2025-11-27 at 16:07 +0100, Tobias Schumacher wrote:
> s390 is one of the last architectures using the legacy API for setup and
> teardown of PCI MSI IRQs. Migrate the s390 IRQ allocation and teardown
> to the MSI parent domain API. For details, see:
>
> https://lore.kernel.org/lkml/20221111120501.026511281@linutronix.de
>
> In detail, create an MSI parent domain for each PCI domain. When a PCI
> device sets up MSI or MSI-X IRQs, the library creates a per-device IRQ
> domain for this device, which is used by the device for allocating and
> freeing IRQs.
>
> The per-device domain delegates this allocation and freeing to the
> parent-domain. In the end, the corresponding callbacks of the parent
> domain are responsible for allocating and freeing the IRQs.
>
> The allocation is split into two parts:
> - zpci_msi_prepare() is called once for each device and allocates the
> required resources. On s390, each PCI function has its own airq
> vector and a summary bit, which must be configured once per function.
> This is done in prepare().
> - zpci_msi_alloc() can be called multiple times for allocating one or
> more MSI/MSI-X IRQs. This creates a mapping between the virtual IRQ
> number in the kernel and the hardware IRQ number.
>
> Freeing is split into two counterparts:
> - zpci_msi_free() reverts the effects of zpci_msi_alloc() and
> - zpci_msi_teardown() reverts the effects of zpci_msi_prepare(). This is
> called once when all IRQs are freed before a device is removed.
>
> Since the parent domain in the end allocates the IRQs, the hwirq
> encoding must be unambiguous for all IRQs of all devices. This is
> achieved by encoding the hwirq using the devfn and the MSI index.
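
Not a comment on correctness, just spelling out how I read this (the actual
encoding is in the snipped part of the patch, so the helper below is purely
my own illustration, not your code): devfn and the MSI index end up in
disjoint bit ranges, so every (function, vector) pair within a PCI domain
gets a distinct hwirq, e.g.

	#include <linux/irqdomain.h>

	/* Illustration only -- not the encoding from the patch. */
	static inline irq_hw_number_t example_zpci_hwirq(unsigned int devfn,
							 unsigned int msi_index)
	{
		/* devfn is 8 bits (slot/function); the MSI index sits above it */
		return ((irq_hw_number_t)msi_index << 8) | (devfn & 0xff);
	}
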
>
> Signed-off-by: Tobias Schumacher <ts@...ux.ibm.com>
> ---
> arch/s390/Kconfig | 1 +
> arch/s390/include/asm/pci.h | 5 +
> arch/s390/pci/pci.c | 6 +
> arch/s390/pci/pci_bus.c | 18 ++-
> arch/s390/pci/pci_irq.c | 310 ++++++++++++++++++++++++++++----------------
> 5 files changed, 224 insertions(+), 116 deletions(-)
>
[ ... snip ... ]
> diff --git a/arch/s390/pci/pci_irq.c b/arch/s390/pci/pci_irq.c
> index e73be96ce5fe6473fc193d65b8f0ff635d6a98ba..2ac0fab605a83a2f06be6a0a68718e528125ced6 100644
> --- a/arch/s390/pci/pci_irq.c
> +++ b/arch/s390/pci/pci_irq.c
> @@ -290,146 +325,196 @@ static int __alloc_airq(struct zpci_dev *zdev, int msi_vecs,
> return 0;
> }
>
> -int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
> +bool arch_restore_msi_irqs(struct pci_dev *pdev)
> {
> - unsigned int hwirq, msi_vecs, irqs_per_msi, i, cpu;
> struct zpci_dev *zdev = to_zpci(pdev);
> - struct msi_desc *msi;
> - struct msi_msg msg;
> - unsigned long bit;
> - int cpu_addr;
> - int rc, irq;
>
> + zpci_set_irq(zdev);
> + return true;
> +}
>
It's always a little tricky to tell whether code handles both MSI and
MSI-X or just MSI proper when routines have _msi_ in their name. But
apparently both __pci_restore_msi_state() and __pci_restore_msix_state(),
called from pci_restore_msi_state(), do call arch_restore_msi_irqs() - so
life is good!
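
For reference, my (paraphrased from memory, not a verbatim quote) reading
of drivers/pci/msi/msi.c:

	void pci_restore_msi_state(struct pci_dev *dev)
	{
		__pci_restore_msi_state(dev);	/* MSI path hits arch_restore_msi_irqs() */
		__pci_restore_msix_state(dev);	/* MSI-X path does the same */
	}
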
[ ... snip ... ]
> +static void zpci_msi_domain_free(struct irq_domain *domain, unsigned int virq,
> + unsigned int nr_irqs)
> +{
> + struct irq_data *d;
> + int i;
>
> - return (zdev->msi_nr_irqs == nvec) ? 0 : zdev->msi_nr_irqs;
> + for (i = 0; i < nr_irqs; i++) {
> + d = irq_domain_get_irq_data(domain, virq + i);
> + irq_domain_reset_irq_data(d);
Question: zpci_msi_alloc_domain() did modify airq data - can this be
left as is in zpci_msi_domain_free()?
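
Purely to illustrate the question - hypothetical, not a proposal (and maybe
deferring this to zpci_msi_teardown() is perfectly fine): I was half
expecting something that clears the per-vector airq IV data again, roughly

	/*
	 * Hypothetical sketch (assumes asm/airq.h and asm/pci.h): zdev and
	 * bit would have to be recovered from the irq_data/domain, which is
	 * exactly the part I am unsure about.
	 */
	static void example_clear_airq_data(struct zpci_dev *zdev,
					    unsigned long bit,
					    unsigned int nr_irqs)
	{
		unsigned int i;

		for (i = 0; i < nr_irqs; i++)
			airq_iv_set_data(zdev->aibv, bit + i, 0);
	}
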
> + }
> }
>
[ ... snip ... ]
> +
> +int zpci_create_parent_msi_domain(struct zpci_bus *zbus)
> +{
> + char fwnode_name[18];
>
> - if (zdev->aisb != -1UL) {
> - zpci_ibv[zdev->aisb] = NULL;
> - airq_iv_free_bit(zpci_sbv, zdev->aisb);
> - zdev->aisb = -1UL;
> + snprintf(fwnode_name, sizeof(fwnode_name), "ZPCI_MSI_DOM_%04x", zbus->domain_nr);
> + struct irq_domain_info info = {
> + .fwnode = irq_domain_alloc_named_fwnode(fwnode_name),
> + .ops = &zpci_msi_domain_ops,
> + };
> +
> + if (!info.fwnode) {
> + pr_err("Failed to allocate fwnode for MSI IRQ domain\n");
> + return -ENOMEM;
> }
> - if (zdev->aibv) {
> - airq_iv_release(zdev->aibv);
> - zdev->aibv = NULL;
> +
> + if (irq_delivery == FLOATING)
> + zpci_msi_parent_ops.required_flags |= MSI_FLAG_NO_AFFINITY;
Add an empty line here, so it is clear that the following assignment is
executed unconditionally.
> + zbus->msi_parent_domain = msi_create_parent_irq_domain(&info, &zpci_msi_parent_ops);
> + if (!zbus->msi_parent_domain) {
> + irq_domain_free_fwnode(info.fwnode);
> + pr_err("Failed to create MSI IRQ domain\n");
> + return -ENOMEM;
> }
>
> - if ((irq_delivery == DIRECTED) && zdev->msi_first_bit != -1U)
> - airq_iv_free(zpci_ibv[0], zdev->msi_first_bit, zdev->msi_nr_irqs);
> + return 0;
> }
>
[ ... snip ... ]
> @@ -466,6 +551,7 @@ static int __init zpci_directed_irq_init(void)
> * is only done on the first vector.
> */
> zpci_ibv[cpu] = airq_iv_create(cache_line_size() * BITS_PER_BYTE,
> + AIRQ_IV_PTR |
> AIRQ_IV_DATA |
> AIRQ_IV_CACHELINE |
> (!cpu ? AIRQ_IV_ALLOC : 0), NULL);
This looks very good to me already. Unfortunately, I was unable to
relieve my MSI vs. MSI-X anxiety regarding arch_restore_msi_irqs() with
a test, since the only MSI-using PCI function (ISM) does not support
PCI auto-recovery :(
But an mlx5 VF now recovers just fine!

Thanks,
Gerd