Message-ID: <87k0ey9122.wl-maz@kernel.org>
Date: Mon, 17 Jan 2022 09:14:13 +0000
From: Marc Zyngier <maz@...nel.org>
To: John Garry <john.garry@...wei.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
chenxiang <chenxiang66@...ilicon.com>,
Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"liuqi (BA)" <liuqi115@...wei.com>
Subject: Re: PCI MSI issue for maxcpus=1
On Sun, 16 Jan 2022 12:07:59 +0000,
Marc Zyngier <maz@...nel.org> wrote:
>
> On Fri, 07 Jan 2022 11:24:38 +0000,
> John Garry <john.garry@...wei.com> wrote:
> >
> > Hi Marc,
> >
> > >> So it's the driver call to pci_alloc_irq_vectors_affinity() which
> > >> errors [1]:
> > >>
> > >> [ 9.619070] hisi_sas_v3_hw: probe of 0000:74:02.0 failed with error -2
> > > Can you log what error is returned from pci_alloc_irq_vectors_affinity()?
> >
> > -EINVAL
> >
> > >
> > >> Some details:
> > >> - device supports 32 MSI
> > >> - min and max MSI for that function are 17 and 32, respectively.
> > > This 17 is a bit odd, owing to the fact that MultiMSI can only deal
> > > with powers of 2. You will always allocate 32 in this case. Not sure
> > > why that'd cause an issue though. Unless...
> >
> > Even though 17 is the min, we still try for nvec=32 in
> > msi_capability_init(), since the number of possible CPUs is 96.
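For reference, the allocation request boils down to something like the
sketch below (the field/variable names are illustrative, not copied from
the hisi_sas_v3_hw driver; only the numbers come from this thread):

	struct irq_affinity affd = {
		.pre_vectors	= 16,	/* non-managed (pre) vectors */
		.post_vectors	= 0,
	};
	int nvec;

	/*
	 * min 17, max 32: with 96 possible CPUs the spread wants the
	 * full 32, and Multi-MSI can only grant power-of-two vector
	 * counts anyway, so anything above 16 becomes a request for 32.
	 */
	nvec = pci_alloc_irq_vectors_affinity(pdev, 17, 32,
					      PCI_IRQ_MSI | PCI_IRQ_AFFINITY,
					      &affd);
	if (nvec < 0)
		return nvec;	/* -EINVAL in the failing case */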
> >
> > >
> > >> - affd pre and post are 16 and 0, respectively.
> > >>
> > >> I haven't checked to see what the issue is yet and I think that the
> > >> pci_alloc_irq_vectors_affinity() usage is ok...
> > > ... we really end up with desc->nvec_used == 32 and try to activate
> > > past vector 17 (which is likely to fail). Could you please check this?
> >
> > Yeah, that looks to fail. The reason is that in the GIC ITS driver,
> > when we try to activate the irq for this managed interrupt, all CPUs
> > in the affinity mask are offline. Calling its_irq_domain_activate() ->
> > its_select_cpu() gives cpu=nr_cpu_ids. The affinity mask for that
> > interrupt is 24-29.
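The path described above looks roughly like this in
drivers/irqchip/irq-gic-v3-its.c (paraphrased and heavily trimmed, so
treat it as a sketch rather than a verbatim quote of the source):

	static int its_select_cpu(struct irq_data *d,
				  const struct cpumask *aff_mask)
	{
		...
		} else {
			/* Managed affinity: only consider online CPUs... */
			cpumask_and(tmpmask, irq_data_get_affinity_mask(d),
				    cpu_online_mask);
			...
			/*
			 * ...so with CPUs 24-29 all offline, tmpmask is
			 * empty and this returns nr_cpu_ids.
			 */
			cpu = cpumask_pick_least_loaded(d, tmpmask);
		}
		...
	}

	static int its_irq_domain_activate(struct irq_domain *domain,
					   struct irq_data *d, bool reserve)
	{
		...
		cpu = its_select_cpu(d, cpu_online_mask);
		if (cpu < 0 || cpu >= nr_cpu_ids)
			return -EINVAL;	/* propagated back up the MSI setup path */
		...
	}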
>
> I guess that for managed interrupts, it shouldn't matter, as these
> interrupts should only be used when the relevant CPUs come online.
>
> Would something like below help? Totally untested, as I don't have a
> Multi-MSI capable device that I can plug in a GICv3 system (maybe I
> should teach that to a virtio device...).
Actually, if the CPU online status doesn't matter for managed affinity
interrupts, then the correct fix is this:
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index d25b7a864bbb..af4e72a6be63 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -1624,7 +1624,7 @@ static int its_select_cpu(struct irq_data *d,
 
 		cpu = cpumask_pick_least_loaded(d, tmpmask);
 	} else {
-		cpumask_and(tmpmask, irq_data_get_affinity_mask(d), cpu_online_mask);
+		cpumask_copy(tmpmask, irq_data_get_affinity_mask(d));
 
 		/* If we cannot cross sockets, limit the search to that node */
 		if ((its_dev->its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_23144) &&
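The reason this ought to be safe is the point above about managed
interrupts: the genirq core already refuses to start a managed interrupt
whose affinity mask contains no online CPU, and lets CPU hotplug start
it later. Roughly (paraphrased and trimmed from kernel/irq/chip.c, so
again a sketch rather than a verbatim quote):

	static int __irq_startup_managed(struct irq_desc *desc,
					 struct cpumask *aff, bool force)
	{
		struct irq_data *d = irq_desc_get_irq_data(desc);

		if (!irqd_affinity_is_managed(d))
			return IRQ_STARTUP_NORMAL;

		irqd_clr_managed_shutdown(d);

		if (cpumask_any_and(aff, cpu_online_mask) >= nr_cpu_ids) {
			/*
			 * No online CPU in the affinity mask: put the
			 * interrupt into managed-shutdown state and let CPU
			 * hotplug start it once a CPU in the mask appears.
			 */
			irqd_set_managed_shutdown(d);
			return IRQ_STARTUP_ABORT;
		}

		return IRQ_STARTUP_MANAGED;
	}

which is why the online check at activation time shouldn't be needed for
the managed case: activation only has to pick a target, not an online one.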
Thanks,
M.
--
Without deviation from the norm, progress is not possible.