Date: Sun, 16 Jan 2022 12:07:59 +0000
From: Marc Zyngier <maz@...nel.org>
To: John Garry <john.garry@...wei.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
	chenxiang <chenxiang66@...ilicon.com>,
	Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"liuqi (BA)" <liuqi115@...wei.com>
Subject: Re: PCI MSI issue for maxcpus=1

On Fri, 07 Jan 2022 11:24:38 +0000,
John Garry <john.garry@...wei.com> wrote:
> 
> Hi Marc,
> 
> >> So it's the driver call to pci_alloc_irq_vectors_affinity() which
> >> errors [1]:
> >>
> >> [    9.619070] hisi_sas_v3_hw: probe of 0000:74:02.0 failed with error -2
> 
> > Can you log what error is returned from pci_alloc_irq_vectors_affinity()?
> 
> -EINVAL
> 
> > 
> >> Some details:
> >> - device supports 32 MSI
> >> - min and max msi for that function is 17 and 32, respect.
> 
> > This 17 is a bit odd, owing to the fact that MultiMSI can only deal
> > with powers of 2. You will always allocate 32 in this case. Not sure
> > why that'd cause an issue though. Unless...
> 
> Even though 17 is the min, we still try for nvec=32 in
> msi_capability_init() as possible CPUs is 96.
> 
> > 
> >> - affd pre and post are 16 and 0, respect.
> >>
> >> I haven't checked to see what the issue is yet and I think that the
> >> pci_alloc_irq_vectors_affinity() usage is ok...
> 
> > ... we really end-up with desc->nvec_used == 32 and try to activate
> > past vector 17 (which is likely to fail). Could you please check this?
> 
> Yeah, that looks to fail. Reason being that in the GIC ITS driver when
> we try to activate the irq for this managed interrupt all cpus in the
> affinity mask are offline. Calling its_irq_domain_activate() ->
> its_select_cpu() it gives cpu=nr_cpu_ids. The affinity mask for that
> interrupt is 24-29.

I guess that for managed interrupts, it shouldn't matter, as these
interrupts should only be used when the relevant CPUs come
online. Would something like below help?

Totally untested, as I don't have a Multi-MSI capable device that I
can plug in a GICv3 system (maybe I should teach that to a virtio
device...).

Thanks,

	M.

diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index d25b7a864bbb..850407294adb 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -1632,6 +1632,10 @@ static int its_select_cpu(struct irq_data *d,
 			cpumask_and(tmpmask, tmpmask, cpumask_of_node(node));
 
 		cpu = cpumask_pick_least_loaded(d, tmpmask);
+
+		/* If all the possible CPUs are offline, just pick a victim. */
+		if (cpu == nr_cpu_ids)
+			cpu = cpumask_pick_least_loaded(d, irq_data_get_affinity_mask(d));
 	}
 out:
 	free_cpumask_var(tmpmask);

-- 
Without deviation from the norm, progress is not possible.
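For context, the allocation pattern the thread is debugging looks roughly
like the sketch below, assuming the parameters quoted above (min 17 / max 32
vectors, affd pre = 16, post = 0). The function name example_alloc_msi is
hypothetical; this is not the actual hisi_sas v3 probe code, only an
illustration of how the managed vectors end up spread across all possible
CPUs and why the call fails with maxcpus=1.

#include <linux/interrupt.h>
#include <linux/pci.h>

/*
 * Sketch only: mirrors the parameters quoted in the thread.  The first
 * 16 vectors are "pre" vectors and keep normal (non-managed) affinity;
 * the remainder are managed and get spread over all *possible* CPUs.
 */
static int example_alloc_msi(struct pci_dev *pdev)
{
	struct irq_affinity affd = {
		.pre_vectors	= 16,	/* excluded from the spread */
		.post_vectors	= 0,
	};

	/*
	 * With maxcpus=1 most possible CPUs are offline, so activating a
	 * managed vector whose affinity mask (e.g. CPUs 24-29) contains
	 * only offline CPUs hits the its_select_cpu() failure described
	 * above, and the whole allocation returns -EINVAL.
	 */
	return pci_alloc_irq_vectors_affinity(pdev, 17, 32,
					      PCI_IRQ_MSI | PCI_IRQ_AFFINITY,
					      &affd);
}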