Message-ID: <87a6g8vp8k.wl-maz@kernel.org>
Date: Thu, 06 Jan 2022 15:49:15 +0000
From: Marc Zyngier <maz@...nel.org>
To: John Garry <john.garry@...wei.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
chenxiang <chenxiang66@...ilicon.com>,
Shameer Kolothum <shameerali.kolothum.thodi@...wei.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"liuqi (BA)" <liuqi115@...wei.com>
Subject: Re: PCI MSI issue for maxcpus=1
Hi John,
On Wed, 05 Jan 2022 11:23:47 +0000,
John Garry <john.garry@...wei.com> wrote:
>
> Hi Marc,
>
> Just a heads up, I noticed that commit 4c457e8cb75e ("genirq/msi:
> Activate Multi-MSI early when MSI_FLAG_ACTIVATE_EARLY is set") is
> causing an issue on our arm64 D06 board where the SAS driver probe
> fails for maxcpus=1.
>
> This seems different to issue [0].
>
> So it's the driver call to pci_alloc_irq_vectors_affinity() which
> errors [1]:
>
> [ 9.619070] hisi_sas_v3_hw: probe of 0000:74:02.0 failed with error -2
Can you log what error is returned from pci_alloc_irq_vectors_affinity()?
> Some details:
> - device supports 32 MSI
> - min and max MSI for that function are 17 and 32, respectively.
This 17 is a bit odd, owing to the fact that MultiMSI can only deal
with powers of 2. You will always allocate 32 in this case. Not sure
why that'd cause an issue though. Unless...
> - affd pre and post are 16 and 0, respectively.
>
> I haven't checked to see what the issue is yet and I think that the
> pci_alloc_irq_vectors_affinity() usage is ok...
... we really end up with desc->nvec_used == 32 and try to activate
past vector 17 (which is likely to fail). Could you please check this?
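For reference, here is a minimal user-space sketch of the power-of-two
rounding that Multi-MSI implies; the helper name is made up for
illustration and is not the kernel's actual code path:

```c
/*
 * Hypothetical helper (illustration only, not kernel code):
 * Multi-MSI can only allocate power-of-two vector counts, so a
 * driver asking for a minimum of 17 vectors ends up with 32
 * vectors allocated -- hence desc->nvec_used == 32 above.
 */
static unsigned int multi_msi_vecs(unsigned int min_vecs)
{
	unsigned int n = 1;

	while (n < min_vecs)
		n <<= 1;	/* round up to the next power of two */
	return n;
}
```

With min_vecs == 17 this returns 32, while an exact power of two
such as 16 is left unchanged.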
Thanks,
M.
--
Without deviation from the norm, progress is not possible.