Message-ID: <adbcd493-3821-b0d7-c4e4-4fcd92dd5a14@quicinc.com>
Date: Mon, 25 Apr 2022 10:52:26 -0600
From: Jeffrey Hugo <quic_jhugo@...cinc.com>
To: Wei Liu <wei.liu@...nel.org>
CC: <kys@...rosoft.com>, <haiyangz@...rosoft.com>,
<sthemmin@...rosoft.com>, <decui@...rosoft.com>,
<lorenzo.pieralisi@....com>, <robh@...nel.org>, <kw@...ux.com>,
<bhelgaas@...gle.com>, <jakeo@...rosoft.com>,
<bjorn.andersson@...aro.org>, <linux-hyperv@...r.kernel.org>,
<linux-pci@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] PCI: hv: Fix multi-MSI to allow more than one MSI vector
On 4/25/2022 9:49 AM, Wei Liu wrote:
> On Mon, Apr 25, 2022 at 03:33:44PM +0000, Wei Liu wrote:
>> On Wed, Apr 20, 2022 at 08:13:22AM -0600, Jeffrey Hugo wrote:
>>> On 4/13/2022 7:36 AM, Jeffrey Hugo wrote:
>>>> If the allocation of multiple MSI vectors for multi-MSI fails in the core
>>>> PCI framework, the framework will retry the allocation as a single MSI
>>>> vector, assuming that meets the min_vecs specified by the requesting
>>>> driver.
>>>>
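For illustration, a driver-side sketch of the fallback described above (the
function name and the 1..32 vector range here are hypothetical, not taken
from the patch):

	#include <linux/pci.h>

	static int example_setup_irqs(struct pci_dev *pdev)
	{
		/*
		 * Ask for up to 32 MSI vectors, but accept as few as 1.
		 * If the multi-MSI allocation fails, the PCI core retries
		 * with a single vector, since min_vecs == 1 is still met.
		 */
		return pci_alloc_irq_vectors(pdev, 1, 32, PCI_IRQ_MSI);
	}
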
>>>> Hyper-V advertises that multi-MSI is supported, but reuses the VECTOR
>>>> domain to implement that for x86. The VECTOR domain does not support
>>>> multi-MSI, so the alloc will always fail and fall back to a single MSI
>>>> allocation.
>>>>
>>>> In short, Hyper-V advertises a capability it does not implement.
>>>>
>>>> Hyper-V can support multi-MSI because it coordinates with the hypervisor
>>>> to map the MSIs in the IOMMU's interrupt remapper, which is something the
>>>> VECTOR domain does not have. Therefore the fix is simple: copy what the
>>>> x86 IOMMU drivers (AMD/Intel-IR) do by removing
>>>> X86_IRQ_ALLOC_CONTIGUOUS_VECTORS after calling the VECTOR domain's
>>>> pci_msi_prepare().
>>>>
>>>> Fixes: 4daace0d8ce8 ("PCI: hv: Add paravirtual PCI front-end for Microsoft Hyper-V VMs")
>>>> Signed-off-by: Jeffrey Hugo <quic_jhugo@...cinc.com>
>>>> Reviewed-by: Dexuan Cui <decui@...rosoft.com>
>>>> ---
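For reference, a minimal sketch of the approach the commit message above
describes: wrap the VECTOR domain's pci_msi_prepare() and clear
X86_IRQ_ALLOC_CONTIGUOUS_VECTORS afterwards, as the AMD/Intel IR drivers
do. The names below follow that description; see the actual patch for the
exact change.

	static int hv_msi_prepare(struct irq_domain *domain, struct device *dev,
				  int nvec, msi_alloc_info_t *info)
	{
		int ret = pci_msi_prepare(domain, dev, nvec, info);

		/*
		 * pci_msi_prepare() sets X86_IRQ_ALLOC_CONTIGUOUS_VECTORS for
		 * multi-MSI.  Hyper-V maps the MSIs in the IOMMU's interrupt
		 * remapper in coordination with the hypervisor, so contiguous
		 * CPU vectors are not required; drop the flag.
		 */
		if (info->type == X86_IRQ_ALLOC_TYPE_PCI_MSI)
			info->flags &= ~X86_IRQ_ALLOC_CONTIGUOUS_VECTORS;

		return ret;
	}

Such a prepare hook would then be wired in place of pci_msi_prepare in the
driver's msi_domain_ops (e.g. .msi_prepare = hv_msi_prepare).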
>>>
>>> Ping?
>>>
>>> I don't see this in -next, nor have I seen any replies. It is possible I
>>> have missed an update, but currently I'm wondering whether this change
>>> is progressing or not. If there is a process used in this area that I'm
>>> not familiar with, I would appreciate an introduction.
>>
>> I expect the PCI maintainers to pick this up. If I don't see it picked
>> up this week, I will apply it to hyperv-next.
>
> Actually, I will pick this up via hyperv-next, because there is another
> series that also touches this driver but at the same time depends on
> vmbus changes. I can fix up any potential conflicts easily.
Sounds good to me. Let me know if you do run into conflicts, and I can
help.
-Jeff