Message-ID: <ec4a8705-3bdd-96ca-9ef6-ac565874bdaf@quicinc.com>
Date: Wed, 11 May 2022 11:53:49 -0600
From: Jeffrey Hugo <quic_jhugo@...cinc.com>
To: Wei Liu <wei.liu@...nel.org>
CC: <kys@...rosoft.com>, <haiyangz@...rosoft.com>,
<sthemmin@...rosoft.com>, <decui@...rosoft.com>,
<lorenzo.pieralisi@....com>, <robh@...nel.org>, <kw@...ux.com>,
<bhelgaas@...gle.com>, <jakeo@...rosoft.com>,
<dazhan@...rosoft.com>, <linux-hyperv@...r.kernel.org>,
<linux-pci@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 0/2] hyperv compose_msi_msg fixups
On 5/11/2022 11:51 AM, Wei Liu wrote:
> On Wed, May 11, 2022 at 09:22:11AM -0600, Jeffrey Hugo wrote:
>> While multi-MSI appears to work with pci-hyperv.c, there was a concern about
>> how Linux was doing the ITRE allocations. Patch 2 addresses that concern.
>>
>> However, patch 2 exposed an issue with how compose_msi_msg() was freeing a
>> previous allocation when called for the Nth time. Imagine a driver using
>> pci_alloc_irq_vectors() to request 32 MSIs. This would cause compose_msi_msg()
>> to be called 32 times, once for each MSI. With patch 2, MSI0 would allocate
>> the ITREs needed, and MSI1-31 would use the cached information. Then the driver
>> uses request_irq() on MSI1-17. This would call compose_msi_msg() again on those
>> MSIs, which would again use the cached information. Then unmask() would be
>> called to retarget the MSIs to the right VCPU vectors. Finally, the driver
>> calls request_irq() on MSI0. This would call compose_msi_msg(), which would
>> free the block of 32 MSIs, and allocate a new block. This would undo the
>> retarget of MSI1-17, and likely leave those MSIs targeting invalid VCPU vectors.
>> This is addressed by patch 1, which is introduced first to prevent a regression.
>>
>> Jeffrey Hugo (2):
>> PCI: hv: Reuse existing ITRE allocation in compose_msi_msg()
>> PCI: hv: Fix interrupt mapping for multi-MSI
>
> Applied this version to hyperv-next. Thanks.
Thanks for picking it up. Sorry about the confusion.