Message-Id: <20220318174848.290621-3-parri.andrea@gmail.com>
Date: Fri, 18 Mar 2022 18:48:48 +0100
From: "Andrea Parri (Microsoft)" <parri.andrea@...il.com>
To: KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Wei Liu <wei.liu@...nel.org>, Dexuan Cui <decui@...rosoft.com>,
Michael Kelley <mikelley@...rosoft.com>,
Wei Hu <weh@...rosoft.com>,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Rob Herring <robh@...nel.org>,
Krzysztof Wilczynski <kw@...ux.com>,
Bjorn Helgaas <bhelgaas@...gle.com>
Cc: linux-pci@...r.kernel.org, linux-hyperv@...r.kernel.org,
linux-kernel@...r.kernel.org,
"Andrea Parri (Microsoft)" <parri.andrea@...il.com>
Subject: [PATCH 2/2] PCI: hv: Fix synchronization between channel callback and hv_compose_msi_msg()

Dexuan wrote:

  "[...] when we disable AccelNet, the host PCI VSP driver sends a
   PCI_EJECT message first, and the channel callback may set
   hpdev->state to hv_pcichild_ejecting on a different CPU. This can
   cause hv_compose_msi_msg() to exit from the loop and 'return', and
   the on-stack variable 'ctxt' is invalid. Now, if the response
   message from the host arrives, the channel callback will try to
   access the invalid 'ctxt' variable, and this may cause a crash."

Schematically:

  Hyper-V sends PCI_EJECT msg
    hv_pci_onchannelcallback()
      state = hv_pcichild_ejecting
                                        hv_compose_msi_msg()
                                          alloc and init comp_pkt
                                          state == hv_pcichild_ejecting
  Hyper-V sends VM_PKT_COMP msg
    hv_pci_onchannelcallback()
      retrieve address of comp_pkt
                                          'free' comp_pkt and return
      comp_pkt->completion_func()

Dexuan also showed how the crash can be triggered after introducing
suitable delays in the driver code, thus validating the 'assumption'
that the host can still normally respond to the guest's compose_msi
request after the host has started to eject the PCI device.
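For reference, a stripped-down sketch of the pattern at play (simplified,
with slightly abbreviated local names rather than a verbatim excerpt from
the driver): hv_compose_msi_msg() keeps the request packet and completion
context on its own stack, then polls for the completion that the channel
callback is expected to signal.

	/* hv_compose_msi_msg(), simplified */
	struct {
		struct pci_packet pkt;
		/* ...request payload... */
	} ctxt;				/* on this task's stack */
	struct compose_comp_ctxt comp;	/* on this task's stack, too */

	init_completion(&comp.comp_pkt.host_event);
	ctxt.pkt.completion_func = hv_pci_compose_compl;
	ctxt.pkt.compl_ctxt = &comp;
	/* ...send the request; the host's reply carries a request ID
	 * that the channel callback resolves back to &ctxt.pkt... */

	while (!try_wait_for_completion(&comp.comp_pkt.host_event)) {
		if (hpdev->state == hv_pcichild_ejecting)
			return;	/* 'ctxt' and 'comp' become invalid here,
				 * yet the callback may still look up the
				 * request ID and call completion_func() */
		udelay(100);
	}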
Fix the synchronization by leveraging the IDR lock. Retrieve the
address of the completion packet *and* call the completion function
within the same critical section: if the channel callback finds the
packet address for a given request ID, its critical section precedes
the removal of that ID (and address) in hv_compose_msi_msg().
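On the hv_compose_msi_msg() side, this pairs with removing the request ID
under the same lock before the on-stack packet goes out of scope. The
removal itself is introduced by patch 1/2 of this series; the fragment
below is only an illustrative sketch of the ordering the fix relies on,
not a quote of that patch.

	/* hv_compose_msi_msg(), before 'ctxt' goes out of scope */
	spin_lock_irqsave(&hbus->idr_lock, flags);
	idr_remove(&hbus->idr, req_id);
	spin_unlock_irqrestore(&hbus->idr_lock, flags);
	/*
	 * A channel callback that found req_id in the IDR has already
	 * called ->completion_func() and released idr_lock by the time
	 * the removal above can proceed; a callback running afterwards
	 * does not find req_id and bails out without touching the
	 * (now gone) packet.
	 */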
Fixes: de0aa7b2f97d3 ("PCI: hv: Fix 2 hang issues in hv_compose_msi_msg()")
Reported-by: Wei Hu <weh@...rosoft.com>
Reported-by: Dexuan Cui <decui@...rosoft.com>
Suggested-by: Michael Kelley <mikelley@...rosoft.com>
Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@...il.com>
---
drivers/pci/controller/pci-hyperv.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index fbc62aab08fdc..dddd7e4d0352d 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -495,7 +495,8 @@ struct hv_pcibus_device {
 	spinlock_t device_list_lock;	/* Protect lists below */
 	void __iomem *cfg_addr;
 
-	spinlock_t idr_lock;	/* Serialize accesses to the IDR */
+	/* Serialize accesses to the IDR; see also hv_pci_onchannelcallback(). */
+	spinlock_t idr_lock;
 	struct idr idr;		/* Map guest memory addresses */
 
 	struct list_head children;
@@ -2797,16 +2798,24 @@ static void hv_pci_onchannelcallback(void *context)
 			}
 			spin_lock_irqsave(&hbus->idr_lock, flags);
 			comp_packet = (struct pci_packet *)idr_find(&hbus->idr, req_id);
-			spin_unlock_irqrestore(&hbus->idr_lock, flags);
 			if (!comp_packet) {
+				spin_unlock_irqrestore(&hbus->idr_lock, flags);
 				dev_warn_ratelimited(&hbus->hdev->device,
 						     "Request ID not found\n");
 				break;
 			}
 			response = (struct pci_response *)buffer;
+			/*
+			 * Call ->completion_func() within the critical section to make
+			 * sure that the packet pointer is still valid during the call:
+			 * here 'valid' means that there's a task still waiting for the
+			 * completion, and that the packet data is still on the waiting
+			 * task's stack/has not already been freed by the waiting task.
+			 */
 			comp_packet->completion_func(comp_packet->compl_ctxt,
 						     response,
 						     bytes_recvd);
+			spin_unlock_irqrestore(&hbus->idr_lock, flags);
 			break;
 
 		case VM_PKT_DATA_INBAND:
--
2.25.1