Message-ID: <SA1PR21MB13356B7C8DD4DA9CC20A880ABF899@SA1PR21MB1335.namprd21.prod.outlook.com>
Date: Wed, 29 Mar 2023 08:21:14 +0000
From: Dexuan Cui <decui@...rosoft.com>
To: Long Li <longli@...rosoft.com>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Jake Oshins <jakeo@...rosoft.com>,
"kuba@...nel.org" <kuba@...nel.org>, "kw@...ux.com" <kw@...ux.com>,
KY Srinivasan <kys@...rosoft.com>,
"leon@...nel.org" <leon@...nel.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"lpieralisi@...nel.org" <lpieralisi@...nel.org>,
"Michael Kelley (LINUX)" <mikelley@...rosoft.com>,
"pabeni@...hat.com" <pabeni@...hat.com>,
"robh@...nel.org" <robh@...nel.org>,
"saeedm@...dia.com" <saeedm@...dia.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>,
"boqun.feng@...il.com" <boqun.feng@...il.com>
CC: "linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [PATCH 1/6] PCI: hv: fix a race condition bug in
hv_pci_query_relations()
> From: Long Li <longli@...rosoft.com>
> Sent: Tuesday, March 28, 2023 9:49 AM
> > --- a/drivers/pci/controller/pci-hyperv.c
> > +++ b/drivers/pci/controller/pci-hyperv.c
> > @@ -3308,6 +3308,19 @@ static int hv_pci_query_relations(struct hv_device *hdev)
> >  	if (!ret)
> >  		ret = wait_for_response(hdev, &comp);
> >
> > +	/*
> > +	 * In the case of fast device addition/removal, it's possible that
> > +	 * vmbus_sendpacket() or wait_for_response() returns -ENODEV but we
> > +	 * already got a PCI_BUS_RELATIONS* message from the host and the
> > +	 * channel callback already scheduled a work to hbus->wq, which can be
> > +	 * running survey_child_resources() -> complete(&hbus->survey_event),
> > +	 * even after hv_pci_query_relations() exits and the stack variable
> > +	 * 'comp' is no longer valid. This can cause a strange hang issue
> > +	 * or sometimes a page fault. Flush hbus->wq before we exit from
> > +	 * hv_pci_query_relations() to avoid the issues.
> > +	 */
> > +	flush_workqueue(hbus->wq);
>
> Is it possible for a PCI_BUS_RELATIONS message to arrive and be scheduled
> after calling flush_workqueue(hbus->wq)?
It's possible, but that doesn't matter:

hv_pci_query_relations() is called only once, and it sets hbus->survey_event
to point to the stack variable 'comp'. The first run of
survey_child_resources() calls complete() on 'comp' and sets
hbus->survey_event to NULL. When survey_child_resources() runs a second
time, hbus->survey_event is NULL, so it returns immediately.
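
As a rough sketch of that pattern (not the exact upstream code; the xchg()
and the surrounding details are my simplification), the survey path looks
roughly like:

static void survey_child_resources(struct hv_pcibus_device *hbus)
{
	/* Only the first survey pass finds a non-NULL survey_event. */
	struct completion *event = xchg(&hbus->survey_event, NULL);

	if (!event)		/* later passes: nobody is waiting, bail out */
		return;

	/* ... size the child resources ... */

	complete(event);	/* wakes the waiter in hv_pci_query_relations() */
}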
According to my test, after hv_pci_enter_d0() posts PCI_BUS_D0ENTRY, the
guest receives a second PCI_BUS_RELATIONS2 message that is identical to the
first one; handling it is basically a no-op in pci_devices_present_work(),
especially with the newly-introduced per-hbus state_lock mutex.
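
For completeness, here is a hedged sketch of the failure mode and the fix
(simplified from the quoted hunk; the message setup and the guarding of
hbus->survey_event are assumed/elided rather than copied from the driver):

static int hv_pci_query_relations(struct hv_device *hdev)
{
	struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
	struct pci_message message;
	struct completion comp;		/* lives on this stack frame */
	int ret;

	init_completion(&comp);
	hbus->survey_event = &comp;

	memset(&message, 0, sizeof(message));
	message.type = PCI_QUERY_BUS_RELATIONS;

	ret = vmbus_sendpacket(hdev->channel, &message, sizeof(message),
			       0, VM_PKT_DATA_INBAND, 0);
	if (!ret)
		ret = wait_for_response(hdev, &comp);

	/*
	 * Even when we're about to return -ENODEV, a PCI_BUS_RELATIONS*
	 * message may already have queued work that will call complete(&comp).
	 * Wait for that work here, while 'comp' is still a valid stack
	 * variable.
	 */
	flush_workqueue(hbus->wq);

	return ret;
}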