Message-ID: <20191003221007.GA209602@google.com>
Date: Thu, 3 Oct 2019 17:10:07 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: CREGUT Pierre IMT/OLN <pierre.cregut@...nge.com>
Cc: linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org,
Donald Dutile <ddutile@...hat.com>,
Alexander Duyck <alexander.h.duyck@...el.com>,
Jakub Kicinski <jakub.kicinski@...ronome.com>
Subject: Re: [PATCH] PCI/IOV: update num_VFs earlier
[+cc Don, Alex, Jakub]
On Thu, Oct 03, 2019 at 11:04:45AM +0200, CREGUT Pierre IMT/OLN wrote:
> > On 02/10/2019 at 01:45, Bjorn Helgaas wrote:
> > On Fri, Apr 26, 2019 at 10:11:54AM +0200, CREGUT Pierre IMT/OLN wrote:
> > > I also initially thought that kobject_uevent generated the netlink event
> > > but this is not the case. This is generated by the specific driver in use.
> > > For the Intel i40e driver, this is the call to i40e_do_reset_safe in
> > > i40e_pci_sriov_configure that sends the event.
> > > It is followed by i40e_pci_sriov_enable, which calls i40e_alloc_vfs,
> > > which finally calls the generic pci_enable_sriov function.
> > I don't know anything about netlink. The script from the bugzilla
> > (https://bugzilla.kernel.org/show_bug.cgi?id=202991) looks like it
> > runs
> >
> > ip monitor dev enp9s0f2
> >
> > What are the actual netlink events you see? Are they related to a
> > device being removed?
>
> We have netlink events both when num_vfs goes from 0 to N and from N to 0.
> Indeed, you have to go back to 0 before going to M with M != N.
Right.
> On an Intel card, when one goes from 0 to N, the netlink event is
> sent "early": the value of num_vfs is still 0, so you get the
> impression that the number of VFs has not changed. As the meaning of
> those events is overloaded, you have to wait an arbitrary amount of
> time until the value settles (there will be no other event). There is
> no such problem when going from N to 0, because of implementation
> details, but it may be different for another brand.
I hadn't looked far enough. I think the "remove" netlink events are
probably from the i40e_do_reset_safe() path, which eventually calls
free_netdev() and put_device().
The pci_enable_sriov() path calls the driver's ->probe method, and I
suspect the "add" netlink events are emitted there.
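To make the ordering concrete, I think the driver's ->sriov_configure()
path is roughly shaped like this (just a sketch; the example_*() helpers
are made up, and only pci_enable_sriov(), pci_disable_sriov(), and the
->sriov_configure() hook are real interfaces):

  static int example_sriov_configure(struct pci_dev *pdev, int num_vfs)
  {
          if (num_vfs == 0) {
                  example_free_vfs(pdev);    /* ends in pci_disable_sriov() */
                  return 0;
          }

          /*
           * Resetting the device tears down and re-creates netdevs, so
           * the netlink "remove"/"add" events fire here ...
           */
          example_reset(pdev);

          /*
           * ... but sriov->num_VFs is only updated later, inside
           * pci_enable_sriov(), called from here.
           */
          return example_enable_vfs(pdev, num_vfs);
  }

So a script woken up by one of those events can read sriov_numvfs
before pci_enable_sriov() has stored the new value.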
> > When we change num_VFs, I think we have to disable any existing VFs
> > before enabling the new num_VFs, so if you trigger on a netlink
> > "remove" event, I wouldn't be surprised that reading sriov_numvfs
> > would give a zero until the new VFs are enabled.
> Yes, but we are speaking of the event sent when num_vfs is changed
> from 0 to N.
> > [...]
> > I thought this was a good idea, but
> >
> > - It does break the device_lock() encapsulation a little bit:
> > sriov_numvfs_store() uses device_lock(), which happens to be
> > implemented as "mutex_lock(&dev->mutex)", but we really shouldn't
> > rely on that implementation, and
> The use of device_lock was the cheapest solution. It is true that
> lock and trylock are exposed by device.h, but is_locked is not. To
> respect the abstraction, we would have to lock the device (or at
> least use trylock, but that means taking the lock whenever we access
> the value; in that case we may as well just make reading num_vfs
> blocking?).
>
> The other solution is to record whether num_vfs is fresh, but that
> means a new Boolean in the pci_sriov data structure.
>
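For comparison, a non-blocking reader along the lines you mention might
look like this (just a sketch of the trylock idea, not a proposal):

  static ssize_t sriov_numvfs_show(struct device *dev,
                                   struct device_attribute *attr,
                                   char *buf)
  {
          struct pci_dev *pdev = to_pci_dev(dev);
          u16 num_vfs;

          /* Fail the read instead of blocking while a write is in flight */
          if (!device_trylock(dev))
                  return -EAGAIN;
          num_vfs = pdev->sriov->num_VFs;
          device_unlock(dev);

          return sprintf(buf, "%u\n", num_vfs);
  }

But -EAGAIN from a sysfs read is surprising for userspace, so just
blocking in the reader seems friendlier.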
> > - The netlink events are being generated via the NIC driver, and I'm
> > a little hesitant about changing the PCI core to deal with timing
> > issues "over there".
>
> NIC drivers send netlink events when their state changes, but it is
> the core that changes the value of num_vfs. So I would think it is
> the core's responsibility to make sure the exposed value makes sense,
> and it would be better to ignore the details of the driver
> implementation.
Yes, I think you're right. And I like your previous suggestion of
just locking the device in the reader. I'm not enough of a sysfs
expert to know if there's a good reason to avoid a lock there. Does
the following look reasonable to you?
commit 0940fc95da45
Author: Pierre Crégut <pierre.cregut@...nge.com>
Date:   Wed Sep 11 09:27:36 2019 +0200

    PCI/IOV: Serialize sysfs sriov_numvfs reads vs writes

    When sriov_numvfs is being updated, drivers may notify about new
    devices before they are reflected in sriov->num_VFs, so concurrent
    sysfs reads previously returned stale values.

    Serialize the sysfs read vs the write so the read returns the
    correct num_VFs value.

    Link: https://bugzilla.kernel.org/show_bug.cgi?id=202991
    Link: https://lore.kernel.org/r/20190911072736.32091-1-pierre.cregut@orange.com
    Signed-off-by: Pierre Crégut <pierre.cregut@...nge.com>
    Signed-off-by: Bjorn Helgaas <bhelgaas@...gle.com>
diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
index b3f972e8cfed..e77562aabbae 100644
--- a/drivers/pci/iov.c
+++ b/drivers/pci/iov.c
@@ -254,8 +254,14 @@ static ssize_t sriov_numvfs_show(struct device *dev,
 				 char *buf)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
+	u16 num_vfs;
+
+	/* Serialize vs sriov_numvfs_store() so readers see valid num_VFs */
+	device_lock(&pdev->dev);
+	num_vfs = pdev->sriov->num_VFs;
+	device_unlock(&pdev->dev);
 
-	return sprintf(buf, "%u\n", pdev->sriov->num_VFs);
+	return sprintf(buf, "%u\n", num_vfs);
 }
 
 /*
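
For reference, the write side already takes the same lock, so the read
above serializes against the whole update, not just the final
assignment. Abbreviated, sriov_numvfs_store() is shaped roughly like
this (heavily trimmed; example_update_vfs() is a made-up stand-in for
the real body):

  static ssize_t sriov_numvfs_store(struct device *dev,
                                    struct device_attribute *attr,
                                    const char *buf, size_t count)
  {
          struct pci_dev *pdev = to_pci_dev(dev);
          u16 num_vfs;
          int ret;

          ret = kstrtou16(buf, 0, &num_vfs);
          if (ret < 0)
                  return ret;

          device_lock(&pdev->dev);        /* excludes the reader above */
          /*
           * Disable existing VFs and/or call the driver's
           * ->sriov_configure(); sriov->num_VFs is updated in here.
           */
          ret = example_update_vfs(pdev, num_vfs);
          device_unlock(&pdev->dev);

          return ret ? ret : count;
  }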