Message-ID: <130e73dc54db174e4c7e4267aceee78c246986d6.camel@intel.com>
Date: Thu, 3 Oct 2019 22:37:46 +0000
From: "Duyck, Alexander H" <alexander.h.duyck@...el.com>
To: "helgaas@...nel.org" <helgaas@...nel.org>,
"pierre.cregut@...nge.com" <pierre.cregut@...nge.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"jakub.kicinski@...ronome.com" <jakub.kicinski@...ronome.com>,
"ddutile@...hat.com" <ddutile@...hat.com>
Subject: Re: [PATCH] PCI/IOV: update num_VFs earlier
On Thu, 2019-10-03 at 17:10 -0500, Bjorn Helgaas wrote:
> [+cc Don, Alex, Jakub]
>
> On Thu, Oct 03, 2019 at 11:04:45AM +0200, CREGUT Pierre IMT/OLN wrote:
> > On 02/10/2019 at 01:45, Bjorn Helgaas wrote:
> > > On Fri, Apr 26, 2019 at 10:11:54AM +0200, CREGUT Pierre IMT/OLN wrote:
> > > > I also initially thought that kobject_uevent generated the netlink event
> > > > but this is not the case. This is generated by the specific driver in use.
> > > > For the Intel i40e driver, this is the call to i40e_do_reset_safe in
> > > > i40e_pci_sriov_configure that sends the event.
> > > > It is followed by i40e_pci_sriov_enable that calls i40e_alloc_vfs that
> > > > finally calls the generic pci_enable_sriov function.
> > > I don't know anything about netlink. The script from the bugzilla
> > > (https://bugzilla.kernel.org/show_bug.cgi?id=202991) looks like it
> > > runs
> > >
> > > ip monitor dev enp9s0f2
> > >
> > > What are the actual netlink events you see? Are they related to a
> > > device being removed?
> >
> > We have netlink events both when num_vfs goes from 0 to N and from N to 0.
> > Indeed you have to go to 0 before going to M with M != N.
>
> Right.
It doesn't make sense to monitor netlink for SR-IOV changes. All you are
catching is the incidental effect of the reset causing the link to bounce.
For example, if you run "ip link set enp9s0f2 down" and then run your test,
you will never get any notifications when you alter num_vfs, because the
link is already down and therefore never bounces.
It isn't surprising that you would see no VFs enabled, as the act of
bringing down the interface to reconfigure it generates a netlink event.
At that point we haven't even enabled SR-IOV yet; we are just shutting
down the existing config before attempting to enable it.
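To make the ordering concrete, here is a rough sketch of the shape of a
NIC driver's sriov_configure() callback (the example_* names are
placeholders, not the actual i40e code); the point is simply that the
reset that bounces the link happens before pci_enable_sriov() updates
num_VFs:

#include <linux/pci.h>

/* Hypothetical helper: resets/re-partitions the device, which bounces
 * the netdev link and generates the netlink event "ip monitor" sees.
 */
static void example_do_reset_safe(struct pci_dev *pdev);

static int example_pci_sriov_configure(struct pci_dev *pdev, int num_vfs)
{
	int err;

	/* Step 1: reset so the device can be re-partitioned.  This is
	 * where the link bounce (and the netlink event) comes from,
	 * while sriov->num_VFs still holds the old value.
	 */
	example_do_reset_safe(pdev);

	if (num_vfs == 0) {
		pci_disable_sriov(pdev);
		return 0;
	}

	/* Step 2: only now does the PCI core create the VFs and update
	 * sriov->num_VFs.
	 */
	err = pci_enable_sriov(pdev, num_vfs);
	return err ? err : num_vfs;
}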
> > On an Intel card, when one goes from 0 to N, the netlink event is
> > sent "early". The value of num_vfs is still 0 and you get the
> > impression that the number of VFs has not changed. As the meaning of
> > those events is overloaded, you have to wait an arbitrary amount of
> > time until it settles (there will be no other event). There is no
> > such problem when it goes from N to 0 because of implementation
> > details but it may be different for another brand.
>
> I hadn't looked far enough. I think the "remove" netlink events are
> probably from the i40e_do_reset_safe() path, which eventually calls
> free_netdev() and put_device().
>
> The pci_enable_sriov() path calls the driver's ->probe method, and I
> suspect the "add" netlink events are emitted there.
So the issue as I see it is that this is a naive approach to monitoring
for VFs being added or removed. All the script in the bugzilla really
does is catch resets while the interface is up.
Ideally the driver wouldn't even have to do the reset, except that it has
to re-partition the device to split up resources.
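If the goal is really to watch VFs come and go, a more direct approach
than watching for link bounces would be to listen for the PCI "add" and
"remove" uevents generated when the VFs are created and torn down.
Untested userspace sketch using libudev (build with -ludev):

#include <libudev.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
	struct udev *udev = udev_new();
	struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");
	struct pollfd pfd;

	if (!udev || !mon)
		return 1;

	/* Only PCI devices are interesting here; the VFs show up as new
	 * PCI functions when pci_enable_sriov() runs.
	 */
	udev_monitor_filter_add_match_subsystem_devtype(mon, "pci", NULL);
	udev_monitor_enable_receiving(mon);

	pfd.fd = udev_monitor_get_fd(mon);
	pfd.events = POLLIN;

	for (;;) {
		struct udev_device *dev;

		if (poll(&pfd, 1, -1) <= 0)
			continue;
		dev = udev_monitor_receive_device(mon);
		if (!dev)
			continue;
		printf("%s %s\n", udev_device_get_action(dev),
		       udev_device_get_sysname(dev));
		udev_device_unref(dev);
	}
}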
> > > When we change num_VFs, I think we have to disable any existing VFs
> > > before enabling the new num_VFs, so if you trigger on a netlink
> > > "remove" event, I wouldn't be surprised that reading sriov_numvfs
> > > would give a zero until the new VFs are enabled.
> > Yes, but we are speaking of the event sent when num_vfs is changed
> > from 0 to N.
> > > [...]
> > > I thought this was a good idea, but
> > >
> > > - It does break the device_lock() encapsulation a little bit:
> > > sriov_numvfs_store() uses device_lock(), which happens to be
> > > implemented as "mutex_lock(&dev->mutex)", but we really shouldn't
> > > rely on that implementation, and
> > The use of device_lock was the cheapest solution. It is true that
> > lock and trylock are exposed by device.h, but is_locked is not. To
> > respect the abstraction, we would have to lock the device (or at
> > least use trylock, but that still means taking the lock before we
> > can read the value; in that case we might as well make reading
> > num_vfs blocking?).
> >
> > The other solution is to record whether num_vfs is up to date, but
> > that means a new Boolean in the pci_sriov data structure.
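For what it's worth, the trylock alternative mentioned above would look
roughly like this (just an illustration of the trade-off, not proposed
code); the reader never blocks, but it has to invent an error for the
contended case:

static ssize_t sriov_numvfs_show(struct device *dev,
				 struct device_attribute *attr,
				 char *buf)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	u16 num_vfs;

	/* Refuse to wait for a concurrent sriov_numvfs_store() */
	if (!device_trylock(dev))
		return -EAGAIN;
	num_vfs = pdev->sriov->num_VFs;
	device_unlock(dev);

	return sprintf(buf, "%u\n", num_vfs);
}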
> > > - The netlink events are being generated via the NIC driver, and I'm
> > > a little hesitant about changing the PCI core to deal with timing
> > > issues "over there".
> >
> > NIC drivers send netlink events when their state changes, but it is
> > the core that changes the value of num_vfs. So I would think it is
> > the core responsibility to make sure the exposed value makes sense
> > and it would be better to ignore the details of the driver
> > implementation.
>
> Yes, I think you're right. And I like your previous suggestion of
> just locking the device in the reader. I'm not enough of a sysfs
> expert to know if there's a good reason to avoid a lock there. Does
> the following look reasonable to you?
>
>
> commit 0940fc95da45
> Author: Pierre Crégut <pierre.cregut@...nge.com>
> Date: Wed Sep 11 09:27:36 2019 +0200
>
> PCI/IOV: Serialize sysfs sriov_numvfs reads vs writes
>
> When sriov_numvfs is being updated, drivers may notify about new devices
> before they are reflected in sriov->num_VFs, so concurrent sysfs reads
> previously returned stale values.
>
> Serialize the sysfs read vs the write so the read returns the correct
> num_VFs value.
>
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=202991
> Link: https://lore.kernel.org/r/20190911072736.32091-1-pierre.cregut@orange.com
> Signed-off-by: Pierre Crégut <pierre.cregut@...nge.com>
> Signed-off-by: Bjorn Helgaas <bhelgaas@...gle.com>
>
> diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
> index b3f972e8cfed..e77562aabbae 100644
> --- a/drivers/pci/iov.c
> +++ b/drivers/pci/iov.c
> @@ -254,8 +254,14 @@ static ssize_t sriov_numvfs_show(struct device *dev,
>  				 char *buf)
>  {
>  	struct pci_dev *pdev = to_pci_dev(dev);
> +	u16 num_vfs;
> +
> +	/* Serialize vs sriov_numvfs_store() so readers see valid num_VFs */
> +	device_lock(&pdev->dev);
> +	num_vfs = pdev->sriov->num_VFs;
> +	device_unlock(&pdev->dev);
> 
> -	return sprintf(buf, "%u\n", pdev->sriov->num_VFs);
> +	return sprintf(buf, "%u\n", num_vfs);
>  }
> 
>  /*
I think this would probably be a good way to go. Then if the device has
some sort of issue enabling SR-IOV we don't have an unknown state when
this is being read. It is either set or it is not, and we prevent reading
the state while it is being altered.
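For reference, the write side already holds the device lock across the
whole reconfiguration, which is why taking the same lock in the reader is
enough to close the window. Roughly (a simplified view of
sriov_numvfs_store(), with the validation and error paths trimmed):

static ssize_t sriov_numvfs_store(struct device *dev,
				  struct device_attribute *attr,
				  const char *buf, size_t count)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	u16 num_vfs;
	int ret;

	ret = kstrtou16(buf, 0, &num_vfs);
	if (ret < 0)
		return ret;

	device_lock(&pdev->dev);
	/* The driver callback (and with it pci_enable_sriov() updating
	 * sriov->num_VFs) runs entirely under the device lock, so a
	 * reader taking the same lock never sees a half-updated state.
	 */
	ret = pdev->driver->sriov_configure(pdev, num_vfs);
	device_unlock(&pdev->dev);

	return ret < 0 ? ret : count;
}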