Message-ID: <40a2c8c9-819f-b30e-e151-2ea224961a57@linux.ibm.com>
Date: Wed, 18 Jan 2023 09:15:32 -0500
From: Matthew Rosato <mjrosato@...ux.ibm.com>
To: Alex Williamson <alex.williamson@...hat.com>
Cc: pbonzini@...hat.com, jgg@...dia.com, cohuck@...hat.com,
farman@...ux.ibm.com, pmorel@...ux.ibm.com,
borntraeger@...ux.ibm.com, frankja@...ux.ibm.com,
imbrenda@...ux.ibm.com, david@...hat.com, akrowiak@...ux.ibm.com,
jjherne@...ux.ibm.com, pasic@...ux.ibm.com,
zhenyuw@...ux.intel.com, zhi.a.wang@...el.com, seanjc@...gle.com,
linux-s390@...r.kernel.org, kvm@...r.kernel.org,
intel-gvt-dev@...ts.freedesktop.org,
intel-gfx@...ts.freedesktop.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] vfio: fix potential deadlock on vfio group lock
On 1/17/23 4:22 PM, Alex Williamson wrote:
> On Fri, 13 Jan 2023 19:03:51 -0500
> Matthew Rosato <mjrosato@...ux.ibm.com> wrote:
>
>> Currently it is possible that the final put of a KVM reference comes from
>> vfio during its device close operation. This occurs while the vfio group
>> lock is held; however, if the vfio device is still in the kvm device list,
>> then the following call chain could result in a deadlock:
>>
>> kvm_put_kvm
>> -> kvm_destroy_vm
>> -> kvm_destroy_devices
>> -> kvm_vfio_destroy
>> -> kvm_vfio_file_set_kvm
>> -> vfio_file_set_kvm
>> -> group->group_lock/group_rwsem
>>
>> Avoid this scenario by having vfio core code acquire a KVM reference
>> the first time a device is opened and hold that reference until right
>> after the group lock is released after the last device is closed.
>>
>> Fixes: 421cfe6596f6 ("vfio: remove VFIO_GROUP_NOTIFY_SET_KVM")
>> Reported-by: Alex Williamson <alex.williamson@...hat.com>
>> Signed-off-by: Matthew Rosato <mjrosato@...ux.ibm.com>
>> ---
>> Changes from v3:
>> * Can't check for open_count after the group lock has been dropped because
>> it would be possible for the count to change again once the group lock
>> is dropped (Jason)
>> Solve this by stashing a copy of the kvm and put_kvm while the group
>> lock is held, nullifying the device copies of these in device_close()
>> as soon as the open_count reaches 0, and then checking to see if the
>> device->kvm changed before dropping the group lock. If it changed
>> during close, we can drop the reference using the stashed kvm and put
>> function after dropping the group lock.
>>
>> Changes from v2:
>> * Re-arrange vfio_kvm_set_kvm_safe error path to still trigger
>> device_open with device->kvm=NULL (Alex)
>> * get device->dev_set->lock when checking device->open_count (Alex)
>> * but don't hold it over the kvm_put_kvm (Jason)
>> * get kvm_put symbol upfront and stash it in device until close (Jason)
>> * check CONFIG_HAVE_KVM to avoid build errors on architectures without
>> KVM support
>>
>> Changes from v1:
>> * Re-write using symbol get logic to get kvm ref during first device
>> open, release the ref during device fd close after group lock is
>> released
>> * Drop kvm get/put changes to drivers; now that vfio core holds a
>> kvm ref until sometime after the device_close op is called, it
>> should be fine for drivers to get and put their own references to it.
>> ---
>> drivers/vfio/group.c | 23 +++++++++++++--
>> drivers/vfio/vfio.h | 9 ++++++
>> drivers/vfio/vfio_main.c | 61 +++++++++++++++++++++++++++++++++++++---
>> include/linux/vfio.h | 2 +-
>> 4 files changed, 87 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
>> index bb24b2f0271e..b396c17d7390 100644
>> --- a/drivers/vfio/group.c
>> +++ b/drivers/vfio/group.c
>> @@ -165,9 +165,9 @@ static int vfio_device_group_open(struct vfio_device *device)
>> }
>>
>> /*
>> - * Here we pass the KVM pointer with the group under the lock. If the
>> - * device driver will use it, it must obtain a reference and release it
>> - * during close_device.
>> + * Here we pass the KVM pointer with the group under the lock. A
>> + * reference will be obtained the first time the device is opened and
>> + * will be held until the open_count reaches 0.
>> */
>> ret = vfio_device_open(device, device->group->iommufd,
>> device->group->kvm);
>> @@ -179,9 +179,26 @@ static int vfio_device_group_open(struct vfio_device *device)
>>
>> void vfio_device_group_close(struct vfio_device *device)
>> {
>> + void (*put_kvm)(struct kvm *kvm);
>> + struct kvm *kvm;
>> +
>> mutex_lock(&device->group->group_lock);
>> + kvm = device->kvm;
>> + put_kvm = device->put_kvm;
>> vfio_device_close(device, device->group->iommufd);
>> + if (kvm == device->kvm)
>> + kvm = NULL;
>
> Hmm, so we're using whether the device->kvm pointer gets cleared in
> last_close to detect whether we should put the kvm reference. That's a
> bit obscure. Our get and put are also asymmetric.
>
> Did we decide that we couldn't do this via a schedule_work() from the
> last_close function, i.e. implementing our own version of an async put?
> It seems like that potentially has a cleaner implementation, symmetric
> call points, handling all the storing and clearing of kvm related
> pointers within the get/put wrappers, passing only a vfio_device to the
> put wrapper, using the "vfio_device_" prefix for both. Potentially
> we'd just want an unconditional flush outside of lock here for
> deterministic release.
>
> What's the downside? Thanks,
>
I did mention something like this as a possibility when discussing v3. It's doable; the main issue with doing schedule_work() of an async put is that we can't actually use the device->kvm / device->put_kvm values during the scheduled work task, as they become unreliable once we drop the group lock -- e.g. we schedule_work() some put call while under the group lock, drop the group lock, and then another thread gets the group lock and does a new open_device() before that async put task fires; device->kvm and (less likely) put_kvm might have changed in between.
I think in that case we would need to stash the kvm and put_kvm values in some secondary structure to be processed off a queue by the schedule_work task (an example of what I mean is bio_dirty_list in block/bio.c). It's very unlikely that this queue would ever have more than one element in it.
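
For illustration only, here is a rough sketch of that bio_dirty_list-style idea (not part of the patch; the names vfio_kvm_put_entry, vfio_kvm_put_async, etc. are made up for this example). The assumption is that each deferred put gets its own list entry protected by a dedicated spinlock, so the work task never has to look at device->kvm / device->put_kvm after the group lock is dropped:

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct kvm;

/* Hypothetical: one entry per deferred kvm reference drop. */
struct vfio_kvm_put_entry {
	struct list_head node;
	struct kvm *kvm;
	void (*put_kvm)(struct kvm *kvm);
};

static LIST_HEAD(vfio_kvm_put_list);
static DEFINE_SPINLOCK(vfio_kvm_put_lock);

static void vfio_kvm_put_workfn(struct work_struct *work)
{
	struct vfio_kvm_put_entry *entry, *tmp;
	LIST_HEAD(local);
	unsigned long flags;

	/* Grab everything queued so far, then drop the references. */
	spin_lock_irqsave(&vfio_kvm_put_lock, flags);
	list_splice_init(&vfio_kvm_put_list, &local);
	spin_unlock_irqrestore(&vfio_kvm_put_lock, flags);

	list_for_each_entry_safe(entry, tmp, &local, node) {
		list_del(&entry->node);
		entry->put_kvm(entry->kvm);	/* may be the final put */
		kfree(entry);
	}
}

static DECLARE_WORK(vfio_kvm_put_work, vfio_kvm_put_workfn);

/* Called from last_close while the group lock is still held. */
static void vfio_kvm_put_async(struct kvm *kvm, void (*put_kvm)(struct kvm *))
{
	struct vfio_kvm_put_entry *entry;
	unsigned long flags;

	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
	if (!entry)
		return;		/* error handling elided for the sketch */

	entry->kvm = kvm;
	entry->put_kvm = put_kvm;

	spin_lock_irqsave(&vfio_kvm_put_lock, flags);
	list_add_tail(&entry->node, &vfio_kvm_put_list);
	spin_unlock_irqrestore(&vfio_kvm_put_lock, flags);

	schedule_work(&vfio_kvm_put_work);
}

If the deterministic release you mention is wanted, the caller could flush_work(&vfio_kvm_put_work) after dropping the group lock, along the lines of the unconditional flush outside the lock you suggested above.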