Date:   Thu, 8 Jun 2023 13:16:12 -0700
From:   Sidhartha Kumar <sidhartha.kumar@...cle.com>
To:     Alex Williamson <alex.williamson@...hat.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        khalid.aziz@...cle.com
Subject: Re: [PATCH] vfio/iommu_type1: acquire iommu lock in
 vfio_iommu_type1_release()

On 6/7/23 12:40 PM, Alex Williamson wrote:
> On Wed,  7 Jun 2023 12:07:52 -0700
> Sidhartha Kumar <sidhartha.kumar@...cle.com> wrote:
> 
>> From vfio_iommu_type1_release() there is a code path:
>>
>> vfio_iommu_unmap_unpin_all()
>>   vfio_remove_dma()
>>      vfio_unmap_unpin()
>>        unmap_unpin_slow()
>>          vfio_unpin_pages_remote()
>>            vfio_find_vpfn()
>>
>> This path is taken without acquiring the iommu lock so it could lead to
>> a race condition in the traversal of the pfn_list rb tree.
> 
> What's the competing thread for the race? vfio_remove_dma() tests:
> 
> 	WARN_ON(!RB_EMPTY_ROOT(&dma->pfn_list));
> 
> The fix is not unreasonable, but is this a theoretical fix upstream
> that's tickled by some downstream additions, or are we actually
> competing against page pinning by an mdev driver after the container is
> released?  Thanks,
> 

Hello,

In a stress test that started and stopped multiple VMs over several
days, we observed a memory leak. These guests have their memory pinned
via the pin_user_pages_remote() call in vaddr_get_pfns(). While
examining the vfio/iommu_type1 code we noticed this potential race
condition, but we have not yet root-caused the race as the source of
the memory leak.
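
For reference, below is a minimal user-space sketch of the failure
class we suspect. All names (vpfn, find_vpfn, unpin_thread) are
hypothetical stand-ins: a plain linked list plays the role of the
dma->pfn_list rb tree, and a pthread mutex plays the role of
iommu->lock. Only the locking pattern mirrors the patch; this is not
the kernel code itself.

/*
 * Sketch only: a reader walks a shared ordered structure (standing in
 * for the dma->pfn_list rb tree) while a writer tears it down, as
 * vfio_remove_dma() ultimately does. Both sides must hold the lock.
 * Build with: gcc -O2 -pthread sketch.c -o sketch
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct vpfn {				/* stand-in for struct vfio_pfn */
	unsigned long iova;
	struct vpfn *next;
};

static struct vpfn *pfn_list;		/* stand-in for dma->pfn_list */
static pthread_mutex_t iommu_lock = PTHREAD_MUTEX_INITIALIZER;

/* Analogue of vfio_find_vpfn(): caller must hold iommu_lock. */
static struct vpfn *find_vpfn(unsigned long iova)
{
	struct vpfn *v;

	for (v = pfn_list; v; v = v->next)
		if (v->iova == iova)
			return v;
	return NULL;
}

/* Mirrors the patched vfio_iommu_type1_release() path. */
static void *unpin_thread(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&iommu_lock);
	while (pfn_list) {
		struct vpfn *v = pfn_list;

		pfn_list = v->next;
		free(v);	/* an unlocked walker could be here */
	}
	pthread_mutex_unlock(&iommu_lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	for (unsigned long i = 0; i < 1024; i++) {
		struct vpfn *v = malloc(sizeof(*v));

		v->iova = i;
		v->next = pfn_list;
		pfn_list = v;
	}

	pthread_create(&t, NULL, unpin_thread, NULL);

	/* Concurrent lookup: safe only because it also takes the lock. */
	pthread_mutex_lock(&iommu_lock);
	printf("iova 42 %s\n", find_vpfn(42) ? "present" : "gone");
	pthread_mutex_unlock(&iommu_lock);

	pthread_join(t, NULL);
	return 0;
}

Dropping either lock acquisition lets the walker race with free(),
which is the same class of use-after-free/corruption the patch guards
against in the release path.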

Thanks,
Sidhartha Kumar
> Alex
> 
>> The lack of
>> the iommu lock in vfio_iommu_type1_release() was confirmed by adding a
>>
>> WARN_ON(!mutex_is_locked(&iommu->lock))
>>
>> which was reported in dmesg. Fix this potential race by acquiring and
>> releasing the iommu lock in vfio_iommu_type1_release().
>>
>> Suggested-by: Khalid Aziz <khalid.aziz@...cle.com>
>> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@...cle.com>
>> ---
>>   drivers/vfio/vfio_iommu_type1.c | 2 ++
>>   1 file changed, 2 insertions(+)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index 306e6f1d1c70e..7d2fea1b483dc 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -2601,7 +2601,9 @@ static void vfio_iommu_type1_release(void *iommu_data)
>>   		kfree(group);
>>   	}
>>   
>> +	mutex_lock(&iommu->lock);
>>   	vfio_iommu_unmap_unpin_all(iommu);
>> +	mutex_unlock(&iommu->lock);
>>   
>>   	list_for_each_entry_safe(domain, domain_tmp,
>>   				 &iommu->domain_list, next) {
> 
