Message-ID: <2933056a-1f4d-49bd-ae62-5571a222c223@redhat.com>
Date: Mon, 16 Jun 2025 10:27:00 +0200
From: David Hildenbrand <david@...hat.com>
To: lizhe.67@...edance.com, alex.williamson@...hat.com
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org, peterx@...hat.com
Subject: Re: [PATCH v3 2/2] vfio/type1: optimize vfio_unpin_pages_remote() for
large folio
On 16.06.25 10:14, David Hildenbrand wrote:
> On 16.06.25 09:52, lizhe.67@...edance.com wrote:
>> From: Li Zhe <lizhe.67@...edance.com>
>>
>> When vfio_unpin_pages_remote() is called with a range of addresses that
>> includes large folios, the function currently performs individual
>> put_pfn() operations for each page. This can lead to significant
>> performance overhead, especially when dealing with large ranges of pages.
>>
>> This patch optimizes this process by batching the put_pfn() operations.
>>
>> The performance test results for completing the 16G VFIO IOMMU DMA
>> unmapping, based on v6.15 and obtained through the unit test[1] with
>> slight modifications[2], are as follows.
>>
>> Base(v6.15):
>> ./vfio-pci-mem-dma-map 0000:03:00.0 16
>> ------- AVERAGE (MADV_HUGEPAGE) --------
>> VFIO MAP DMA in 0.047 s (338.6 GB/s)
>> VFIO UNMAP DMA in 0.138 s (116.2 GB/s)
>> ------- AVERAGE (MAP_POPULATE) --------
>> VFIO MAP DMA in 0.280 s (57.2 GB/s)
>> VFIO UNMAP DMA in 0.312 s (51.3 GB/s)
>> ------- AVERAGE (HUGETLBFS) --------
>> VFIO MAP DMA in 0.052 s (308.3 GB/s)
>> VFIO UNMAP DMA in 0.139 s (115.1 GB/s)
>>
>> Map[3] + This patchset:
>> ------- AVERAGE (MADV_HUGEPAGE) --------
>> VFIO MAP DMA in 0.027 s (598.2 GB/s)
>> VFIO UNMAP DMA in 0.049 s (328.7 GB/s)
>> ------- AVERAGE (MAP_POPULATE) --------
>> VFIO MAP DMA in 0.289 s (55.3 GB/s)
>> VFIO UNMAP DMA in 0.303 s (52.9 GB/s)
>> ------- AVERAGE (HUGETLBFS) --------
>> VFIO MAP DMA in 0.032 s (506.8 GB/s)
>> VFIO UNMAP DMA in 0.049 s (326.7 GB/s)
>>
>> For large folios, we achieve an approximately 64% performance
>> improvement in VFIO UNMAP DMA (e.g. MADV_HUGEPAGE: 0.138 s -> 0.049 s,
>> i.e. 1 - 0.049/0.138 ~= 64% less unmap time). For small folios, the
>> performance test results show no significant change.
>>
>> [1]: https://github.com/awilliam/tests/blob/vfio-pci-mem-dma-map/vfio-pci-mem-dma-map.c
>> [2]: https://lore.kernel.org/all/20250610031013.98556-1-lizhe.67@bytedance.com/
>> [3]: https://lore.kernel.org/all/20250529064947.38433-1-lizhe.67@bytedance.com/
>>
>> Signed-off-by: Li Zhe <lizhe.67@...edance.com>
>> ---
>> drivers/vfio/vfio_iommu_type1.c | 55 +++++++++++++++++++++++++++------
>> 1 file changed, 46 insertions(+), 9 deletions(-)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index e952bf8bdfab..09ecc546ece8 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -469,17 +469,28 @@ static bool is_invalid_reserved_pfn(unsigned long pfn)
>>  	return true;
>>  }
>>  
>> -static int put_pfn(unsigned long pfn, int prot)
>> +static inline void _put_pfns(struct page *page, int npages, int prot)
>>  {
>> -	if (!is_invalid_reserved_pfn(pfn)) {
>> -		struct page *page = pfn_to_page(pfn);
>> +	unpin_user_page_range_dirty_lock(page, npages, prot & IOMMU_WRITE);
>> +}
>>  
>> -		unpin_user_pages_dirty_lock(&page, 1, prot & IOMMU_WRITE);
>> -		return 1;
>> +/*
>> + * The caller must ensure that these npages PFNs belong to the same folio.
>> + */
>> +static inline int put_pfns(unsigned long pfn, int npages, int prot)
>> +{
>> +	if (!is_invalid_reserved_pfn(pfn)) {
>> +		_put_pfns(pfn_to_page(pfn), npages, prot);
>> +		return npages;
>>  	}
>>  	return 0;
>>  }
>>  
>> +static inline int put_pfn(unsigned long pfn, int prot)
>> +{
>> +	return put_pfns(pfn, 1, prot);
>> +}
>> +
>>  #define VFIO_BATCH_MAX_CAPACITY (PAGE_SIZE / sizeof(struct page *))
>>  
>>  static void __vfio_batch_init(struct vfio_batch *batch, bool single)
>> @@ -806,11 +817,37 @@ static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
>>  				    bool do_accounting)
>>  {
>>  	long unlocked = 0, locked = vpfn_pages(dma, iova, npage);
>> -	long i;
>>  
>> -	for (i = 0; i < npage; i++)
>> -		if (put_pfn(pfn++, dma->prot))
>> -			unlocked++;
>> +	while (npage) {
>> +		long nr_pages = 1;
>> +
>> +		if (!is_invalid_reserved_pfn(pfn)) {
>> +			struct page *page = pfn_to_page(pfn);
>> +			struct folio *folio = page_folio(page);
>> +			long folio_pages_num = folio_nr_pages(folio);
>> +
>> +			/*
>> +			 * For a folio, it represents a physically
>> +			 * contiguous set of bytes, and all of its pages
>> +			 * share the same invalid/reserved state.
>> +			 *
>> +			 * Here, our PFNs are contiguous. Therefore, if we
>> +			 * detect that the current PFN belongs to a large
>> +			 * folio, we can batch the operations for the next
>> +			 * nr_pages PFNs.
>> +			 */
>> +			if (folio_pages_num > 1)
>> +				nr_pages = min_t(long, npage,
>> +						 folio_pages_num -
>> +						 folio_page_idx(folio, page));
>> +
>> +			_put_pfns(page, nr_pages, dma->prot);
>
>
> This is sneaky. You interpret the page pointer as an actual page array,
> assuming that it would give you the right values when advancing nr_pages
> in that array.
Just to add to this: unpin_user_page_range_dirty_lock() is not
universally safe in the hugetlb scenario I described.
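
For reference, the two unpin interfaces make different assumptions
(prototypes roughly as declared in include/linux/mm.h, shown here only
for illustration):

	/* Array variant: every entry is an independent page pointer,
	 * no contiguity is assumed between the entries. */
	void unpin_user_pages_dirty_lock(struct page **pages,
					 unsigned long npages,
					 bool make_dirty);

	/* Range variant: "page" is the start of a pfn-contiguous range
	 * and the remaining pages are derived by advancing from it, so
	 * the caller is responsible for that contiguity actually
	 * holding for all npages pages. */
	void unpin_user_page_range_dirty_lock(struct page *page,
					      unsigned long npages,
					      bool make_dirty);

Whether that assumption holds for every batched range is exactly what
the concern above is about.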
--
Cheers,
David / dhildenb