Message-ID: <20250620032344.13382-1-lizhe.67@bytedance.com>
Date: Fri, 20 Jun 2025 11:23:41 +0800
From: lizhe.67@...edance.com
To: alex.williamson@...hat.com,
jgg@...pe.ca,
david@...hat.com
Cc: peterx@...hat.com,
kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
lizhe.67@...edance.com
Subject: [PATCH v5 0/3] vfio/type1: optimize vfio_unpin_pages_remote() for large folio
From: Li Zhe <lizhe.67@...edance.com>

This patchset is based on the patch 'vfio/type1: optimize
vfio_pin_pages_remote() for large folios'[1].

When vfio_unpin_pages_remote() is called with a range of addresses
that includes large folios, the function currently performs an
individual put_pfn() operation for each page. This can lead to
significant performance overhead, especially when dealing with large
ranges of pages. We can optimize this process by batching the
put_pfn() operations.
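
For illustration, a minimal sketch of the before/after unpin paths
(simplified, not the actual diff; put_pfn(), dma->prot and the
unlocked accounting are as in drivers/vfio/vfio_iommu_type1.c):

        /* Current behaviour: one put_pfn() call per pfn in the range. */
        for (i = 0; i < npage; i++)
                unlocked += put_pfn(pfn + i, dma->prot);

        /*
         * Batched behaviour: when every pfn in the range is backed by
         * a normally pinned page, the whole contiguous run can be
         * released with a single call to the existing mm helper.
         */
        unpin_user_page_range_dirty_lock(pfn_to_page(pfn), npage,
                                         dma->prot & IOMMU_WRITE);
        unlocked += npage;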

The first patch batches the vfio_find_vpfn() calls in
vfio_unpin_pages_remote(). Performance testing indicates that this
patch has no significant impact on its own, primarily because the
vpfn rb tree is generally empty. Nevertheless, it can still offer
performance benefits in scenarios where the tree is populated, and it
lays the groundwork for the third patch. The second patch introduces
a new member, has_rsvd, in struct vfio_dma, which the third patch
uses. The third patch applies the batching method described above to
optimize vfio_unpin_pages_remote() for large folio scenarios.
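
How the pieces fit together, in a rough sketch (illustrative only;
has_rsvd and is_invalid_reserved_pfn() follow the driver/patch
naming, the surrounding code is simplified):

        struct vfio_dma {
                /* ... existing members ... */
                bool    has_rsvd;  /* reserved/invalid pfns in range? */
        };

        /* Pin path (patch 2): latch whether any pfn is reserved/invalid. */
        dma->has_rsvd |= is_invalid_reserved_pfn(pfn);

        /* Unpin path (patch 3): batching is only safe when none exist. */
        if (!dma->has_rsvd)
                unpin_user_page_range_dirty_lock(pfn_to_page(pfn), npage,
                                                 dma->prot & IOMMU_WRITE);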

The performance test results below, based on v6.15, measure the time
to complete a 16G VFIO IOMMU DMA mapping and unmapping, obtained
through the unit test[2] with slight modifications[3].

Base(v6.15):
./vfio-pci-mem-dma-map 0000:03:00.0 16
------- AVERAGE (MADV_HUGEPAGE) --------
VFIO MAP DMA in 0.047 s (338.6 GB/s)
VFIO UNMAP DMA in 0.138 s (116.2 GB/s)
------- AVERAGE (MAP_POPULATE) --------
VFIO MAP DMA in 0.280 s (57.2 GB/s)
VFIO UNMAP DMA in 0.312 s (51.3 GB/s)
------- AVERAGE (HUGETLBFS) --------
VFIO MAP DMA in 0.052 s (308.3 GB/s)
VFIO UNMAP DMA in 0.139 s (115.1 GB/s)

Map[1] + First patch:
------- AVERAGE (MADV_HUGEPAGE) --------
VFIO MAP DMA in 0.027 s (596.1 GB/s)
VFIO UNMAP DMA in 0.138 s (115.8 GB/s)
------- AVERAGE (MAP_POPULATE) --------
VFIO MAP DMA in 0.292 s (54.8 GB/s)
VFIO UNMAP DMA in 0.310 s (51.6 GB/s)
------- AVERAGE (HUGETLBFS) --------
VFIO MAP DMA in 0.032 s (506.5 GB/s)
VFIO UNMAP DMA in 0.140 s (114.1 GB/s)

Map[1] + This patchset:
------- AVERAGE (MADV_HUGEPAGE) --------
VFIO MAP DMA in 0.028 s (563.9 GB/s)
VFIO UNMAP DMA in 0.049 s (325.1 GB/s)
------- AVERAGE (MAP_POPULATE) --------
VFIO MAP DMA in 0.292 s (54.7 GB/s)
VFIO UNMAP DMA in 0.292 s (54.9 GB/s)
------- AVERAGE (HUGETLBFS) --------
VFIO MAP DMA in 0.033 s (491.3 GB/s)
VFIO UNMAP DMA in 0.049 s (323.9 GB/s)

The first patch on its own has negligible impact on VFIO UNMAP DMA
performance. With the second and third patches, VFIO UNMAP DMA time
for large folios drops by roughly 64% (e.g. 0.138 s -> 0.049 s, or
116.2 GB/s -> 325.1 GB/s, in the MADV_HUGEPAGE case). For small
folios, the results show no significant change.

[1]: https://lore.kernel.org/all/20250529064947.38433-1-lizhe.67@bytedance.com/
[2]: https://github.com/awilliam/tests/blob/vfio-pci-mem-dma-map/vfio-pci-mem-dma-map.c
[3]: https://lore.kernel.org/all/20250610031013.98556-1-lizhe.67@bytedance.com/

Changelogs:
v4->v5:
- Remove the unpin_user_folio_dirty_locked() interface introduced in
v4.
- Introduce a new member has_rsvd in struct vfio_dma, used to record
  whether the region represented by this vfio_dma contains any
  reserved or invalid pfns. If it does not, we can batch the put_pfn()
  operations by calling unpin_user_page_range_dirty_lock() directly.
- Update the performance test results.

v3->v4:
- Introduce a new interface unpin_user_folio_dirty_locked(). Its
purpose is to conditionally mark a folio as dirty and unpin it.
This interface will be called in the VFIO DMA unmap process.
- Revert the related changes to put_pfn().
- Update the performance test results.

v2->v3:
- Split the original patch into two separate patches.
- Add several comments specific to large folio scenarios.
- Rename two variables.
- The update to iova has been removed within the loop in
vfio_unpin_pages_remote().
- Update the performance test results.

v1->v2:
- Refactor the implementation of the optimized code.

v4: https://lore.kernel.org/all/20250617041821.85555-1-lizhe.67@bytedance.com/
v3: https://lore.kernel.org/all/20250616075251.89067-1-lizhe.67@bytedance.com/
v2: https://lore.kernel.org/all/20250610045753.6405-1-lizhe.67@bytedance.com/
v1: https://lore.kernel.org/all/20250605124923.21896-1-lizhe.67@bytedance.com/

Li Zhe (3):
vfio/type1: batch vfio_find_vpfn() in function
vfio_unpin_pages_remote()
vfio/type1: introduce a new member has_rsvd for struct vfio_dma
vfio/type1: optimize vfio_unpin_pages_remote() for large folio
drivers/vfio/vfio_iommu_type1.c | 31 ++++++++++++++++++++++---------
1 file changed, 22 insertions(+), 9 deletions(-)
--
2.20.1