Message-ID: <20250620032344.13382-4-lizhe.67@bytedance.com>
Date: Fri, 20 Jun 2025 11:23:44 +0800
From: lizhe.67@...edance.com
To: alex.williamson@...hat.com,
jgg@...pe.ca,
david@...hat.com
Cc: peterx@...hat.com,
kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
lizhe.67@...edance.com
Subject: [PATCH v5 3/3] vfio/type1: optimize vfio_unpin_pages_remote() for large folio
From: Li Zhe <lizhe.67@...edance.com>

When vfio_unpin_pages_remote() is called with a range of addresses that
includes large folios, the function currently performs an individual
put_pfn() operation for each page. This leads to significant performance
overhead, especially when dealing with large ranges of pages.

It would be very rare for reserved PFNs and non-reserved ones to be mixed
within the same range, so this patch uses the has_rsvd flag introduced in
the previous patch to decide whether the whole range can be released in
one batch. For ranges without reserved PFNs, the per-page put_pfn() loop
is replaced by a single call to unpin_user_page_range_dirty_lock(), which
handles large folios far more efficiently.
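
To illustrate why the batched path is cheaper, here is a rough sketch
(illustration only, not the real mm code; the helper name below is made
up). unpin_user_page_range_dirty_lock() walks the range folio by folio
and drops all pins of a folio with a single refcount update
(gup_put_folio() in mm/gup.c), so the number of atomic operations is
roughly one per folio rather than one per 4K page:

/*
 * Illustration only (hypothetical helper, kernel context, needs
 * <linux/mm.h>): count how many pin-release operations the batched
 * path needs for a physically contiguous pinned range with no
 * reserved PFNs (has_rsvd == false).  put_pfn() costs one operation
 * per 4K page; the batched path costs roughly one per folio.
 */
static unsigned long unpin_ops_batched_sketch(unsigned long start_pfn,
					      unsigned long npage)
{
	unsigned long done = 0, ops = 0;

	while (done < npage) {
		struct page *page = pfn_to_page(start_pfn + done);
		struct folio *folio = page_folio(page);
		/* pages of this folio that fall inside the range */
		unsigned long nr = min_t(unsigned long, npage - done,
					 folio_nr_pages(folio) -
					 folio_page_idx(folio, page));

		ops++;		/* one refcount update covers all 'nr' pages */
		done += nr;
	}

	return ops;	/* ~npage/512 for 2M folios, npage for 4K pages */
}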

The performance test results for completing 16G of VFIO IOMMU DMA
unmapping, measured on v6.15 with the unit test[1] (slightly
modified[2]), are as follows.

Base(v6.15):
./vfio-pci-mem-dma-map 0000:03:00.0 16
------- AVERAGE (MADV_HUGEPAGE) --------
VFIO MAP DMA in 0.047 s (338.6 GB/s)
VFIO UNMAP DMA in 0.138 s (116.2 GB/s)
------- AVERAGE (MAP_POPULATE) --------
VFIO MAP DMA in 0.280 s (57.2 GB/s)
VFIO UNMAP DMA in 0.312 s (51.3 GB/s)
------- AVERAGE (HUGETLBFS) --------
VFIO MAP DMA in 0.052 s (308.3 GB/s)
VFIO UNMAP DMA in 0.139 s (115.1 GB/s)

Map[3] + This patchset:
------- AVERAGE (MADV_HUGEPAGE) --------
VFIO MAP DMA in 0.028 s (563.9 GB/s)
VFIO UNMAP DMA in 0.049 s (325.1 GB/s)
------- AVERAGE (MAP_POPULATE) --------
VFIO MAP DMA in 0.292 s (54.7 GB/s)
VFIO UNMAP DMA in 0.292 s (54.9 GB/s)
------- AVERAGE (HUGETLBFS) --------
VFIO MAP DMA in 0.033 s (491.3 GB/s)
VFIO UNMAP DMA in 0.049 s (323.9 GB/s)

For large folios, this achieves an approximately 64% performance
improvement in VFIO UNMAP DMA (0.138 s -> 0.049 s for MADV_HUGEPAGE,
and similarly for HUGETLBFS). For small folios (the MAP_POPULATE case),
the results show no significant change.

[1]: https://github.com/awilliam/tests/blob/vfio-pci-mem-dma-map/vfio-pci-mem-dma-map.c
[2]: https://lore.kernel.org/all/20250610031013.98556-1-lizhe.67@bytedance.com/
[3]: https://lore.kernel.org/all/20250529064947.38433-1-lizhe.67@bytedance.com/

Suggested-by: Jason Gunthorpe <jgg@...pe.ca>
Signed-off-by: Li Zhe <lizhe.67@...edance.com>
---
 drivers/vfio/vfio_iommu_type1.c | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 8827e315e3d8..88a54b44df5b 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -806,17 +806,29 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
 	return pinned;
 }
 
+static inline void put_valid_unreserved_pfns(unsigned long start_pfn,
+					unsigned long npage, int prot)
+{
+	unpin_user_page_range_dirty_lock(pfn_to_page(start_pfn), npage,
+					prot & IOMMU_WRITE);
+}
+
 static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
 				    unsigned long pfn, unsigned long npage,
 				    bool do_accounting)
 {
 	long unlocked = 0, locked = vpfn_pages(dma, iova, npage);
-	long i;
 
-	for (i = 0; i < npage; i++)
-		if (put_pfn(pfn++, dma->prot))
-			unlocked++;
+	if (dma->has_rsvd) {
+		long i;
+		for (i = 0; i < npage; i++)
+			if (put_pfn(pfn++, dma->prot))
+				unlocked++;
+	} else {
+		put_valid_unreserved_pfns(pfn, npage, dma->prot);
+		unlocked = npage;
+	}
 
 	if (do_accounting)
 		vfio_lock_acct(dma, locked - unlocked, true);
--
2.20.1