Message-ID: <20250616075251.89067-1-lizhe.67@bytedance.com>
Date: Mon, 16 Jun 2025 15:52:49 +0800
From: lizhe.67@...edance.com
To: alex.williamson@...hat.com
Cc: kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
david@...hat.com,
peterx@...hat.com,
lizhe.67@...edance.com
Subject: [PATCH v3 0/2] vfio/type1: optimize vfio_unpin_pages_remote() for large folio
From: Li Zhe <lizhe.67@...edance.com>
This patchset is based on patch 'vfio/type1: optimize
vfio_pin_pages_remote() for large folios'[1].
When vfio_unpin_pages_remote() is called with a range of addresses
that includes large folios, the function currently performs an
individual put_pfn() operation for each page. This can lead to
significant performance overhead, especially when dealing with large
ranges of pages. We can optimize this process by batching the
put_pfn() operations.
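For illustration, a minimal sketch of the batching idea follows. It is
not the code in this series: put_pfn(), IOMMU_WRITE and the reserved
pfn handling follow the existing vfio_iommu_type1.c code, while the
helper below and its name are assumptions made only for this example
(reserved/invalid pfns would still need the per-pfn path used by
put_pfn()).

static long put_pfns_batched(unsigned long pfn, long npages, int prot)
{
	struct page *page = pfn_to_page(pfn);
	struct folio *folio = page_folio(page);
	/* pages remaining in this folio starting at 'page' */
	long in_folio = folio_nr_pages(folio) - folio_page_idx(folio, page);
	long batch = min(npages, in_folio);

	/* drop one pin per page, but issued as a single range operation */
	unpin_user_page_range_dirty_lock(page, batch, prot & IOMMU_WRITE);
	return batch;	/* caller advances pfn and npages by this amount */
}

The caller would loop over the unpin range, stepping forward by the
returned batch size instead of by one page at a time.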
The first patch batches the vfio_find_vpfn() calls in
vfio_unpin_pages_remote(). Performance testing indicates that this
patch alone does not have a significant impact, primarily because the
vpfn rb tree is generally empty. Nevertheless, it can still offer
benefits in certain scenarios and lays the groundwork for the second
patch. The second patch builds on that batching to optimize
vfio_unpin_pages_remote() for large folio scenarios.
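The idea behind the first patch can be sketched as below. This is
illustrative only: the struct vfio_pfn and struct vfio_dma fields
match the existing vfio_iommu_type1.c code, but the helper name and
its exact form are assumptions for this example, not necessarily what
the series uses. Rather than calling vfio_find_vpfn() once per page,
a single rb-tree query asks whether any externally pinned vpfn falls
inside the range being unpinned.

static struct vfio_pfn *vfio_find_vpfn_in_range(struct vfio_dma *dma,
						dma_addr_t iova, long npages)
{
	struct rb_node *node = dma->pfn_list.rb_node;
	dma_addr_t end = iova + npages * PAGE_SIZE;

	while (node) {
		struct vfio_pfn *vpfn = rb_entry(node, struct vfio_pfn, node);

		if (end <= vpfn->iova)
			node = node->rb_left;
		else if (iova > vpfn->iova)
			node = node->rb_right;
		else
			return vpfn;	/* vpfn->iova is within [iova, end) */
	}
	return NULL;	/* common case: the tree is empty */
}

In the common (empty-tree) case the unpin path then needs only one
lookup per call instead of one per page.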
The performance test results for completing a 16G VFIO IOMMU DMA
unmapping, based on v6.15 and obtained with the unit test[2] (with
slight modifications[3]), are as follows.
Base(v6.15):
./vfio-pci-mem-dma-map 0000:03:00.0 16
------- AVERAGE (MADV_HUGEPAGE) --------
VFIO MAP DMA in 0.047 s (338.6 GB/s)
VFIO UNMAP DMA in 0.138 s (116.2 GB/s)
------- AVERAGE (MAP_POPULATE) --------
VFIO MAP DMA in 0.280 s (57.2 GB/s)
VFIO UNMAP DMA in 0.312 s (51.3 GB/s)
------- AVERAGE (HUGETLBFS) --------
VFIO MAP DMA in 0.052 s (308.3 GB/s)
VFIO UNMAP DMA in 0.139 s (115.1 GB/s)
Base(v6.15) + map patch[1] + first patch:
------- AVERAGE (MADV_HUGEPAGE) --------
VFIO MAP DMA in 0.027 s (596.1 GB/s)
VFIO UNMAP DMA in 0.138 s (115.8 GB/s)
------- AVERAGE (MAP_POPULATE) --------
VFIO MAP DMA in 0.292 s (54.8 GB/s)
VFIO UNMAP DMA in 0.310 s (51.6 GB/s)
------- AVERAGE (HUGETLBFS) --------
VFIO MAP DMA in 0.032 s (506.5 GB/s)
VFIO UNMAP DMA in 0.140 s (114.1 GB/s)
Base(v6.15) + map patch[1] + first + second patch:
------- AVERAGE (MADV_HUGEPAGE) --------
VFIO MAP DMA in 0.027 s (598.2 GB/s)
VFIO UNMAP DMA in 0.049 s (328.7 GB/s)
------- AVERAGE (MAP_POPULATE) --------
VFIO MAP DMA in 0.289 s (55.3 GB/s)
VFIO UNMAP DMA in 0.303 s (52.9 GB/s)
------- AVERAGE (HUGETLBFS) --------
VFIO MAP DMA in 0.032 s (506.8 GB/s)
VFIO UNMAP DMA in 0.049 s (326.7 GB/s)
The first patch appears to have negligible impact on VFIO UNMAP DMA
performance on its own.
With the second patch, the VFIO UNMAP DMA time for large folios is
reduced by approximately 64% (e.g. 0.138 s -> 0.049 s in the
MADV_HUGEPAGE case). For small folios, the performance test results
show no significant changes.
[1]: https://lore.kernel.org/all/20250529064947.38433-1-lizhe.67@bytedance.com/
[2]: https://github.com/awilliam/tests/blob/vfio-pci-mem-dma-map/vfio-pci-mem-dma-map.c
[3]: https://lore.kernel.org/all/20250610031013.98556-1-lizhe.67@bytedance.com/
Changelogs:
v2->v3:
- Split the original patch into two separate patches.
- Add several comments specific to large folio scenarios.
- Rename two variables.
- Remove the iova update within the loop in
  vfio_unpin_pages_remote().
- Update the performance test results.
v1->v2:
- Refactor the implementation of the optimized code.
v2: https://lore.kernel.org/all/20250610045753.6405-1-lizhe.67@bytedance.com/
v1: https://lore.kernel.org/all/20250605124923.21896-1-lizhe.67@bytedance.com/
Li Zhe (2):
vfio/type1: batch vfio_find_vpfn() in function
vfio_unpin_pages_remote()
vfio/type1: optimize vfio_unpin_pages_remote() for large folio
drivers/vfio/vfio_iommu_type1.c | 57 ++++++++++++++++++++++++++-------
1 file changed, 45 insertions(+), 12 deletions(-)
--
2.20.1