Message-ID: <6508ccf7-5ce0-4274-9afb-a41bf192d81b@redhat.com>
Date: Wed, 2 Jul 2025 10:15:29 +0200
From: David Hildenbrand <david@...hat.com>
To: lizhe.67@...edance.com, alex.williamson@...hat.com, jgg@...pe.ca,
peterx@...hat.com
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Jason Gunthorpe <jgg@...dia.com>
Subject: Re: [PATCH 0/4] vfio/type1: optimize vfio_pin_pages_remote() and
vfio_unpin_pages_remote() for large folio
On 30.06.25 09:25, lizhe.67@...edance.com wrote:
> From: Li Zhe <lizhe.67@...edance.com>
>
> This patchset is a consolidation of the two previous patchsets[1][2].
>
> When vfio_pin_pages_remote() is called with a range of addresses that
> includes large folios, the function currently performs individual
> statistics counting operations for each page. This can lead to significant
> performance overhead, especially when dealing with large ranges of pages.
>
> The function vfio_unpin_pages_remote() has a similar issue: executing
> put_pfn() for each pfn incurs considerable overhead.
>
> This patchset optimizes the performance of the relevant functions by
> batching the less efficient operations mentioned before.
>
> The first patch optimizes the performance of the function
> vfio_pin_pages_remote(), while the remaining patches optimize the
> performance of the function vfio_unpin_pages_remote().
>
> The performance test results for completing 16G of VFIO MAP/UNMAP DMA,
> based on v6.16-rc4 and obtained through the unit test[3] with slight
> modifications[4], are as follows.
>
> Base(6.16-rc4):
> ./vfio-pci-mem-dma-map 0000:03:00.0 16
> ------- AVERAGE (MADV_HUGEPAGE) --------
> VFIO MAP DMA in 0.047 s (340.2 GB/s)
> VFIO UNMAP DMA in 0.135 s (118.6 GB/s)
> ------- AVERAGE (MAP_POPULATE) --------
> VFIO MAP DMA in 0.280 s (57.2 GB/s)
> VFIO UNMAP DMA in 0.312 s (51.3 GB/s)
> ------- AVERAGE (HUGETLBFS) --------
> VFIO MAP DMA in 0.052 s (310.5 GB/s)
> VFIO UNMAP DMA in 0.136 s (117.3 GB/s)
>
> With this patchset:
> ------- AVERAGE (MADV_HUGEPAGE) --------
> VFIO MAP DMA in 0.027 s (596.4 GB/s)
> VFIO UNMAP DMA in 0.045 s (357.6 GB/s)
> ------- AVERAGE (MAP_POPULATE) --------
> VFIO MAP DMA in 0.288 s (55.5 GB/s)
> VFIO UNMAP DMA in 0.288 s (55.6 GB/s)
> ------- AVERAGE (HUGETLBFS) --------
> VFIO MAP DMA in 0.031 s (508.3 GB/s)
> VFIO UNMAP DMA in 0.045 s (352.9 GB/s)
>
> For large folios, we achieve over a 40% performance improvement for VFIO
> MAP DMA and over a 66% performance improvement for VFIO UNMAP DMA. For
> small folios, the performance test results show little difference compared
> with the performance before optimization.
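To illustrate the batching described in the quoted cover letter, here is a
minimal, hedged sketch of the pin-side idea. The helper name
contig_pages_sketch() and its exact shape are assumptions for illustration,
not code from the patchset; it only counts how many leading entries of the
pinned page array are consecutive pages of the same folio, so that a caller
could do the statistics accounting once per run instead of once per page:

#include <linux/mm.h>

/*
 * Sketch only: given the page array filled by pin_user_pages_remote(),
 * return how many leading pages are physically consecutive within the
 * same folio. The caller can then account all of them in one step.
 */
static unsigned long contig_pages_sketch(struct page **pages,
					 unsigned long npages)
{
	struct folio *folio = page_folio(pages[0]);
	unsigned long idx = folio_page_idx(folio, pages[0]);
	unsigned long max = min(npages, folio_nr_pages(folio) - idx);
	unsigned long i;

	for (i = 1; i < max; i++) {
		/* Stop at the first entry that is not the next tail page. */
		if (pages[i] != folio_page(folio, idx + i))
			break;
	}

	return i;
}

A pin path built on this would advance through the array run by run,
updating the pinned-page accounting once per run; the existing per-page
behaviour falls out naturally when every folio is a single page.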
Jason mentioned in reply to the other series that, ideally, vfio
shouldn't be messing with folios at all.

While we now do that on the unpin side, we still do it on the pin side.

Which makes me wonder if we can avoid folios in patch #1 in
contig_pages(), and simply collect pages that correspond to consecutive
PFNs.

What was the reason again that contig_pages() would not exceed a single
folio?
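For what it's worth, a folio-free variant along those lines might look like
the sketch below. The helper name contig_pfns_sketch() is hypothetical and
the code is only an illustration of that suggestion, not something taken
from either series; it detects a run of consecutive PFNs directly from the
pinned page array:

#include <linux/mm.h>

/*
 * Illustration only: count how many leading entries of the pinned page
 * array have consecutive PFNs, without looking at folios at all.
 */
static unsigned long contig_pfns_sketch(struct page **pages,
					unsigned long npages)
{
	unsigned long first_pfn = page_to_pfn(pages[0]);
	unsigned long i;

	for (i = 1; i < npages; i++) {
		/* The run ends at the first non-consecutive PFN. */
		if (page_to_pfn(pages[i]) != first_pfn + i)
			break;
	}

	return i;
}

The open question above is whether such a run may safely span more than one
folio.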
--
Cheers,
David / dhildenb