Message-ID: <20250617092117.10772-1-lizhe.67@bytedance.com>
Date: Tue, 17 Jun 2025 17:21:17 +0800
From: lizhe.67@...edance.com
To: david@...hat.com
Cc: akpm@...ux-foundation.org,
	alex.williamson@...hat.com,
	kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	linux-mm@...ck.org,
	lizhe.67@...edance.com,
	peterx@...hat.com
Subject: Re: [PATCH v4 2/3] gup: introduce unpin_user_folio_dirty_locked()

On Tue, 17 Jun 2025 09:43:56 +0200, david@...hat.com wrote:
 
> On 17.06.25 06:18, lizhe.67@...edance.com wrote:
> > From: Li Zhe <lizhe.67@...edance.com>
> > 
> > When vfio_unpin_pages_remote() is called with a range of addresses that
> > includes large folios, the function currently performs individual
> > put_pfn() operations for each page. This can lead to significant
> > performance overheads, especially when dealing with large ranges of pages.
> > 
> > This patch optimizes the process by batching the put_pfn() operations.
> > 
> > The performance test results for completing a 16G VFIO IOMMU DMA
> > unmapping, based on v6.15 and obtained through the unit test[1] with
> > slight modifications[2], are as follows.
> > 
> > Base(v6.15):
> > ./vfio-pci-mem-dma-map 0000:03:00.0 16
> > ------- AVERAGE (MADV_HUGEPAGE) --------
> > VFIO MAP DMA in 0.047 s (338.6 GB/s)
> > VFIO UNMAP DMA in 0.138 s (116.2 GB/s)
> > ------- AVERAGE (MAP_POPULATE) --------
> > VFIO MAP DMA in 0.280 s (57.2 GB/s)
> > VFIO UNMAP DMA in 0.312 s (51.3 GB/s)
> > ------- AVERAGE (HUGETLBFS) --------
> > VFIO MAP DMA in 0.052 s (308.3 GB/s)
> > VFIO UNMAP DMA in 0.139 s (115.1 GB/s)
> > 
> > Map[3] + This patchset:
> > ------- AVERAGE (MADV_HUGEPAGE) --------
> > VFIO MAP DMA in 0.028 s (563.9 GB/s)
> > VFIO UNMAP DMA in 0.049 s (325.1 GB/s)
> > ------- AVERAGE (MAP_POPULATE) --------
> > VFIO MAP DMA in 0.294 s (54.4 GB/s)
> > VFIO UNMAP DMA in 0.296 s (54.1 GB/s)
> > ------- AVERAGE (HUGETLBFS) --------
> > VFIO MAP DMA in 0.033 s (485.1 GB/s)
> > VFIO UNMAP DMA in 0.049 s (324.4 GB/s)
> > 
> > For large folios, we achieve an approximately 64% performance
> > improvement in VFIO UNMAP DMA (e.g., the MADV_HUGEPAGE unmap time
> > drops from 0.138 s to 0.049 s). For small folios, the performance
> > test results show no significant change.
> > 
> > [1]: https://github.com/awilliam/tests/blob/vfio-pci-mem-dma-map/vfio-pci-mem-dma-map.c
> > [2]: https://lore.kernel.org/all/20250610031013.98556-1-lizhe.67@bytedance.com/
> > [3]: https://lore.kernel.org/all/20250529064947.38433-1-lizhe.67@bytedance.com/
> > 
> > Signed-off-by: Li Zhe <lizhe.67@...edance.com>
> > ---
> >   drivers/vfio/vfio_iommu_type1.c | 35 +++++++++++++++++++++++++++++----
> >   1 file changed, 31 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> > index e952bf8bdfab..159ba80082a8 100644
> > --- a/drivers/vfio/vfio_iommu_type1.c
> > +++ b/drivers/vfio/vfio_iommu_type1.c
> > @@ -806,11 +806,38 @@ static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
> >   				    bool do_accounting)
> >   {
> >   	long unlocked = 0, locked = vpfn_pages(dma, iova, npage);
> > -	long i;
> >   
> > -	for (i = 0; i < npage; i++)
> > -		if (put_pfn(pfn++, dma->prot))
> > -			unlocked++;
> > +	while (npage) {
> > +		long nr_pages = 1;
> > +
> > +		if (!is_invalid_reserved_pfn(pfn)) {
> > +			struct page *page = pfn_to_page(pfn);
> > +			struct folio *folio = page_folio(page);
> > +			long folio_pages_num = folio_nr_pages(folio);
> > +
> > +			/*
> > +			 * A folio represents a physically contiguous
> > +			 * range of memory, and all of its pages share
> > +			 * the same invalid/reserved state.
> > +			 *
> > +			 * Here, our PFNs are contiguous. Therefore, if we
> > +			 * detect that the current PFN belongs to a large
> > +			 * folio, we can batch the operations for the next
> > +			 * nr_pages PFNs.
> > +			 */
> > +			if (folio_pages_num > 1)
> > +				nr_pages = min_t(long, npage,
> > +					folio_pages_num -
> > +					folio_page_idx(folio, page));
> > +
> 
> (I know I can be a pain :) )

No, not at all! I really appreciate you taking the time to review my
patch.

> But the long comment indicates that this is confusing.
> 
> 
> That is essentially the logic in gup_folio_range_next().
> 
> What about factoring that out into a helper like
> 
> /*
>   * TODO, returned number includes the provided current page.
>   */
> unsigned long folio_remaining_pages(struct folio *folio,
> 	struct page *page, unsigned long max_pages)
> {
> 	if (!folio_test_large(folio))
> 		return 1;
> 	return min_t(unsigned long, max_pages,
> 		     folio_nr_pages(folio) - folio_page_idx(folio, page));
> }
> 
> 
> Then here you would do
> 
> if (!is_invalid_reserved_pfn(pfn)) {
> 	struct page *page = pfn_to_page(pfn);
> 	struct folio *folio = page_folio(page);
> 
> 	/* We can batch-process pages belonging to the same folio. */
> 	nr_pages = folio_remaining_pages(folio, page, npage);
> 
> 	unpin_user_folio_dirty_locked(folio, nr_pages,
> 				      dma->prot & IOMMU_WRITE);
> 	unlocked += nr_pages;
> }

Yes, this indeed makes the code much more comprehensible. Does the
implementation of the patch below look viable to you? On top of your
version, I have added a brief comment explaining why we can
batch-process pages belonging to the same folio, as suggested by
Alex[1].

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index e952bf8bdfab..d7653f4c10d5 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -801,16 +801,43 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
        return pinned;
 }
 
+/* Returned number includes the provided current page. */
+static inline unsigned long folio_remaining_pages(struct folio *folio,
+               struct page *page, unsigned long max_pages)
+{
+       if (!folio_test_large(folio))
+               return 1;
+       return min_t(unsigned long, max_pages,
+                    folio_nr_pages(folio) - folio_page_idx(folio, page));
+}
+
 static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
                                    unsigned long pfn, unsigned long npage,
                                    bool do_accounting)
 {
        long unlocked = 0, locked = vpfn_pages(dma, iova, npage);
-       long i;
 
-       for (i = 0; i < npage; i++)
-               if (put_pfn(pfn++, dma->prot))
-                       unlocked++;
+       while (npage) {
+               unsigned long nr_pages = 1;
+
+               if (!is_invalid_reserved_pfn(pfn)) {
+                       struct page *page = pfn_to_page(pfn);
+                       struct folio *folio = page_folio(page);
+
+                       /*
+                        * We can batch-process pages belonging to the same
+                        * folio because all pages within a folio share the
+                        * same invalid/reserved state.
+                        */
+                       nr_pages = folio_remaining_pages(folio, page, npage);
+                       unpin_user_folio_dirty_locked(folio, nr_pages,
+                                       dma->prot & IOMMU_WRITE);
+                       unlocked += nr_pages;
+               }
+
+               pfn += nr_pages;
+               npage -= nr_pages;
+       }
 
        if (do_accounting)
                vfio_lock_acct(dma, locked - unlocked, true);
---
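
For illustration, below is a minimal user-space mock of the batching
walk (not kernel code; the folio layout and the stubbed helper are
hypothetical stand-ins for folio_nr_pages()/folio_page_idx()), showing
how nr_pages advances across folio boundaries:

#include <stdio.h>

/* Hypothetical stand-in: each PFN maps to a folio and an index in it. */
struct mock_page {
	long folio_id;
	long idx_in_folio;
	long folio_pages;
};

/* Mirrors folio_remaining_pages(): pages left in this folio, capped. */
static long remaining_pages(const struct mock_page *p, long max_pages)
{
	long rem;

	if (p->folio_pages == 1)
		return 1;
	rem = p->folio_pages - p->idx_in_folio;
	return rem < max_pages ? rem : max_pages;
}

int main(void)
{
	/* A 4-page folio followed by two order-0 folios. */
	struct mock_page pages[] = {
		{ 0, 0, 4 }, { 0, 1, 4 }, { 0, 2, 4 }, { 0, 3, 4 },
		{ 1, 0, 1 }, { 2, 0, 1 },
	};
	long npage = 6, i = 0;

	while (npage) {
		long nr_pages = remaining_pages(&pages[i], npage);

		/* One batched unpin covers nr_pages contiguous PFNs. */
		printf("unpin folio %ld: %ld page(s)\n",
		       pages[i].folio_id, nr_pages);
		i += nr_pages;
		npage -= nr_pages;
	}
	return 0;
}

With the layout above this prints one line per folio (4, 1, and 1
pages), i.e. three unpin calls instead of six per-page put_pfn() calls.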

Thanks,
Zhe

[1]: https://lore.kernel.org/all/20250613113818.584bec0a.alex.williamson@redhat.com/
