Message-ID: <20250214191427.GC3696814@ziepe.ca>
Date: Fri, 14 Feb 2025 15:14:27 -0400
From: Jason Gunthorpe <jgg@...pe.ca>
To: Alex Williamson <alex.williamson@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org, peterx@...hat.com,
mitchell.augustin@...onical.com, clg@...hat.com,
akpm@...ux-foundation.org, linux-mm@...ck.org
Subject: Re: [PATCH 4/5] mm: Provide page mask in struct follow_pfnmap_args

On Wed, Feb 05, 2025 at 04:17:20PM -0700, Alex Williamson wrote:
> follow_pfnmap_start() walks the page table for a given address and
> fills out the struct follow_pfnmap_args in pfnmap_args_setup().
> The page mask of the page table level is already provided to this
> latter function for calculating the pfn. This page mask can also be
> useful for the caller to determine the extent of the contiguous
> mapping.
>
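Just to spell out the mask arithmetic for anyone following along: below is a
minimal userspace sketch of how a level's page mask gives the caller both the
pfn within a large mapping and the extent of the contiguous range. The names
and values (pgmask, base_pfn, addr, the 2MB size) are purely illustrative,
not the patch's field names or the kernel's constants.

#include <stdio.h>

int main(void)
{
	/* Illustrative 2MB pmd-level mapping; the mask covers the low 21 bits. */
	unsigned long long pgmask   = ~((1ULL << 21) - 1);
	unsigned long long addr     = 0x7f1234567000ULL; /* address walked        */
	unsigned long long base_pfn = 0x100000ULL;       /* pfn at mapping start  */
	unsigned int page_shift     = 12;                /* 4KB base pages        */

	/* pfn of the base page inside the large mapping: the pfn at the start
	 * of the mapping plus the offset of the address within it, in pages. */
	unsigned long long pfn = base_pfn + ((addr & ~pgmask) >> page_shift);

	/* With the mask exposed to the caller, it also knows the extent of
	 * the physically contiguous range around addr. */
	unsigned long long start = addr & pgmask;
	unsigned long long end   = (addr | ~pgmask) + 1;

	printf("pfn=0x%llx contiguous=[0x%llx, 0x%llx)\n", pfn, start, end);
	return 0;
}
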
> For example, vfio-pci now supports huge_fault for pfnmaps and is able
> to insert pud and pmd mappings. When we DMA map these pfnmaps, e.g.
> PCI MMIO BARs, we iterate follow_pfnmap_start() to get each pfn and
> test for a contiguous pfn range. Providing the mapping page mask allows
> us to skip ahead by the extent of the mapping level. Assuming a 1GB pud
> level and 4KB page size, iterations are reduced by a factor of 256K. In
> wall clock time, mapping a 32GB PCI BAR is reduced from ~1s to <1ms.
>
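The numbers above check out; a quick standalone sketch of the iteration
counts (sizes taken from the example in the commit message, nothing here is
vfio code):

#include <stdio.h>

int main(void)
{
	/* Sizes from the example in the commit message. */
	unsigned long long page_size = 4ULL << 10;   /* 4KB base page   */
	unsigned long long pud_size  = 1ULL << 30;   /* 1GB pud mapping */
	unsigned long long bar_size  = 32ULL << 30;  /* 32GB PCI BAR    */

	unsigned long long per_page = bar_size / page_size; /* 8388608 */
	unsigned long long per_pud  = bar_size / pud_size;  /* 32      */

	printf("4KB steps: %llu iterations\n", per_page);
	printf("1GB steps: %llu iterations\n", per_pud);
	printf("reduction: %llux (== 256K)\n", per_page / per_pud); /* 262144 */
	return 0;
}

Stepping by the size of the mapping level instead of PAGE_SIZE is where the
256K factor comes from.
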
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: linux-mm@...ck.org
> Signed-off-by: Alex Williamson <alex.williamson@...hat.com>
> ---
> include/linux/mm.h | 2 ++
> mm/memory.c | 1 +
> 2 files changed, 3 insertions(+)
Reviewed-by: Jason Gunthorpe <jgg@...dia.com>

Jason