Message-ID: <3d34111f-8365-ab95-af11-aaf4825204be@oracle.com>
Date: Tue, 26 Jan 2021 10:08:25 -0800
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Joao Martins <joao.m.martins@...cle.com>, linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 2/2] mm/hugetlb: refactor subpage recording
On 1/25/21 12:57 PM, Joao Martins wrote:
> For a given hugepage backing a VA, there's a rather inefficient
> loop which is solely responsible for storing subpages in the passed
> pages/vmas array. For each subpage we check whether it's within
> range or size of @pages and keep incrementing @pfn_offset and a couple
> of other variables per subpage iteration.
>
> Simplify this logic and minimize the ops per iteration to just
> storing the output page/vma. Instead of incrementing @refs once per
> subpage, precalculate @refs up front and keep only a tight loop for
> storing the pinned subpages/vmas.
>
> Pinning performance consequently improves considerably, bringing us
> close to {pin,get}_user_pages_fast:
>
> - 16G with 1G huge page size
> gup_test -f /mnt/huge/file -m 16384 -r 10 -L -S -n 512 -w
>
> PIN_LONGTERM_BENCHMARK: ~11k us -> ~4400 us
> PIN_FAST_BENCHMARK: ~3700 us
>
> Signed-off-by: Joao Martins <joao.m.martins@...cle.com>
> ---
> mm/hugetlb.c | 49 ++++++++++++++++++++++++++++---------------------
> 1 file changed, 28 insertions(+), 21 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 016addc8e413..1f7a95bc7c87 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4789,6 +4789,20 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
>  	goto out;
>  }
>
> +static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma,
> +				 int refs, struct page **pages,
> +				 struct vm_area_struct **vmas)
> +{
> +	int nr;
> +
> +	for (nr = 0; nr < refs; nr++) {
> +		if (likely(pages))
> +			pages[nr] = page++;
> +		if (vmas)
> +			vmas[nr] = vma;
> +	}
> +}
> +
>  long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  			 struct page **pages, struct vm_area_struct **vmas,
>  			 unsigned long *position, unsigned long *nr_pages,
> @@ -4918,28 +4932,16 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  			continue;
>  		}
>
> -		refs = 0;
> +		refs = min3(pages_per_huge_page(h) - pfn_offset,
> +			    (vma->vm_end - vaddr) >> PAGE_SHIFT, remainder);
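
(Restating the clamp for anyone skimming the thread: @refs becomes the
largest batch that stays within all three limits, which the diff computes
as above; annotated, it reads:

	refs = min3(pages_per_huge_page(h) - pfn_offset,  /* subpages left in this hugepage */
		    (vma->vm_end - vaddr) >> PAGE_SHIFT,  /* pages left in the VMA */
		    remainder);                           /* pages the caller still wants */
)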
>
> -same_page:
> -		if (pages)
> -			pages[i] = mem_map_offset(page, pfn_offset);
> +		if (pages || vmas)
> +			record_subpages_vmas(mem_map_offset(page, pfn_offset),
The assumption made here is that mem_map is contiguous for the range of
pages in the hugetlb page. I do not believe you can make this assumption
for (gigantic) hugetlb pages which are > MAX_ORDER_NR_PAGES. For example,
/*
 * Gigantic pages are so large that we do not guarantee that page++ pointer
 * arithmetic will work across the entire page.  We need something more
 * specialized.
 */
static void __copy_gigantic_page(struct page *dst, struct page *src,
				int nr_pages)
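
Something like the following (completely untested, just a sketch using the
existing nth_page() helper) would avoid the page++ arithmetic by
recomputing the pointer from the pfn on every step:

static void record_subpages_vmas(struct page *page, struct vm_area_struct *vma,
				 int refs, struct page **pages,
				 struct vm_area_struct **vmas)
{
	int nr;

	for (nr = 0; nr < refs; nr++) {
		/*
		 * nth_page() goes via page_to_pfn()/pfn_to_page(), so it
		 * stays correct even when the range crosses a mem_map
		 * discontinuity in a gigantic page.
		 */
		if (likely(pages))
			pages[nr] = nth_page(page, nr);
		if (vmas)
			vmas[nr] = vma;
	}
}
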
--
Mike Kravetz
> +					     vma, refs,
> +					     likely(pages) ? pages + i : NULL,
> +					     vmas ? vmas + i : NULL);
>
> -		if (vmas)
> -			vmas[i] = vma;
> -
> -		vaddr += PAGE_SIZE;
> -		++pfn_offset;
> -		--remainder;
> -		++i;
> -		refs++;
> -		if (vaddr < vma->vm_end && remainder &&
> -		    pfn_offset < pages_per_huge_page(h)) {
> -			/*
> -			 * We use pfn_offset to avoid touching the pageframes
> -			 * of this compound page.
> -			 */
> -			goto same_page;
> -		} else if (pages) {
> +		if (pages) {
>  			/*
>  			 * try_grab_compound_head() should always succeed here,
>  			 * because: a) we hold the ptl lock, and b) we've just
> @@ -4950,7 +4952,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  			 * any way. So this page must be available at this
>  			 * point, unless the page refcount overflowed:
>  			 */
> -			if (WARN_ON_ONCE(!try_grab_compound_head(pages[i-1],
> +			if (WARN_ON_ONCE(!try_grab_compound_head(pages[i],
>  								 refs,
>  								 flags))) {
>  				spin_unlock(ptl);
> @@ -4959,6 +4961,11 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  				break;
>  			}
>  		}
> +
> +		vaddr += (refs << PAGE_SHIFT);
> +		remainder -= refs;
> +		i += refs;
> +
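
(For completeness: the batched advance above replaces the old per-page
++/-- bookkeeping; pfn_offset needs no increment here since it is
recomputed from vaddr at the top of the loop. Annotated:

	vaddr += (refs << PAGE_SHIFT);	/* advance the VA cursor by the whole batch */
	remainder -= refs;		/* fewer pages left to request */
	i += refs;			/* slots consumed in pages[]/vmas[] */
)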
>  		spin_unlock(ptl);
>  	}
>  	*nr_pages = remainder;
>