Message-ID: <cc0b33d1-a2e5-4c9a-9b9a-4ef3d3bd9606@arm.com>
Date: Wed, 25 Jun 2025 16:49:47 +0530
From: Dev Jain <dev.jain@....com>
To: David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com,
lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, npache@...hat.com,
ryan.roberts@....com, baohua@...nel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] khugepaged: Optimize __collapse_huge_page_copy_succeeded() by PTE batching
On 25/06/25 4:44 pm, David Hildenbrand wrote:
> On 25.06.25 07:58, Dev Jain wrote:
>> Use PTE batching to optimize __collapse_huge_page_copy_succeeded().
>>
>> On arm64, suppose khugepaged is scanning a pte-mapped 2MB THP for
>> collapse. Calling ptep_clear() for every pte will then cause a TLB
>> flush for every contpte block. Instead, clear_full_ptes() does a
>> contpte_try_unfold_partial(), which flushes the TLB only for the
>> starting and ending contpte blocks (if any), when they partially
>> overlap with the range khugepaged is looking at.
>>
>> On all arches, there should be a benefit from batching the atomic
>> operations on mapcounts via folio_remove_rmap_ptes().
>>
>> No issues were observed with mm-selftests.
>>
>> Signed-off-by: Dev Jain <dev.jain@....com>
>> ---
>> mm/khugepaged.c | 27 +++++++++++++++++++++------
>> 1 file changed, 21 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index d45d08b521f6..3944b112d452 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -700,12 +700,15 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>>                                                  spinlock_t *ptl,
>>                                                  struct list_head *compound_pagelist)
>>  {
>> +        unsigned long end = address + HPAGE_PMD_SIZE;
>>          struct folio *src, *tmp;
>> -        pte_t *_pte;
>>          pte_t pteval;
>> +        pte_t *_pte;
>> +        int nr_ptes;
>>
>> -        for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
>> -             _pte++, address += PAGE_SIZE) {
>> +        for (_pte = pte; _pte < pte + HPAGE_PMD_NR; _pte += nr_ptes,
>> +                         address += nr_ptes * PAGE_SIZE) {
>> +                nr_ptes = 1;
>>                  pteval = ptep_get(_pte);
>>                  if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
>>                          add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
>> @@ -719,21 +722,33 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>>                                  ksm_might_unmap_zero_page(vma->vm_mm, pteval);
>>                          }
>>                  } else {
>> +                        const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>> +                        int max_nr_ptes;
>> +
>>                          struct page *src_page = pte_page(pteval);
>>
>>                          src = page_folio(src_page);
>>                          if (!folio_test_large(src))
>>                                  release_pte_folio(src);
>> +
>> +                        max_nr_ptes = (end - address) >> PAGE_SHIFT;
>> +                        if (folio_test_large(src))
>> +                                nr_ptes = folio_pte_batch(src, address, _pte,
>> +                                                          pteval, max_nr_ptes,
>> +                                                          flags, NULL, NULL, NULL);
>> +
>>                          /*
>>                           * ptl mostly unnecessary, but preempt has to
>>                           * be disabled to update the per-cpu stats
>>                           * inside folio_remove_rmap_pte().
>>                           */
>>                          spin_lock(ptl);
>> -                        ptep_clear(vma->vm_mm, address, _pte);
>> -                        folio_remove_rmap_pte(src, src_page, vma);
>> +                        clear_full_ptes(vma->vm_mm, address, _pte, nr_ptes,
>> +                                        /* full = */ false);
>
> Can you add this patch to your series if nobody objects, and use
> clear_ptes() instead?
Thanks! Stupid me thought that, because of the arm64-specific implementation
of these, it would be too much churn to do this. I am indeed lazy :)
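
(For concreteness, a sketch of what the call site in the hunk above could
then look like with the wrapper; the folio_remove_rmap_ptes() line is
assumed from the part of the hunk that was cut off in the quote:)

        spin_lock(ptl);
        /* Batched clear; we are not tearing down the whole MM, so no "full". */
        clear_ptes(vma->vm_mm, address, _pte, nr_ptes);
        /* Batched rmap update: one set of atomic mapcount ops per batch. */
        folio_remove_rmap_ptes(src, src_page, nr_ptes, vma);
        spin_unlock(ptl);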
>
> From 95e20ab0ff62bbbdcd89898c9d76fdc1ea961257 Mon Sep 17 00:00:00 2001
> From: David Hildenbrand <david@...hat.com>
> Date: Wed, 25 Jun 2025 12:55:20 +0200
> Subject: [PATCH] mm: add get_and_clear_ptes() and clear_ptes()
>
> Let's add variants to be used where "full" does not apply -- which will
> be the majority of cases in the future. "full" really only applies if
> we are about to tear down a full MM.
>
> Use get_and_clear_ptes() in existing code, clear_ptes() users will
> be added next.
>
> Should we make these inline functions instead and add separate docs?
> Probably not worth it for now.
>
> Signed-off-by: David Hildenbrand <david@...hat.com>
> ---
> include/linux/pgtable.h | 6 ++++++
> mm/mremap.c | 2 +-
> mm/rmap.c | 2 +-
> 3 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index cf1515c163e26..28679254b4f65 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -736,6 +736,9 @@ static inline pte_t get_and_clear_full_ptes(struct mm_struct *mm,
>  }
>  #endif
>
> +#define get_and_clear_ptes(_mm, _addr, _ptep, _nr) \
> +        get_and_clear_full_ptes(_mm, _addr, _ptep, _nr, 0)
> +
>  #ifndef clear_full_ptes
>  /**
>   * clear_full_ptes - Clear present PTEs that map consecutive pages of the same
> @@ -768,6 +771,9 @@ static inline void clear_full_ptes(struct mm_struct *mm, unsigned long addr,
>  }
>  #endif
>
> +#define clear_ptes(_mm, _addr, _ptep, _nr) \
> +        clear_full_ptes(_mm, _addr, _ptep, _nr, 0)
> +
>  /*
>   * If two threads concurrently fault at the same page, the thread that
>   * won the race updates the PTE and its local TLB/Cache. The other thread
> diff --git a/mm/mremap.c b/mm/mremap.c
> index b31740f77b840..92890f8367574 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -322,7 +322,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
>                                                   old_pte, max_nr_ptes);
>                          force_flush = true;
>                  }
> -                pte = get_and_clear_full_ptes(mm, old_addr, old_ptep, nr_ptes, 0);
> +                pte = get_and_clear_ptes(mm, old_addr, old_ptep, nr_ptes);
>                  pte = move_pte(pte, old_addr, new_addr);
>                  pte = move_soft_dirty_pte(pte);
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 3b74bb19c11dd..8200d705fe4ac 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -2031,7 +2031,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>                  flush_cache_range(vma, address, end_addr);
>
>                  /* Nuke the page table entry. */
> -                pteval = get_and_clear_full_ptes(mm, address, pvmw.pte, nr_pages, 0);
> +                pteval = get_and_clear_ptes(mm, address, pvmw.pte, nr_pages);
>                  /*
>                   * We clear the PTE but do not flush so potentially
>                   * a remote CPU could still be writing to the folio.
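
(Usage note, as a sketch rather than part of the patch: per the commit
message, after this change the "full" variants remain only on true
MM-teardown paths, and everything else switches to the wrappers:)

        /* About to tear down the whole MM (e.g. exit): "full" still applies. */
        pteval = get_and_clear_full_ptes(mm, addr, ptep, nr, /* full = */ 1);

        /* Any other unmap/remap path: the wrapper hides the constant 0. */
        pteval = get_and_clear_ptes(mm, addr, ptep, nr);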