Message-ID: <e3a0a179-9246-4055-992e-3b9046e89748@arm.com>
Date: Thu, 19 Jun 2025 08:52:51 +0530
From: Dev Jain <dev.jain@....com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: akpm@...ux-foundation.org, david@...hat.com, ziy@...dia.com,
baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com, npache@...hat.com,
ryan.roberts@....com, baohua@...nel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] khugepaged: Optimize
__collapse_huge_page_copy_succeeded() for large folios by PTE batching
On 18/06/25 10:56 pm, Lorenzo Stoakes wrote:
> On Wed, Jun 18, 2025 at 03:56:07PM +0530, Dev Jain wrote:
>> Use PTE batching to optimize __collapse_huge_page_copy_succeeded().
>>
>> On arm64, suppose khugepaged is scanning a pte-mapped 2MB THP for collapse.
>> Then, calling ptep_clear() for every pte will cause a TLB flush for every
>> contpte block. Instead, clear_full_ptes() does a
>> contpte_try_unfold_partial(), which will flush the TLB only for the starting
>> and ending contpte blocks (if any), should they partially overlap with the
>> range khugepaged is looking at.
>>
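(Concretely: with 4K base pages on arm64, a contpte block is 16 ptes, i.e.
64K, so a fully contpte-mapped 2M range spans 512/16 = 32 blocks. Per-pte
ptep_clear() can end up unfolding and flushing each of those 32 blocks,
whereas one aligned clear_full_ptes() call flushes at most the two partially
overlapped boundary blocks -- and none at all in the fully aligned case here.)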
>> For all arches, there should be a benefit from batching the atomic mapcount
>> operations via folio_remove_rmap_ptes().
>>
>> No issues were observed with mm-selftests.
>>
>> Signed-off-by: Dev Jain <dev.jain@....com>
>> ---
>> mm/khugepaged.c | 31 +++++++++++++++++++++++--------
>> 1 file changed, 23 insertions(+), 8 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index d45d08b521f6..649ccb2670f8 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -700,12 +700,14 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>> spinlock_t *ptl,
>> struct list_head *compound_pagelist)
>> {
>> + unsigned long end = address + HPAGE_PMD_SIZE;
> I assume we always enter here with an aligned address...
Yes.
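khugepaged only ever collapses at PMD granularity, so the address we enter
with is always HPAGE_PMD_SIZE-aligned. If it helps review, a purely
illustrative assertion (not part of this patch) at the top of the function
would look like:

	/* Hypothetical sanity check: caller hands us a PMD-aligned address. */
	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));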
>
>> struct folio *src, *tmp;
>> - pte_t *_pte;
>> + pte_t *_pte = pte;
>> pte_t pteval;
>> + int nr_ptes;
>>
>> - for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
>> - _pte++, address += PAGE_SIZE) {
>> + do {
>> + nr_ptes = 1;
>> pteval = ptep_get(_pte);
>> if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
>> add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
>> @@ -719,23 +721,36 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>> ksm_might_unmap_zero_page(vma->vm_mm, pteval);
>> }
>> } else {
> Existing code, but I hate this level of indentation.
>
> The code before was (barely) sort of ok-ish, but now it's really out of hand.
>
> On the other hand, I look at __collapse_huge_page_isolate() and want to cry, so
> I guess this is maybe something that needs addressing outside of this patch.
Trust me, I have already cried a lot while doing the khugepaged mTHP stuff :)
>
>
>> + const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>> + int max_nr_ptes;
>> + bool is_large;
>> +
>> struct page *src_page = pte_page(pteval);
>>
>> src = page_folio(src_page);
>> - if (!folio_test_large(src))
>> + is_large = folio_test_large(src);
>> + if (!is_large)
>> release_pte_folio(src);
> Hm, in this case right, release_pte_folio() does a folio_unlock().
>
> Where does a large folio get unlocked?
>
> I mean this must have been existing code because I don't see where this
> happens previously either.
>
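Large folios are on compound_pagelist, and get unlocked in the
list_for_each_entry_safe() loop at the end of this function
(release_pte_folio() there does the folio_unlock()) -- existing behaviour,
unchanged by this patch.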
>> +
>> + max_nr_ptes = (end - address) >> PAGE_SHIFT;
>> + if (is_large && max_nr_ptes != 1)
> Is it really that harmful if max_nr_ptes == 1? Doesn't folio_pte_batch()
> figure it out?
Yup, it will figure that out; I was just following the pattern of
zap_present_ptes() and copy_present_ptes(). Will drop this.
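i.e. just (untested sketch):

	if (is_large)
		nr_ptes = folio_pte_batch(src, address, _pte, pteval,
					  max_nr_ptes, flags, NULL, NULL, NULL);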
>
>> + nr_ptes = folio_pte_batch(src, address, _pte,
>> + pteval, max_nr_ptes,
>> + flags, NULL, NULL, NULL);
>> +
> It'd be nice(r) if this was:
>
> if (folio_test_large(src))
> nr_ptes = folio_pte_batch(src, address, _pte,
> pteval, max_nr_ptes,
> flags, NULL, NULL, NULL);
> else
> release_pte_folio(src);
>
> But even that is horrid because of the asymmetry.
>
>> /*
>> * ptl mostly unnecessary, but preempt has to
>> * be disabled to update the per-cpu stats
>> * inside folio_remove_rmap_pte().
>> */
>> spin_lock(ptl);
>> - ptep_clear(vma->vm_mm, address, _pte);
>> - folio_remove_rmap_pte(src, src_page, vma);
>> + clear_full_ptes(vma->vm_mm, address, _pte, nr_ptes, false);
> It'd be nice to use 'Liam's convention' of sticking `/* full = */ false` on the
> end here so we know what the false refers to.
Sounds good, although in the other mail David mentioned a way to elide this,
so I will go with that.
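For reference, the annotated form suggested above would read:

	clear_full_ptes(vma->vm_mm, address, _pte, nr_ptes, /* full = */ false);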
>
>> + folio_remove_rmap_ptes(src, src_page, nr_ptes, vma);
> Kinda neat that folio_remove_rmap_pte() is just a define onto this with
> nr_ptes == 1 :)
>
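For reference, include/linux/rmap.h defines it along these lines (quoting
from memory):

	#define folio_remove_rmap_pte(folio, page, vma) \
		folio_remove_rmap_ptes(folio, page, 1, vma)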
>> spin_unlock(ptl);
>> - free_folio_and_swap_cache(src);
>> + free_swap_cache(src);
>> + folio_put_refs(src, nr_ptes);
>> }
>> - }
>> + } while (_pte += nr_ptes, address += nr_ptes * PAGE_SIZE, address != end);
>>
>> list_for_each_entry_safe(src, tmp, compound_pagelist, lru) {
>> list_del(&src->lru);
>> --
>> 2.30.2
>>
> I can't see much wrong with this though, just 'yuck' at the existing code
> really :)