Message-ID: <9840db68-f352-4e4d-8f06-c153b6c9280c@arm.com>
Date: Thu, 19 Jun 2025 09:24:35 +0530
From: Dev Jain <dev.jain@....com>
To: David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com,
 lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, npache@...hat.com,
 ryan.roberts@....com, baohua@...nel.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH] khugepaged: Optimize
 __collapse_huge_page_copy_succeeded() for large folios by PTE batching


On 18/06/25 9:44 pm, David Hildenbrand wrote:
> On 18.06.25 12:26, Dev Jain wrote:
>> Use PTE batching to optimize __collapse_huge_page_copy_succeeded().
>>
>> On arm64, suppose khugepaged is scanning a pte-mapped 2MB THP for
>> collapse. Then, calling ptep_clear() for every pte will cause a TLB
>> flush for every contpte block. Instead, clear_full_ptes() does a
>> contpte_try_unfold_partial(), which will flush the TLB only for the
>> starting and ending contpte blocks (if any), i.e. the ones that
>> partially overlap with the range khugepaged is looking at.
>>
>> For all arches, there should be a benefit from batching the atomic
>> operations on mapcounts via folio_remove_rmap_ptes().
>>
>> No issues were observed with mm-selftests.
>>
>> Signed-off-by: Dev Jain <dev.jain@....com>
>> ---
>>   mm/khugepaged.c | 31 +++++++++++++++++++++++--------
>>   1 file changed, 23 insertions(+), 8 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index d45d08b521f6..649ccb2670f8 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -700,12 +700,14 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>>                           spinlock_t *ptl,
>>                           struct list_head *compound_pagelist)
>>   {
>> +    unsigned long end = address + HPAGE_PMD_SIZE;
>>       struct folio *src, *tmp;
>> -    pte_t *_pte;
>> +    pte_t *_pte = pte;
>>       pte_t pteval;
>> +    int nr_ptes;
>>
>> -    for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
>> -         _pte++, address += PAGE_SIZE) {
>> +    do {
>> +        nr_ptes = 1;
>>           pteval = ptep_get(_pte);
>>           if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
>>               add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
>> @@ -719,23 +721,36 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>>                   ksm_might_unmap_zero_page(vma->vm_mm, pteval);
>>               }
>>           } else {
>> +            const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>> +            int max_nr_ptes;
>> +            bool is_large;
>
> folio_test_large() should be cheap, no need for the temporary variable 
> (the compiler will likely optimize this either way).

Okay.

>
>> +
>>               struct page *src_page = pte_page(pteval);
>>
>>               src = page_folio(src_page);
>> -            if (!folio_test_large(src))
>> +            is_large = folio_test_large(src);
>> +            if (!is_large)
>>                   release_pte_folio(src);
>> +
>> +            max_nr_ptes = (end - address) >> PAGE_SHIFT;
>> +            if (is_large && max_nr_ptes != 1)
>> +                nr_ptes = folio_pte_batch(src, address, _pte,
>> +                          pteval, max_nr_ptes,
>> +                          flags, NULL, NULL, NULL);
>
> Starting to wonder if we want a simplified, non-inlined version of 
> folio_pte_batch() in mm/util.c (e.g., without the 3 NULL parameters), 
> renaming existing folio_pte_batch to __folio_pte_batch() and only 
> using it where required (performance like in fork/zap, or because the 
> other parameters are relevant).
>
> Let me see if I find time for a quick patch later. Have to look at 
> what other similar code needs.

Perhaps that version can also have default fpb_flags ignoring dirty and
soft-dirty, since that is what most code will do. So the wrapper can pass
which flags to remove.
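
Something like this, perhaps - just a rough, untested sketch, assuming the
existing helper is renamed to __folio_pte_batch() and keeps its current
parameter order:

/*
 * Rough sketch, untested: simplified wrapper in mm/util.c with the common
 * default flags baked in. Callers that need any_writable/any_young/
 * any_dirty or different flags keep using __folio_pte_batch() directly.
 */
int folio_pte_batch(struct folio *folio, unsigned long addr, pte_t *ptep,
		    pte_t pte, int max_nr)
{
	const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;

	return __folio_pte_batch(folio, addr, ptep, pte, max_nr, flags,
				 NULL, NULL, NULL);
}

The call site here would then reduce to
nr_ptes = folio_pte_batch(src, address, _pte, pteval, max_nr_ptes);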

>
>> +
>>               /*
>>                * ptl mostly unnecessary, but preempt has to
>>                * be disabled to update the per-cpu stats
>>                * inside folio_remove_rmap_pte().
>>                */
>>               spin_lock(ptl);
>
> Existing code: The PTL locking should just be moved outside of the loop.
>
>> -            ptep_clear(vma->vm_mm, address, _pte);
>> -            folio_remove_rmap_pte(src, src_page, vma);
>> +            clear_full_ptes(vma->vm_mm, address, _pte, nr_ptes, false);
>
> Starting to wonder if we want a shortcut
>
> #define clear_ptes(__mm, __addr, __pte, __nr_ptes) \
>     clear_full_ptes(__mm, __addr, __pte, __nr_ptes, false)

Thanks for the suggestion! I will definitely do this cleanup as part of
this series. The current name is very confusing: if someone does not know
about it, it becomes hard to find the batched version of ptep_clear(),
because it does not follow the convention we have right now -
ptep_set_wrprotect -> wrprotect_ptes and so on; the "full" comes out of
nowhere.
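
For instance, something along these lines (untested sketch; a static
inline instead of a macro, so it behaves like the other pte helpers):

/*
 * Untested sketch: batched counterpart of ptep_clear(), following the
 * ptep_set_wrprotect() -> wrprotect_ptes() naming convention, so the
 * "full" variant is only spelled out where it is actually intended.
 */
static inline void clear_ptes(struct mm_struct *mm, unsigned long addr,
			      pte_t *ptep, unsigned int nr)
{
	clear_full_ptes(mm, addr, ptep, nr, /* full = */ false);
}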

>
>> +            folio_remove_rmap_ptes(src, src_page, nr_ptes, vma);
>>               spin_unlock(ptl);
>> -            free_folio_and_swap_cache(src);
>> +            free_swap_cache(src);
>> +            folio_put_refs(src, nr_ptes);
>>           }
>> -    }
>> +    } while (_pte += nr_ptes, address += nr_ptes * PAGE_SIZE, address != end);
>>
>>       list_for_each_entry_safe(src, tmp, compound_pagelist, lru) {
>>           list_del(&src->lru);
>
> I think this should just work.
>
