Message-ID: <8022363e-fd50-4cdb-85ae-dc83865b5884@arm.com>
Date: Wed, 23 Jul 2025 09:34:37 +0530
From: Dev Jain <dev.jain@....com>
To: David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com,
 lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, npache@...hat.com,
 ryan.roberts@....com, baohua@...nel.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/3] khugepaged: Optimize __collapse_huge_page_copy_succeeded() by PTE batching


On 22/07/25 9:33 pm, David Hildenbrand wrote:
> On 22.07.25 17:05, Dev Jain wrote:
>> Use PTE batching to optimize __collapse_huge_page_copy_succeeded().
>>
>> On arm64, suppose khugepaged is scanning a pte-mapped 2MB THP for
>> collapse. Then, calling ptep_clear() for every pte will cause a TLB
>> flush for every contpte block. Instead, clear_ptes() does a
>> contpte_try_unfold_partial(), which flushes the TLB only for the
>> starting and ending contpte blocks (if any), should they partially
>> overlap with the range khugepaged is looking at.
>
> I suggest not talking so much about arm specifics.
>
> Simply say that batching reduces the number of TLB flushes, especially
> on architectures that support cont-pte optimizations.
Makes sense.
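For the record, the shape of the change (a sketch only; clear_ptes() is
introduced in patch 1/3 of this series, and its exact signature is
assumed here) is to go from one call per PTE:

    /* before: one call, and potentially one TLB flush, per PTE */
    ptep_clear(vma->vm_mm, address, _pte);

to one call per batch:

    /*
     * after: on arm64, only partially covered contpte blocks at the
     * edges of the batch need to be unfolded and flushed
     */
    clear_ptes(vma->vm_mm, address, _pte, nr_ptes);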
>
>>
>> For all arches, there should be a benefit from batching the atomic
>> operations on mapcounts in folio_remove_rmap_ptes(), and from saving
>> some function calls.
>
> I would rephrase that to "Independent of that, batching PTE unmapping
> has known performance benefits (i.e., fewer refcount and mapcount
> updates)".
Thanks.
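As a rough sketch of that point (not the exact hunk, which is elided
from the diff below), batching turns the per-page rmap and refcount
updates:

    /* nr_ptes atomic mapcount updates, one per page */
    for (i = 0; i < nr_ptes; i++)
        folio_remove_rmap_pte(src, src_page + i, vma);

into single batched ones:

    /* one mapcount update for the whole batch ... */
    folio_remove_rmap_ptes(src, src_page, nr_ptes, vma);
    /* ... and one refcount update instead of nr_ptes of them */
    folio_put_refs(src, nr_ptes);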
>
>>
>> Signed-off-by: Dev Jain <dev.jain@....com>
>> ---
>>   mm/khugepaged.c | 25 ++++++++++++++++++-------
>>   1 file changed, 18 insertions(+), 7 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index a55fb1dcd224..63517ef7eafb 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -700,12 +700,15 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>>                           spinlock_t *ptl,
>>                           struct list_head *compound_pagelist)
>>   {
>> +    unsigned long end = address + HPAGE_PMD_SIZE;
>>       struct folio *src, *tmp;
>> -    pte_t *_pte;
>>       pte_t pteval;
>> +    pte_t *_pte;
>> +    int nr_ptes;
>
> Nit: I guess we should switch to "unsigned int" here now for 
> consistency with folio_pte_batch().
Okay.
>
>>
>> -    for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
>> -         _pte++, address += PAGE_SIZE) {
>> +    for (_pte = pte; _pte < pte + HPAGE_PMD_NR; _pte += nr_ptes,
>> +         address += nr_ptes * PAGE_SIZE) {
>> +        nr_ptes = 1;
>>           pteval = ptep_get(_pte);
>>           if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
>>               add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
>> @@ -722,18 +725,26 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>>               struct page *src_page = pte_page(pteval);
>>
>>               src = page_folio(src_page);
>> -            if (!folio_test_large(src))
>> +
>> +            if (folio_test_large(src)) {
>> +                int max_nr_ptes = (end - address) >> PAGE_SHIFT;
>
> Ditto.
>
>> +
>> +                nr_ptes = folio_pte_batch(src, _pte, pteval, max_nr_ptes);
>> +            } else {
>>                   release_pte_folio(src);
>> +            }
>> +
>
> Acked-by: David Hildenbrand <david@...hat.com>
Thanks.
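For readers following along: folio_pte_batch() as used above takes the
folio, the current PTE pointer and value, and an upper bound, and
returns how many consecutive PTEs map consecutive pages of that folio.
A usage sketch:

    /* never batch past the end of the PMD-sized range */
    unsigned int max_nr_ptes = (end - address) >> PAGE_SHIFT;
    unsigned int nr_ptes = folio_pte_batch(src, _pte, pteval, max_nr_ptes);
    /* nr_ptes is in [1, max_nr_ptes]; all of them map pages of 'src' */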
