Message-ID: <970e6480-5ba0-4500-85a6-f7ec6db2f005@redhat.com>
Date: Wed, 18 Jun 2025 19:29:42 +0200
From: David Hildenbrand <david@...hat.com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Dev Jain <dev.jain@....com>
Cc: akpm@...ux-foundation.org, ziy@...dia.com, baolin.wang@...ux.alibaba.com,
Liam.Howlett@...cle.com, npache@...hat.com, ryan.roberts@....com,
baohua@...nel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] khugepaged: Optimize
__collapse_huge_page_copy_succeeded() for large folios by PTE batching
On 18.06.25 19:26, Lorenzo Stoakes wrote:
> On Wed, Jun 18, 2025 at 03:56:07PM +0530, Dev Jain wrote:
>> Use PTE batching to optimize __collapse_huge_page_copy_succeeded().
>>
>> On arm64, suppose khugepaged is scanning a pte-mapped 2MB THP for collapse.
>> Then, calling ptep_clear() for every pte will cause a TLB flush for every
>> contpte block. Instead, clear_full_ptes() does a
>> contpte_try_unfold_partial(), which will flush the TLB only for the
>> starting and ending contpte blocks (if any) that partially overlap the
>> range khugepaged is looking at.
>>
>> For all arches, there should be a benefit from batching the atomic
>> operations on mapcounts via folio_remove_rmap_ptes().
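
In sketch form, the batched path amounts to something like the following
(a simplified sketch, not the verbatim hunk quoted below; locking and the
small-folio path are elided, and folio_pte_batch() is assumed to be the
current nine-argument mm-internal helper):

        max_nr_ptes = (end - address) >> PAGE_SHIFT;
        if (is_large && max_nr_ptes != 1)
                nr_ptes = folio_pte_batch(src, address, _pte, pteval,
                                          max_nr_ptes, flags, NULL, NULL, NULL);

        /* One batched clear instead of nr_ptes individual ptep_clear() calls. */
        clear_full_ptes(vma->vm_mm, address, _pte, nr_ptes, /* full = */ false);

        /* One rmap/mapcount update covering all nr_ptes mappings of src. */
        folio_remove_rmap_ptes(src, src_page, nr_ptes, vma);
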
>>
>> No issues were observed with mm-selftests.
>>
>> Signed-off-by: Dev Jain <dev.jain@....com>
>> ---
>> mm/khugepaged.c | 31 +++++++++++++++++++++++--------
>> 1 file changed, 23 insertions(+), 8 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index d45d08b521f6..649ccb2670f8 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -700,12 +700,14 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>> spinlock_t *ptl,
>> struct list_head *compound_pagelist)
>> {
>> + unsigned long end = address + HPAGE_PMD_SIZE;
>
> I assume we always enter here with aligned address...
>
>> struct folio *src, *tmp;
>> - pte_t *_pte;
>> + pte_t *_pte = pte;
>> pte_t pteval;
>> + int nr_ptes;
>>
>> - for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
>> - _pte++, address += PAGE_SIZE) {
>> + do {
>> + nr_ptes = 1;
>> pteval = ptep_get(_pte);
>> if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
>> add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
>> @@ -719,23 +721,36 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>> ksm_might_unmap_zero_page(vma->vm_mm, pteval);
>> }
>> } else {
>
> Existing code, but I hate this level of indentation.
>
> The code before was (barely) sort of OK-ish, but now it's really out of hand.
>
> On the other hand, I look at __collapse_huge_page_isolate() and want to cry, so I
> guess this is maybe something that needs addressing outside of this patch.
>
>
>> + const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>> + int max_nr_ptes;
>> + bool is_large;
>> +
>> struct page *src_page = pte_page(pteval);
>>
>> src = page_folio(src_page);
>> - if (!folio_test_large(src))
>> + is_large = folio_test_large(src);
>> + if (!is_large)
>> release_pte_folio(src);
>
> Hm, so in this case, release_pte_folio() does a folio_unlock().
>
> Where does a large folio get unlocked?
Through the "compound_pagelist" below. ... this code is so ugly.
"large_folio_list" ...
--
Cheers,
David / dhildenb