Message-ID: <c69f0b21-1e67-4e1d-b56b-a5c1294e8b45@redhat.com>
Date: Thu, 24 Jul 2025 19:40:38 +0200
From: David Hildenbrand <david@...hat.com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Dev Jain <dev.jain@....com>
Cc: akpm@...ux-foundation.org, ziy@...dia.com, baolin.wang@...ux.alibaba.com,
Liam.Howlett@...cle.com, npache@...hat.com, ryan.roberts@....com,
baohua@...nel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 2/3] khugepaged: Optimize
__collapse_huge_page_copy_succeeded() by PTE batching
On 24.07.25 19:32, Lorenzo Stoakes wrote:
>
> NIT: Please don't capitalise 'Optimize' here.
>
> I think Andrew fixed this for you actually in the repo though :P
>
> On Thu, Jul 24, 2025 at 10:53:00AM +0530, Dev Jain wrote:
>> Use PTE batching to process PTEs mapping the same large folio in a
>> single step. An improvement is expected from batching the refcount
>> and mapcount manipulation on the folios; on arm64, which supports
>> contig mappings, the number of TLB flushes is also reduced.
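
As a rough worked example (the numbers are mine, not from the patch):
with 4 KiB pages, a 64 KiB contpte-mapped folio on arm64 spans 16
PTEs, so one batched iteration of the loop below replaces 16 separate
refcount, mapcount and PTE-clear operations:

	/* Sketch of one batched iteration, simplified from the diff below. */
	nr_ptes = folio_pte_batch(src, _pte, pteval, max_nr_ptes);
	clear_ptes(vma->vm_mm, address, _pte, nr_ptes);      /* one batched clear   */
	folio_remove_rmap_ptes(src, src_page, nr_ptes, vma); /* one mapcount update */
	folio_put_refs(src, nr_ptes);                        /* one refcount update */
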
>>
>> Acked-by: David Hildenbrand <david@...hat.com>
>> Reviewed-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>> Signed-off-by: Dev Jain <dev.jain@....com>
>> ---
>> mm/khugepaged.c | 25 ++++++++++++++++++-------
>> 1 file changed, 18 insertions(+), 7 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index a55fb1dcd224..f23e943506bc 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -700,12 +700,15 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>> spinlock_t *ptl,
>> struct list_head *compound_pagelist)
>> {
>> + unsigned long end = address + HPAGE_PMD_SIZE;
>> struct folio *src, *tmp;
>> - pte_t *_pte;
>> pte_t pteval;
>> + pte_t *_pte;
>> + unsigned int nr_ptes;
>>
>> - for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
>> - _pte++, address += PAGE_SIZE) {
>> + for (_pte = pte; _pte < pte + HPAGE_PMD_NR; _pte += nr_ptes,
>> + address += nr_ptes * PAGE_SIZE) {
>> + nr_ptes = 1;
>> pteval = ptep_get(_pte);
>> if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
>> add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
>> @@ -722,18 +725,26 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>> struct page *src_page = pte_page(pteval);
>>
>> src = page_folio(src_page);
>> - if (!folio_test_large(src))
>> +
>> + if (folio_test_large(src)) {
>> + unsigned int max_nr_ptes = (end - address) >> PAGE_SHIFT;
>> +
>> + nr_ptes = folio_pte_batch(src, _pte, pteval, max_nr_ptes);
>> + } else {
>> release_pte_folio(src);
>> + }
>> +
>> /*
>> * ptl mostly unnecessary, but preempt has to
>> * be disabled to update the per-cpu stats
>> * inside folio_remove_rmap_pte().
>> */
>> spin_lock(ptl);
>> - ptep_clear(vma->vm_mm, address, _pte);
>> - folio_remove_rmap_pte(src, src_page, vma);
>> + clear_ptes(vma->vm_mm, address, _pte, nr_ptes);
>> + folio_remove_rmap_ptes(src, src_page, nr_ptes, vma);
>> spin_unlock(ptl);
>> - free_folio_and_swap_cache(src);
>> + free_swap_cache(src);
>> + folio_put_refs(src, nr_ptes);
>
> Hm, one thing here though: free_folio_and_swap_cache() does:
>
> free_swap_cache(folio);
> if (!is_huge_zero_folio(folio))
> folio_put(folio);
>
> Whereas here you unconditionally reduce the reference count. Might this
> cause issues with the shrinker version of the huge zero folio?
>
> Should this be:
>
> if (!is_huge_zero_folio(src))
> folio_put_refs(src, nr_ptes);
>
> Or do we otherwise avoid issues with this?
The huge zero folio is never PTE-mapped.
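
It is only ever installed as a PMD mapping (see
do_huge_pmd_anonymous_page()), and this loop only walks PTE-mapped
folios, so it cannot show up here and the unconditional
folio_put_refs() is fine. If we wanted to document that invariant, a
sketch (not something the patch needs) could be:

	/* Sketch: the huge zero folio is PMD-mapped, never PTE-mapped. */
	VM_WARN_ON_FOLIO(is_huge_zero_folio(src), src);
	free_swap_cache(src);
	folio_put_refs(src, nr_ptes);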
--
Cheers,
David / dhildenb