Message-ID: <5820b1e9-3c45-432c-84aa-638cf92fd240@linux.dev>
Date: Fri, 23 Jan 2026 17:09:40 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: Vernon Yang <vernon2gm@...il.com>
Cc: lorenzo.stoakes@...cle.com, ziy@...dia.com, dev.jain@....com,
baohua@...nel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
david@...nel.org, Vernon Yang <yanglincheng@...inos.cn>,
akpm@...ux-foundation.org
Subject: Re: [PATCH mm-new v5 4/5] mm: khugepaged: skip lazy-free folios
On 2026/1/23 16:22, Vernon Yang wrote:
> From: Vernon Yang <yanglincheng@...inos.cn>
>
> For example, create three tasks: hot1 -> cold -> hot2. After all three
> tasks are created, each allocates 128MB of memory. The hot1/hot2 tasks
> continuously access their 128MB of memory, while the cold task only
> accesses its memory briefly and then calls madvise(MADV_FREE). However,
> khugepaged still prioritizes scanning the cold task and only scans the
> hot2 task after completing the scan of the cold task.
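Just to make the scenario concrete for other readers, the cold task is
essentially doing something like the sketch below. This is a minimal
userspace illustration, not the actual test harness; the 128MB size and
the names are only placeholders.

#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SZ (128UL << 20)	/* 128MB, matching the example above */

int main(void)
{
	/* Anonymous private mapping, standing in for the cold task's buffer. */
	char *buf = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	/* Touch the memory briefly... */
	memset(buf, 1, SZ);

	/* ...then tell the kernel the contents may be lazily freed. */
	madvise(buf, SZ, MADV_FREE);

	/* Keep the mapping alive so khugepaged still scans this mm. */
	pause();
	return 0;
}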
>
> And if we collapse a lazyfree page, its contents will never become none
> and the deferred shrinker cannot reclaim it.
>
> So if the user has explicitly informed us via MADV_FREE that this memory
> will be freed, it is appropriate for khugepaged to simply skip it,
> thereby avoiding unnecessary scan and collapse operations and reducing
> CPU waste.
>
> Here are the performance test results:
> (Throughput: higher is better; all other metrics: lower is better)
>
> Testing on x86_64 machine:
>
> | task hot2 | without patch | with patch | delta |
> |---------------------|---------------|---------------|---------|
> | total accesses time | 3.14 sec | 2.93 sec | -6.69% |
> | cycles per access | 4.96 | 2.21 | -55.44% |
> | Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
> | dTLB-load-misses | 284814532 | 69597236 | -75.56% |
>
> Testing on qemu-system-x86_64 -enable-kvm:
>
> | task hot2 | without patch | with patch | delta |
> |---------------------|---------------|---------------|---------|
> | total accesses time | 3.35 sec | 2.96 sec | -11.64% |
> | cycles per access | 7.29 | 2.07 | -71.60% |
> | Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
> | dTLB-load-misses | 241600871 | 3216108 | -98.67% |
>
> Signed-off-by: Vernon Yang <yanglincheng@...inos.cn>
> ---
> include/trace/events/huge_memory.h | 1 +
> mm/khugepaged.c | 11 +++++++++++
> 2 files changed, 12 insertions(+)
>
> diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
> index 384e29f6bef0..bcdc57eea270 100644
> --- a/include/trace/events/huge_memory.h
> +++ b/include/trace/events/huge_memory.h
> @@ -25,6 +25,7 @@
> EM( SCAN_PAGE_LRU, "page_not_in_lru") \
> EM( SCAN_PAGE_LOCK, "page_locked") \
> EM( SCAN_PAGE_ANON, "page_not_anon") \
> + EM( SCAN_PAGE_LAZYFREE, "page_lazyfree") \
> EM( SCAN_PAGE_COMPOUND, "page_compound") \
> EM( SCAN_ANY_PROCESS, "no_process_for_page") \
> EM( SCAN_VMA_NULL, "vma_null") \
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index de95029e3763..be1c09842ea2 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -46,6 +46,7 @@ enum scan_result {
> SCAN_PAGE_LRU,
> SCAN_PAGE_LOCK,
> SCAN_PAGE_ANON,
> + SCAN_PAGE_LAZYFREE,
> SCAN_PAGE_COMPOUND,
> SCAN_ANY_PROCESS,
> SCAN_VMA_NULL,
> @@ -583,6 +584,11 @@ static enum scan_result __collapse_huge_page_isolate(struct vm_area_struct *vma,
> folio = page_folio(page);
> VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
>
> + if (!pte_dirty(pteval) && folio_test_lazyfree(folio)) {
I'm wondering if we need "cc->is_khugepaged &&" as well here?
We should still allow users to force a collapse via the madvise_collapse()
path even if pages are marked lazyfree, IMHO.
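Something like the following (untested, just to illustrate the idea):

	/*
	 * Let only khugepaged itself skip lazy-free folios;
	 * madvise_collapse() callers can still force the collapse.
	 */
	if (cc->is_khugepaged &&
	    !pte_dirty(pteval) && folio_test_lazyfree(folio)) {
		result = SCAN_PAGE_LAZYFREE;
		goto out;
	}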
> + result = SCAN_PAGE_LAZYFREE;
> + goto out;
> + }
> +
> /* See hpage_collapse_scan_pmd(). */
> if (folio_maybe_mapped_shared(folio)) {
> ++shared;
> @@ -1330,6 +1336,11 @@ static enum scan_result hpage_collapse_scan_pmd(struct mm_struct *mm,
> }
> folio = page_folio(page);
>
> + if (!pte_dirty(pteval) && folio_test_lazyfree(folio)) {
Ditto.
> + result = SCAN_PAGE_LAZYFREE;
> + goto out_unmap;
> + }
> +
> if (!folio_test_anon(folio)) {
> result = SCAN_PAGE_ANON;
> goto out_unmap;