Message-ID: <cf094519-82e1-4f11-b670-dacf89da22ef@linux.dev>
Date: Mon, 5 Jan 2026 11:35:58 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: Vernon Yang <vernon2gm@...il.com>
Cc: lorenzo.stoakes@...cle.com, ziy@...dia.com, dev.jain@....com,
baohua@...nel.org, richard.weiyang@...il.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Vernon Yang <yanglincheng@...inos.cn>,
akpm@...ux-foundation.org, david@...nel.org
Subject: Re: [PATCH v3 5/6] mm: khugepaged: skip lazy-free folios at scanning
On 2026/1/5 11:12, Vernon Yang wrote:
> On Mon, Jan 5, 2026 at 10:51 AM Lance Yang <lance.yang@...ux.dev> wrote:
>>
>> On 2026/1/5 09:48, Vernon Yang wrote:
>>> On Sun, Jan 04, 2026 at 08:10:17PM +0800, Lance Yang wrote:
>>>>
>>>>
>>>> On 2026/1/4 13:41, Vernon Yang wrote:
>>>>> For example, create three tasks: hot1 -> cold -> hot2. After all three
>>>>> tasks are created, each allocates 128MB of memory. The hot1/hot2 tasks
>>>>> continuously access their 128MB of memory, while the cold task only
>>>>> accesses its memory briefly and then calls madvise(MADV_FREE). However,
>>>>> khugepaged still prioritizes scanning the cold task and only scans the
>>>>> hot2 task after completing the scan of the cold task.
>>>>>
>>>>> So if the user has explicitly informed us via MADV_FREE that this memory
>>>>> will be freed, it is appropriate for khugepaged to simply skip it,
>>>>> thereby avoiding unnecessary scan and collapse operations and reducing
>>>>> CPU waste.
>>>>>
>>>>> Here are the performance test results:
>>>>> (Throughput: higher is better; for all other metrics, lower is better)
>>>>>
>>>>> Testing on x86_64 machine:
>>>>>
>>>>> | task hot2 | without patch | with patch | delta |
>>>>> |---------------------|---------------|---------------|---------|
>>>>> | total accesses time | 3.14 sec | 2.93 sec | -6.69% |
>>>>> | cycles per access | 4.96 | 2.21 | -55.44% |
>>>>> | Throughput | 104.38 M/sec | 111.89 M/sec | +7.19% |
>>>>> | dTLB-load-misses | 284814532 | 69597236 | -75.56% |
>>>>>
>>>>> Testing on qemu-system-x86_64 -enable-kvm:
>>>>>
>>>>> | task hot2 | without patch | with patch | delta |
>>>>> |---------------------|---------------|---------------|---------|
>>>>> | total accesses time | 3.35 sec | 2.96 sec | -11.64% |
>>>>> | cycles per access | 7.29 | 2.07 | -71.60% |
>>>>> | Throughput | 97.67 M/sec | 110.77 M/sec | +13.41% |
>>>>> | dTLB-load-misses | 241600871 | 3216108 | -98.67% |
>>>>>
>>>>> Signed-off-by: Vernon Yang <yanglincheng@...inos.cn>
>>>>> ---
>>>>> include/trace/events/huge_memory.h | 1 +
>>>>> mm/khugepaged.c | 6 ++++++
>>>>> 2 files changed, 7 insertions(+)
>>>>>
>>>>> diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
>>>>> index 01225dd27ad5..e99d5f71f2a4 100644
>>>>> --- a/include/trace/events/huge_memory.h
>>>>> +++ b/include/trace/events/huge_memory.h
>>>>> @@ -25,6 +25,7 @@
>>>>>  	EM( SCAN_PAGE_LRU, "page_not_in_lru") \
>>>>>  	EM( SCAN_PAGE_LOCK, "page_locked") \
>>>>>  	EM( SCAN_PAGE_ANON, "page_not_anon") \
>>>>> +	EM( SCAN_PAGE_LAZYFREE, "page_lazyfree") \
>>>>>  	EM( SCAN_PAGE_COMPOUND, "page_compound") \
>>>>>  	EM( SCAN_ANY_PROCESS, "no_process_for_page") \
>>>>>  	EM( SCAN_VMA_NULL, "vma_null") \
>>>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>>>> index 30786c706c4a..1ca034a5f653 100644
>>>>> --- a/mm/khugepaged.c
>>>>> +++ b/mm/khugepaged.c
>>>>> @@ -45,6 +45,7 @@ enum scan_result {
>>>>>  	SCAN_PAGE_LRU,
>>>>>  	SCAN_PAGE_LOCK,
>>>>>  	SCAN_PAGE_ANON,
>>>>> +	SCAN_PAGE_LAZYFREE,
>>>>>  	SCAN_PAGE_COMPOUND,
>>>>>  	SCAN_ANY_PROCESS,
>>>>>  	SCAN_VMA_NULL,
>>>>> @@ -1337,6 +1338,11 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
>>>>>  		}
>>>>>  		folio = page_folio(page);
>>>>> +		if (folio_is_lazyfree(folio)) {
>>>>> +			result = SCAN_PAGE_LAZYFREE;
>>>>> +			goto out_unmap;
>>>>> +		}
>>>>
>>>> That's a bit tricky ... I don't think we need to handle MADV_FREE pages
>>>> differently :)
>>>>
>>>> MADV_FREE pages are likely cold memory, but what if there are just
>>>> a few MADV_FREE pages in a hot memory region? Skipping the entire
>>>> region would be unfortunate ...
>>>
>>> If a lazyfree folio becomes hot again (is written to), it will be set
>>> back to non-lazyfree in the memory reclaim path, so it is not skipped
>>> in khugepaged's next scan.
>>>
>>> shrink_folio_list()
>>> try_to_unmap()
>>> folio_set_swapbacked()
>>>
>>> If the lazyfree folios stay cold, continuing with the collapse would
>>> waste CPU and require a long wait (khugepaged_scan_sleep_millisecs).
>>> Additionally, collapsing them into a hugepage makes them non-lazyfree,
>>> preventing their rapid release in the memory reclaim path.
>>>
>>> So skipping lazy-free folios makes sense to us here.
>>>
>>> If I missed something, please let me know, thanks!
>>
>> I'm not saying lazyfree pages become hot :)
>>
>> If a PMD region has mostly hot pages but just a few lazyfree
>> pages, we would skip the entire region. Those hot pages won't
>> be collapsed.
>
> Same as above: the lazyfree folios will be set as non-lazyfree
> in the memory reclaim path, so they are not skipped in the next scan
> and the PMD region will collapse :)

Nope ...
Let me be more specific:

Assume we have a PMD region (512 pages):

- Pages 0-499: hot pages (frequently accessed, NOT lazyfree)
- Pages 500-511: lazyfree pages (MADV_FREE'd and clean)

This patch skips the entire region when it hits page 500. So pages
0-499 can't be collapsed, even though they are hot.
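
To make that concrete, here is a minimal userspace sketch of such a mixed
region (illustrative only, not part of the patch or its test setup; the
2MB PMD, 4K page size and the access pattern are assumptions for the
example):

/*
 * Build one PMD-sized anonymous region: pages 0-499 stay hot,
 * pages 500-511 are touched once and then MADV_FREE'd.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PMD_SIZE (2UL << 20)	/* 2MB: one PMD on x86_64 */

int main(void)
{
	long psize = sysconf(_SC_PAGESIZE);	/* typically 4096 */
	size_t len = 2 * PMD_SIZE;
	char *raw = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Align to a PMD boundary so the region could back a single THP. */
	char *pmd = (char *)(((uintptr_t)raw + PMD_SIZE - 1) & ~(PMD_SIZE - 1));

	/* "Hot" part: keep touching pages 0-499. */
	for (int round = 0; round < 100; round++)
		for (int i = 0; i < 500; i++)
			pmd[i * psize] = (char)i;

	/* "Cold" part: pages 500-511 are briefly written, then MADV_FREE'd. */
	memset(pmd + 500 * psize, 0x5a, 12 * psize);
	if (madvise(pmd + 500 * psize, 12 * psize, MADV_FREE))
		perror("madvise(MADV_FREE)");

	/*
	 * With the proposed check, khugepaged bails out of this PMD at
	 * page 500 (SCAN_PAGE_LAZYFREE), so pages 0-499 are not collapsed
	 * either, even though they stay hot.
	 */
	pause();	/* keep the mapping alive while khugepaged runs */
	return 0;
}

(Watching AnonHugePages in /proc/self/smaps for this VMA shows whether the
region ever gets collapsed.)
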
I'm NOT saying lazyfree pages themselves become hot ;)

As I mentioned earlier, even if we skip these pages now, after they
are reclaimed they become pte_none. Then khugepaged will try to
collapse them anyway (based on khugepaged_max_ptes_none). So skipping
them just delays things; it does not really change the final result ...
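
And to put numbers on that last point, a small sketch (again mine, purely
illustrative) that reads the tunable and applies roughly the comparison
khugepaged uses; the 12 empty PTEs are carried over from the example above:

/* Check whether N pte_none entries per PMD would still allow a collapse. */
#include <stdio.h>

int main(void)
{
	const char *path =
		"/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none";
	long max_ptes_none = -1, none = 12;	/* reclaimed lazyfree pages */
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%ld", &max_ptes_none) != 1) {
		fprintf(stderr, "failed to parse %s\n", path);
		fclose(f);
		return 1;
	}
	fclose(f);

	printf("max_ptes_none = %ld\n", max_ptes_none);
	printf("%ld empty PTEs per PMD -> collapse %s\n", none,
	       none <= max_ptes_none ? "still allowed" : "refused");
	return 0;
}

With the default max_ptes_none of 511 (HPAGE_PMD_NR - 1), 12 empty PTEs are
nowhere near the limit, so the collapse goes ahead once the lazyfree pages
have been reclaimed.
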
>
>>>
>>>> Also, even if we skip these pages now, after they are reclaimed, they
>>>> become pte_none. Then khugepaged will try to collapse them anyway
>>>> (based on khugepaged_max_ptes_none). So skipping them just delays
>>>> things, it does not really change the final result ;)
>>>
>>> This patch just addresses the hot1 -> cold -> hot2 scenario.
>>>
>>> --
>>> Thanks,
>>> Vernon
>>