Message-ID: <adaffa11-c0a8-484e-9de3-e15eeddcf922@kernel.org>
Date: Fri, 19 Dec 2025 09:55:56 +0100
From: "David Hildenbrand (Red Hat)" <david@...nel.org>
To: Vernon Yang <vernon2gm@...il.com>
Cc: akpm@...ux-foundation.org, lorenzo.stoakes@...cle.com, ziy@...dia.com,
baohua@...nel.org, lance.yang@...ux.dev, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Vernon Yang <yanglincheng@...inos.cn>
Subject: Re: [PATCH 2/4] mm: khugepaged: remove mm when all memory has been
collapsed
On 12/19/25 09:35, Vernon Yang wrote:
> On Thu, Dec 18, 2025 at 10:29:18AM +0100, David Hildenbrand (Red Hat) wrote:
>> On 12/15/25 10:04, Vernon Yang wrote:
>>> The following data is traced by bpftrace on a desktop system. After
>>> the system has been left idle for 10 minutes upon booting, a lot of
>>> SCAN_PMD_MAPPED or SCAN_PMD_NONE are observed during a full scan by
>>> khugepaged.
>>>
>>> @scan_pmd_status[1]: 1      ## SCAN_SUCCEED
>>> @scan_pmd_status[4]: 158    ## SCAN_PMD_MAPPED
>>> @scan_pmd_status[3]: 174    ## SCAN_PMD_NONE
>>> total progress size: 701 MB
>>> Total time: 440 seconds     ## includes khugepaged_scan_sleep_millisecs
>>>
>>> The khugepaged_scan list saves every mm that supports collapsing into
>>> hugepages; as long as the task is not destroyed, khugepaged never
>>> removes it from the khugepaged_scan list. This leads to a situation
>>> where a task has already collapsed all of its memory regions into
>>> hugepages, yet khugepaged keeps scanning it, wasting CPU time to no
>>> effect. Because of khugepaged_scan_sleep_millisecs (default 10s),
>>> scanning a large number of such stale tasks delays the scan of tasks
>>> that actually have work to do.
>>>
>>> After applying this patch, when all memory is either SCAN_PMD_MAPPED or
>>> SCAN_PMD_NONE, the mm is automatically removed from khugepaged's scan
>>> list. If a page fault occurs or MADV_HUGEPAGE is called again, the mm
>>> is added back to khugepaged.
>>
>> I don't like that, as it assumes that memory within such a process would be
>> rather static, which is easily not the case (e.g., allocators just doing
>> MADV_DONTNEED to free memory).
>>
>> If most stuff is collapsed to PMDs already, can't we just skip over these
>> regions a bit faster?
>
> I had a flash of inspiration and came up with an idea.
>
> If these regions have already been collapsed into hugepages, rechecking
> them would be very fast. Since khugepaged_pages_to_scan can also
> represent the number of VMAs to skip, we can extend its semantics as
> follows:
>
> /*
>  * default scan 8*HPAGE_PMD_NR ptes, pmd_mapped, no_pte_table or vmas
>  * every 10 seconds.
>  */
> static unsigned int khugepaged_pages_to_scan __read_mostly;
>
> switch (*result) {
> case SCAN_NO_PTE_TABLE:
> case SCAN_PMD_MAPPED:
> case SCAN_PTE_MAPPED_HUGEPAGE:
> 		progress++; /* charge a single entry here */
> break;
> case SCAN_SUCCEED:
> ++khugepaged_pages_collapsed;
> fallthrough;
> default:
> progress += HPAGE_PMD_NR;
> }
>
> This way we can achieve our goal. David, do you like it?
I'd have to see the full patch, but IMHO we should rather focus on
"how many pte/pmd entries did we check" and not "how many PMD areas did
we check".
Maybe there is a history to this, but conceptually I think we wanted to
limit the work we do in one operation to something reasonable. Reading a
single PMD is obviously faster than 512 PTEs.
--
Cheers
David