Message-ID: <750a06dc-db3d-43c6-b234-95efb393a9df@arm.com>
Date: Sun, 14 Sep 2025 22:33:22 +0530
From: Dev Jain <dev.jain@....com>
To: Lance Yang <lance.yang@...ux.dev>, akpm@...ux-foundation.org,
david@...hat.com, lorenzo.stoakes@...cle.com
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com,
npache@...hat.com, ryan.roberts@....com, baohua@...nel.org,
ioworker0@...il.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH mm-new 3/3] mm/khugepaged: abort collapse scan on guard
PTEs
On 14/09/25 8:05 pm, Lance Yang wrote:
> From: Lance Yang <lance.yang@...ux.dev>
>
> Guard PTE markers are installed via MADV_GUARD_INSTALL to create
> lightweight guard regions.
>
> Currently, any collapse path (khugepaged or MADV_COLLAPSE) will fail when
> encountering such a range.
>
> MADV_COLLAPSE fails deep inside the collapse logic when trying to swap-in
> the special marker in __collapse_huge_page_swapin().
>
> hpage_collapse_scan_pmd()
> `- collapse_huge_page()
> `- __collapse_huge_page_swapin() -> fails!
>
> khugepaged's behavior is slightly different due to its max_ptes_swap limit
> (default 64). It won't fail as deep, but it will still needlessly scan up
> to 64 swap entries before bailing out.
>
> IMHO, we can and should detect this much earlier ;)
>
> This patch adds a check directly inside the PTE scan loop. If a guard
> marker is found, the scan is aborted immediately with a new SCAN_PTE_GUARD
> status, avoiding wasted work.
>
> Signed-off-by: Lance Yang <lance.yang@...ux.dev>
> ---
> mm/khugepaged.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index e54f99bb0b57..910a6f2ec8a9 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -59,6 +59,7 @@ enum scan_result {
> SCAN_STORE_FAILED,
> SCAN_COPY_MC,
> SCAN_PAGE_FILLED,
> + SCAN_PTE_GUARD,
> };
>
> #define CREATE_TRACE_POINTS
> @@ -1317,6 +1318,16 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
> result = SCAN_PTE_UFFD_WP;
> goto out_unmap;
> }
> + /*
> + * Guard PTE markers are installed by
> + * MADV_GUARD_INSTALL. Any collapse path must
> + * not touch them, so abort the scan immediately
> + * if one is found.
> + */
> + if (is_guard_pte_marker(pteval)) {
> + result = SCAN_PTE_GUARD;
> + goto out_unmap;
> + }
> continue;
This looks good, but see below.
> } else {
> result = SCAN_EXCEED_SWAP_PTE;
> @@ -2860,6 +2871,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
> case SCAN_PAGE_COMPOUND:
> case SCAN_PAGE_LRU:
> case SCAN_DEL_PAGE_LRU:
> + case SCAN_PTE_GUARD:
> last_fail = result;
Should we not do this, and instead send this case over to the default case? That
would mean an immediate exit with -EINVAL, instead of iterating over the complete
range, potentially collapsing a non-guard range, and then returning -EINVAL anyway.
I do not think we should spend significant time in the kernel when the user is
literally invoking madvise(MADV_GUARD_INSTALL) and madvise(MADV_COLLAPSE) on
overlapping regions.
> break;
> default: