Message-ID: <a696c734-9f88-4d6f-a852-013071a2dd2a@redhat.com>
Date: Thu, 18 Sep 2025 20:47:18 +0200
From: David Hildenbrand <david@...hat.com>
To: Lance Yang <lance.yang@...ux.dev>, akpm@...ux-foundation.org,
 lorenzo.stoakes@...cle.com
Cc: ziy@...dia.com, baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com,
 npache@...hat.com, ryan.roberts@....com, dev.jain@....com,
 baohua@...nel.org, ioworker0@...il.com, kirill@...temov.name,
 hughd@...gle.com, mpenttil@...hat.com, linux-kernel@...r.kernel.org,
 linux-mm@...ck.org
Subject: Re: [PATCH mm-new v2 2/2] mm/khugepaged: abort collapse scan on guard
 PTEs

On 18.09.25 07:04, Lance Yang wrote:
> From: Lance Yang <lance.yang@...ux.dev>
> 
> Guard PTE markers are installed via MADV_GUARD_INSTALL to create
> lightweight guard regions.
> 
> Currently, any collapse path (khugepaged or MADV_COLLAPSE) will fail when
> encountering such a range.
> 
> MADV_COLLAPSE fails deep inside the collapse logic when trying to swap-in
> the special marker in __collapse_huge_page_swapin().
> 
> hpage_collapse_scan_pmd()
>   `- collapse_huge_page()
>       `- __collapse_huge_page_swapin() -> fails!
> 
> khugepaged's behavior is slightly different due to its max_ptes_swap limit
> (default 64). It won't fail as deep in the call chain, but it will still
> needlessly scan up to 64 swap entries before bailing out.
> 
> IMHO, we can and should detect this much earlier.
> 
> This patch adds a check directly inside the PTE scan loop. If a guard
> marker is found, the scan is aborted immediately with SCAN_PTE_NON_PRESENT,
> avoiding wasted work.
> 
> Suggested-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> Signed-off-by: Lance Yang <lance.yang@...ux.dev>
> ---
>   mm/khugepaged.c | 10 ++++++++++
>   1 file changed, 10 insertions(+)
> 
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 9ed1af2b5c38..70ebfc7c1f3e 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1306,6 +1306,16 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
>   					result = SCAN_PTE_UFFD_WP;
>   					goto out_unmap;
>   				}
> +				/*
> +				 * Guard PTE markers are installed by
> +				 * MADV_GUARD_INSTALL. Any collapse path must
> +				 * not touch them, so abort the scan immediately
> +				 * if one is found.
> +				 */
> +				if (is_guard_pte_marker(pteval)) {
> +					result = SCAN_PTE_NON_PRESENT;
> +					goto out_unmap;
> +				}
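
For anyone following along, guard regions are created from userspace with
madvise(MADV_GUARD_INSTALL). A minimal, illustrative sketch (the advice value
is 102, merged in Linux 6.13; defined manually below in case the installed
headers predate it):

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_GUARD_INSTALL
#define MADV_GUARD_INSTALL 102  /* from <asm-generic/mman-common.h> */
#endif

int main(void)
{
        long psz = sysconf(_SC_PAGESIZE);
        char *buf = mmap(NULL, 4 * psz, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
                return 1;
        /*
         * Page 1 becomes a guard page: a PTE marker is installed instead
         * of backing memory, and any access raises SIGSEGV.
         */
        if (madvise(buf + psz, psz, MADV_GUARD_INSTALL))
                perror("madvise");
        buf[0] = 'x';           /* fine: page 0 is ordinary memory */
        /* buf[psz] = 'x';         would deliver SIGSEGV */
        return 0;
}

Accessing the guarded page faults without ever allocating backing memory,
which is exactly why a collapse must not touch such a range.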

Thinking about it, this is interesting.

Essentially we count any non-swap swap entries towards khugepaged_max_ptes_swap, which is rather weird.

I think we might also run into migration entries and hwpoison entries here?
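
(For reference: migration, hwpoison, and PTE-marker entries are all encoded
with a swap type at or above MAX_SWAPFILES, so a single non_swap_entry()
check catches every one of them. Simplified from include/linux/swapops.h:)

static inline int non_swap_entry(swp_entry_t entry)
{
        /* Special (non-swap) entries use type indices >= MAX_SWAPFILES. */
        return swp_type(entry) >= MAX_SWAPFILES;
}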

So what about just generalizing this:

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index af5f5c80fe4ed..28f1f4bf0e0a8 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1293,7 +1293,24 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
         for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
              _pte++, _address += PAGE_SIZE) {
                 pte_t pteval = ptep_get(_pte);
-               if (is_swap_pte(pteval)) {
+
+               if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
+                       ++none_or_zero;
+                       if (!userfaultfd_armed(vma) &&
+                           (!cc->is_khugepaged ||
+                            none_or_zero <= khugepaged_max_ptes_none)) {
+                               continue;
+                       } else {
+                               result = SCAN_EXCEED_NONE_PTE;
+                               count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
+                               goto out_unmap;
+                       }
+               } else if (!pte_present(pteval)) {
+                       if (non_swap_entry(pte_to_swp_entry(pteval))) {
+                               result = SCAN_PTE_NON_PRESENT;
+                               goto out_unmap;
+                       }
+
                         ++unmapped;
                         if (!cc->is_khugepaged ||
                             unmapped <= khugepaged_max_ptes_swap) {
@@ -1313,18 +1330,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
                                 goto out_unmap;
                         }
                 }
-               if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
-                       ++none_or_zero;
-                       if (!userfaultfd_armed(vma) &&
-                           (!cc->is_khugepaged ||
-                            none_or_zero <= khugepaged_max_ptes_none)) {
-                               continue;
-                       } else {
-                               result = SCAN_EXCEED_NONE_PTE;
-                               count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
-                               goto out_unmap;
-                       }
-               }
+
                 if (pte_uffd_wp(pteval)) {
                         /*
                          * Don't collapse the page if any of the small


With that, the function flow looks more similar to __collapse_huge_page_isolate(),
except that we additionally handle swap entries in this function.


And with that in place, couldn't we factor out a huge chunk of both scanning
functions into some helper (passing whether swap entries are allowed or not)?
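
Something along these lines, as a rough sketch (helper name and signature
invented here, vm-event counting omitted for brevity):

/*
 * Hypothetical shared helper: classify one PTE during a collapse scan,
 * bump the per-scan counters, and return SCAN_SUCCEED to keep scanning
 * or a failure reason to abort.
 */
static int thp_scan_classify_pte(pte_t pteval, struct vm_area_struct *vma,
                                 struct collapse_control *cc,
                                 bool swap_allowed,
                                 int *none_or_zero, int *unmapped)
{
        if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
                (*none_or_zero)++;
                if (!userfaultfd_armed(vma) &&
                    (!cc->is_khugepaged ||
                     *none_or_zero <= khugepaged_max_ptes_none))
                        return SCAN_SUCCEED;
                return SCAN_EXCEED_NONE_PTE;
        }

        if (!pte_present(pteval)) {
                /* Migration, hwpoison, and guard markers all end up here. */
                if (!swap_allowed || non_swap_entry(pte_to_swp_entry(pteval)))
                        return SCAN_PTE_NON_PRESENT;
                (*unmapped)++;
                if (!cc->is_khugepaged ||
                    *unmapped <= khugepaged_max_ptes_swap)
                        return SCAN_SUCCEED;
                return SCAN_EXCEED_SWAP_PTE;
        }

        return SCAN_SUCCEED;
}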

Yes, I know, refactoring khugepaged, crazy idea.

-- 
Cheers

David / dhildenb

