Message-ID: <20251001032251.85888-1-lance.yang@linux.dev>
Date: Wed, 1 Oct 2025 11:22:51 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: akpm@...ux-foundation.org
Cc: david@...hat.com,
lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com,
baohua@...nel.org,
baolin.wang@...ux.alibaba.com,
dev.jain@....com,
hughd@...gle.com,
ioworker0@...il.com,
kirill@...temov.name,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
mpenttil@...hat.com,
npache@...hat.com,
ryan.roberts@....com,
ziy@...dia.com,
richard.weiyang@...il.com,
Lance Yang <lance.yang@...ux.dev>
Subject: [PATCH mm-new v2 1/1] mm/khugepaged: abort collapse scan on non-swap entries
From: Lance Yang <lance.yang@...ux.dev>
Currently, special non-swap entries (such as migration, hwpoison, or PTE
marker entries) are not caught early in hpage_collapse_scan_pmd(), leading
to failures deep in the swap-in logic:

hpage_collapse_scan_pmd()
`- collapse_huge_page()
`- __collapse_huge_page_swapin() -> fails!
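For context, the early check relies on how the kernel encodes these special
entries: non_swap_entry() in include/linux/swapops.h reports any entry whose
type is at or above MAX_SWAPFILES, which is where the migration, hwpoison,
device, and PTE-marker types are allocated. A minimal userspace model of
that test (all demo_* names and the constant 27 are illustrative stand-ins,
not the kernel's definitions):

  #include <stdio.h>

  #define DEMO_MAX_SWAPFILES   27   /* real value is config-dependent */
  #define DEMO_MIGRATION_READ  (DEMO_MAX_SWAPFILES + 0)

  struct demo_swp_entry { unsigned long type, offset; };

  /* Mirrors non_swap_entry(): special (non-swap) entries are encoded
   * with a type at or above MAX_SWAPFILES. */
  static int demo_non_swap_entry(struct demo_swp_entry e)
  {
          return e.type >= DEMO_MAX_SWAPFILES;
  }

  int main(void)
  {
          struct demo_swp_entry swap = { .type = 1, .offset = 42 };
          struct demo_swp_entry migr = { .type = DEMO_MIGRATION_READ };

          printf("swap entry:      non_swap=%d\n", demo_non_swap_entry(swap));
          printf("migration entry: non_swap=%d\n", demo_non_swap_entry(migr));
          return 0;
  }
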
As David suggested[1], this patch checks for any such non-swap entry
early. If one is found, the scan is aborted immediately with the
SCAN_PTE_NON_PRESENT result, as Lorenzo suggested[2], avoiding wasted
work.

[1] https://lore.kernel.org/linux-mm/7840f68e-7580-42cb-a7c8-1ba64fd6df69@redhat.com
[2] https://lore.kernel.org/linux-mm/7df49fe7-c6b7-426a-8680-dcd55219c8bd@lucifer.local
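
To make the reordered flow concrete, here is a condensed userspace sketch
of the per-PTE classification after this patch (the demo_* names and enum
are illustrative stand-ins, not kernel API; the real loop in mm/khugepaged.c
additionally enforces the max_ptes_none/max_ptes_swap limits and the
uffd-wp checks):

  #include <stdbool.h>
  #include <stdio.h>

  enum demo_result {
          DEMO_SCAN_CONTINUE,
          DEMO_SCAN_PTE_NON_PRESENT,
          DEMO_SCAN_SWAPIN,
  };

  struct demo_pte {
          bool none_or_zero;      /* pte_none() or zero pfn */
          bool present;           /* pte_present() */
          bool non_swap;          /* non_swap_entry() on the entry */
  };

  static enum demo_result demo_classify(struct demo_pte pte)
  {
          if (pte.none_or_zero)
                  return DEMO_SCAN_CONTINUE;     /* counted as none_or_zero */
          if (!pte.present) {
                  /* New with this patch: bail out before any swap-in
                   * work for migration/hwpoison/marker entries. */
                  if (pte.non_swap)
                          return DEMO_SCAN_PTE_NON_PRESENT;
                  return DEMO_SCAN_SWAPIN;       /* genuine swap entry */
          }
          return DEMO_SCAN_CONTINUE;             /* present-PTE checks follow */
  }

  int main(void)
  {
          struct demo_pte migration = { .present = false, .non_swap = true };

          /* Prints 1 (DEMO_SCAN_PTE_NON_PRESENT): the scan aborts early. */
          printf("%d\n", demo_classify(migration));
          return 0;
  }
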
Suggested-by: David Hildenbrand <david@...hat.com>
Suggested-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Signed-off-by: Lance Yang <lance.yang@...ux.dev>
---
v1 -> v2:
- Skip all non-present entries except swap entries (per David), thanks!
- https://lore.kernel.org/linux-mm/20250924100207.28332-1-lance.yang@linux.dev/
mm/khugepaged.c | 32 ++++++++++++++++++--------------
1 file changed, 18 insertions(+), 14 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 7ab2d1a42df3..d0957648db19 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1284,7 +1284,23 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
_pte++, addr += PAGE_SIZE) {
pte_t pteval = ptep_get(_pte);
- if (is_swap_pte(pteval)) {
+ if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
+ ++none_or_zero;
+ if (!userfaultfd_armed(vma) &&
+ (!cc->is_khugepaged ||
+ none_or_zero <= khugepaged_max_ptes_none)) {
+ continue;
+ } else {
+ result = SCAN_EXCEED_NONE_PTE;
+ count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
+ goto out_unmap;
+ }
+ } else if (!pte_present(pteval)) {
+ if (non_swap_entry(pte_to_swp_entry(pteval))) {
+ result = SCAN_PTE_NON_PRESENT;
+ goto out_unmap;
+ }
+
++unmapped;
if (!cc->is_khugepaged ||
unmapped <= khugepaged_max_ptes_swap) {
@@ -1293,7 +1309,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
* enabled swap entries. Please see
* comment below for pte_uffd_wp().
*/
- if (pte_swp_uffd_wp_any(pteval)) {
+ if (pte_swp_uffd_wp(pteval)) {
result = SCAN_PTE_UFFD_WP;
goto out_unmap;
}
@@ -1304,18 +1320,6 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
goto out_unmap;
}
}
- if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
- ++none_or_zero;
- if (!userfaultfd_armed(vma) &&
- (!cc->is_khugepaged ||
- none_or_zero <= khugepaged_max_ptes_none)) {
- continue;
- } else {
- result = SCAN_EXCEED_NONE_PTE;
- count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
- goto out_unmap;
- }
- }
if (pte_uffd_wp(pteval)) {
/*
* Don't collapse the page if any of the small
--
2.49.0