Message-Id: <20250625055806.82645-4-dev.jain@arm.com>
Date: Wed, 25 Jun 2025 11:28:06 +0530
From: Dev Jain <dev.jain@....com>
To: akpm@...ux-foundation.org,
david@...hat.com
Cc: ziy@...dia.com,
baolin.wang@...ux.alibaba.com,
lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com,
npache@...hat.com,
ryan.roberts@....com,
baohua@...nel.org,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Dev Jain <dev.jain@....com>
Subject: [PATCH v2 3/3] khugepaged: Reduce race probability between migration and khugepaged
Suppose a folio is under migration, and khugepaged is also trying to
collapse it. collapse_pte_mapped_thp() will retrieve the folio from the
page cache via filemap_lock_folio(), thereby taking a reference on the
folio, and will sleep on the folio lock, since the lock is held by the
migration path. Migration will then fail in
__folio_migrate_mapping() -> folio_ref_freeze() because of the extra
reference. Reduce the probability of such a race happening (leading to
migration failure) by bailing out early if we detect that the PMD is
mapped by a migration entry.

This fixes the migration-shared-anon-thp testcase failure on Apple M3.
Signed-off-by: Dev Jain <dev.jain@....com>
---
mm/khugepaged.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 4c8d33abfbd8..bc8774f62e86 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -31,6 +31,7 @@ enum scan_result {
 	SCAN_FAIL,
 	SCAN_SUCCEED,
 	SCAN_PMD_NULL,
+	SCAN_PMD_MIGRATION,
 	SCAN_PMD_NONE,
 	SCAN_PMD_MAPPED,
 	SCAN_EXCEED_NONE_PTE,
@@ -956,6 +957,8 @@ static inline int check_pmd_state(pmd_t *pmd)
 
 	if (pmd_none(pmde))
 		return SCAN_PMD_NONE;
+	if (is_pmd_migration_entry(pmde))
+		return SCAN_PMD_MIGRATION;
 	if (!pmd_present(pmde))
 		return SCAN_PMD_NULL;
 	if (pmd_trans_huge(pmde))
@@ -1518,9 +1521,12 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	    !range_in_vma(vma, haddr, haddr + HPAGE_PMD_SIZE))
 		return SCAN_VMA_CHECK;
 
-	/* Fast check before locking page if already PMD-mapped */
+	/*
+	 * Fast check before locking folio if already PMD-mapped, or if the
+	 * folio is under migration
+	 */
 	result = find_pmd_or_thp_or_none(mm, haddr, &pmd);
-	if (result == SCAN_PMD_MAPPED)
+	if (result == SCAN_PMD_MAPPED || result == SCAN_PMD_MIGRATION)
 		return result;
 
 	/*
@@ -2745,6 +2751,7 @@ static int madvise_collapse_errno(enum scan_result r)
 	case SCAN_PAGE_LRU:
 	case SCAN_DEL_PAGE_LRU:
 	case SCAN_PAGE_FILLED:
+	case SCAN_PMD_MIGRATION:
 		return -EAGAIN;
 	/*
 	 * Other: Trying again likely not to succeed / error intrinsic to
@@ -2834,6 +2841,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
 			goto handle_result;
 		/* Whitelisted set of results where continuing OK */
 		case SCAN_PMD_NULL:
+		case SCAN_PMD_MIGRATION:
 		case SCAN_PTE_NON_PRESENT:
 		case SCAN_PTE_UFFD_WP:
 		case SCAN_PAGE_RO:
--
2.30.2