Message-ID: <20260109034723.1342798-1-wangjinchao600@gmail.com>
Date: Fri, 9 Jan 2026 11:47:16 +0800
From: Jinchao Wang <wangjinchao600@...il.com>
To: Matthew Wilcox <willy@...radead.org>,
    Andrew Morton <akpm@...ux-foundation.org>,
    David Hildenbrand <david@...nel.org>,
    Zi Yan <ziy@...dia.com>,
    Matthew Brost <matthew.brost@...el.com>,
    Joshua Hahn <joshua.hahnjy@...il.com>,
    Rakie Kim <rakie.kim@...com>,
    Byungchul Park <byungchul@...com>,
    Gregory Price <gourry@...rry.net>,
    Ying Huang <ying.huang@...ux.alibaba.com>,
    Alistair Popple <apopple@...dia.com>,
    linux-mm@...ck.org,
    linux-kernel@...r.kernel.org
Cc: Jinchao Wang <wangjinchao600@...il.com>,
    syzbot+2d9c96466c978346b55f@...kaller.appspotmail.com
Subject: [PATCH] mm/migrate: fix hugetlbfs deadlock by respecting lock ordering

Fix an AB-BA deadlock between hugetlbfs_punch_hole() and page migration.

The deadlock occurs because migration violates the lock ordering defined
in mm/rmap.c for hugetlbfs:

 * hugetlbfs PageHuge() take locks in this order:
 *   hugetlb_fault_mutex
 *     vma_lock
 *       mapping->i_mmap_rwsem
 *         folio_lock

The following trace illustrates the inversion:
  Task A (punch_hole):              Task B (migration):
  --------------------              -------------------
  1. i_mmap_lock_write(mapping)     1. folio_lock(folio)
  2. folio_lock(folio)              2. i_mmap_lock_read(mapping)
     (blocks waiting for B)            (blocks waiting for A)

Task A is blocked in the punch-hole path:

  hugetlbfs_fallocate
    hugetlbfs_punch_hole
      hugetlbfs_zero_partial_page
        folio_lock

Task B is blocked in the migration path:

  migrate_pages
    unmap_and_move_huge_page
      remove_migration_ptes
        __rmap_walk_file
          i_mmap_lock_read
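
Side by side in C, the two orderings look like this (a simplified
sketch, not the literal kernel paths; the locking calls are the real
APIs, but the two wrapper functions and their bodies are illustrative
only):

  /* Task A: order used by the punch-hole path */
  static void punch_hole_order(struct address_space *mapping,
                               struct folio *folio)
  {
          i_mmap_lock_write(mapping);    /* lock 1: i_mmap_rwsem */
          folio_lock(folio);             /* lock 2: blocks if B holds it */
          /* ... zero the partial folio ... */
          folio_unlock(folio);
          i_mmap_unlock_write(mapping);
  }

  /* Task B: order used by migration before this patch */
  static void migration_order(struct address_space *mapping,
                              struct folio *folio)
  {
          folio_lock(folio);             /* lock 2 taken first */
          i_mmap_lock_read(mapping);     /* lock 1: blocks if A holds it */
          /* ... rmap walk ... */
          i_mmap_unlock_read(mapping);
          folio_unlock(folio);
  }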

To fix this, adjust unmap_and_move_huge_page() to respect the
established hierarchy: if i_mmap_rwsem was acquired for
try_to_migrate(), hold it until remove_migration_ptes() completes
rather than dropping it immediately after the unmap.

This relies on the existing retry logic, which unlocks the folio and
returns -EAGAIN when hugetlb_folio_mapping_lock_write() fails.
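
Pieced together, the patched flow looks roughly like this (a simplified
sketch of unmap_and_move_huge_page() after this patch, with error
handling and unrelated details elided):

  enum ttu_flags ttu = 0;

  if (folio_mapped(src)) {
          if (!folio_test_anon(src)) {
                  /* take i_mmap_rwsem first, matching the rmap.c order */
                  mapping = hugetlb_folio_mapping_lock_write(src);
                  if (unlikely(!mapping))
                          goto unlock_put_anon; /* folio unlocked, -EAGAIN */
                  ttu = TTU_RMAP_LOCKED;
          }
          try_to_migrate(src, ttu);
          page_was_mapped = 1;
  }

  if (!folio_mapped(src))
          rc = move_to_new_folio(dst, src, mode);

  if (page_was_mapped)
          remove_migration_ptes(src, !rc ? dst : src,
                                ttu ? RMP_LOCKED : 0);

  /* drop i_mmap_rwsem only after the rmap walk is done */
  if (ttu & TTU_RMAP_LOCKED)
          i_mmap_unlock_write(mapping);
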
Link: https://lore.kernel.org/all/68e9715a.050a0220.1186a4.000d.GAE@google.com/
Link: https://lore.kernel.org/all/20260108123957.1123502-2-wangjinchao600@gmail.com
Reported-by: syzbot+2d9c96466c978346b55f@...kaller.appspotmail.com
Suggested-by: Matthew Wilcox <willy@...radead.org>
Signed-off-by: Jinchao Wang <wangjinchao600@...il.com>
---
 mm/migrate.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 5169f9717f60..bcaa13541acc 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1458,6 +1458,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
+	enum ttu_flags ttu = 0;
 
 	if (folio_ref_count(src) == 1) {
 		/* page was freed from under us. So we are done. */
@@ -1498,8 +1499,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 		goto put_anon;
 
 	if (folio_mapped(src)) {
-		enum ttu_flags ttu = 0;
-
 		if (!folio_test_anon(src)) {
 			/*
 			 * In shared mappings, try_to_unmap could potentially
@@ -1516,16 +1515,17 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
 
 		try_to_migrate(src, ttu);
 		page_was_mapped = 1;
-
-		if (ttu & TTU_RMAP_LOCKED)
-			i_mmap_unlock_write(mapping);
 	}
 
 	if (!folio_mapped(src))
 		rc = move_to_new_folio(dst, src, mode);
 
 	if (page_was_mapped)
-		remove_migration_ptes(src, !rc ? dst : src, 0);
+		remove_migration_ptes(src, !rc ? dst : src,
+				      ttu ? RMP_LOCKED : 0);
+
+	if (ttu & TTU_RMAP_LOCKED)
+		i_mmap_unlock_write(mapping);
 
 unlock_put_anon:
 	folio_unlock(dst);
-- 
2.43.0