Message-ID: <20251110111553.88384-1-lance.yang@linux.dev>
Date: Mon, 10 Nov 2025 19:15:53 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: akpm@...ux-foundation.org
Cc: syzbot+3f5f9a0d292454409ca6@...kaller.appspotmail.com,
syzbot+ci5a676d3d210999ee@...kaller.appspotmail.com,
david@...hat.com,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
muchun.song@...ux.dev,
osalvador@...e.de,
syzkaller-bugs@...glegroups.com,
syzbot@...ts.linux.dev,
syzbot@...kaller.appspotmail.com,
Lance Yang <lance.yang@...ux.dev>
Subject: [PATCH v2 1/1] mm/hugetlb: fix possible deadlocks in hugetlb VMA unmap paths

From: Lance Yang <lance.yang@...ux.dev>

The hugetlb VMA unmap paths can deadlock, as reported by syzbot. The
deadlocks can occur in __hugetlb_zap_begin(),
move_hugetlb_page_tables(), and the retry path of
hugetlb_unmap_file_folio() (affecting remove_inode_hugepages() and
unmap_vmas()), where vma_lock is acquired before i_mmap_lock. This lock
ordering conflicts with other paths, such as hugetlb_fault(), which
establish the correct dependency as i_mmap_lock -> vma_lock.

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&vma_lock->rw_sema);
                               lock(&i_mmap_lock);
                               lock(&vma_lock->rw_sema);
  lock(&i_mmap_lock);

Resolve the circular dependencies reported by syzbot across multiple call
chains by reordering the locks in all conflicting paths to consistently
follow the established i_mmap_lock -> vma_lock order.
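
For illustration only (not part of the patch): a minimal userspace
sketch of the ordering rule, with pthread rwlocks standing in for the
kernel's i_mmap_lock and vma_lock. The function and file names here are
made up for the example.

/* Build with: cc -pthread lock_order_sketch.c */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t i_mmap_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_rwlock_t vma_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Fault-side order (unchanged by the patch): i_mmap_lock, then vma_lock. */
static void *fault_path(void *arg)
{
	pthread_rwlock_wrlock(&i_mmap_lock);
	pthread_rwlock_wrlock(&vma_lock);
	/* ... fault handling ... */
	pthread_rwlock_unlock(&vma_lock);
	pthread_rwlock_unlock(&i_mmap_lock);
	return NULL;
}

/*
 * Unmap-side order after the fix: i_mmap_lock, then vma_lock, matching
 * fault_path(). The old order (vma_lock first, i_mmap_lock second) is
 * the CPU0 column in the scenario above and can deadlock.
 */
static void *unmap_path(void *arg)
{
	pthread_rwlock_wrlock(&i_mmap_lock);
	pthread_rwlock_wrlock(&vma_lock);
	/* ... unmap work ... */
	pthread_rwlock_unlock(&vma_lock);
	pthread_rwlock_unlock(&i_mmap_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	/* Both threads take the locks in the same order, so no ABBA cycle. */
	pthread_create(&a, NULL, unmap_path, NULL);
	pthread_create(&b, NULL, fault_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("consistent i_mmap_lock -> vma_lock order: no deadlock");
	return 0;
}
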
Reported-by: syzbot+3f5f9a0d292454409ca6@...kaller.appspotmail.com
Closes: https://lore.kernel.org/linux-mm/69113a97.a70a0220.22f260.00ca.GAE@google.com/
Signed-off-by: Lance Yang <lance.yang@...ux.dev>
---
V1 -> V2:
- Update changelog
- Resolve three related deadlock scenarios reported by syzbot
  https://lore.kernel.org/linux-mm/6911ad38.a70a0220.22f260.00dc.GAE@google.com/
- https://lore.kernel.org/linux-mm/20251110051421.29436-1-lance.yang@linux.dev/

 fs/hugetlbfs/inode.c | 2 +-
 mm/hugetlb.c         | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 3919fca56553..d1b0b5346728 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -447,8 +447,8 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 		 * a reference. We must 'open code' vma locking as we do
 		 * not know if vma_lock is still attached to vma.
 		 */
-		down_write(&vma_lock->rw_sema);
 		i_mmap_lock_write(mapping);
+		down_write(&vma_lock->rw_sema);
 
 		vma = vma_lock->vma;
 		if (!vma) {
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b1f47b87ae65..f0212d2579f6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5110,8 +5110,8 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	mmu_notifier_invalidate_range_start(&range);
 	last_addr_mask = hugetlb_mask_last_page(h);
 	/* Prevent race with file truncation */
-	hugetlb_vma_lock_write(vma);
 	i_mmap_lock_write(mapping);
+	hugetlb_vma_lock_write(vma);
 	for (; old_addr < old_end; old_addr += sz, new_addr += sz) {
 		src_pte = hugetlb_walk(vma, old_addr, sz);
 		if (!src_pte) {
@@ -5327,9 +5327,9 @@ void __hugetlb_zap_begin(struct vm_area_struct *vma,
 		return;
 
 	adjust_range_if_pmd_sharing_possible(vma, start, end);
-	hugetlb_vma_lock_write(vma);
 	if (vma->vm_file)
 		i_mmap_lock_write(vma->vm_file->f_mapping);
+	hugetlb_vma_lock_write(vma);
 }
 
 void __hugetlb_zap_end(struct vm_area_struct *vma,
--
2.49.0