Message-ID: <68edce3b.050a0220.91a22.01fa.GAE@google.com>
Date: Mon, 13 Oct 2025 21:14:51 -0700
From: syzbot <syzbot+f26d7c75c26ec19790e7@...kaller.appspotmail.com>
To: linux-kernel@...r.kernel.org, syzkaller-bugs@...glegroups.com
Subject: Forwarded: [PATCH v6] hugetlbfs: move lock assertions after early
returns in huge_pmd_unshare()
For archival purposes, forwarding an incoming command email to
linux-kernel@...r.kernel.org, syzkaller-bugs@...glegroups.com.
***
Subject: [PATCH v6] hugetlbfs: move lock assertions after early returns in huge_pmd_unshare()
Author: kartikey406@...il.com
#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master

When hugetlb_vmdelete_list() processes VMAs during truncate operations,
it can end up calling huge_pmd_unshare() on a VMA that does not hold the
required shareable lock, which triggers the assertion in
hugetlb_vma_assert_locked().
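
For illustration, here is a rough userspace sketch of the kind of
sequence involved (not the syzbot reproducer; it assumes a hugetlbfs
mount at /dev/hugepages and at least one free 2MB huge page):

  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
          long sz = 2UL << 20;    /* one 2MB huge page */
          int fd = open("/dev/hugepages/f", O_CREAT | O_RDWR, 0600);

          if (fd < 0 || ftruncate(fd, sz))
                  return 1;
          /* MAP_PRIVATE: this VMA never participates in PMD sharing,
           * so it has no shareable vma lock */
          char *p = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_PRIVATE,
                         fd, 0);
          if (p == MAP_FAILED)
                  return 1;
          p[0] = 1;               /* fault the huge page in */
          ftruncate(fd, 0);       /* truncate -> hugetlb_vmdelete_list() */
          munmap(p, sz);
          close(fd);
          return 0;
  }
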
The previous fix in commit dd83609b8898 ("hugetlbfs: skip VMAs without
shareable locks in hugetlb_vmdelete_list") skipped entire VMAs without
shareable locks to avoid the assertion. However, this prevented pages
from being unmapped and freed, causing a regression in fallocate(PUNCH_HOLE)
operations where pages were not freed immediately, as reported by Mark Brown.
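
For reference, a minimal sketch of the behaviour that regressed
(assuming a hugetlbfs mount at /dev/hugepages; the huge page released
by the hole punch should show up in HugePages_Free in /proc/meminfo
immediately, not only at munmap() time):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
          long sz = 2UL << 20;    /* one 2MB huge page */
          int fd = open("/dev/hugepages/hole", O_CREAT | O_RDWR, 0600);

          if (fd < 0 || ftruncate(fd, sz))
                  return 1;
          char *p = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED,
                         fd, 0);
          if (p == MAP_FAILED)
                  return 1;
          p[0] = 1;               /* allocate the huge page */
          fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, sz);
          /* the page should be back in the free pool at this point */
          munmap(p, sz);
          close(fd);
          unlink("/dev/hugepages/hole");
          return 0;
  }
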
A subsequent fix in commit 06e8ca1b3dca ("hugetlbfs: check for shareable
lock before calling huge_pmd_unshare()") addressed this by checking
__vma_shareable_lock() in the caller before calling huge_pmd_unshare().

However, a cleaner approach is to move the lock assertions inside
huge_pmd_unshare() itself so that they come after the early-return
checks. The assertions are only needed when actual PMD unsharing work
will be performed; if the function returns early because sz != PMD_SIZE
or because the PMD is not shared, no locks are required.

This patch removes the check added in commit 06e8ca1b3dca ("hugetlbfs:
check for shareable lock before calling huge_pmd_unshare()") and instead
moves the assertions inside huge_pmd_unshare(), keeping all the logic
within the function itself.
Reported-by: syzbot+f26d7c75c26ec19790e7@...kaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f26d7c75c26ec19790e7
Fixes: dd83609b8898 ("hugetlbfs: skip VMAs without shareable locks in hugetlb_vmdelete_list")
Tested-by: syzbot+f26d7c75c26ec19790e7@...kaller.appspotmail.com
Link: https://lore.kernel.org/mm-commits/20250925203504.7BE02C4CEF7@smtp.kernel.org/ [v1]
Link: https://lore.kernel.org/mm-commits/20250928185232.BEDB6C4CEF0@smtp.kernel.org/ [v2]
Link: https://lore.kernel.org/linux-mm/20251003174553.3078839-1-kartikey406@gmail.com/ [v3]
Link: https://lore.kernel.org/linux-mm/20251008052759.469714-1-kartikey406@gmail.com/ [v4]
Link: https://lore.kernel.org/linux-mm/CADhLXY72yEVDjXWfxBUXfXhNfb8MWqwJmcb1daEHmDeFW+DRGw@mail.gmail.com/ [v5]
Signed-off-by: Deepanshu Kartikey <kartikey406@...il.com>
---
Changes in v6:
- Remove __vma_shareable_lock() check from __unmap_hugepage_range()
that was added in v4 (commit 06e8ca1b3dca)
- Move lock assertions after early returns in huge_pmd_unshare()
- Complete implementation of David's cleaner approach
Changes in v5:
- Incomplete: only moved assertions, forgot to remove v4 check
Changes in v4:
- Check __vma_shareable_lock() in __unmap_hugepage_range() before calling
huge_pmd_unshare() per Oscar's suggestion
Changes in v3:
- Add ZAP_FLAG_NO_UNSHARE to skip only PMD unsharing
Changes in v2:
- Skip entire VMAs without shareable locks (caused PUNCH_HOLE regression)
Changes in v1:
- Initial fix attempt
---
 mm/hugetlb.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 85b2dac79d25..0455119716ec 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5885,7 +5885,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		}
 
 		ptl = huge_pte_lock(h, mm, ptep);
-		if (__vma_shareable_lock(vma) && huge_pmd_unshare(mm, vma, address, ptep)) {
+		if (huge_pmd_unshare(mm, vma, address, ptep)) {
			spin_unlock(ptl);
			tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE);
			force_flush = true;
@@ -7614,13 +7614,12 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 	p4d_t *p4d = p4d_offset(pgd, addr);
 	pud_t *pud = pud_offset(p4d, addr);
 
-	i_mmap_assert_write_locked(vma->vm_file->f_mapping);
-	hugetlb_vma_assert_locked(vma);
 	if (sz != PMD_SIZE)
 		return 0;
 	if (!ptdesc_pmd_is_shared(virt_to_ptdesc(ptep)))
 		return 0;
-
+	i_mmap_assert_write_locked(vma->vm_file->f_mapping);
+	hugetlb_vma_assert_locked(vma);
 	pud_clear(pud);
 	/*
 	 * Once our caller drops the rmap lock, some other process might be
--
2.34.1