Message-ID: <adf3cb95-916b-4513-b763-48aa8fbfb700@redhat.com>
Date: Mon, 13 Oct 2025 10:27:28 +0200
From: David Hildenbrand <david@...hat.com>
To: Deepanshu Kartikey <kartikey406@...il.com>, muchun.song@...ux.dev,
osalvador@...e.de, akpm@...ux-foundation.org, broonie@...nel.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
syzbot+f26d7c75c26ec19790e7@...kaller.appspotmail.com
Subject: Re: [PATCH v4] hugetlbfs: check for shareable lock before calling
 huge_pmd_unshare()

On 08.10.25 07:27, Deepanshu Kartikey wrote:
> When hugetlb_vmdelete_list() processes VMAs during truncate operations,
> it may end up calling huge_pmd_unshare() on VMAs that have no shareable
> lock structure, which triggers an assertion failure in
> hugetlb_vma_assert_locked().
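> 
> Roughly, the path that trips the assertion is:
> 
>   hugetlb_vmdelete_list()
>     unmap_hugepage_range()
>       __unmap_hugepage_range()
>         huge_pmd_unshare()
>           hugetlb_vma_assert_locked()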
>
> The previous fix in commit dd83609b8898 ("hugetlbfs: skip VMAs without
> shareable locks in hugetlb_vmdelete_list") skipped entire VMAs without
> shareable locks to avoid the assertion. However, this prevented pages
> from being unmapped and freed, causing a regression in fallocate(PUNCH_HOLE)
> operations where pages were not freed immediately, as reported by Mark Brown.
>
> Instead of skipping VMAs or adding new flags, check __vma_shareable_lock()
> directly in __unmap_hugepage_range() right before calling huge_pmd_unshare().
> This ensures PMD unsharing only happens when the VMA has a shareable lock
> structure, while still allowing page unmapping and freeing to proceed for
> all VMAs.
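> 
> For reference, __vma_shareable_lock() only reports whether the VMA has a
> hugetlb vma lock structure at all; it is roughly (paraphrasing
> include/linux/hugetlb.h):
> 
>     static inline bool __vma_shareable_lock(struct vm_area_struct *vma)
>     {
>             return (vma->vm_flags & VM_MAYSHARE) && vma->vm_private_data;
>     }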
>
> Reported-by: syzbot+f26d7c75c26ec19790e7@...kaller.appspotmail.com
> Tested-by: syzbot+f26d7c75c26ec19790e7@...kaller.appspotmail.com
> Reported-by: Mark Brown <broonie@...nel.org>
> Fixes: dd83609b8898 ("hugetlbfs: skip VMAs without shareable locks in hugetlb_vmdelete_list")
> Suggested-by: Oscar Salvador <osalvador@...e.de>
> Suggested-by: David Hildenbrand <david@...hat.com>
> Link: https://lore.kernel.org/mm-commits/20250925203504.7BE02C4CEF7@smtp.kernel.org/ [v1]
> Link: https://lore.kernel.org/mm-commits/20250928185232.BEDB6C4CEF0@smtp.kernel.org/ [v2]
> Link: https://lore.kernel.org/linux-mm/20251003174553.3078839-1-kartikey406@gmail.com/ [v3]
> Signed-off-by: Deepanshu Kartikey <kartikey406@...il.com>
> ---
> Changes in v4:
> - Simplified approach per Oscar's suggestion: check __vma_shareable_lock()
> directly in __unmap_hugepage_range() before calling huge_pmd_unshare()
> - Removed ZAP_FLAG_NO_UNSHARE flag per David's feedback to avoid polluting
> generic mm.h header
> - Reverted hugetlb_vmdelete_list() to not skip VMAs
>
> Changes in v3:
> - Added ZAP_FLAG_NO_UNSHARE to skip only PMD unsharing, not entire VMA
>
> Changes in v2:
> - Skip entire VMAs without shareable locks in hugetlb_vmdelete_list()
> (caused PUNCH_HOLE regression)
>
> Changes in v1:
> - Initial fix attempt
> ---
>  fs/hugetlbfs/inode.c | 10 +---------
>  mm/hugetlb.c         |  2 +-
>  2 files changed, 2 insertions(+), 10 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 9c94ed8c3ab0..1e040db18b20 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -478,14 +478,6 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
>  		if (!hugetlb_vma_trylock_write(vma))
>  			continue;
> 
> -		/*
> -		 * Skip VMAs without shareable locks. Per the design in commit
> -		 * 40549ba8f8e0, these will be handled by remove_inode_hugepages()
> -		 * called after this function with proper locking.
> -		 */
> -		if (!__vma_shareable_lock(vma))
> -			goto skip;
> -
>  		v_start = vma_offset_start(vma, start);
>  		v_end = vma_offset_end(vma, end);
> 
> @@ -496,7 +488,7 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
>  		 * vmas. Therefore, lock is not held when calling
>  		 * unmap_hugepage_range for private vmas.
>  		 */
> -skip:
> +
>  		hugetlb_vma_unlock_write(vma);
>  	}
>  }
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6cac826cb61f..9ed85ab8420e 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5885,7 +5885,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  		}
> 
>  		ptl = huge_pte_lock(h, mm, ptep);
> -		if (huge_pmd_unshare(mm, vma, address, ptep)) {
> +		if (__vma_shareable_lock(vma) && huge_pmd_unshare(mm, vma, address, ptep)) {
>  			spin_unlock(ptl);
>  			tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE);
>  			force_flush = true;

Wondering, couldn't we handle that in huge_pmd_unshare()?

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index eed59cfb5d218..f167cec4a5acc 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7598,13 +7598,14 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 	p4d_t *p4d = p4d_offset(pgd, addr);
 	pud_t *pud = pud_offset(p4d, addr);
 
-	i_mmap_assert_write_locked(vma->vm_file->f_mapping);
-	hugetlb_vma_assert_locked(vma);
 	if (sz != PMD_SIZE)
 		return 0;
 	if (!ptdesc_pmd_pts_count(virt_to_ptdesc(ptep)))
 		return 0;
 
+	i_mmap_assert_write_locked(vma->vm_file->f_mapping);
+	hugetlb_vma_assert_locked(vma);
+
 	pud_clear(pud);
 	/*
 	 * Once our caller drops the rmap lock, some other process might be
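
The idea being that the "nothing to unshare" early exits (sz != PMD_SIZE,
pts count of 0) are taken before we assert on the locks, so a VMA that
never set up PMD sharing (which, as far as I can tell, includes any VMA
without a shareable lock structure) returns cleanly, and the call site in
__unmap_hugepage_range() could keep the plain

	if (huge_pmd_unshare(mm, vma, address, ptep)) {

without the extra __vma_shareable_lock() check.
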
--
Cheers
David / dhildenb