Message-ID: <CAO9qdTGKPQ=L2fMJ=oNz-7OG-9p+4VQz3+8-g7TRXJsqBC-6OA@mail.gmail.com>
Date: Sun, 24 Aug 2025 00:07:02 +0900
From: Jeongjun Park <aha310510@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: muchun.song@...ux.dev, osalvador@...e.de, david@...hat.com, 
	leitao@...ian.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	syzbot+417aeb05fd190f3a6da9@...kaller.appspotmail.com
Subject: Re: [PATCH] mm/hugetlb: add missing hugetlb_lock in __unmap_hugepage_range()

Hello Andrew,

Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> On Fri, 22 Aug 2025 14:58:57 +0900 Jeongjun Park <aha310510@...il.com> wrote:
>
> > When restoring a reservation for an anonymous page, we need to check
> > whether we are freeing a surplus page. However, __unmap_hugepage_range()
> > causes a data race because it reads h->surplus_huge_pages without the
> > protection of hugetlb_lock.
> >
> > Therefore, we need to add the missing hugetlb_lock.
> >
> > ...
> >
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -5951,6 +5951,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >                * If there we are freeing a surplus, do not set the restore
> >                * reservation bit.
> >                */
> > +             spin_lock_irq(&hugetlb_lock);
> > +
> >               if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
> >                   folio_test_anon(folio)) {
> >                       folio_set_hugetlb_restore_reserve(folio);
> > @@ -5958,6 +5960,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> >                       adjust_reservation = true;
> >               }
> >
> > +             spin_unlock_irq(&hugetlb_lock);
> >               spin_unlock(ptl);
> >
>
> Does hugetlb_lock nest inside page_table_lock?
>
> It's a bit sad to be taking a global lock just to defend against some
> alleged data race which probably never happens.  Doing it once per
> hugepage probably won't matter but still, is there something more
> proportionate that we can do here?

I think it would be better to move the page table lock unlock, i.e. the
spin_unlock(ptl) call, so that it comes immediately after the
hugetlb_remove_rmap() call shown below:

```
        pte = huge_ptep_get_and_clear(mm, address, ptep, sz);
        tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
        if (huge_pte_dirty(pte))
                folio_mark_dirty(folio);
        /* Leave a uffd-wp pte marker if needed */
        if (huge_pte_uffd_wp(pte) &&
            !(zap_flags & ZAP_FLAG_DROP_MARKER))
                set_huge_pte_at(mm, address, ptep,
                                make_pte_marker(PTE_MARKER_UFFD_WP),
                                sz);
        hugetlb_count_sub(pages_per_huge_page(h), mm);
        hugetlb_remove_rmap(folio);
```

In __unmap_hugepage_range(), once all of the above code has been executed,
the PTE, TLB entry, and rmap have all been properly cleaned up. Therefore,
there is no need to keep holding the page table lock around
folio_set_hugetlb_restore_reserve(), which only sets a flag bit in
folio->private.
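
Roughly, what I have in mind is the following (untested sketch, combining
the hugetlb_lock from my patch with the earlier ptl unlock; only the
relevant lines are shown):

```
        hugetlb_count_sub(pages_per_huge_page(h), mm);
        hugetlb_remove_rmap(folio);
        /* PTE, TLB and rmap are fully cleaned up at this point */
        spin_unlock(ptl);

        /* Surplus check and restore-reserve flag, now outside ptl */
        spin_lock_irq(&hugetlb_lock);
        if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
            folio_test_anon(folio)) {
                folio_set_hugetlb_restore_reserve(folio);
                adjust_reservation = true;
        }
        spin_unlock_irq(&hugetlb_lock);
```

With this arrangement, hugetlb_lock would not need to nest inside ptl at all.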

Regards,
Jeongjun Park
