Message-Id: <20250822180708.86e79941d7e47e3bb759b193@linux-foundation.org>
Date: Fri, 22 Aug 2025 18:07:08 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Jeongjun Park <aha310510@...il.com>
Cc: muchun.song@...ux.dev, osalvador@...e.de, david@...hat.com,
leitao@...ian.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
syzbot+417aeb05fd190f3a6da9@...kaller.appspotmail.com
Subject: Re: [PATCH] mm/hugetlb: add missing hugetlb_lock in
__unmap_hugepage_range()
On Fri, 22 Aug 2025 14:58:57 +0900 Jeongjun Park <aha310510@...il.com> wrote:
> When restoring a reservation for an anonymous page, we need to check
> whether we are freeing a surplus. However, __unmap_hugepage_range()
> causes a data race because it reads h->surplus_huge_pages without
> holding hugetlb_lock.
>
> Therefore, we need to add the missing hugetlb_lock.
>
> ...
>
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5951,6 +5951,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> * If there we are freeing a surplus, do not set the restore
> * reservation bit.
> */
> + spin_lock_irq(&hugetlb_lock);
> +
> if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
> folio_test_anon(folio)) {
> folio_set_hugetlb_restore_reserve(folio);
> @@ -5958,6 +5960,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> adjust_reservation = true;
> }
>
> + spin_unlock_irq(&hugetlb_lock);
> spin_unlock(ptl);
>
Does hugetlb_lock nest inside page_table_lock?
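For reference, the nesting the patch creates looks roughly like this (a
condensed sketch of the hunks above, not the full function):

	spin_lock(ptl);				/* page table lock, taken earlier */
	...
	spin_lock_irq(&hugetlb_lock);		/* global lock now nests inside ptl */
	if (!h->surplus_huge_pages && ...)
		adjust_reservation = true;
	spin_unlock_irq(&hugetlb_lock);
	spin_unlock(ptl);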
It's a bit sad to be taking a global lock just to defend against some
alleged data race which probably never happens. Doing it once per
hugepage probably won't matter, but still: is there something more
proportionate that we can do here?
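For example (a sketch only, assuming the racy read is benign and a
single marked load would satisfy KCSAN here), the lockless read could be
annotated instead of serialized:

	/*
	 * Sketch: mark the lockless read rather than take hugetlb_lock
	 * under ptl.  READ_ONCE() prevents the compiler from tearing or
	 * refetching the load, and tells KCSAN the access is intentional.
	 */
	if (!READ_ONCE(h->surplus_huge_pages) && __vma_private_lock(vma) &&
	    folio_test_anon(folio)) {
		folio_set_hugetlb_restore_reserve(folio);
		adjust_reservation = true;
	}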