Message-ID: <fd39be50-90ab-4b25-ac1e-10930aa52b5f@oracle.com>
Date: Fri, 22 Aug 2025 11:19:33 -0400
From: Sidhartha Kumar <sidhartha.kumar@...cle.com>
To: Jeongjun Park <aha310510@...il.com>, muchun.song@...ux.dev,
        osalvador@...e.de, david@...hat.com, akpm@...ux-foundation.org
Cc: leitao@...ian.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        syzbot+417aeb05fd190f3a6da9@...kaller.appspotmail.com
Subject: Re: [PATCH] mm/hugetlb: add missing hugetlb_lock in
 __unmap_hugepage_range()

On 8/22/25 1:58 AM, Jeongjun Park wrote:
> When restoring a reservation for an anonymous page, we need to check
> whether we are freeing a surplus page. However, __unmap_hugepage_range()
> causes a data race because it reads h->surplus_huge_pages without
> holding hugetlb_lock.
> 
> Therefore, we need to add the missing hugetlb_lock.
> 

Makes sense, as alloc_surplus_hugetlb_folio() takes the hugetlb_lock when
reading the hstate.
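
For context, a minimal sketch of the locking rule the fix follows (hstate
and hugetlb_lock are as declared in mm/hugetlb.c; the helper below is
hypothetical, for illustration only, not actual kernel code):

	/*
	 * Reads of h->surplus_huge_pages must be serialized by
	 * hugetlb_lock, as in alloc_surplus_hugetlb_folio(). The lock
	 * is irq-safe, hence the _irq variants.
	 */
	static bool hstate_has_surplus(struct hstate *h)
	{
		bool surplus;

		spin_lock_irq(&hugetlb_lock);
		surplus = h->surplus_huge_pages != 0;
		spin_unlock_irq(&hugetlb_lock);

		return surplus;
	}

This mirrors what the patch below does inline in __unmap_hugepage_range().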

Reviewed-by: Sidhartha Kumar <sidhartha.kumar@...cle.com>

> Reported-by: syzbot+417aeb05fd190f3a6da9@...kaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=417aeb05fd190f3a6da9
> Fixes: df7a6d1f6405 ("mm/hugetlb: restore the reservation if needed")
> Signed-off-by: Jeongjun Park <aha310510@...il.com>
> ---
>   mm/hugetlb.c | 3 +++
>   1 file changed, 3 insertions(+)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 753f99b4c718..e8d95a314df2 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5951,6 +5951,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>   		 * If there we are freeing a surplus, do not set the restore
>   		 * reservation bit.
>   		 */
> +		spin_lock_irq(&hugetlb_lock);
> +
>   		if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
>   		    folio_test_anon(folio)) {
>   			folio_set_hugetlb_restore_reserve(folio);
> @@ -5958,6 +5960,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>   			adjust_reservation = true;
>   		}
>   
> +		spin_unlock_irq(&hugetlb_lock);
>   		spin_unlock(ptl);
>   
>   		/*
> --
> 

