lists.openwall.net - Open Source and information security mailing list archives
Date: Mon, 17 Oct 2022 20:25:05 -0400
From: Rik van Riel <riel@...riel.com>
To: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, stable@...nel.org,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Glen McCready <gkmccready@...a.com>,
	Mike Kravetz <mike.kravetz@...cle.com>,
	Muchun Song <songmuchun@...edance.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	kernel-team@...a.com
Subject: [PATCH] mm,hugetlb: take hugetlb_lock before decrementing h->resv_huge_pages

The h->*_huge_pages counters are protected by the hugetlb_lock, but
alloc_huge_page has a corner case where it can decrement the counter
outside of the lock. This could lead to a corrupted value of
h->resv_huge_pages, which we have observed on our systems.

Take the hugetlb_lock before decrementing h->resv_huge_pages to
avoid a potential race.

Fixes: a88c76954804 ("mm: hugetlb: fix hugepage memory leak caused by wrong reserve count")
Cc: stable@...nel.org
Cc: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
Cc: Glen McCready <gkmccready@...a.com>
Cc: Mike Kravetz <mike.kravetz@...cle.com>
Cc: Muchun Song <songmuchun@...edance.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Rik van Riel <riel@...riel.com>
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b586cdd75930..dede0337c07c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2924,11 +2924,11 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 	page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
 	if (!page)
 		goto out_uncharge_cgroup;
+	spin_lock_irq(&hugetlb_lock);
 	if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
 		SetHPageRestoreReserve(page);
 		h->resv_huge_pages--;
 	}
-	spin_lock_irq(&hugetlb_lock);
 	list_add(&page->lru, &h->hugepage_activelist);
 	set_page_refcounted(page);
 	/* Fall through */
-- 
2.37.2