Message-ID: <Y04J/FxGLAhY+z6O@monkey>
Date: Mon, 17 Oct 2022 19:05:48 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Rik van Riel <riel@...riel.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
stable@...nel.org, Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Glen McCready <gkmccready@...a.com>,
Muchun Song <songmuchun@...edance.com>,
Andrew Morton <akpm@...ux-foundation.org>, kernel-team@...a.com
Subject: Re: [PATCH] mm,hugetlb: take hugetlb_lock before decrementing
h->resv_huge_pages
On 10/17/22 20:25, Rik van Riel wrote:
> The h->*_huge_pages counters are protected by the hugetlb_lock, but
> alloc_huge_page has a corner case where it can decrement the counter
> outside of the lock.
>
> This could lead to a corrupted value of h->resv_huge_pages, which we
> have observed on our systems.
>
> Take the hugetlb_lock before decrementing h->resv_huge_pages to avoid
> a potential race.
>
> Fixes: a88c76954804 ("mm: hugetlb: fix hugepage memory leak caused by wrong reserve count")
> Cc: stable@...nel.org
> Cc: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
> Cc: Glen McCready <gkmccready@...a.com>
> Cc: Mike Kravetz <mike.kravetz@...cle.com>
> Cc: Muchun Song <songmuchun@...edance.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Signed-off-by: Rik van Riel <riel@...riel.com>
> ---
> mm/hugetlb.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
Thanks Rik! That case did slip through the cracks.
Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>
--
Mike Kravetz
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index b586cdd75930..dede0337c07c 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2924,11 +2924,11 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
> page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
> if (!page)
> goto out_uncharge_cgroup;
> + spin_lock_irq(&hugetlb_lock);
> if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
> SetHPageRestoreReserve(page);
> h->resv_huge_pages--;
> }
> - spin_lock_irq(&hugetlb_lock);
> list_add(&page->lru, &h->hugepage_activelist);
> set_page_refcounted(page);
> /* Fall through */
> --
> 2.37.2
>
>