Message-ID: <20250331212343.66780-1-joshua.hahnjy@gmail.com>
Date: Mon, 31 Mar 2025 14:23:41 -0700
From: Joshua Hahn <joshua.hahnjy@...il.com>
To: Wupeng Ma <mawupeng1@...wei.com>
Cc: akpm@...ux-foundation.org,
mike.kravetz@...cle.com,
david@...hat.com,
muchun.song@...ux.dev,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
kernel-team@...a.com
Subject: Re: [RFC PATCH] mm: hugetlb: Fix incorrect fallback for subpool
On Tue, 25 Mar 2025 14:16:34 +0800 Wupeng Ma <mawupeng1@...wei.com> wrote:
> During our testing with hugetlb subpool enabled, we observe that
> hstate->resv_huge_pages may underflow into negative values. Root cause
> analysis reveals a race condition in subpool reservation fallback handling
> as follows:
>
> hugetlb_reserve_pages()
> /* Attempt subpool reservation */
> gbl_reserve = hugepage_subpool_get_pages(spool, chg);
>
> /* Global reservation may fail after subpool allocation */
> if (hugetlb_acct_memory(h, gbl_reserve) < 0)
> goto out_put_pages;
>
> out_put_pages:
> /* This incorrectly restores reservation to subpool */
> hugepage_subpool_put_pages(spool, chg);
>
> When hugetlb_acct_memory() fails after subpool allocation, the current
> implementation over-commits subpool reservations by returning the full
> 'chg' value instead of the actual allocated 'gbl_reserve' amount. This
> discrepancy propagates to global reservations during subsequent releases,
> eventually causing resv_huge_pages underflow.
>
> This problem can be triggered easily with the following steps:
> 1. reserve hugepages for hugetlb allocation
> 2. mount hugetlbfs with min_size to enable the hugetlb subpool
> 3. alloc hugepages with two tasks (make sure the second will fail due to
> an insufficient amount of hugepages)
> 4. wait for a few seconds and repeat step 3, which will make
> hstate->resv_huge_pages go below zero.
>
> To fix this problem, return the correct number of pages to the subpool
> during the fallback after hugepage_subpool_get_pages is called.
>
> Fixes: 1c5ecae3a93f ("hugetlbfs: add minimum size accounting to subpools")
> Signed-off-by: Wupeng Ma <mawupeng1@...wei.com>
Hi Wupeng,
Thank you for the fix! This is a problem that we've also seen happen in
our fleet at Meta. I was able to recreate the issue that you mentioned -- to
explicitly lay down the steps I used:
1. echo 1 > /proc/sys/vm/nr_hugepages
2. mkdir /mnt/hugetlb-pool
3. mount -t hugetlbfs -o min_size=2M none /mnt/hugetlb-pool
4. (./get_hugepage &) && (./get_hugepage &)
# get_hugepage just opens a file in /mnt/hugetlb-pool and mmaps 2M into it.
5. sleep 3
6. (./get_hugepage &) && (./get_hugepage &)
7. cat /proc/meminfo | grep HugePages_Rsvd
... and (7) shows that HugePages_Rsvd has indeed underflowed to U64_MAX!
I've also verified that applying your fix and then re-running the reproducer
shows no underflow.
Reviewed-by: Joshua Hahn <joshua.hahnjy@...il.com>
Tested-by: Joshua Hahn <joshua.hahnjy@...il.com>
Sent using hkml (https://github.com/sjp38/hackermail)