Message-ID: <20250612005448.571615-1-joshua.hahnjy@gmail.com>
Date: Wed, 11 Jun 2025 17:54:41 -0700
From: Joshua Hahn <joshua.hahnjy@...il.com>
To: Ackerley Tng <ackerleytng@...gle.com>
Cc: mawupeng1@...wei.com,
akpm@...ux-foundation.org,
mike.kravetz@...cle.com,
david@...hat.com,
muchun.song@...ux.dev,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
kernel-team@...a.com
Subject: Re: [RFC PATCH] mm: hugetlb: Fix incorrect fallback for subpool
On Wed, 11 Jun 2025 08:55:30 -0700 Ackerley Tng <ackerleytng@...gle.com> wrote:
> Joshua Hahn <joshua.hahnjy@...il.com> writes:
>
> > On Tue, 25 Mar 2025 14:16:34 +0800 Wupeng Ma <mawupeng1@...wei.com> wrote:
> >
> >> During our testing with hugetlb subpool enabled, we observe that
> >> hstate->resv_huge_pages may underflow into negative values. Root cause
> >> analysis reveals a race condition in subpool reservation fallback handling
> >> as follow:
> >>
> >> hugetlb_reserve_pages()
> >> /* Attempt subpool reservation */
> >> gbl_reserve = hugepage_subpool_get_pages(spool, chg);
> >>
> >> /* Global reservation may fail after subpool allocation */
> >> if (hugetlb_acct_memory(h, gbl_reserve) < 0)
> >> goto out_put_pages;
> >>
> >> out_put_pages:
> >> /* This incorrectly restores reservation to subpool */
> >> hugepage_subpool_put_pages(spool, chg);
> >>
> >> When hugetlb_acct_memory() fails after subpool allocation, the current
> >> implementation over-commits subpool reservations by returning the full
> >> 'chg' value instead of the actual allocated 'gbl_reserve' amount. This
> >> discrepancy propagates to global reservations during subsequent releases,
> >> eventually causing resv_huge_pages underflow.
> >>
> >> This problem can be triggered easily with the following steps:
> >> 1. reserve hugepages for hugetlb allocation
> >> 2. mount hugetlbfs with min_size to enable the hugetlb subpool
> >> 3. allocate hugepages with two tasks (make sure the second fails due to
> >> an insufficient number of hugepages)
> >> 4. wait for a few seconds and repeat step 3, which will make
> >> hstate->resv_huge_pages go below zero.
> >>
> >> To fix this problem, return the correct number of pages to the subpool
> >> during the fallback after hugepage_subpool_get_pages is called.
> >>
> >> Fixes: 1c5ecae3a93f ("hugetlbfs: add minimum size accounting to subpools")
> >> Signed-off-by: Wupeng Ma <mawupeng1@...wei.com>
> >
> > Hi Wupeng,
> > Thank you for the fix! This is a problem that we've also seen happen in
> > our fleet at Meta. I was able to recreate the issue that you mentioned -- to
> > explicitly lay down the steps I used:
> >
> > 1. echo 1 > /proc/sys/vm/nr_hugepages
> > 2. mkdir /mnt/hugetlb-pool
> > 3. mount -t hugetlbfs -o min_size=2M none /mnt/hugetlb-pool
> > 4. (./get_hugepage &) && (./get_hugepage &)
> > # get_hugepage just opens a file in /mnt/hugetlb-pool and mmaps 2M into it.
>
> Hi Joshua,
>
> Would you be able to share the source for ./get_hugepage? I'm trying to
> reproduce this too.
>
> Does ./get_hugepage just mmap and then spin in an infinite loop?
>
> Do you have to somehow limit allocation of surplus HugeTLB pages from
> the buddy allocator?
>
> Thanks!
Hi Ackerley,
The program I used for get_hugepage is very simple :-) No need to even spin
infinitely! I just open a file, ftruncate it to 2M, and mmap into it. For
good measure I set addr[0] = '.', sleep for 1 second, and then munmap the
area afterwards.
Here is a simplified version of the program (no error handling):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/hugetlb-pool/hugetlb_file", O_RDWR | O_CREAT, 0666);
	ftruncate(fd, 2*1024*1024);
	char *addr = mmap(NULL, 2*1024*1024, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	addr[0] = '.';		/* fault in one hugepage */
	sleep(1);
	munmap(addr, 2*1024*1024);
	close(fd);
	return 0;
}
Hope this helps! Please let me know if it doesn't work, I would be happy
to investigate this with you. Have a great day!
Joshua
Sent using hkml (https://github.com/sjp38/hackermail)