Message-ID: <diqz7c1iw9vx.fsf@ackerleytng-ctop.c.googlers.com>
Date: Wed, 11 Jun 2025 08:55:30 -0700
From: Ackerley Tng <ackerleytng@...gle.com>
To: Joshua Hahn <joshua.hahnjy@...il.com>
Cc: mawupeng1@...wei.com, akpm@...ux-foundation.org, mike.kravetz@...cle.com, 
	david@...hat.com, muchun.song@...ux.dev, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, kernel-team@...a.com
Subject: Re: [RFC PATCH] mm: hugetlb: Fix incorrect fallback for subpool

Joshua Hahn <joshua.hahnjy@...il.com> writes:

> On Tue, 25 Mar 2025 14:16:34 +0800 Wupeng Ma <mawupeng1@...wei.com> wrote:
>
>> During our testing with hugetlb subpool enabled, we observe that
>> hstate->resv_huge_pages may underflow into negative values. Root cause
>> analysis reveals a race condition in subpool reservation fallback handling
>> as follows:
>> 
>> hugetlb_reserve_pages()
>>     /* Attempt subpool reservation */
>>     gbl_reserve = hugepage_subpool_get_pages(spool, chg);
>> 
>>     /* Global reservation may fail after subpool allocation */
>>     if (hugetlb_acct_memory(h, gbl_reserve) < 0)
>>         goto out_put_pages;
>> 
>> out_put_pages:
>>     /* This incorrectly restores reservation to subpool */
>>     hugepage_subpool_put_pages(spool, chg);
>> 
>> When hugetlb_acct_memory() fails after subpool allocation, the current
>> implementation over-commits subpool reservations by returning the full
>> 'chg' value instead of the actual allocated 'gbl_reserve' amount. This
>> discrepancy propagates to global reservations during subsequent releases,
>> eventually causing resv_huge_pages underflow.
>> 
>> This problem can be triggered easily with the following steps:
>> 1. reserve hugepages for hugetlb allocation
>> 2. mount hugetlbfs with min_size to enable the hugetlb subpool
>> 3. allocate hugepages with two tasks (make sure the second will fail
>>    due to an insufficient number of hugepages)
>> 4. wait for a few seconds and repeat step 3, which will make
>>    hstate->resv_huge_pages go below zero.
>> 
>> To fix this problem, return the correct number of pages to the subpool
>> during the fallback after hugepage_subpool_get_pages is called.
>> 
>> Fixes: 1c5ecae3a93f ("hugetlbfs: add minimum size accounting to subpools")
>> Signed-off-by: Wupeng Ma <mawupeng1@...wei.com>
>
> Hi Wupeng,
> Thank you for the fix! This is a problem that we've also seen happen in
> our fleet at Meta. I was able to recreate the issue that you mentioned -- to
> explicitly lay down the steps I used:
>
> 1. echo 1 > /proc/sys/vm/nr_hugepages
> 2. mkdir /mnt/hugetlb-pool
> 3. mount -t hugetlbfs -o min_size=2M none /mnt/hugetlb-pool
> 4. (./get_hugepage &) && (./get_hugepage &)
>     # get_hugepage just opens a file in /mnt/hugetlb-pool and mmaps 2M into it.

Hi Joshua,

Would you be able to share the source for ./get_hugepage? I'm trying to
reproduce this too.

Does ./get_hugepage just mmap and then spin in an infinite loop?

Do you have to somehow limit allocation of surplus HugeTLB pages from
the buddy allocator?

Thanks!

> 5. sleep 3
> 6. (./get_hugepage &) && (./get_hugepage &)
> 7. cat /proc/meminfo | grep HugePages_Rsvd
>
> ... and (7) shows that HugePages_Rsvd has indeed underflowed to U64_MAX!
>
> I've also verified that applying your fix and then re-running the reproducer
> shows no underflow.
>
> Reviewed-by: Joshua Hahn <joshua.hahnjy@...il.com>
> Tested-by: Joshua Hahn <joshua.hahnjy@...il.com>
>
> Sent using hkml (https://github.com/sjp38/hackermail)
