Message-ID: <1f9f5fb4-27cd-4981-aa60-789a33376598@redhat.com>
Date: Tue, 11 Feb 2025 14:49:13 +0100
From: David Hildenbrand <david@...hat.com>
To: Luiz Capitulino <luizcap@...hat.com>, linux-kernel@...r.kernel.org,
yaozhenguo1@...il.com, muchun.song@...ux.dev
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org, rppt@...nel.org
Subject: Re: [PATCH] mm: hugetlb: avoid fallback for specific node allocation
of 1G pages
On 11.02.25 04:48, Luiz Capitulino wrote:
> When using the HugeTLB kernel command-line to allocate 1G pages from
> a specific node, such as:
>
> default_hugepagesz=1G hugepages=1:1
>
> If node 1 happens to not have enough memory for the requested number of
> 1G pages, the allocation falls back to other nodes. A quick way to
> reproduce this is by creating a KVM guest with a memory-less node and
> trying to allocate one 1G page from it. Instead of failing, the
> allocation will fall back to other nodes.
>
> This defeats the purpose of node specific allocation. Also, specific
> node allocation for 2M pages doesn't have this behavior: the allocation
> will just fail for the pages it can't satisfy.
>
> This issue happens because HugeTLB calls memblock_alloc_try_nid_raw()
> for 1G boot-time allocation as this function falls back to other nodes
> if the allocation can't be satisfied. Use memblock_alloc_exact_nid_raw()
> instead, which ensures that the allocation will only be satisfied from
> the specified node.
>
> Fixes: b5389086ad7b ("hugetlbfs: extend the definition of hugepages parameter to support node allocation")
>
> Signed-off-by: Luiz Capitulino <luizcap@...hat.com>
> ---
> mm/hugetlb.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 65068671e460..163190e89ea1 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3145,7 +3145,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
>
> /* do node specific alloc */
> if (nid != NUMA_NO_NODE) {
> - m = memblock_alloc_try_nid_raw(huge_page_size(h), huge_page_size(h),
> + m = memblock_alloc_exact_nid_raw(huge_page_size(h), huge_page_size(h),
> 0, MEMBLOCK_ALLOC_ACCESSIBLE, nid);
> if (!m)
> return 0;
Yeah, the documentation says "The node format specifies the number of
huge pages to allocate on specific nodes."

Likely the original patch simply copied the memblock_alloc_try_nid_raw()
call; memblock_alloc_exact_nid_raw() seems to be the right thing to do.
Acked-by: David Hildenbrand <david@...hat.com>
--
Cheers,
David / dhildenb