Message-ID: <8a6c0cca-c0cf-4511-86f0-df0b9e2c179b@redhat.com>
Date: Fri, 29 Aug 2025 17:03:06 +0200
From: David Hildenbrand <david@...hat.com>
To: lirongqing <lirongqing@...du.com>, muchun.song@...ux.dev,
 osalvador@...e.de, akpm@...ux-foundation.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, giorgitchankvetadze1997@...il.com
Subject: Re: [PATCH][v3] mm/hugetlb: Retry to allocate for early boot hugepage
 allocation

On 29.08.25 11:52, lirongqing wrote:
> From: Li RongQing <lirongqing@...du.com>
> 
> In cloud environments with massive hugepage reservations (95%+ of system
> RAM), single-attempt allocation during early boot often fails due to
> memory pressure.
> 
> Commit 91f386bf0772 ("hugetlb: batch freeing of vmemmap pages") intensified
> this by deferring page frees, increasing peak memory usage during allocation.
> 
> Introduce a retry mechanism that leverages vmemmap optimization reclaim
> (~1.6% memory) when available. Upon initial allocation failure, the system
> retries until successful or no further progress is made, ensuring reliable
> hugepage allocation while preserving batched vmemmap freeing benefits.
> 
> Testing on a 256G machine allocating 252G of hugepages:
> Before: 128056/129024 hugepages allocated
> After:  Successfully allocated all 129024 hugepages
> 
> Suggested-by: David Hildenbrand <david@...hat.com>
> Signed-off-by: Li RongQing <lirongqing@...du.com>
> ---
> Diff with v2: auto retry mechanism
> Diff with v1: add log if two-phase hugepage allocation is triggered
> 		add a knob to control the split ratio
> 
>   mm/hugetlb.c | 27 +++++++++++++++++++++++----
>   1 file changed, 23 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 753f99b..18e54ea 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3589,10 +3589,9 @@ static unsigned long __init hugetlb_pages_alloc_boot(struct hstate *h)
>   
>   	unsigned long jiffies_start;
>   	unsigned long jiffies_end;
> +	unsigned long remaining;
>   
>   	job.thread_fn	= hugetlb_pages_alloc_boot_node;
> -	job.start	= 0;
> -	job.size	= h->max_huge_pages;
>   
>   	/*
>   	 * job.max_threads is 25% of the available cpu threads by default.
> @@ -3616,10 +3615,30 @@ static unsigned long __init hugetlb_pages_alloc_boot(struct hstate *h)
>   	}
>   
>   	job.max_threads	= hugepage_allocation_threads;
> -	job.min_chunk	= h->max_huge_pages / hugepage_allocation_threads;
>   
>   	jiffies_start = jiffies;
> -	padata_do_multithreaded(&job);
> +	do {
> +		remaining = h->max_huge_pages - h->nr_huge_pages;
> +
> +		job.start     = h->nr_huge_pages;
> +		job.size      = remaining;
> +		job.min_chunk = remaining / hugepage_allocation_threads;
> +		padata_do_multithreaded(&job);
> +
> +		if (h->nr_huge_pages == h->max_huge_pages)
> +			break;
> +
> +		/*
> +		 * Retry allocation if vmemmap optimization is available, the
> +		 * optimization frees ~1.6% of memory of hugepages, this reclaimed
> +		 * memory enables additional hugepage allocations

As I said, please remove any calculation details about the vmemmap.
That's not the place for such calculations; they easily become stale.

Something like the following:

/*
  * Retry only if the vmemmap optimization might have been able to free
  * some memory back to the system.
  */

> +		 */
> +		if (!hugetlb_vmemmap_optimizable(h))
> +			break;
> +
> +	/* Continue if progress was made in last iteration */

Comment wrongly indented.

> +	} while (remaining != (h->max_huge_pages - h->nr_huge_pages));

Why would you want to retry if you allocated all pages (IOW the common 
case)?

E.g.,

remaining == 1
h->max_huge_pages == 1
h->nr_huge_pages == 1

while (1 != 1 - 1) -> while (1 != 0)


you should probably do

do {
	...

	/* Stop if there is no progress */
	if (remaining == h->max_huge_pages - h->nr_huge_pages)
		break;
} while (h->max_huge_pages != h->nr_huge_pages);
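
To make that concrete, here is a small userspace toy (not the hugetlb
code; the allocation pass below is just a stand-in that makes partial
progress each time). With the condition above, the loop exits via the
while check in the common fully-allocated case and only breaks when a
pass makes no progress:

#include <stdio.h>

static unsigned long max_huge_pages = 129024;
static unsigned long nr_huge_pages;

/* Stand-in for the multithreaded boot-time allocation pass: it only
 * satisfies part of what is still missing, to force a few retries. */
static void alloc_pass(void)
{
	unsigned long missing = max_huge_pages - nr_huge_pages;

	nr_huge_pages += missing - missing / 2;
}

int main(void)
{
	unsigned long remaining;
	int pass = 0;

	do {
		remaining = max_huge_pages - nr_huge_pages;

		alloc_pass();
		printf("pass %d: %lu/%lu allocated\n",
		       ++pass, nr_huge_pages, max_huge_pages);

		/* Stop if there is no progress */
		if (remaining == max_huge_pages - nr_huge_pages)
			break;
	} while (max_huge_pages != nr_huge_pages);

	return 0;
}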

-- 
Cheers

David / dhildenb

