Message-ID: <876040d3-d814-49cd-9829-a36afd320a09@huawei.com>
Date: Mon, 13 Oct 2025 20:56:07 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: Oscar Salvador <osalvador@...e.de>, Usama Arif <usamaarif642@...il.com>
CC: <muchun.song@...ux.dev>, <david@...hat.com>, Andrew Morton
<akpm@...ux-foundation.org>, <shakeel.butt@...ux.dev>, <linux-mm@...ck.org>,
<hannes@...xchg.org>, <riel@...riel.com>, <kas@...nel.org>,
<linux-kernel@...r.kernel.org>, <kernel-team@...a.com>
Subject: Re: [PATCH v2 2/2] mm/hugetlb: allow overcommitting gigantic
hugepages
On 2025/10/13 16:00, Oscar Salvador wrote:
> On Thu, Oct 09, 2025 at 06:24:31PM +0100, Usama Arif wrote:
>> Currently, gigantic hugepages cannot use the overcommit mechanism
>> (nr_overcommit_hugepages), forcing users to permanently reserve memory via
>> nr_hugepages even when pages might not be actively used.
>>
>> The restriction was added in 2011 [1], which was before there was support
>> for reserving 1G hugepages at runtime.
>> Remove this blanket restriction on gigantic hugepage overcommit.
>> This will bring the same benefits to gigantic pages as hugepages:
>>
>> - Memory is only taken out of regular use when actually needed
>> - Unused surplus pages can be returned to the system
>> - Better memory utilization, especially with CMA backing which can
>>   significantly increase the chances of hugepage allocation
>>
>> Without this patch:
>> echo 3 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_overcommit_hugepages
>> bash: echo: write error: Invalid argument
>>
>> With this patch:
>> echo 3 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_overcommit_hugepages
>> ./mmap_hugetlb_test
>> Successfully allocated huge pages at address: 0x7f9d40000000
>>
>> cat mmap_hugetlb_test.c
>> ...
>> unsigned long ALLOC_SIZE = 3 * (unsigned long) HUGE_PAGE_SIZE;
>> addr = mmap(NULL,
>>             ALLOC_SIZE, // 3GB
>>             PROT_READ | PROT_WRITE,
>>             MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
>>             -1,
>>             0);
>>
>> if (addr == MAP_FAILED) {
>>     fprintf(stderr, "mmap failed: %s\n", strerror(errno));
>>     return 1;
>> }
>> printf("Successfully allocated huge pages at address: %p\n", addr);
>> ...
>>
>> [1] https://git.zx2c4.com/linux-rng/commit/mm/hugetlb.c?id=adbe8726dc2a3805630d517270db17e3af86e526
>>
>> Signed-off-by: Usama Arif <usamaarif642@...il.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@...wei.com>
>
> I guess nobody bothered to do this after we added support for 1GB hugepages because
> creating those at runtime is tricky, and in my experience, almost everybody reserves
> those at boot time.
> But I do not have objections to make them behave as normal hugepages:
We do have use cases that allocate 1G hugepages at runtime, and we also
enable migration of 1G hugepages in alloc_migrate_hugetlb_folio() :)
I will send a patch to enable gigantic page migration based on this one.
>
> Acked-by: Oscar Salvador <osalvador@...e.de>
>
>
>