Open Source and information security mailing list archives
 
Message-ID: <df3566e4-82dd-4e1f-b043-e9de4879b5a0@redhat.com>
Date: Wed, 8 Oct 2025 11:02:34 +0200
From: David Hildenbrand <david@...hat.com>
To: Usama Arif <usamaarif642@...il.com>, muchun.song@...ux.dev,
 osalvador@...e.de, Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, hannes@...xchg.org, riel@...riel.com, kas@...nel.org,
 linux-kernel@...r.kernel.org, kernel-team@...a.com
Subject: Re: [PATCH] mm/hugetlb: allow overcommitting gigantic hugepages

On 06.10.25 20:56, Usama Arif wrote:
> Currently, gigantic hugepages cannot use the overcommit mechanism
> (nr_overcommit_hugepages), forcing users to permanently reserve memory via
> nr_hugepages even when pages might not be actively used.
> 
> Remove this blanket restriction on gigantic hugepage overcommit.
> This will bring the same benefits to gigantic pages as hugepages:
> 
> - Memory is only taken out of regular use when actually needed
> - Unused surplus pages can be returned to the system
> - Better memory utilization, especially with CMA backing, which can
>    significantly increase the chances of hugepage allocation
> 
> Without this patch:
> echo 3 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_overcommit_hugepages
> bash: echo: write error: Invalid argument
> 
> With this patch:
> echo 3 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_overcommit_hugepages
> ./mmap_hugetlb_test
> Successfully allocated huge pages at address: 0x7f9d40000000
> 
> cat mmap_hugetlb_test.c
> ...
>      unsigned long ALLOC_SIZE = 3 * (unsigned long) HUGE_PAGE_SIZE;
>      addr = mmap(NULL,
>                  ALLOC_SIZE, // 3GB
>                  PROT_READ | PROT_WRITE,
>                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_1GB,
>                  -1,
>                  0);
> 
>      if (addr == MAP_FAILED) {
>          fprintf(stderr, "mmap failed: %s\n", strerror(errno));
>          return 1;
>      }
>      printf("Successfully allocated huge pages at address: %p\n", addr);
> ...
> 
> Signed-off-by: Usama Arif <usamaarif642@...il.com>
> ---

No opinion from my side. I guess it won't harm anybody (though people 
should be aware that "overcommit" with huge pages, where we have no 
allocation guarantees, is a flawed concept).

-- 
Cheers

David / dhildenb
