Message-ID: <m63ub6lxljw7m2mmc3ovbsyfurl7hp4cvx27tmwelcxxrra5m3@eva5tqcdjxtn>
Date: Tue, 9 Dec 2025 14:44:37 +0000
From: Kiryl Shutsemau <kas@...nel.org>
To: Muchun Song <muchun.song@...ux.dev>
Cc: Andrew Morton <akpm@...ux-foundation.org>, 
	David Hildenbrand <david@...nel.org>, Oscar Salvador <osalvador@...e.de>, 
	Mike Rapoport <rppt@...nel.org>, Vlastimil Babka <vbabka@...e.cz>, 
	Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Matthew Wilcox <willy@...radead.org>, Zi Yan <ziy@...dia.com>, 
	Baoquan He <bhe@...hat.com>, Michal Hocko <mhocko@...e.com>, 
	Johannes Weiner <hannes@...xchg.org>, Jonathan Corbet <corbet@....net>, 
	Usama Arif <usamaarif642@...il.com>, kernel-team@...a.com, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org
Subject: Re: [PATCH 00/11] mm/hugetlb: Eliminate fake head pages from vmemmap
 optimization

On Tue, Dec 09, 2025 at 02:22:28PM +0800, Muchun Song wrote:
> The prerequisite is that the starting address of vmemmap must be aligned
> to 16MB boundaries (for 1GB huge pages), right? We should add a check
> somewhere to guarantee this (not at compile time but at runtime, e.g.
> because of KASLR).
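
For concreteness, the 16MB figure is the vmemmap span of one 1GB page:
(1GB / 4KB) struct pages at 64 bytes each is 16MB. Below is a minimal
userspace sketch of that arithmetic, assuming a 4KB base page and a
64-byte struct page (both values are architecture- and config-dependent,
so treat them as illustrative):

#include <stdio.h>

/* Assumed values; in the kernel these are PAGE_SIZE and
 * sizeof(struct page), and both depend on the configuration. */
#define ASSUMED_PAGE_SIZE	4096UL
#define ASSUMED_STRUCT_PAGE	64UL

int main(void)
{
	unsigned long huge_sizes[] = { 2UL << 20, 1UL << 30 }; /* 2MB, 1GB */

	for (int i = 0; i < 2; i++) {
		unsigned long nr_base_pages = huge_sizes[i] / ASSUMED_PAGE_SIZE;
		unsigned long vmemmap_span = nr_base_pages * ASSUMED_STRUCT_PAGE;

		/* The vmemmap covering one huge page spans this many
		 * bytes; the optimization wants the vmemmap start
		 * aligned to it. */
		printf("huge page %4lu MB -> vmemmap span %5lu KB\n",
		       huge_sizes[i] >> 20, vmemmap_span >> 10);
	}
	return 0;
}

This prints a 32KB span for 2MB pages and a 16384KB (16MB) span for 1GB
pages, matching the alignment requirement above.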

I have a hard time finding the right spot to put the check.

I considered something like the patch below, but it probably runs too
late if huge pages are preallocated at boot.

I will dig into it more later, but if you have any suggestions, I would
appreciate them.

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 04a211a146a0..971558184587 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -886,6 +886,14 @@ static int __init hugetlb_vmemmap_init(void)
 	BUILD_BUG_ON(__NR_USED_SUBPAGE > HUGETLB_VMEMMAP_RESERVE_PAGES);
 
 	for_each_hstate(h) {
+		unsigned long size = huge_page_size(h) / sizeof(struct page);
+
+		/* vmemmap is expected to be naturally aligned to page size */
+		if (WARN_ON_ONCE(!IS_ALIGNED((unsigned long)vmemmap, size))) {
+			vmemmap_optimize_enabled = false;
+			continue;
+		}
+
 		if (hugetlb_vmemmap_optimizable(h)) {
 			register_sysctl_init("vm", hugetlb_vmemmap_sysctls);
 			break;
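
For reference, IS_ALIGNED() is just a power-of-two mask test, so the
check is cheap. Here is a standalone sketch of the same test, using the
default x86-64 vmemmap base purely as an illustrative value (under
KASLR the real base is randomized at boot):

#include <stdio.h>

/* Userspace rendition of the kernel's IS_ALIGNED() macro
 * (valid for power-of-two alignments only). */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

int main(void)
{
	/* Illustrative value only; the real base is chosen at boot. */
	unsigned long vmemmap_base = 0xffffea0000000000UL;
	unsigned long span = 16UL << 20; /* vmemmap span of a 1GB page */

	printf("16MB-aligned: %s\n",
	       IS_ALIGNED(vmemmap_base, span) ? "yes" : "no");
	return 0;
}

One note on the size expression: huge_page_size(h) / sizeof(struct page)
in the patch equals the (huge_page_size / PAGE_SIZE) * sizeof(struct
page) span exactly when sizeof(struct page) squared equals PAGE_SIZE,
as with a 64-byte struct page and 4KB pages; with other combinations the
two differ.
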
-- 
  Kiryl Shutsemau / Kirill A. Shutemov
