Message-Id: <20230724134644.1299963-5-usama.arif@bytedance.com>
Date:   Mon, 24 Jul 2023 14:46:44 +0100
From:   Usama Arif <usama.arif@...edance.com>
To:     linux-mm@...ck.org, muchun.song@...ux.dev, mike.kravetz@...cle.com,
        rppt@...nel.org
Cc:     linux-kernel@...r.kernel.org, fam.zheng@...edance.com,
        liangma@...ngbit.com, simon.evans@...edance.com,
        punit.agrawal@...edance.com, Usama Arif <usama.arif@...edance.com>
Subject: [RFC 4/4] mm/memblock: Skip initialization of struct pages freed later by HVO

If a region is for hugepages and HVO (HugeTLB Vmemmap Optimization) is
enabled, then the struct pages which will be freed later by HVO don't
need to be initialized. This can save significant time when a large
number of hugepages are allocated at boot time. As
memmap_init_reserved_pages() is only called at boot time, we don't need
to worry about memory hotplug.
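
For a sense of scale, here is a standalone back-of-the-envelope
calculation (illustrative only, not part of the patch). It assumes
4 KiB base pages, a 64-byte struct page, 1 GiB hugepages and a
HUGETLB_VMEMMAP_RESERVE_SIZE of one page, which are common x86-64
values:

#include <stdio.h>

#define PAGE_SIZE		4096UL
#define STRUCT_PAGE_SIZE	64UL			/* sizeof(struct page) */
#define HUGEPAGE_SIZE		(1024UL * 1024 * 1024)	/* 1 GiB */
#define RESERVE_SIZE		PAGE_SIZE	/* HUGETLB_VMEMMAP_RESERVE_SIZE */

int main(void)
{
	unsigned long total = HUGEPAGE_SIZE / PAGE_SIZE;	/* 262144 struct pages */
	unsigned long kept  = RESERVE_SIZE / STRUCT_PAGE_SIZE;	/* 64 survive HVO */

	printf("struct pages per 1 GiB hugepage: %lu\n", total);
	printf("kept after HVO:                  %lu\n", kept);
	printf("initialization skipped:          %.2f%%\n",
	       100.0 * (total - kept) / total);
	return 0;
}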

Hugepage regions are kept separate from non-hugepage regions in
memblock_merge_regions() so that initialization of the unused struct
pages can be skipped for the entire region; a minimal sketch of the
resulting merge rule follows.
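
Minimal sketch (simplified stand-ins, not the kernel structures) of the
merge rule after this change: adjacent regions merge only when they are
physically contiguous and their node, flags and hugepage_size all
match, so a hugepage region never merges with a neighbour whose struct
pages must be fully initialized:

#include <stdbool.h>

/* Illustrative stand-in for struct memblock_region; the field names
 * mirror the kernel's, but the layout is simplified. */
struct region {
	unsigned long long base;
	unsigned long long size;
	unsigned long long hugepage_size;
	int nid;
	unsigned int flags;
};

static bool can_merge(const struct region *this, const struct region *next)
{
	return this->base + this->size == next->base &&
	       this->nid == next->nid &&
	       this->flags == next->flags &&
	       /* the new condition added by this patch */
	       this->hugepage_size == next->hugepage_size;
}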

Signed-off-by: Usama Arif <usama.arif@...edance.com>
---
 mm/hugetlb_vmemmap.c |  2 +-
 mm/hugetlb_vmemmap.h |  3 +++
 mm/memblock.c        | 27 ++++++++++++++++++++++-----
 3 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index bdf750a4786b..b5b7834e0f42 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -443,7 +443,7 @@ static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 DEFINE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
-static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
+bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
 core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
 
 /**
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 3525c514c061..8b9a1563f7b9 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -58,4 +58,7 @@ static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
 	return hugetlb_vmemmap_optimizable_size(h) != 0;
 }
 bool vmemmap_should_optimize(const struct hstate *h, const struct page *head);
+
+extern bool vmemmap_optimize_enabled;
+
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/memblock.c b/mm/memblock.c
index e92d437bcb51..62072a0226de 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -21,6 +21,7 @@
 #include <linux/io.h>
 
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 #define INIT_MEMBLOCK_REGIONS			128
 #define INIT_PHYSMEM_REGIONS			4
@@ -519,7 +520,8 @@ static void __init_memblock memblock_merge_regions(struct memblock_type *type,
 		if (this->base + this->size != next->base ||
 		    memblock_get_region_node(this) !=
 		    memblock_get_region_node(next) ||
-		    this->flags != next->flags) {
+		    this->flags != next->flags ||
+		    this->hugepage_size != next->hugepage_size) {
 			BUG_ON(this->base + this->size > next->base);
 			i++;
 			continue;
@@ -2125,10 +2127,25 @@ static void __init memmap_init_reserved_pages(void)
 	/* initialize struct pages for the reserved regions */
 	for_each_reserved_mem_region(region) {
 		nid = memblock_get_region_node(region);
-		start = region->base;
-		end = start + region->size;
-
-		reserve_bootmem_region(start, end, nid);
+		/*
+		 * If the region is for hugepages and if HVO is enabled, then those
+		 * struct pages which will be freed later don't need to be initialized.
+		 * This can save significant time when a large number of hugepages are
+		 * allocated at boot time. As this is at boot time, we don't need to
+		 * worry about memory hotplug.
+		 */
+		if (region->hugepage_size && vmemmap_optimize_enabled) {
+			for (start = region->base;
+			    start < region->base + region->size;
+			    start += region->hugepage_size) {
+				end = start + HUGETLB_VMEMMAP_RESERVE_SIZE * sizeof(struct page);
+				reserve_bootmem_region(start, end, nid);
+			}
+		} else {
+			start = region->base;
+			end = start + region->size;
+			reserve_bootmem_region(start, end, nid);
+		}
 	}
 }
 
-- 
2.25.1
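
To make the new loop in memmap_init_reserved_pages() concrete, here is
a standalone walk-through (base address and region size are
hypothetical; same page and struct-page assumptions as above) of the
ranges it hands to reserve_bootmem_region() for a region holding two
1 GiB hugepages:

#include <stdio.h>

#define PAGE_SIZE		4096UL
#define STRUCT_PAGE_SIZE	64UL		/* sizeof(struct page) */
#define HUGEPAGE_SIZE		(1024UL * 1024 * 1024)
#define RESERVE_SIZE		PAGE_SIZE	/* HUGETLB_VMEMMAP_RESERVE_SIZE */

int main(void)
{
	unsigned long base = 0x100000000UL;	/* hypothetical region base */
	unsigned long size = 2 * HUGEPAGE_SIZE;	/* two hugepages */
	unsigned long start, end;

	/* Same stride and end calculation as the patched loop: only the
	 * head of each hugepage, whose struct pages survive HVO, is
	 * passed to reserve_bootmem_region(). */
	for (start = base; start < base + size; start += HUGEPAGE_SIZE) {
		end = start + RESERVE_SIZE * STRUCT_PAGE_SIZE;
		printf("reserve_bootmem_region(0x%lx, 0x%lx)\n", start, end);
	}
	return 0;
}

Under these assumptions each call covers the first 256 KiB of a
hugepage, i.e. the 64 struct pages HVO retains, while the struct pages
for the rest of each hugepage are never initialized.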
