Message-Id: <20220509074330.4822-1-jaewon31.kim@samsung.com>
Date:   Mon,  9 May 2022 16:43:30 +0900
From:   Jaewon Kim <jaewon31.kim@...sung.com>
To:     vbabka@...e.cz, akpm@...ux-foundation.org
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        jaewon31.kim@...il.com, Jaewon Kim <jaewon31.kim@...sung.com>
Subject: [RFC PATCH] page_ext: create page extension for all memblock memory
 regions

Page extensions are prepared per section. But if the first page of a
section is not valid, the page extension for that section was not
initialized, even though the section contained many other valid pages.

To support page extensions for all sections, iterate over the memblock
memory regions instead of the nodes. If a section's first page is valid,
take the nid from pfn_to_nid(); otherwise reuse the previous nid.
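
In short, the new walk looks roughly like this (the next_section_pfn
guard for a section shared by two regions and the oom handling are left
out of this sketch; see the diff below for the actual code):

	struct memblock_region *rgn;
	unsigned long pfn;
	int nid = 0;

	for_each_mem_region(rgn) {
		unsigned long start_pfn = (unsigned long)(rgn->base >> PAGE_SHIFT);
		unsigned long end_pfn = start_pfn + (unsigned long)(rgn->size >> PAGE_SHIFT);

		for (pfn = start_pfn; pfn < end_pfn;
		     pfn = ALIGN(pfn + 1, PAGES_PER_SECTION)) {
			/* fall back to the last seen nid for an invalid pfn */
			if (pfn_valid(pfn))
				nid = pfn_to_nid(pfn);
			init_section_page_ext(pfn, nid);
		}
	}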

This patch also changes the log message to include the total number of
sections and the section size, e.g.:

allocated 100663296 bytes of page_ext for 64 sections (1 section : 0x8000000)
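
For that example (assuming 4 KiB pages), this works out to:

	100663296 bytes / 64 sections  = 1572864 bytes of page_ext per section
	0x8000000 bytes / 4096 bytes   = 32768 pages per section
	1572864 bytes / 32768 pages    = 48 bytes of page_ext per page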

Signed-off-by: Jaewon Kim <jaewon31.kim@...sung.com>
---
 mm/page_ext.c | 42 ++++++++++++++++++++++--------------------
 1 file changed, 22 insertions(+), 20 deletions(-)

diff --git a/mm/page_ext.c b/mm/page_ext.c
index 2e66d934d63f..506d58b36a1d 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -381,41 +381,43 @@ static int __meminit page_ext_callback(struct notifier_block *self,
 void __init page_ext_init(void)
 {
 	unsigned long pfn;
-	int nid;
+	int nid = 0;
+	struct memblock_region *rgn;
+	int nr_section = 0;
+	unsigned long next_section_pfn = 0;
 
 	if (!invoke_need_callbacks())
 		return;
 
-	for_each_node_state(nid, N_MEMORY) {
+	/*
+	 * iterate each memblock memory region and do not skip a section having
+	 * !pfn_valid(pfn)
+	 */
+	for_each_mem_region(rgn) {
 		unsigned long start_pfn, end_pfn;
 
-		start_pfn = node_start_pfn(nid);
-		end_pfn = node_end_pfn(nid);
-		/*
-		 * start_pfn and end_pfn may not be aligned to SECTION and the
-		 * page->flags of out of node pages are not initialized.  So we
-		 * scan [start_pfn, the biggest section's pfn < end_pfn) here.
-		 */
+		start_pfn = (unsigned long)(rgn->base >> PAGE_SHIFT);
+		end_pfn = start_pfn + (unsigned long)(rgn->size >> PAGE_SHIFT);
+
+		if (start_pfn < next_section_pfn)
+			start_pfn = next_section_pfn;
+
 		for (pfn = start_pfn; pfn < end_pfn;
 			pfn = ALIGN(pfn + 1, PAGES_PER_SECTION)) {
 
-			if (!pfn_valid(pfn))
-				continue;
-			/*
-			 * Nodes's pfns can be overlapping.
-			 * We know some arch can have a nodes layout such as
-			 * -------------pfn-------------->
-			 * N0 | N1 | N2 | N0 | N1 | N2|....
-			 */
-			if (pfn_to_nid(pfn) != nid)
-				continue;
+			if (pfn_valid(pfn))
+				nid = pfn_to_nid(pfn);
+			nr_section++;
 			if (init_section_page_ext(pfn, nid))
 				goto oom;
 			cond_resched();
 		}
+		next_section_pfn = pfn;
 	}
+
 	hotplug_memory_notifier(page_ext_callback, 0);
-	pr_info("allocated %ld bytes of page_ext\n", total_usage);
+	pr_info("allocated %ld bytes of page_ext for %d sections (1 section : 0x%x)\n",
+		total_usage, nr_section, (1 << SECTION_SIZE_BITS));
 	invoke_init_callbacks();
 	return;
 
-- 
2.17.1
