Message-ID: <20251206212507.135503-1-swarajgaikwad1925@gmail.com>
Date: Sat,  6 Dec 2025 21:25:06 +0000
From: Swaraj Gaikwad <swarajgaikwad1925@...il.com>
To: David Hildenbrand <david@...nel.org>,
	Oscar Salvador <osalvador@...e.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-mm@...ck.org (open list:MEMORY HOT(UN)PLUG),
	linux-kernel@...r.kernel.org (open list)
Cc: skhan@...uxfoundation.org,
	david.hunter.linux@...il.com,
	Swaraj Gaikwad <swarajgaikwad1925@...il.com>
Subject: [PATCH] mm/memory_hotplug: Cache auto_movable stats to optimize online check

The auto_movable_can_online_movable() function currently walks all
populated zones to collect MOVABLE vs. KERNEL statistics whenever it is
called with nid == NUMA_NO_NODE, repeating the same full scan on every
onlining request.
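
For context, the stats collected by that walk are only consumed by the
auto-movable ratio check. Paraphrasing the existing logic in
mm/memory_hotplug.c (a sketch, not the verbatim code), the decision is
roughly:

  /*
   * Would onlining nr_pages more to ZONE_MOVABLE still keep us within
   * the configured memory_hotplug.auto_movable_ratio?
   */
  movable_pages = stats.movable_pages + nr_pages;
  return movable_pages <= (auto_movable_ratio * stats.kernel_early_pages) / 100;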

Since adjust_present_page_count() is called every time memory is
onlined/offlined and already updates present page counts, we can
maintain cached global statistics that are updated incrementally. This
eliminates the need to walk all zones for the NUMA_NO_NODE case.

Introduce a static global_auto_movable_stats structure that caches the
kernel_early_pages and movable_pages counts. The cache is updated in
adjust_present_page_count() whenever pages are onlined/offlined, and is
read directly in auto_movable_can_online_movable() when
nid == NUMA_NO_NODE.
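
For reference, the cache reuses the existing struct auto_movable_stats
that mm/memory_hotplug.c already defines for the per-request walk:

  struct auto_movable_stats {
          unsigned long kernel_early_pages;
          unsigned long movable_pages;
  };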

Testing: Built and booted the kernel successfully. Ran the memory
management selftests in tools/testing/selftests/mm/ with
./run_vmtests.sh; all tests passed.

Signed-off-by: Swaraj Gaikwad <swarajgaikwad1925@...il.com>
---
 mm/memory_hotplug.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 63b9d500ec6c..ba43edba8c92 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -50,6 +50,8 @@ enum {
 
 static int memmap_mode __read_mostly = MEMMAP_ON_MEMORY_DISABLE;
 
+static struct auto_movable_stats global_auto_movable_stats;
+
 static inline unsigned long memory_block_memmap_size(void)
 {
 	return PHYS_PFN(memory_block_size_bytes()) * sizeof(struct page);
@@ -851,9 +853,7 @@ static bool auto_movable_can_online_movable(int nid, struct memory_group *group,
 
 	/* Walk all relevant zones and collect MOVABLE vs. KERNEL stats. */
 	if (nid == NUMA_NO_NODE) {
-		/* TODO: cache values */
-		for_each_populated_zone(zone)
-			auto_movable_stats_account_zone(&stats, zone);
+		stats = global_auto_movable_stats;
 	} else {
 		for (i = 0; i < MAX_NR_ZONES; i++) {
 			pg_data_t *pgdat = NODE_DATA(nid);
@@ -1071,12 +1071,13 @@ void adjust_present_page_count(struct page *page, struct memory_group *group,
 {
 	struct zone *zone = page_zone(page);
 	const bool movable = zone_idx(zone) == ZONE_MOVABLE;
+	const bool early = early_section(__pfn_to_section(page_to_pfn(page)));
 
 	/*
 	 * We only support onlining/offlining/adding/removing of complete
 	 * memory blocks; therefore, either all is either early or hotplugged.
 	 */
-	if (early_section(__pfn_to_section(page_to_pfn(page))))
+	if (early)
 		zone->present_early_pages += nr_pages;
 	zone->present_pages += nr_pages;
 	zone->zone_pgdat->node_present_pages += nr_pages;
@@ -1085,6 +1086,12 @@ void adjust_present_page_count(struct page *page, struct memory_group *group,
 		group->present_movable_pages += nr_pages;
 	else if (group && !movable)
 		group->present_kernel_pages += nr_pages;
+
+	if (movable) {
+		global_auto_movable_stats.movable_pages += nr_pages;
+	} else if (early) {
+		global_auto_movable_stats.kernel_early_pages += nr_pages;
+	}
 }
 
 int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,

base-commit: 3cfeff1d2304237b1c14628d695a6df44daff48f
-- 
2.52.0

