Message-ID: <tencent_ED235B379E160A5C2BCF688ADDF3921EC808@qq.com>
Date: Fri, 14 Nov 2025 10:40:32 +0000
From: fujunjie <fujunjie1@...com>
To: akpm@...ux-foundation.org
Cc: vbabka@...e.cz,
surenb@...gle.com,
mhocko@...e.com,
jackmanb@...gle.com,
hannes@...xchg.org,
ziy@...dia.com,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
fujunjie <fujunjie1@...com>
Subject: [PATCH] mm/page_alloc: optimize lowmem_reserve max lookup using monotonicity

calculate_totalreserve_pages() currently finds the maximum
lowmem_reserve[j] for a zone by scanning the full range
[j = zone_idx .. MAX_NR_ZONES). However,
setup_per_zone_lowmem_reserve() constructs lowmem_reserve[] so that it
is monotonically increasing in j for a fixed zone, and it never
populates lowmem_reserve[zone_idx] itself. This means the maximum
valid reserve entry always resides at the highest j > zone_idx that
has a non-zero value.
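
For readers less familiar with that code, the following stand-alone
sketch paraphrases the construction (it is not the kernel function
itself: zones, per-zone ratios and the clearing of stale entries are
condensed into plain arrays and a single ratio, and all values are
made up):

#include <stdio.h>

#define MAX_NR_ZONES 5			/* toy value */

/*
 * Simplified paraphrase: for a fixed zone index i, each higher entry
 * j gets a running sum of managed pages divided by a fixed ratio, so
 * the array can only grow (or stay equal) as j increases, and entry
 * i itself is never written.
 */
static void fill_lowmem_reserve(long reserve[MAX_NR_ZONES],
				const long managed[MAX_NR_ZONES],
				int i, long ratio)
{
	long managed_pages = 0;
	int j;

	for (j = i + 1; j < MAX_NR_ZONES; j++) {
		managed_pages += managed[j];	/* cumulative, never shrinks */
		reserve[j] = ratio ? managed_pages / ratio : 0;
	}
}

int main(void)
{
	long managed[MAX_NR_ZONES] = { 4096, 65536, 0, 262144, 16384 };
	long reserve[MAX_NR_ZONES] = { 0 };
	int j;

	fill_lowmem_reserve(reserve, managed, 0, 256);
	for (j = 0; j < MAX_NR_ZONES; j++)
		printf("reserve[%d] = %ld\n", j, reserve[j]);
	return 0;
}

Because managed_pages only accumulates, reserve[j] can never shrink as
j grows, which is the property the rewrite below relies on.
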
Rewrite the loop to walk backwards from MAX_NR_ZONES - 1 down to
zone_idx + 1 and stop at the first non-zero lowmem_reserve[j].
Behavior remains unchanged.

Although this code is not on a hot path, the backward walk avoids an
unnecessary full-range scan and makes the intent clearer.
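
As an illustration of why the two forms agree (again a stand-alone toy
program, not part of the patch), the old forward max() scan and the
new backward walk can be compared directly:

#include <assert.h>
#include <stdio.h>

#define MAX_NR_ZONES 5			/* toy value */

/* Old behaviour: scan every entry from 'start' upwards. */
static long max_forward(const long *reserve, int start)
{
	long max = 0;
	int j;

	for (j = start; j < MAX_NR_ZONES; j++)
		if (reserve[j] > max)
			max = reserve[j];
	return max;
}

/* New behaviour: walk backwards, stop at the first non-zero entry. */
static long max_backward(const long *reserve, int start)
{
	long max = 0;
	int j;

	for (j = MAX_NR_ZONES - 1; j > start; j--) {
		if (!reserve[j])
			continue;
		max = reserve[j];
		break;
	}
	return max;
}

int main(void)
{
	/* Non-decreasing for j > 0; entry 0 is never populated. */
	long reserve[MAX_NR_ZONES] = { 0, 256, 256, 1280, 1344 };
	long empty[MAX_NR_ZONES] = { 0 };

	assert(max_forward(reserve, 0) == max_backward(reserve, 0));
	assert(max_forward(empty, 0) == max_backward(empty, 0));
	printf("max = %ld\n", max_backward(reserve, 0));
	return 0;
}

The two agree because a non-decreasing array attains its maximum at
the highest index, and lowmem_reserve[zone_idx] itself is never set.
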
Signed-off-by: fujunjie <fujunjie1@...com>
---
mm/page_alloc.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 600d9e981c23d..414c5ba978418 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6285,10 +6285,22 @@ static void calculate_totalreserve_pages(void)
 			long max = 0;
 			unsigned long managed_pages = zone_managed_pages(zone);
 
-			/* Find valid and maximum lowmem_reserve in the zone */
-			for (j = i; j < MAX_NR_ZONES; j++)
-				max = max(max, zone->lowmem_reserve[j]);
+			/*
+			 * Find valid and maximum lowmem_reserve in the zone.
+			 *
+			 * setup_per_zone_lowmem_reserve() builds
+			 * lowmem_reserve[j] monotonically increasing in j
+			 * for a fixed zone, so the maximum lives at the
+			 * highest index that has a non-zero value. Walk
+			 * backwards and stop at the first hit.
+			 */
+			for (j = MAX_NR_ZONES - 1; j > i; j--) {
+				if (!zone->lowmem_reserve[j])
+					continue;
+				max = zone->lowmem_reserve[j];
+				break;
+			}
 
 			/* we treat the high watermark as reserved pages. */
 			max += high_wmark_pages(zone);
 
--
2.34.1