Message-Id: <20260112121614.1840607-1-yajun.deng@linux.dev>
Date: Mon, 12 Jan 2026 20:16:14 +0800
From: Yajun Deng <yajun.deng@linux.dev>
To: akpm@...ux-foundation.org,
vbabka@...e.cz,
surenb@...gle.com,
mhocko@...e.com,
jackmanb@...gle.com,
hannes@...xchg.org,
ziy@...dia.com
Cc: linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Yajun Deng <yajun.deng@linux.dev>,
Joshua Hahn <joshua.hahnjy@...il.com>
Subject: [PATCH v2] mm/page_alloc: Avoid duplicate NR_FREE_PAGES updates in move_to_free_list()

In move_to_free_list(), when a pageblock changes its migratetype, the
free page counters must be updated for both the old and the new type.
This was done with two calls to account_freepages(), each of which
updates NR_FREE_PAGES in addition to the type-specific counters. As a
result, NR_FREE_PAGES is written twice even though the net change is
zero whenever neither migratetype is MIGRATE_ISOLATE, which is the
common case.

Factor the type-specific accounting out into a new helper,
account_specific_freepages(), and call it directly from
move_to_free_list(). NR_FREE_PAGES is now only updated when exactly
one of the two migratetypes is MIGRATE_ISOLATE, i.e. when the net
change is actually non-zero, matching the condition already used for
NR_FREE_PAGES_BLOCKS. This avoids the duplicate NR_FREE_PAGES updates.
Signed-off-by: Yajun Deng <yajun.deng@linux.dev>
Suggested-by: Joshua Hahn <joshua.hahnjy@...il.com>
---
v2: remove account_freepages_both().
v1: https://lore.kernel.org/all/20260109105121.328780-1-yajun.deng@linux.dev/
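For reviewers, a minimal userspace sketch (not part of the patch) of
the accounting change. The migratetype constants and counters below
are mocked stand-ins for the zone vmstat counters in mm/page_alloc.c;
only the write-count behaviour is modeled:

/* sketch.c - mock of the NR_FREE_PAGES accounting, not kernel code */
#include <stdbool.h>
#include <stdio.h>

enum { MIGRATE_MOVABLE, MIGRATE_HIGHATOMIC, MIGRATE_ISOLATE };

static long nr_free_pages;  /* stands in for the NR_FREE_PAGES vmstat */
static long nr_writes;      /* how often the counter is touched */

static bool is_migrate_isolate(int mt)
{
	return mt == MIGRATE_ISOLATE;
}

static void mod_nr_free_pages(long nr)
{
	nr_free_pages += nr;
	nr_writes++;
}

/* Old scheme: every non-isolate call writes NR_FREE_PAGES. */
static void account_freepages(long nr_pages, int mt)
{
	if (is_migrate_isolate(mt))
		return;
	mod_nr_free_pages(nr_pages);
	/* type-specific counters elided */
}

/* New scheme: write only when the isolation status actually changes. */
static void move_accounting(long nr_pages, int old_mt, int new_mt)
{
	bool old_isolated = is_migrate_isolate(old_mt);
	bool new_isolated = is_migrate_isolate(new_mt);

	if (old_isolated != new_isolated)
		mod_nr_free_pages(old_isolated ? nr_pages : -nr_pages);
}

int main(void)
{
	/* movable -> highatomic, one pageblock of 512 pages */
	account_freepages(-512, MIGRATE_MOVABLE);
	account_freepages(512, MIGRATE_HIGHATOMIC);
	printf("old: %ld writes, net %ld\n", nr_writes, nr_free_pages);

	nr_writes = 0;
	move_accounting(512, MIGRATE_MOVABLE, MIGRATE_HIGHATOMIC);
	printf("new: %ld writes, net %ld\n", nr_writes, nr_free_pages);

	/* isolate -> movable: pages become counted again, one write */
	nr_writes = 0;
	move_accounting(512, MIGRATE_ISOLATE, MIGRATE_MOVABLE);
	printf("unisolate: %ld writes, net %ld\n", nr_writes, nr_free_pages);
	return 0;
}

With the old scheme the movable -> highatomic move writes
NR_FREE_PAGES twice for a net change of zero; the new scheme skips
the counter entirely, and still applies the +/-nr_pages adjustment
when a block moves into or out of isolation.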
---
mm/page_alloc.c | 38 +++++++++++++++++++++++++-------------
1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ebfa07632995..d56e94eb4914 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -812,6 +812,16 @@ compaction_capture(struct capture_control *capc, struct page *page,
}
#endif /* CONFIG_COMPACTION */

+static inline void account_specific_freepages(struct zone *zone, int nr_pages,
+ int migratetype)
+{
+ if (is_migrate_cma(migratetype))
+ __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
+ else if (migratetype == MIGRATE_HIGHATOMIC)
+ WRITE_ONCE(zone->nr_free_highatomic,
+ zone->nr_free_highatomic + nr_pages);
+}
+
static inline void account_freepages(struct zone *zone, int nr_pages,
int migratetype)
{
@@ -822,11 +832,7 @@ static inline void account_freepages(struct zone *zone, int nr_pages,

__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);

- if (is_migrate_cma(migratetype))
- __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
- else if (migratetype == MIGRATE_HIGHATOMIC)
- WRITE_ONCE(zone->nr_free_highatomic,
- zone->nr_free_highatomic + nr_pages);
+ account_specific_freepages(zone, nr_pages, migratetype);
}

/* Used for pages not on another list */
@@ -861,6 +867,8 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
{
struct free_area *area = &zone->free_area[order];
int nr_pages = 1 << order;
+ bool old_isolated = is_migrate_isolate(old_mt);
+ bool new_isolated = is_migrate_isolate(new_mt);

/* Free page moving can fail, so it happens before the type update */
VM_WARN_ONCE(get_pageblock_migratetype(page) != old_mt,
@@ -869,14 +877,18 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,

list_move_tail(&page->buddy_list, &area->free_list[new_mt]);

- account_freepages(zone, -nr_pages, old_mt);
- account_freepages(zone, nr_pages, new_mt);
-
- if (order >= pageblock_order &&
- is_migrate_isolate(old_mt) != is_migrate_isolate(new_mt)) {
- if (!is_migrate_isolate(old_mt))
- nr_pages = -nr_pages;
- __mod_zone_page_state(zone, NR_FREE_PAGES_BLOCKS, nr_pages);
+ if (!old_isolated)
+ account_specific_freepages(zone, -nr_pages, old_mt);
+ if (!new_isolated)
+ account_specific_freepages(zone, nr_pages, new_mt);
+
+ /* Only update NR_FREE_PAGES if exactly one of the types is isolated */
+ if (old_isolated != new_isolated) {
+ nr_pages = old_isolated ? nr_pages : -nr_pages;
+ __mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
+ if (order >= pageblock_order)
+ __mod_zone_page_state(zone, NR_FREE_PAGES_BLOCKS,
+ nr_pages);
}
}

--
2.34.1