Message-Id: <20220329010901.1654-2-richard.weiyang@gmail.com>
Date: Tue, 29 Mar 2022 01:09:01 +0000
From: Wei Yang <richard.weiyang@...il.com>
To: akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
ying.huang@...el.com, mgorman@...hsingularity.net,
Wei Yang <richard.weiyang@...il.com>,
Miaohe Lin <linmiaohe@...wei.com>,
David Hildenbrand <david@...hat.com>,
Oscar Salvador <osalvador@...e.de>
Subject: [Patch v2 2/2] mm/vmscan: make sure wakeup_kswapd with managed zone
wakeup_kswapd() only wakes up kswapd when the zone is managed.

Two callers of wakeup_kswapd() work from a node perspective:

* wake_all_kswapds
* numamigrate_isolate_page
If they pick up a !managed zone, that is not what we expect.

This patch makes sure we pick up a managed zone for wakeup_kswapd().  It
also uses managed_zone() in migrate_balanced_pgdat() to get the proper
zone.
Signed-off-by: Wei Yang <richard.weiyang@...il.com>
Cc: Miaohe Lin <linmiaohe@...wei.com>
Cc: David Hildenbrand <david@...hat.com>
Cc: "Huang, Ying" <ying.huang@...el.com>
Cc: Mel Gorman <mgorman@...hsingularity.net>
Cc: Oscar Salvador <osalvador@...e.de>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
---
v2: adjust the usage in migrate_balanced_pgdat()
---
mm/migrate.c | 6 +++---
mm/page_alloc.c | 2 ++
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 3d60823afd2d..5adc55b5347c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1971,7 +1971,7 @@ SYSCALL_DEFINE6(move_pages, pid_t, pid, unsigned long, nr_pages,
#ifdef CONFIG_NUMA_BALANCING
/*
* Returns true if this is a safe migration target node for misplaced NUMA
- * pages. Currently it only checks the watermarks which crude
+ * pages. Currently it only checks the watermarks which is crude.
*/
static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
unsigned long nr_migrate_pages)
@@ -1981,7 +1981,7 @@ static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
for (z = pgdat->nr_zones - 1; z >= 0; z--) {
struct zone *zone = pgdat->node_zones + z;
- if (!populated_zone(zone))
+ if (!managed_zone(zone))
continue;
/* Avoid waking kswapd by allocating pages_to_migrate pages. */
@@ -2046,7 +2046,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
return 0;
for (z = pgdat->nr_zones - 1; z >= 0; z--) {
- if (populated_zone(pgdat->node_zones + z))
+ if (managed_zone(pgdat->node_zones + z))
break;
}
wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4c0c4ef94ba0..6656c2d06e01 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4674,6 +4674,8 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, highest_zoneidx,
ac->nodemask) {
+ if (!managed_zone(zone))
+ continue;
if (last_pgdat != zone->zone_pgdat)
wakeup_kswapd(zone, gfp_mask, order, highest_zoneidx);
last_pgdat = zone->zone_pgdat;
--
2.33.1