Message-ID: <20240313163707.615000-8-sashal@kernel.org>
Date: Wed, 13 Mar 2024 12:36:14 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Cc: Kefeng Wang <wangkefeng.wang@...wei.com>,
Zi Yan <ziy@...dia.com>,
David Hildenbrand <david@...hat.com>,
"Huang, Ying" <ying.huang@...el.com>,
Hugh Dickins <hughd@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 6.6 07/60] mm: migrate: convert numamigrate_isolate_page() to numamigrate_isolate_folio()
From: Kefeng Wang <wangkefeng.wang@...wei.com>
[ Upstream commit 2ac9e99f3b21b2864305fbfba4bae5913274c409 ]
Rename numamigrate_isolate_page() to numamigrate_isolate_folio(), then
make it take a folio and use the folio APIs to save compound_head() calls.
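For readers unfamiliar with the conversion pattern, a minimal sketch of
the before/after calling convention follows (not part of this patch;
folio_conversion_example() is a hypothetical function for illustration).
Per the commit message, the page-based helpers involve compound_head()
resolution on use, while page_folio() does that lookup once and the
folio accessors then operate on the head page directly:

	#include <linux/mm.h>
	#include <linux/printk.h>

	/* Hypothetical, illustration only: the calling pattern this
	 * patch converts numamigrate_isolate_page() away from. */
	static void folio_conversion_example(struct page *page)
	{
		/*
		 * Before: page-based helpers, each resolving the head
		 * page internally.
		 */
		int nr_pages = thp_nr_pages(page);
		unsigned int order = compound_order(page);

		/*
		 * After: convert to a folio once, then use the folio
		 * accessors, which already operate on the head page.
		 */
		struct folio *folio = page_folio(page);
		long folio_pages = folio_nr_pages(folio);
		unsigned int folio_ord = folio_order(folio);

		pr_info("page: nr=%d order=%u folio: nr=%ld order=%u\n",
			nr_pages, order, folio_pages, folio_ord);
	}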
Link: https://lkml.kernel.org/r/20230913095131.2426871-4-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@...wei.com>
Reviewed-by: Zi Yan <ziy@...dia.com>
Cc: David Hildenbrand <david@...hat.com>
Cc: "Huang, Ying" <ying.huang@...el.com>
Cc: Hugh Dickins <hughd@...gle.com>
Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
Cc: Mike Kravetz <mike.kravetz@...cle.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Stable-dep-of: 2774f256e7c0 ("mm/vmscan: fix a bug calling wakeup_kswapd() with a wrong zone index")
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
mm/migrate.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index c9fabb960996f..e5f2f7243a659 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2501,10 +2501,9 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
return __folio_alloc_node(gfp, order, nid);
}
-static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
+static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
{
- int nr_pages = thp_nr_pages(page);
- int order = compound_order(page);
+ int nr_pages = folio_nr_pages(folio);
/* Avoid migrating to a node that is nearly full */
if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
@@ -2516,22 +2515,23 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
if (managed_zone(pgdat->node_zones + z))
break;
}
- wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
+ wakeup_kswapd(pgdat->node_zones + z, 0,
+ folio_order(folio), ZONE_MOVABLE);
return 0;
}
- if (!isolate_lru_page(page))
+ if (!folio_isolate_lru(folio))
return 0;
- mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_is_file_lru(page),
+ node_stat_mod_folio(folio, NR_ISOLATED_ANON + folio_is_file_lru(folio),
nr_pages);
/*
- * Isolating the page has taken another reference, so the
- * caller's reference can be safely dropped without the page
+ * Isolating the folio has taken another reference, so the
+ * caller's reference can be safely dropped without the folio
* disappearing underneath us during migration.
*/
- put_page(page);
+ folio_put(folio);
return 1;
}
@@ -2565,7 +2565,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
if (page_is_file_lru(page) && PageDirty(page))
goto out;
- isolated = numamigrate_isolate_page(pgdat, page);
+ isolated = numamigrate_isolate_folio(pgdat, page_folio(page));
if (!isolated)
goto out;
--
2.43.0