Message-Id: <a37b13dd91bd3eadcd56a08cb3c839616f8457e7.1692440586.git.baolin.wang@linux.alibaba.com>
Date: Sat, 19 Aug 2023 18:52:34 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: akpm@...ux-foundation.org
Cc: mgorman@...hsingularity.net, shy828301@...il.com, david@...hat.com,
ying.huang@...el.com, baolin.wang@...ux.alibaba.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: [PATCH 1/4] mm: migrate: move migration validation into numa_migrate_prep()
There are currently 3 places that validate whether a page can migrate or not,
and some of these validations are performed after calling numa_migrate_prep(),
which wastes CPU cycles.
Thus move all the migration validation into numa_migrate_prep(), which is
more maintainable and saves some CPU resources. Another benefit is that it
serves as a preparation for supporting batch migration in do_numa_page() in
the future.
Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
---
mm/memory.c | 19 +++++++++++++++++++
mm/migrate.c | 19 -------------------
2 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index d003076b218d..bee9b1e86ef0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4747,6 +4747,25 @@ int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
*flags |= TNF_FAULT_LOCAL;
}
+ /*
+ * Don't migrate file pages that are mapped in multiple processes
+ * with execute permissions as they are probably shared libraries.
+ */
+ if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
+ (vma->vm_flags & VM_EXEC))
+ return NUMA_NO_NODE;
+
+ /*
+ * Also do not migrate dirty pages as not all filesystems can move
+ * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
+ */
+ if (page_is_file_lru(page) && PageDirty(page))
+ return NUMA_NO_NODE;
+
+ /* Do not migrate THP mapped by multiple processes */
+ if (PageTransHuge(page) && total_mapcount(page) > 1)
+ return NUMA_NO_NODE;
+
return mpol_misplaced(page, vma, addr);
}
diff --git a/mm/migrate.c b/mm/migrate.c
index e21d5a7e7447..9cc98fb1d6ec 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2485,10 +2485,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
- /* Do not migrate THP mapped by multiple processes */
- if (PageTransHuge(page) && total_mapcount(page) > 1)
- return 0;
-
/* Avoid migrating to a node that is nearly full */
if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
int z;
@@ -2533,21 +2529,6 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
LIST_HEAD(migratepages);
int nr_pages = thp_nr_pages(page);
- /*
- * Don't migrate file pages that are mapped in multiple processes
- * with execute permissions as they are probably shared libraries.
- */
- if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
- (vma->vm_flags & VM_EXEC))
- goto out;
-
- /*
- * Also do not migrate dirty pages as not all filesystems can move
- * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
- */
- if (page_is_file_lru(page) && PageDirty(page))
- goto out;
-
isolated = numamigrate_isolate_page(pgdat, page);
if (!isolated)
goto out;
--
2.39.3