[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <81ED0EF7-779F-4977-AF09-665FF750319C@nvidia.com>
Date: Thu, 04 Nov 2021 11:33:31 -0400
From: Zi Yan <ziy@...dia.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: akpm@...ux-foundation.org, rostedt@...dmis.org, mingo@...hat.com,
shy828301@...il.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 1/3] mm: migrate: Fix the return value of migrate_pages()
On 3 Nov 2021, at 6:51, Baolin Wang wrote:
> As Zi Yan pointed out, the syscall move_pages() can return a non-migrated
> number larger than the number of pages the users tried to migrate, when a
> THP page is failed to migrate. This is confusing for users.
>
> Since other migration scenarios do not care about the actual non-migrated
> number of pages except the memory compaction migration which will fix in
> following patch. Thus we can change the return value to return the number
> of {normal page, THP, hugetlb} instead to avoid this issue, meanwhile we
> should still keep the migration counters using the number of normal pages.
>
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> ---
> mm/migrate.c | 18 ++++++++++--------
> 1 file changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index a11e948..00b8922 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1428,7 +1428,7 @@ static inline int try_split_thp(struct page *page, struct page **page2,
> * @mode: The migration mode that specifies the constraints for
> * page migration, if any.
> * @reason: The reason for page migration.
> - * @ret_succeeded: Set to the number of pages migrated successfully if
> + * @ret_succeeded: Set to the number of normal pages migrated successfully if
> * the caller passes a non-NULL pointer.
> *
> * The function returns after 10 attempts or if no pages are movable any more
> @@ -1436,7 +1436,7 @@ static inline int try_split_thp(struct page *page, struct page **page2,
> * It is caller's responsibility to call putback_movable_pages() to return pages
> * to the LRU or free list only if ret != 0.
> *
> - * Returns the number of pages that were not migrated, or an error code.
> + * Returns the number of {normal page, THP} that were not migrated, or an error code.
> */
> int migrate_pages(struct list_head *from, new_page_t get_new_page,
> free_page_t put_new_page, unsigned long private,
> @@ -1445,6 +1445,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
> int retry = 1;
> int thp_retry = 1;
> int nr_failed = 0;
> + int nr_failed_pages = 0;
> int nr_succeeded = 0;
> int nr_thp_succeeded = 0;
> int nr_thp_failed = 0;
> @@ -1517,7 +1518,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
> }
>
> nr_thp_failed++;
> - nr_failed += nr_subpages;
> + nr_failed_pages += nr_subpages;
> break;
> }
>
> @@ -1537,7 +1538,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
> }
>
> nr_thp_failed++;
> - nr_failed += nr_subpages;
> + nr_failed_pages += nr_subpages;
> goto out;
> }
> nr_failed++;
> @@ -1566,7 +1567,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
> */
> if (is_thp) {
> nr_thp_failed++;
> - nr_failed += nr_subpages;
> + nr_failed_pages += nr_subpages;
> break;
> }
> nr_failed++;
> @@ -1575,8 +1576,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
> }
> }
> nr_failed += retry + thp_retry;
This line can probably go away, since we do not want to count retried pages.
> + nr_failed_pages += nr_failed;
> nr_thp_failed += thp_retry;
> - rc = nr_failed;
> + rc = nr_failed + nr_thp_failed;
> out:
> /*
> * Put the permanent failure page back to migration list, they
> @@ -1585,11 +1587,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
> list_splice(&ret_pages, from);
>
> count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
> - count_vm_events(PGMIGRATE_FAIL, nr_failed);
> + count_vm_events(PGMIGRATE_FAIL, nr_failed_pages);
> count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
> count_vm_events(THP_MIGRATION_FAIL, nr_thp_failed);
> count_vm_events(THP_MIGRATION_SPLIT, nr_thp_split);
> - trace_mm_migrate_pages(nr_succeeded, nr_failed, nr_thp_succeeded,
> + trace_mm_migrate_pages(nr_succeeded, nr_failed_pages, nr_thp_succeeded,
> nr_thp_failed, nr_thp_split, mode, reason);
>
> if (!swapwrite)
> --
> 1.8.3.1
Thank you for the patch!
In general, this looks good to me. But like you said in other email, when a THP fails to
migrate and gets split, the number of nr_failed will still be inflated by the number of
failed subpage migrations. What I can think of is to split THPs to a separate list and
stop increasing nr_failed when the pages from the new list is under migration. Let me
know how it sounds to you.
An untested but compiled patch (please apply it before this one) looks like:
diff --git a/mm/migrate.c b/mm/migrate.c
index 1852d787e6ab..f7e424f8e647 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1457,13 +1457,16 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
int swapwrite = current->flags & PF_SWAPWRITE;
int rc, nr_subpages;
LIST_HEAD(ret_pages);
+ LIST_HEAD(thp_split_pages);
bool nosplit = (reason == MR_NUMA_MISPLACED);
+ bool no_failed_counting = false;
trace_mm_migrate_pages_start(mode, reason);
if (!swapwrite)
current->flags |= PF_SWAPWRITE;
+thp_subpage_migration:
for (pass = 0; pass < 10 && (retry || thp_retry); pass++) {
retry = 0;
thp_retry = 0;
@@ -1512,7 +1515,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
case -ENOSYS:
/* THP migration is unsupported */
if (is_thp) {
- if (!try_split_thp(page, &page2, from)) {
+ if (!try_split_thp(page, &page2, &thp_split_pages)) {
nr_thp_split++;
goto retry;
}
@@ -1523,7 +1526,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
}
/* Hugetlb migration is unsupported */
- nr_failed++;
+ if (!no_failed_counting)
+ nr_failed++;
break;
case -ENOMEM:
/*
@@ -1532,7 +1536,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
* THP NUMA faulting doesn't split THP to retry.
*/
if (is_thp && !nosplit) {
- if (!try_split_thp(page, &page2, from)) {
+ if (!try_split_thp(page, &page2, &thp_split_pages)) {
nr_thp_split++;
goto retry;
}
@@ -1541,7 +1545,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
nr_failed += nr_subpages;
goto out;
}
- nr_failed++;
+ if (!no_failed_counting)
+ nr_failed++;
goto out;
case -EAGAIN:
if (is_thp) {
@@ -1570,13 +1575,24 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
nr_failed += nr_subpages;
break;
}
- nr_failed++;
+ if (!no_failed_counting)
+ nr_failed++;
break;
}
}
}
- nr_failed += retry + thp_retry;
+ /* get thp_retry before it can be reset in THP subpage migration. */
nr_thp_failed += thp_retry;
+ /* try to migrate subpages of fail-to-migrate THPs, no nr_failed
+ * counting in this round, since all subpages of a THP is counted as
+ * 1 failure in the first round. */
+ if (!list_empty(&thp_split_pages)) {
+ list_splice(from, &thp_split_pages);
+ no_failed_counting = true;
+ goto thp_subpage_migration;
+ }
+
+ nr_failed += retry + thp_retry;
rc = nr_failed;
out:
/*
--
Best Regards,
Yan, Zi
Download attachment "signature.asc" of type "application/pgp-signature" (855 bytes)
Powered by blists - more mailing lists