Message-Id: <20220921060616.73086-4-ying.huang@intel.com>
Date:   Wed, 21 Sep 2022 14:06:13 +0800
From:   Huang Ying <ying.huang@...el.com>
To:     linux-mm@...ck.org
Cc:     linux-kernel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Huang Ying <ying.huang@...el.com>, Zi Yan <ziy@...dia.com>,
        Yang Shi <shy828301@...il.com>,
        Baolin Wang <baolin.wang@...ux.alibaba.com>,
        Oscar Salvador <osalvador@...e.de>,
        Matthew Wilcox <willy@...radead.org>
Subject: [RFC 3/6] mm/migrate_pages: restrict number of pages to migrate in batch

This is a preparation patch to batch the page unmapping and moving
for normal pages and THPs.

Once the page unmapping is batched, all pages to be migrated will be
unmapped before their contents and flags are copied.  If the number of
pages passed to migrate_pages() is too large, too many pages will be
unmapped at the same time, and the processes mapping them may be
stalled for too long.  For example, the migrate_pages() syscall may
call migrate_pages() with all pages of a process.  To avoid this
possible issue, this patch restricts the number of pages migrated in
one batch to at most HPAGE_PMD_NR, so that the impact is on the same
level as that of a THP migration.
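
To make the cut-point arithmetic concrete, here is a minimal,
self-contained userspace sketch (not kernel code) of the batching loop
added below.  Everything in it (struct demo_page, process_batch(), the
example sizes) is invented for illustration; only the rule "accumulate
compound_nr() and cut the batch before the entry that would push the
running total past HPAGE_PMD_NR" mirrors the patch.

#include <stdio.h>
#include <stddef.h>

#define HPAGE_PMD_NR 512	/* base pages per PMD-sized THP (4KB pages, 2MB THP) */

/* Hypothetical stand-in for a page on the migration list;
 * nr plays the role of compound_nr(page). */
struct demo_page {
	size_t nr;
};

/* Hypothetical stand-in for migrate_pages_batch(): just report the batch. */
static void process_batch(const struct demo_page *batch, size_t n, size_t total)
{
	(void)batch;
	printf("batch: %zu entries, %zu base pages\n", n, total);
}

int main(void)
{
	/* A queue mixing normal pages (1) and THPs (HPAGE_PMD_NR). */
	struct demo_page from[] = {
		{1}, {1}, {HPAGE_PMD_NR}, {1}, {HPAGE_PMD_NR}, {1}, {1},
	};
	size_t n = sizeof(from) / sizeof(from[0]);
	size_t i = 0;

	while (i < n) {
		size_t start = i, nr_pages = 0;

		/* Take entries until adding one more would push the running
		 * total past HPAGE_PMD_NR; that entry starts the next batch
		 * (the patch uses list_cut_before() at this point). */
		while (i < n && nr_pages + from[i].nr <= HPAGE_PMD_NR)
			nr_pages += from[i++].nr;

		/* Guard so a single oversized entry still makes progress. */
		if (i == start)
			nr_pages = from[i++].nr;

		process_batch(&from[start], i - start, nr_pages);
	}
	return 0;
}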

Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
Cc: Zi Yan <ziy@...dia.com>
Cc: Yang Shi <shy828301@...il.com>
Cc: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: Oscar Salvador <osalvador@...e.de>
Cc: Matthew Wilcox <willy@...radead.org>
---
 mm/migrate.c | 93 +++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 67 insertions(+), 26 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 4a81e0bfdbcd..1077af858e36 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1439,32 +1439,7 @@ static inline int try_split_thp(struct page *page, struct list_head *split_pages
 	return rc;
 }
 
-/*
- * migrate_pages - migrate the pages specified in a list, to the free pages
- *		   supplied as the target for the page migration
- *
- * @from:		The list of pages to be migrated.
- * @get_new_page:	The function used to allocate free pages to be used
- *			as the target of the page migration.
- * @put_new_page:	The function used to free target pages if migration
- *			fails, or NULL if no special handling is necessary.
- * @private:		Private data to be passed on to get_new_page()
- * @mode:		The migration mode that specifies the constraints for
- *			page migration, if any.
- * @reason:		The reason for page migration.
- * @ret_succeeded:	Set to the number of normal pages migrated successfully if
- *			the caller passes a non-NULL pointer.
- *
- * The function returns after 10 attempts or if no pages are movable any more
- * because the list has become empty or no retryable pages exist any more.
- * It is caller's responsibility to call putback_movable_pages() to return pages
- * to the LRU or free list only if ret != 0.
- *
- * Returns the number of {normal page, THP, hugetlb} that were not migrated, or
- * an error code. The number of THP splits will be considered as the number of
- * non-migrated THP, no matter how many subpages of the THP are migrated successfully.
- */
-int migrate_pages(struct list_head *from, new_page_t get_new_page,
+static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 		free_page_t put_new_page, unsigned long private,
 		enum migrate_mode mode, int reason, unsigned int *ret_succeeded)
 {
@@ -1709,6 +1684,72 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	return rc;
 }
 
+/*
+ * migrate_pages - migrate the pages specified in a list, to the free pages
+ *		   supplied as the target for the page migration
+ *
+ * @from:		The list of pages to be migrated.
+ * @get_new_page:	The function used to allocate free pages to be used
+ *			as the target of the page migration.
+ * @put_new_page:	The function used to free target pages if migration
+ *			fails, or NULL if no special handling is necessary.
+ * @private:		Private data to be passed on to get_new_page()
+ * @mode:		The migration mode that specifies the constraints for
+ *			page migration, if any.
+ * @reason:		The reason for page migration.
+ * @ret_succeeded:	Set to the number of normal pages migrated successfully if
+ *			the caller passes a non-NULL pointer.
+ *
+ * The function returns after 10 attempts or if no pages are movable any more
+ * because the list has become empty or no retryable pages exist any more.
+ * It is caller's responsibility to call putback_movable_pages() to return pages
+ * to the LRU or free list only if ret != 0.
+ *
+ * Returns the number of {normal page, THP, hugetlb} that were not migrated, or
+ * an error code. The number of THP splits will be considered as the number of
+ * non-migrated THP, no matter how many subpages of the THP are migrated successfully.
+ */
+int migrate_pages(struct list_head *from, new_page_t get_new_page,
+		free_page_t put_new_page, unsigned long private,
+		enum migrate_mode mode, int reason, unsigned int *pret_succeeded)
+{
+	int rc, rc_gether = 0;
+	int ret_succeeded, ret_succeeded_gether = 0;
+	int nr_pages;
+	struct page *page;
+	LIST_HEAD(pagelist);
+	LIST_HEAD(ret_pages);
+
+again:
+	nr_pages = 0;
+	list_for_each_entry(page, from, lru) {
+		nr_pages += compound_nr(page);
+		if (nr_pages > HPAGE_PMD_NR)
+			break;
+	}
+	if (nr_pages > HPAGE_PMD_NR)
+		list_cut_before(&pagelist, from, &page->lru);
+	else
+		list_splice_init(from, &pagelist);
+	rc = migrate_pages_batch(&pagelist, get_new_page, put_new_page, private,
+				 mode, reason, &ret_succeeded);
+	ret_succeeded_gether += ret_succeeded;
+	list_splice_tail_init(&pagelist, &ret_pages);
+	if (rc == -ENOMEM) {
+		rc_gether = rc;
+		goto out;
+	}
+	rc_gether += rc;
+	if (!list_empty(from))
+		goto again;
+out:
+	if (pret_succeeded)
+		*pret_succeeded = ret_succeeded_gether;
+	list_splice(&ret_pages, from);
+
+	return rc_gether;
+}
+
 struct page *alloc_migration_target(struct page *page, unsigned long private)
 {
 	struct folio *folio = page_folio(page);
-- 
2.35.1
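
For context, below is a rough sketch (not part of this patch) of how an
in-kernel caller uses the migrate_pages() interface documented above:
isolate pages onto a list, supply an allocation callback, and put back
whatever did not migrate when the return value is non-zero.  The names
demo_new_page() and demo_migrate_list_to_node() are invented, and the
order-0 allocator is a simplification; real callers typically pass
alloc_migration_target() together with a struct migration_target_control.

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/migrate.h>
#include <linux/migrate_mode.h>
#include <linux/mm.h>

/* Illustrative allocation callback matching new_page_t: allocate the
 * target page on the node passed via 'private'.  Order-0 only; see
 * alloc_migration_target() for how THP/hugetlb orders are handled. */
static struct page *demo_new_page(struct page *page, unsigned long private)
{
	int nid = (int)private;

	return alloc_pages_node(nid, GFP_HIGHUSER_MOVABLE, 0);
}

/* Illustrative caller: migrate an already-isolated list of pages to 'nid'. */
static int demo_migrate_list_to_node(struct list_head *pagelist, int nid)
{
	int rc;

	if (list_empty(pagelist))
		return 0;

	/* The last argument may be NULL if the success count is not needed. */
	rc = migrate_pages(pagelist, demo_new_page, NULL, (unsigned long)nid,
			   MIGRATE_SYNC, MR_SYSCALL, NULL);

	/* Per the comment block above: on a non-zero return, pages left on
	 * the list must be returned to the LRU / free lists by the caller. */
	if (rc)
		putback_movable_pages(pagelist);

	return rc < 0 ? rc : 0;
}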
