Message-Id: <20170217112453.307-5-khandual@linux.vnet.ibm.com>
Date: Fri, 17 Feb 2017 16:54:51 +0530
From: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
To: linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc: mhocko@...e.com, vbabka@...e.cz, mgorman@...e.de,
minchan@...nel.org, aneesh.kumar@...ux.vnet.ibm.com,
bsingharora@...il.com, srikar@...ux.vnet.ibm.com,
haren@...ux.vnet.ibm.com, jglisse@...hat.com,
dave.hansen@...el.com, dan.j.williams@...el.com,
zi.yan@...rutgers.edu
Subject: [PATCH 4/6] mm/migrate: Add new migrate mode MIGRATE_MT
From: Zi Yan <ziy@...dia.com>
This change adds a new migration mode called MIGRATE_MT which enables a multi
threaded page copy inside the copy_huge_page() function by selectively calling
copy_pages_mthread() when requested. It still falls back to the regular single
threaded page copy in case the multi threaded attempt fails. Multi threaded
copy is also attempted for regular (non-huge) pages.
Signed-off-by: Zi Yan <zi.yan@...rutgers.edu>
Signed-off-by: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
---
 include/linux/migrate_mode.h |  1 +
 mm/migrate.c                 | 25 ++++++++++++++++++-------
 2 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/include/linux/migrate_mode.h b/include/linux/migrate_mode.h
index 89c1700..d344ad6 100644
--- a/include/linux/migrate_mode.h
+++ b/include/linux/migrate_mode.h
@@ -12,6 +12,7 @@ enum migrate_mode {
 	MIGRATE_SYNC_LIGHT = 1<<1,
 	MIGRATE_SYNC = 1<<2,
 	MIGRATE_ST = 1<<3,
+	MIGRATE_MT = 1<<4,
 };
 
 #endif /* MIGRATE_MODE_H_INCLUDED */
diff --git a/mm/migrate.c b/mm/migrate.c
index 63c3682..6ac3572 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -594,6 +594,7 @@ static void copy_huge_page(struct page *dst, struct page *src,
 {
 	int i;
 	int nr_pages;
+	int rc = -EFAULT;
 
 	if (PageHuge(src)) {
 		/* hugetlbfs page */
@@ -610,10 +611,14 @@
 		nr_pages = hpage_nr_pages(src);
 	}
 
-	for (i = 0; i < nr_pages; i++) {
-		cond_resched();
-		copy_highpage(dst + i, src + i);
-	}
+	if (mode & MIGRATE_MT)
+		rc = copy_pages_mthread(dst, src, nr_pages);
+
+	if (rc)
+		for (i = 0; i < nr_pages; i++) {
+			cond_resched();
+			copy_highpage(dst + i, src + i);
+		}
 }
 
 /*
@@ -624,10 +629,16 @@ void migrate_page_copy(struct page *newpage, struct page *page,
 {
 	int cpupid;
 
-	if (PageHuge(page) || PageTransHuge(page))
+	if (PageHuge(page) || PageTransHuge(page)) {
 		copy_huge_page(newpage, page, mode);
-	else
-		copy_highpage(newpage, page);
+	} else {
+		if (mode & MIGRATE_MT) {
+			if (copy_pages_mthread(newpage, page, 1))
+				copy_highpage(newpage, page);
+		} else {
+			copy_highpage(newpage, page);
+		}
+	}
 
 	if (PageError(page))
 		SetPageError(newpage);
--
2.9.3
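
As an illustration of the try-then-fall-back pattern the hunks above implement,
here is a minimal userspace sketch of a multi threaded copy with a single
threaded fallback. It is only an analogy: copy_mthread(), copy_worker() and
NR_COPY_THREADS are made-up stand-ins for this sketch, not the kernel's
copy_pages_mthread() or its per-node thread placement.

/*
 * Userspace sketch (not kernel code): try a multi threaded copy first,
 * report failure through a return code, and let the caller fall back to
 * a plain single threaded copy. Build with: cc sketch.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NR_COPY_THREADS 4	/* illustrative thread count, not kernel policy */

struct copy_work {
	char *dst;
	const char *src;
	size_t len;
};

static void *copy_worker(void *arg)
{
	struct copy_work *w = arg;

	memcpy(w->dst, w->src, w->len);	/* each thread copies its own chunk */
	return NULL;
}

/* Returns 0 on success, non-zero if the threaded copy could not run. */
static int copy_mthread(char *dst, const char *src, size_t len)
{
	pthread_t tids[NR_COPY_THREADS];
	struct copy_work work[NR_COPY_THREADS];
	size_t chunk = len / NR_COPY_THREADS;
	int i, started = 0, rc = 0;

	for (i = 0; i < NR_COPY_THREADS; i++) {
		work[i].dst = dst + i * chunk;
		work[i].src = src + i * chunk;
		/* the last chunk also takes any remainder */
		work[i].len = (i == NR_COPY_THREADS - 1) ? len - i * chunk : chunk;
		if (pthread_create(&tids[i], NULL, copy_worker, &work[i])) {
			rc = -1;	/* tell the caller to fall back */
			break;
		}
		started++;
	}

	for (i = 0; i < started; i++)
		pthread_join(tids[i], NULL);

	return rc;
}

int main(void)
{
	size_t len = 16UL << 20;	/* 16MB stand-in for a huge page */
	char *src = malloc(len), *dst = malloc(len);
	int rc;

	if (!src || !dst)
		return 1;
	memset(src, 0xa5, len);

	/* Mirror of the patch: attempt the threaded path, then fall back. */
	rc = copy_mthread(dst, src, len);
	if (rc)
		memcpy(dst, src, len);	/* single threaded fallback */

	printf("copy %s, buffers %s\n", rc ? "fell back" : "used threads",
	       memcmp(dst, src, len) ? "differ" : "match");
	free(src);
	free(dst);
	return 0;
}

The point mirrored from the patch is that the threaded copier only reports
success or failure; the caller, not the copier, decides whether to redo the
work with the ordinary copy loop.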