Message-Id: <1478561517-4317-13-git-send-email-n-horiguchi@ah.jp.nec.com>
Date: Tue, 8 Nov 2016 08:31:57 +0900
From: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
To: linux-mm@...ck.org
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Hugh Dickins <hughd@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...el.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Michal Hocko <mhocko@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Pavel Emelyanov <xemul@...allels.com>,
Zi Yan <zi.yan@...rutgers.edu>,
Balbir Singh <bsingharora@...il.com>,
linux-kernel@...r.kernel.org,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Naoya Horiguchi <nao.horiguchi@...il.com>
Subject: [PATCH v2 12/12] mm: memory_hotplug: memory hotremove supports thp migration

This patch enables thp migration for memory hotremove: new_node_page()
now allocates a huge page as the migration target when the page to be
migrated is a thp, and do_migrate_range() advances its pfn scan past a
thp's tail pages so that each huge page is isolated and migrated as a
single unit.

Signed-off-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
---
ChangeLog v1->v2:
- base code switched from alloc_migrate_target to new_node_page()
---
 mm/memory_hotplug.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git v4.9-rc2-mmotm-2016-10-27-18-27/mm/memory_hotplug.c v4.9-rc2-mmotm-2016-10-27-18-27_patched/mm/memory_hotplug.c
index b18dab40..a9c3fe1 100644
--- v4.9-rc2-mmotm-2016-10-27-18-27/mm/memory_hotplug.c
+++ v4.9-rc2-mmotm-2016-10-27-18-27_patched/mm/memory_hotplug.c
@@ -1543,6 +1543,7 @@ static struct page *new_node_page(struct page *page, unsigned long private,
 	int nid = page_to_nid(page);
 	nodemask_t nmask = node_states[N_MEMORY];
 	struct page *new_page = NULL;
+	unsigned int order = 0;
 
 	/*
 	 * TODO: allocate a destination hugepage from a nearest neighbor node,
@@ -1553,6 +1554,11 @@ static struct page *new_node_page(struct page *page, unsigned long private,
 		return alloc_huge_page_node(page_hstate(compound_head(page)),
 					next_node_in(nid, nmask));
 
+	if (thp_migration_supported() && PageTransHuge(page)) {
+		order = HPAGE_PMD_ORDER;
+		gfp_mask |= GFP_TRANSHUGE;
+	}
+
 	node_clear(nid, nmask);
 
 	if (PageHighMem(page)
@@ -1560,12 +1566,15 @@ static struct page *new_node_page(struct page *page, unsigned long private,
 		gfp_mask |= __GFP_HIGHMEM;
 
 	if (!nodes_empty(nmask))
-		new_page = __alloc_pages_nodemask(gfp_mask, 0,
+		new_page = __alloc_pages_nodemask(gfp_mask, order,
 					node_zonelist(nid, gfp_mask), &nmask);
 	if (!new_page)
-		new_page = __alloc_pages(gfp_mask, 0,
+		new_page = __alloc_pages(gfp_mask, order,
 					node_zonelist(nid, gfp_mask));
 
+	if (new_page && order == HPAGE_PMD_ORDER)
+		prep_transhuge_page(new_page);
+
 	return new_page;
 }
 
@@ -1595,7 +1604,9 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			if (isolate_huge_page(page, &source))
 				move_pages -= 1 << compound_order(head);
 			continue;
-		}
+		} else if (thp_migration_supported() && PageTransHuge(page))
+			pfn = page_to_pfn(compound_head(page))
+				+ HPAGE_PMD_NR - 1;
 
 		if (!get_page_unless_zero(page))
 			continue;
--
2.7.0
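
For reference, below is a sketch of how new_node_page() reads with this
patch applied, reconstructed from the hunks above. The unchanged lines,
notably the gfp_mask initialization and the int **result parameter, are
taken from the surrounding v4.9-rc2 mmotm context and are assumptions,
not part of the diff:

static struct page *new_node_page(struct page *page, unsigned long private,
		int **result)
{
	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;	/* assumed from context */
	int nid = page_to_nid(page);
	nodemask_t nmask = node_states[N_MEMORY];
	struct page *new_page = NULL;
	unsigned int order = 0;

	/*
	 * TODO: allocate a destination hugepage from a nearest neighbor node,
	 * accordance with memory policy of the user process if possible. For
	 * now as a simple work-around, we use the next node for destination.
	 */
	if (PageHuge(page))
		return alloc_huge_page_node(page_hstate(compound_head(page)),
					next_node_in(nid, nmask));

	/* Request a PMD-order, THP-flagged allocation for a thp source. */
	if (thp_migration_supported() && PageTransHuge(page)) {
		order = HPAGE_PMD_ORDER;
		gfp_mask |= GFP_TRANSHUGE;
	}

	node_clear(nid, nmask);

	if (PageHighMem(page)
	    || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
		gfp_mask |= __GFP_HIGHMEM;

	/* Try the remaining nodes first, then fall back to any node. */
	if (!nodes_empty(nmask))
		new_page = __alloc_pages_nodemask(gfp_mask, order,
					node_zonelist(nid, gfp_mask), &nmask);
	if (!new_page)
		new_page = __alloc_pages(gfp_mask, order,
					node_zonelist(nid, gfp_mask));

	/* Set up the compound destination as a transparent huge page. */
	if (new_page && order == HPAGE_PMD_ORDER)
		prep_transhuge_page(new_page);

	return new_page;
}

Note that the patched new_node_page() does not fall back to a base page
when the huge allocation fails; in that case new_page stays NULL and the
migration of that thp simply fails.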
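
And a sketch of the do_migrate_range() scan loop around the last hunk;
the loop skeleton and the hugetlb branch are likewise reconstructed from
context and are assumptions:

	for (pfn = start_pfn; pfn < end_pfn && move_pages > 0; pfn++) {
		if (!pfn_valid(pfn))
			continue;
		page = pfn_to_page(pfn);

		if (PageHuge(page)) {
			struct page *head = compound_head(page);
			pfn = page_to_pfn(head) + (1<<compound_order(head)) - 1;
			if (compound_order(head) > PFN_SECTION_SHIFT) {
				ret = -EBUSY;
				break;
			}
			if (isolate_huge_page(page, &source))
				move_pages -= 1 << compound_order(head);
			continue;
		} else if (thp_migration_supported() && PageTransHuge(page))
			/*
			 * A thp is migrated via its head page, so jump the
			 * scan to its last tail pfn; the loop's pfn++ then
			 * resumes at the page following the thp.
			 */
			pfn = page_to_pfn(compound_head(page))
				+ HPAGE_PMD_NR - 1;

		if (!get_page_unless_zero(page))
			continue;

		/* ... isolation and accounting of the page, unchanged ... */
	}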