Message-ID: <57918BAC.8000008@huawei.com>
Date: Fri, 22 Jul 2016 10:57:48 +0800
From: Xishi Qiu <qiuxishi@...wei.com>
To: Joonsoo Kim <iamjoonsoo.kim@....com>,
Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
David Rientjes <rientjes@...gle.com>
CC: Linux MM <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>
Subject: [PATCH v2] mem-hotplug: alloc new page from the next node if zone
is MOVABLE_ZONE

Memory offline can happen on both movable and non-movable zones. If the
zone is movable, the whole node can be offlined; if the zone is
non-movable, it cannot, because some kernel memory in it can't be
migrated.
So when we offline pages in a movable zone, use a preferred-node policy
to allocate the new pages from the next online node rather than from
the current node or other remote nodes: allocating on the node being
offlined only forces the pages to be migrated again, and the distance
to remote nodes is often very large.
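
As an illustration of the wrap-around scan that next_node_in() performs,
here is a minimal userspace sketch (the name next_node_in_sketch, the
fixed 8-node limit, and the empty-mask return value are simplifications
for this example; it is not the kernel implementation):

#include <stdio.h>

#define MAX_NUMNODES	8	/* simplified; the kernel value is config-dependent */

/*
 * Hypothetical stand-in for the kernel's next_node_in(): return the
 * next bit set in 'online_mask' after 'node', wrapping around, or the
 * node itself if it is the only one online.
 */
static int next_node_in_sketch(int node, unsigned int online_mask)
{
	int i;

	for (i = 1; i <= MAX_NUMNODES; i++) {
		int candidate = (node + i) % MAX_NUMNODES;

		if (online_mask & (1u << candidate))
			return candidate;
	}
	return -1;	/* no node online at all (kernel returns MAX_NUMNODES here) */
}

int main(void)
{
	/* Nodes 0, 1 and 3 online; node 1 is being offlined. */
	unsigned int online = (1u << 0) | (1u << 1) | (1u << 3);

	/* Prints "migration target: node 3". */
	printf("migration target: node %d\n", next_node_in_sketch(1, online));
	return 0;
}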
Also use GFP_HIGHUSER_MOVABLE to allocate the new pages if the zone is
a movable zone.
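
For reference, GFP_HIGHUSER_MOVABLE is built up in include/linux/gfp.h
roughly as follows (quoted from memory for kernels of this era; the
exact makeup of GFP_USER can vary between releases):

#define GFP_HIGHUSER		(GFP_USER | __GFP_HIGHMEM)
#define GFP_HIGHUSER_MOVABLE	(GFP_HIGHUSER | __GFP_MOVABLE)

The __GFP_MOVABLE bit is what allows the replacement pages to be placed
in ZONE_MOVABLE on the target node, so they remain offlinable later.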
Signed-off-by: Xishi Qiu <qiuxishi@...wei.com>
---
mm/memory_hotplug.c | 35 +++++++++++++++++++++++++++++------
1 file changed, 29 insertions(+), 6 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index e3cbdca..930a5c6 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1501,6 +1501,16 @@ static unsigned long scan_movable_pages(unsigned long start, unsigned long end)
 	return 0;
 }
 
+static struct page *new_node_page(struct page *page, unsigned long node,
+		int **result)
+{
+	if (PageHuge(page))
+		return alloc_huge_page_node(page_hstate(compound_head(page)),
+					node);
+	else
+		return __alloc_pages_node(node, GFP_HIGHUSER_MOVABLE, 0);
+}
+
 #define NR_OFFLINE_AT_ONCE_PAGES	(256)
 static int
 do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
@@ -1510,6 +1520,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 	int move_pages = NR_OFFLINE_AT_ONCE_PAGES;
 	int not_managed = 0;
 	int ret = 0;
+	int nid = NUMA_NO_NODE;
 	LIST_HEAD(source);
 
 	for (pfn = start_pfn; pfn < end_pfn && move_pages > 0; pfn++) {
@@ -1564,12 +1575,24 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			goto out;
 		}
 
-		/*
-		 * alloc_migrate_target should be improooooved!!
-		 * migrate_pages returns # of failed pages.
-		 */
-		ret = migrate_pages(&source, alloc_migrate_target, NULL, 0,
-					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
+		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+			if (!pfn_valid(pfn))
+				continue;
+			page = pfn_to_page(pfn);
+			if (zone_idx(page_zone(page)) == ZONE_MOVABLE)
+				nid = next_node_in(page_to_nid(page),
+						node_online_map);
+			break;
+		}
+
+		/* Alloc new page from the next node if possible */
+		if (nid != NUMA_NO_NODE)
+			ret = migrate_pages(&source, new_node_page, NULL,
+					nid, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
+		else
+			ret = migrate_pages(&source, alloc_migrate_target, NULL,
+					0, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
+
 		if (ret)
 			putback_movable_pages(&source);
 	}
--
1.8.3.1