Open Source and information security mailing list archives
Date: Tue, 20 Jun 2017 19:07:15 -0400
From: Zi Yan <zi.yan@...t.com>
To: kirill.shutemov@...ux.intel.com, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc: akpm@...ux-foundation.org, minchan@...nel.org, vbabka@...e.cz,
	mgorman@...hsingularity.net, mhocko@...nel.org, khandual@...ux.vnet.ibm.com,
	zi.yan@...rutgers.edu, dnellans@...dia.com, dave.hansen@...el.com,
	n-horiguchi@...jp.nec.com
Subject: [PATCH v7 10/10] mm: memory_hotplug: memory hotremove supports thp migration

From: Naoya Horiguchi <n-horiguchi@...jp.nec.com>

This patch enables thp migration for memory hotremove.

---
ChangeLog v1->v2:
- base code switched from alloc_migrate_target to new_node_page()

Signed-off-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>

ChangeLog v2->v7:
- base code switched from new_node_page() to new_page_nodemask()

Signed-off-by: Zi Yan <zi.yan@...rutgers.edu>
---
 include/linux/migrate.h | 15 ++++++++++++++-
 mm/memory_hotplug.c     |  4 +++-
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index f80c9882403a..f67755ae72c9 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -35,16 +35,29 @@ static inline struct page *new_page_nodemask(struct page *page, int preferred_ni
 				nodemask_t *nodemask)
 {
 	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE;
+	unsigned int order = 0;
+	struct page *new_page = NULL;
 
 	if (PageHuge(page))
 		return alloc_huge_page_nodemask(page_hstate(compound_head(page)),
 				nodemask);
 
+	if (thp_migration_supported() && PageTransHuge(page)) {
+		order = HPAGE_PMD_ORDER;
+		gfp_mask |= GFP_TRANSHUGE;
+	}
+
 	if (PageHighMem(page)
 	    || (zone_idx(page_zone(page)) == ZONE_MOVABLE))
 		gfp_mask |= __GFP_HIGHMEM;
 
-	return __alloc_pages_nodemask(gfp_mask, 0, preferred_nid, nodemask);
+	new_page = __alloc_pages_nodemask(gfp_mask, order,
+				preferred_nid, nodemask);
+
+	if (new_page && PageTransHuge(page))
+		prep_transhuge_page(new_page);
+
+	return new_page;
 }
 
 #ifdef CONFIG_MIGRATION
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 567a1dcafa1a..1975acfc7326 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1483,7 +1483,9 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 			if (isolate_huge_page(page, &source))
 				move_pages -= 1 << compound_order(head);
 			continue;
-		}
+		} else if (thp_migration_supported() && PageTransHuge(page))
+			pfn = page_to_pfn(compound_head(page)) +
+				hpage_nr_pages(page) - 1;
 
 		if (!get_page_unless_zero(page))
 			continue;
-- 
2.11.0