Message-Id: <76fec3ce-986e-406b-6fe1-c785590dc1bd@linux.vnet.ibm.com>
Date: Fri, 19 May 2017 19:26:27 +0530
From: Anshuman Khandual <khandual@...ux.vnet.ibm.com>
To: Zi Yan <zi.yan@...t.com>, n-horiguchi@...jp.nec.com,
kirill.shutemov@...ux.intel.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Cc: akpm@...ux-foundation.org, minchan@...nel.org, vbabka@...e.cz,
mgorman@...hsingularity.net, mhocko@...nel.org,
khandual@...ux.vnet.ibm.com, zi.yan@...rutgers.edu,
dnellans@...dia.com
Subject: Re: [PATCH v5 11/11] mm: memory_hotplug: memory hotremove supports
thp migration
On 04/21/2017 02:17 AM, Zi Yan wrote:
> From: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
>
> This patch enables thp migration for memory hotremove.
>
> Signed-off-by: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
> ---
> ChangeLog v1->v2:
> - base code switched from alloc_migrate_target to new_node_page()
> ---
>  include/linux/huge_mm.h |  8 ++++++++
>  mm/memory_hotplug.c     | 17 ++++++++++++++---
> 2 files changed, 22 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 6f44a2352597..92c2161704c3 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -189,6 +189,13 @@ static inline int hpage_nr_pages(struct page *page)
>  	return 1;
>  }
>
> +static inline int hpage_order(struct page *page)
> +{
> +	if (unlikely(PageTransHuge(page)))
> +		return HPAGE_PMD_ORDER;
> +	return 0;
> +}
> +
This function seems redundant: a caller can open-code the PageTransHuge()
check and use HPAGE_PMD_ORDER directly.
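A minimal sketch of what I mean at the call site (untested):

	unsigned int order = 0;

	/* A THP here is always PMD-mapped, so its order is HPAGE_PMD_ORDER */
	if (PageTransHuge(page))
		order = HPAGE_PMD_ORDER;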
>  struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
>  		pmd_t *pmd, int flags);
>  struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
> @@ -233,6 +240,7 @@ static inline bool thp_migration_supported(void)
>  #define HPAGE_PUD_SIZE ({ BUILD_BUG(); 0; })
>
>  #define hpage_nr_pages(x) 1
> +#define hpage_order(x) 0
>
>  #define transparent_hugepage_enabled(__vma) 0
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 257166ebdff0..ecae0852994f 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1574,6 +1574,7 @@ static struct page *new_node_page(struct page *page, unsigned long private,
>  	int nid = page_to_nid(page);
>  	nodemask_t nmask = node_states[N_MEMORY];
>  	struct page *new_page = NULL;
> +	unsigned int order = 0;
>
>  	/*
>  	 * TODO: allocate a destination hugepage from a nearest neighbor node,
> @@ -1584,6 +1585,11 @@ static struct page *new_node_page(struct page *page, unsigned long private,
>  		return alloc_huge_page_node(page_hstate(compound_head(page)),
>  					next_node_in(nid, nmask));
>
> +	if (thp_migration_supported() && PageTransHuge(page)) {
> +		order = hpage_order(page);
We have already tested that the page is THP, so we can just use
HPAGE_PMD_ORDER directly here.
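i.e. something like this (untested sketch, keeping the gfp_mask handling
from the patch):

	if (thp_migration_supported() && PageTransHuge(page)) {
		/* No helper needed; a THP is PMD order by definition here */
		order = HPAGE_PMD_ORDER;
		gfp_mask |= GFP_TRANSHUGE;
	}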
> +		gfp_mask |= GFP_TRANSHUGE;
> +	}
> +
>  	node_clear(nid, nmask);
>
>  	if (PageHighMem(page)
> @@ -1591,12 +1597,15 @@ static struct page *new_node_page(struct page *page, unsigned long private,
>  		gfp_mask |= __GFP_HIGHMEM;
>
>  	if (!nodes_empty(nmask))
> -		new_page = __alloc_pages_nodemask(gfp_mask, 0,
> +		new_page = __alloc_pages_nodemask(gfp_mask, order,
>  			node_zonelist(nid, gfp_mask), &nmask);
>  	if (!new_page)
> -		new_page = __alloc_pages(gfp_mask, 0,
> +		new_page = __alloc_pages(gfp_mask, order,
>  			node_zonelist(nid, gfp_mask));
>
> +	if (new_page && order == hpage_order(page))
> +		prep_transhuge_page(new_page);
> +
new_page has already been allocated with 'order'. I guess just checking
PageTransHuge() on the old 'page' should be sufficient here, as the source
page has not been changed in any way during the allocation.
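Something like this should do (untested sketch):

	/*
	 * The source page is unchanged, so test it directly instead
	 * of comparing orders.
	 */
	if (new_page && PageTransHuge(page))
		prep_transhuge_page(new_page);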