Message-ID: <216a335d-f7c6-26ad-2ac1-427c8a73ca2f@arm.com>
Date: Mon, 24 Jun 2019 10:46:36 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Pingfan Liu <kernelfans@...il.com>, linux-mm@...ck.org
Cc: Mike Kravetz <mike.kravetz@...cle.com>,
Oscar Salvador <osalvador@...e.de>,
David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/hugetlb: allow gigantic page allocation to migrate
away smaller huge page
On 06/24/2019 09:51 AM, Pingfan Liu wrote:
> The current pfn_range_valid_gigantic() rejects a PUD huge page allocation
> if there is a PMD huge page inside the candidate range.
>
> But PUD huge pages are a scarcer resource, since they must be 1GB aligned
> on x86. It is worth migrating PMD huge pages away to make room for a PUD
> huge page.
>
> The same logic applies to PGD and PUD huge pages.
The huge pages in the range can be either THP or HugeTLB, and migrating them has
different costs and chances of success. If THP migration is not enabled,
migrating a THP involves splitting it first, plus all the associated TLB costs.
Are you sure that a PUD HugeTLB allocation should really go through all of this?
Is there any guarantee that, after migrating multiple PMD sized THP/HugeTLB
pages in the given range, the allocation request for the PUD will succeed?
>
> Signed-off-by: Pingfan Liu <kernelfans@...il.com>
> Cc: Mike Kravetz <mike.kravetz@...cle.com>
> Cc: Oscar Salvador <osalvador@...e.de>
> Cc: David Hildenbrand <david@...hat.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: linux-kernel@...r.kernel.org
> ---
> mm/hugetlb.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ac843d3..02d1978 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1081,7 +1081,11 @@ static bool pfn_range_valid_gigantic(struct zone *z,
> unsigned long start_pfn, unsigned long nr_pages)
> {
> unsigned long i, end_pfn = start_pfn + nr_pages;
> - struct page *page;
> + struct page *page = pfn_to_page(start_pfn);
> +
> + if (PageHuge(page))
> + if (compound_order(compound_head(page)) >= nr_pages)
> + return false;
>
> for (i = start_pfn; i < end_pfn; i++) {
> if (!pfn_valid(i))
> @@ -1098,8 +1102,6 @@ static bool pfn_range_valid_gigantic(struct zone *z,
> if (page_count(page) > 0)
> return false;
>
> - if (PageHuge(page))
> - return false;
> }
>
> return true;
>
So, except in the case where there is a bigger huge page in the range, this
will attempt to migrate everything in its way. As mentioned above, if this is
a good idea at all, it needs to differentiate between HugeTLB and THP, and
take into account both the cost of the migrations and the chance that the
subsequent allocation attempt will succeed.
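
As an aside, the new check compares compound_order(), which is an order,
against nr_pages, which is a count of base pages. A sketch of the comparison
the check appears to intend (illustrative only, untested):

	/*
	 * Bail out early when the range already holds a huge page at
	 * least as big as the one being allocated; such a page cannot
	 * be migrated away to make room.
	 */
	if (PageHuge(page)) {
		unsigned int order = compound_order(compound_head(page));

		/* 1UL << order is the number of base pages it spans */
		if ((1UL << order) >= nr_pages)
			return false;
	}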