Message-Id: <1561350068-8966-1-git-send-email-kernelfans@gmail.com>
Date: Mon, 24 Jun 2019 12:21:08 +0800
From: Pingfan Liu <kernelfans@...il.com>
To: linux-mm@...ck.org
Cc: Pingfan Liu <kernelfans@...il.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Oscar Salvador <osalvador@...e.de>,
David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: [PATCH] mm/hugetlb: allow gigantic page allocation to migrate away smaller huge page
The current pfn_range_valid_gigantic() rejects a pud huge page allocation
if there is a pmd huge page anywhere inside the candidate range.
But pud huge pages are the scarcer resource, since they must be aligned on
1GB on x86, so it is worth migrating a pmd huge page away to make room for
a pud huge page instead of failing the allocation.
The same logic applies to allocating a pgd huge page over existing pud
huge pages.
Signed-off-by: Pingfan Liu <kernelfans@...il.com>
Cc: Mike Kravetz <mike.kravetz@...cle.com>
Cc: Oscar Salvador <osalvador@...e.de>
Cc: David Hildenbrand <david@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org
---
mm/hugetlb.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
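
Not part of the patch, just a reading aid: the sketch below is a minimal
user-space C illustration of the size comparison the new check is meant to
perform. Only an existing huge page at least as large as the requested
gigantic range should make pfn_range_valid_gigantic() give up; smaller huge
pages are left to the migration path. The helper name and example values
are made up for illustration; compound_order() in the kernel returns log2
of the page count, so it has to be converted to a page count before it can
be compared with nr_pages.

#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical helper (illustration only): should an existing huge page
 * of order 'existing_order' block a gigantic allocation that needs
 * 'nr_pages' contiguous pages?
 */
static bool blocks_gigantic_alloc(unsigned int existing_order,
				  unsigned long nr_pages)
{
	return (1UL << existing_order) >= nr_pages;
}

int main(void)
{
	/*
	 * A 2MB pmd huge page (order 9, 512 pages) does not block a
	 * 1GB request (262144 pages): it can be migrated away.
	 */
	printf("%d\n", blocks_gigantic_alloc(9, 262144));	/* prints 0 */

	/* A 1GB pud huge page (order 18) does block another 1GB request. */
	printf("%d\n", blocks_gigantic_alloc(18, 262144));	/* prints 1 */

	return 0;
}
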
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ac843d3..02d1978 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1081,7 +1081,11 @@ static bool pfn_range_valid_gigantic(struct zone *z,
 			unsigned long start_pfn, unsigned long nr_pages)
 {
 	unsigned long i, end_pfn = start_pfn + nr_pages;
-	struct page *page;
+	struct page *page = pfn_to_page(start_pfn);
+
+	if (PageHuge(page))
+		if ((1UL << compound_order(compound_head(page))) >= nr_pages)
+			return false;
 
 	for (i = start_pfn; i < end_pfn; i++) {
 		if (!pfn_valid(i))
@@ -1098,8 +1102,6 @@ static bool pfn_range_valid_gigantic(struct zone *z,
 		if (page_count(page) > 0)
 			return false;
 
-		if (PageHuge(page))
-			return false;
 	}
 
 	return true;
--
2.7.5