Message-Id: <20160926172811.94033-4-gerald.schaefer@de.ibm.com>
Date: Mon, 26 Sep 2016 19:28:11 +0200
From: Gerald Schaefer <gerald.schaefer@...ibm.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Naoya Horiguchi <n-horiguchi@...jp.nec.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Michal Hocko <mhocko@...e.cz>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Vlastimil Babka <vbabka@...e.cz>,
Mike Kravetz <mike.kravetz@...cle.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Rui Teng <rui.teng@...ux.vnet.ibm.com>,
Dave Hansen <dave.hansen@...ux.intel.com>
Subject: [PATCH v4 3/3] mm/hugetlb: improve locking in dissolve_free_huge_pages()

For every pfn aligned to minimum_order, dissolve_free_huge_pages() will
call dissolve_free_huge_page() which takes the hugetlb spinlock, even if
the page is not huge at all or is a hugepage that is in use.

Improve this by doing the PageHuge() and page_count() checks already in
dissolve_free_huge_pages() before calling dissolve_free_huge_page(). In
dissolve_free_huge_page(), those checks must then be revalidated while
holding the spinlock, because the page may have been allocated or
dissolved concurrently between the lockless check and lock acquisition.
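
To make the pattern concrete, here is a minimal, self-contained
userspace sketch of the same "check lock-free, revalidate under the
lock" scheme. This is not kernel code: all names in it (fake_page,
try_dissolve, dissolve_range, fake_hugetlb_lock) are invented for
illustration, and the -EBUSY failure path of the real
dissolve_free_huge_page() is omitted. Builds with e.g. cc -pthread.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	bool is_huge;	/* stands in for PageHuge() */
	int refcount;	/* stands in for page_count() */
};

static pthread_mutex_t fake_hugetlb_lock = PTHREAD_MUTEX_INITIALIZER;

/* Slow path: take the lock and revalidate the lockless checks. */
static int try_dissolve(struct fake_page *page)
{
	int rc = 0;

	pthread_mutex_lock(&fake_hugetlb_lock);
	/*
	 * Revalidate under the lock: another thread may have allocated
	 * or dissolved the page between the lockless check and here.
	 */
	if (page->is_huge && page->refcount == 0)
		page->is_huge = false;	/* "dissolve" the page */
	pthread_mutex_unlock(&fake_hugetlb_lock);
	return rc;
}

static int dissolve_range(struct fake_page *pages, int n)
{
	int rc = 0;
	int i;

	for (i = 0; i < n; i++) {
		/*
		 * Lockless fast path: pages that obviously do not
		 * qualify are skipped without ever taking the lock.
		 */
		if (pages[i].is_huge && pages[i].refcount == 0) {
			rc = try_dissolve(&pages[i]);
			if (rc)
				break;
		}
	}
	return rc;
}

int main(void)
{
	struct fake_page pages[] = {
		{ .is_huge = true,  .refcount = 0 },	/* free hugepage */
		{ .is_huge = true,  .refcount = 1 },	/* in-use hugepage */
		{ .is_huge = false, .refcount = 0 },	/* not huge at all */
	};
	int i;

	dissolve_range(pages, 3);
	for (i = 0; i < 3; i++)
		printf("page %d: is_huge=%d\n", i, pages[i].is_huge);
	return 0;
}
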
Signed-off-by: Gerald Schaefer <gerald.schaefer@...ibm.com>
---
mm/hugetlb.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 91ae1f5..770d83e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1476,14 +1476,20 @@ static int dissolve_free_huge_page(struct page *page)
 int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
 {
 	unsigned long pfn;
+	struct page *page;
 	int rc = 0;
 
 	if (!hugepages_supported())
 		return rc;
 
-	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
-		if (rc = dissolve_free_huge_page(pfn_to_page(pfn)))
-			break;
+	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order) {
+		page = pfn_to_page(pfn);
+		if (PageHuge(page) && !page_count(page)) {
+			rc = dissolve_free_huge_page(page);
+			if (rc)
+				break;
+		}
+	}
 
 	return rc;
 }
--
2.8.4