Message-ID: <20160929123216.GH408@dhcp22.suse.cz>
Date: Thu, 29 Sep 2016 14:32:17 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Gerald Schaefer <gerald.schaefer@...ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Vlastimil Babka <vbabka@...e.cz>,
Mike Kravetz <mike.kravetz@...cle.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Rui Teng <rui.teng@...ux.vnet.ibm.com>,
Dave Hansen <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH v4 3/3] mm/hugetlb: improve locking in
dissolve_free_huge_pages()
On Mon 26-09-16 19:28:11, Gerald Schaefer wrote:
> For every pfn aligned to minimum_order, dissolve_free_huge_pages() will
> call dissolve_free_huge_page(), which takes the hugetlb spinlock, even
> if the page is not huge at all or is a hugepage that is in use.
>
> Improve this by doing the PageHuge() and page_count() checks in
> dissolve_free_huge_pages() already, before calling
> dissolve_free_huge_page(). In dissolve_free_huge_page(), those checks
> then need to be revalidated while holding the spinlock.
>
> Signed-off-by: Gerald Schaefer <gerald.schaefer@...ibm.com>
Acked-by: Michal Hocko <mhocko@...e.com>
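
For anyone following along, the pattern here is the usual optimistic
check: the unlocked PageHuge() + page_count() test in the loop merely
filters candidates, and dissolve_free_huge_page() has to repeat the
same test under hugetlb_lock before touching the hstate counters,
because the page can be allocated (or dissolved by someone else)
between the two checks. A rough sketch of the callee side, paraphrased
from memory of the 4.8-era mm/hugetlb.c rather than quoted verbatim
(update_and_free_page() and the hstate fields are as in that tree):

static int dissolve_free_huge_page(struct page *page)
{
	int rc = 0;

	spin_lock(&hugetlb_lock);
	/* Revalidate: the unlocked check in the caller may be stale. */
	if (PageHuge(page) && !page_count(page)) {
		struct page *head = compound_head(page);
		struct hstate *h = page_hstate(head);
		int nid = page_to_nid(head);

		/* Keep enough free pages to satisfy reservations. */
		if (h->free_huge_pages - h->resv_huge_pages == 0) {
			rc = -EBUSY;
			goto out;
		}
		/* Take the page off the free list and give it back. */
		list_del(&head->lru);
		h->free_huge_pages--;
		h->free_huge_pages_node[nid]--;
		h->max_huge_pages--;
		update_and_free_page(h, head);
	}
out:
	spin_unlock(&hugetlb_lock);
	return rc;
}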
> ---
> mm/hugetlb.c | 12 +++++++++---
> 1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 91ae1f5..770d83e 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1476,14 +1476,20 @@ static int dissolve_free_huge_page(struct page *page)
>  int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
>  {
>  	unsigned long pfn;
> +	struct page *page;
>  	int rc = 0;
>  
>  	if (!hugepages_supported())
>  		return rc;
>  
> -	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
> -		if (rc = dissolve_free_huge_page(pfn_to_page(pfn)))
> -			break;
> +	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order) {
> +		page = pfn_to_page(pfn);
> +		if (PageHuge(page) && !page_count(page)) {
> +			rc = dissolve_free_huge_page(page);
> +			if (rc)
> +				break;
> +		}
> +	}
>  
>  	return rc;
>  }
> --
> 2.8.4
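
For context, the only caller of this at the moment is the memory
offline path, which hands the whole to-be-offlined range over and
aborts the offline on failure; roughly (again a sketch, assuming the
4.8-era __offline_pages() in mm/memory_hotplug.c):

	/* Dissolve free hugepages in the range before offlining it. */
	ret = dissolve_free_huge_pages(start_pfn, end_pfn);
	if (ret)
		goto failed_removal;	/* an in-use hugepage blocks offline */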
--
Michal Hocko
SUSE Labs