Message-ID: <20201022225308.2927890-3-guro@fb.com>
Date: Thu, 22 Oct 2020 15:53:08 -0700
From: Roman Gushchin <guro@...com>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: Zi Yan <ziy@...dia.com>, Joonsoo Kim <iamjoonsoo.kim@....com>,
Mike Kravetz <mike.kravetz@...cle.com>,
<saberlily.xia@...ilicon.com>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, <kernel-team@...com>,
Roman Gushchin <guro@...com>
Subject: [PATCH v1 2/2] mm: hugetlb: don't drop hugetlb_lock around cma_release() call
Replace the blocking cma_release() call with non-blocking cma_release_nowait(),
so there is no longer any need to temporarily drop hugetlb_lock.
Signed-off-by: Roman Gushchin <guro@...com>
---
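A note for reviewers (not part of the commit message): this change relies on
cma_release_nowait(), added in patch 1/2, being safe to call with hugetlb_lock
held. Below is a minimal sketch of one way such a non-blocking release can be
structured, assuming the actual work is deferred to a workqueue. The names
cma_release_deferred(), struct cma_release_work and cma_release_workfn() are
made up for illustration and are not the interface introduced by patch 1/2;
the sketch also omits the up-front check that the pages really belong to the
given CMA area, which the real helper needs so it can return false for
non-CMA pages, as the comment in free_gigantic_page() expects.

/* Illustrative only: a deferred, non-blocking CMA release. */
#include <linux/cma.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct cma_release_work {
	struct work_struct work;
	struct cma *cma;
	const struct page *page;
	unsigned int count;
};

static void cma_release_workfn(struct work_struct *work)
{
	struct cma_release_work *w =
		container_of(work, struct cma_release_work, work);

	/* Runs in process context, so the blocking cma_release() is fine. */
	cma_release(w->cma, w->page, w->count);
	kfree(w);
}

/* Hypothetical helper: queue the release instead of doing it inline. */
static bool cma_release_deferred(struct cma *cma, const struct page *page,
				 unsigned int count)
{
	struct cma_release_work *w;

	/* GFP_ATOMIC: the caller may hold a spinlock such as hugetlb_lock. */
	w = kmalloc(sizeof(*w), GFP_ATOMIC);
	if (!w)
		return false;

	w->cma = cma;
	w->page = page;
	w->count = count;
	INIT_WORK(&w->work, cma_release_workfn);
	schedule_work(&w->work);
	return true;
}

The actual cma_release_nowait() may well avoid the per-call allocation
entirely; the point here is only that all potentially sleeping work is pushed
out of the hugetlb_lock critical section. A real implementation would also
have to avoid (or handle) the can-fail GFP_ATOMIC allocation rather than
return false for a genuine CMA page.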
mm/hugetlb.c | 11 +++--------
1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index fe76f8fd5a73..230e9b6c9a2b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1224,10 +1224,11 @@ static void free_gigantic_page(struct page *page, unsigned int order)
 {
 	/*
 	 * If the page isn't allocated using the cma allocator,
-	 * cma_release() returns false.
+	 * cma_release_nowait() returns false.
 	 */
 #ifdef CONFIG_CMA
-	if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order))
+	if (cma_release_nowait(hugetlb_cma[page_to_nid(page)], page,
+			       1 << order))
 		return;
 #endif
 
@@ -1312,14 +1313,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
-		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
-		 */
-		spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
--
2.26.2