Message-Id: <20210809184832.18342-2-mike.kravetz@oracle.com>
Date: Mon, 9 Aug 2021 11:48:30 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: Muchun Song <songmuchun@...edance.com>,
Michal Hocko <mhocko@...e.com>,
Oscar Salvador <osalvador@...e.de>,
David Hildenbrand <david@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
Mina Almasry <almasrymina@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>
Subject: [PATCH v2 1/3] hugetlb: simplify prep_compound_gigantic_page ref count racing code

Code in prep_compound_gigantic_page waits for an RCU grace period if it
notices a temporarily inflated ref count on a tail page. This was added
to handle the previously identified potential race with speculative page
cache references, which can only last for an RCU grace period. This is
overly complicated, as such a race is VERY unlikely to ever happen.
Instead, just quickly return an error.

Also, only print a warning in prep_compound_gigantic_page instead of in
multiple callers.
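
For context, here is a minimal sketch of the simplified check. The
helper below is hypothetical and only illustrates the logic; the real
change is the prep_compound_gigantic_page hunk in the diff:

#include <linux/mm.h>
#include <linux/page_ref.h>
#include <linux/printk.h>

/*
 * Hypothetical helper mirroring the new behaviour: try to freeze a
 * freshly allocated tail page's ref count at zero. If a speculative
 * reference has temporarily inflated the count, fail immediately
 * instead of calling synchronize_rcu() and retrying.
 */
static bool freeze_fresh_tail_page(struct page *p)
{
	if (!page_ref_freeze(p, 1)) {
		/* caller must discard the entire allocation */
		pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
		return false;
	}
	set_page_count(p, 0);
	return true;
}

In the patch itself the failure path takes the existing out_error label;
as the updated comment notes, the caller is then expected to discard the
pages using the appropriate free interface (e.g. free_gigantic_page, as
in the gather_bootmem_prealloc hunk).
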
Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
---
mm/hugetlb.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dfc940d5221d..791ee699d635 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1657,16 +1657,14 @@ static bool prep_compound_gigantic_page(struct page *page, unsigned int order)
 		 * cache adding could take a ref on a 'to be' tail page.
 		 * We need to respect any increased ref count, and only set
 		 * the ref count to zero if count is currently 1. If count
-		 * is not 1, we call synchronize_rcu in the hope that a rcu
-		 * grace period will cause ref count to drop and then retry.
-		 * If count is still inflated on retry we return an error and
-		 * must discard the pages.
+		 * is not 1, we return an error. An error return indicates
+		 * the set of pages can not be converted to a gigantic page.
+		 * The caller who allocated the pages should then discard the
+		 * pages using the appropriate free interface.
 		 */
 		if (!page_ref_freeze(p, 1)) {
-			pr_info("HugeTLB unexpected inflated ref count on freshly allocated page\n");
-			synchronize_rcu();
-			if (!page_ref_freeze(p, 1))
-				goto out_error;
+			pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
+			goto out_error;
 		}
 		set_page_count(p, 0);
 		set_compound_head(p, page);
@@ -1830,7 +1828,6 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 				retry = true;
 				goto retry;
 			}
-			pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
 			return NULL;
 		}
 	}
@@ -2828,8 +2825,8 @@ static void __init gather_bootmem_prealloc(void)
 		prep_new_huge_page(h, page, page_to_nid(page));
 		put_page(page); /* add to the hugepage allocator */
 	} else {
+		/* VERY unlikely inflated ref count on a tail page */
 		free_gigantic_page(page, huge_page_order(h));
-		pr_warn("HugeTLB page can not be used due to unexpected inflated ref count\n");
 	}
 
 	/*
--
2.31.1