Message-Id: <20220204195852.1751729-10-willy@infradead.org>
Date: Fri, 4 Feb 2022 19:57:46 +0000
From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
To: linux-mm@...ck.org
Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>,
linux-kernel@...r.kernel.org, Christoph Hellwig <hch@....de>,
John Hubbard <jhubbard@...dia.com>,
Jason Gunthorpe <jgg@...dia.com>,
William Kucharski <william.kucharski@...cle.com>
Subject: [PATCH 09/75] mm/gup: Handle page split race more efficiently

If we hit the page split race, the current code returns NULL, which will
presumably trigger a retry under the mmap_lock. This isn't necessary;
we can just retry the compound_head() lookup. This is a very minor
optimisation of an unlikely path, but conceptually it matches (e.g.)
the page cache's RCU-protected lookup.
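
For readers skimming the diff, the resulting shape of try_get_compound_head()
is roughly the sketch below. It is reconstructed only from the hunks in this
patch; the code that actually takes the @refs references on @head sits between
the two hunks, is unchanged by this patch, and is elided here.

static inline struct page *try_get_compound_head(struct page *page, int refs)
{
	struct page *head;

retry:
	head = compound_head(page);

	if (WARN_ON_ONCE(page_ref_count(head) < 0))
		return NULL;

	/* ... take @refs references on @head (unchanged, not shown here) ... */

	/*
	 * If the compound page was split between the compound_head() lookup
	 * and taking the references, drop those references and retry the
	 * lookup rather than failing the fast path outright.
	 */
	if (unlikely(compound_head(page) != head)) {
		put_page_refs(head, refs);
		goto retry;
	}

	return head;
}

The retry is only taken when the head changed underneath us, so the common
case still performs a single compound_head() lookup.
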
Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
Reviewed-by: Christoph Hellwig <hch@....de>
Reviewed-by: John Hubbard <jhubbard@...dia.com>
Reviewed-by: Jason Gunthorpe <jgg@...dia.com>
Reviewed-by: William Kucharski <william.kucharski@...cle.com>
---
 mm/gup.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index fa75b71820a2..923a0d44203c 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -68,7 +68,10 @@ static void put_page_refs(struct page *page, int refs)
  */
 static inline struct page *try_get_compound_head(struct page *page, int refs)
 {
-	struct page *head = compound_head(page);
+	struct page *head;
+
+retry:
+	head = compound_head(page);
 
 	if (WARN_ON_ONCE(page_ref_count(head) < 0))
 		return NULL;
@@ -86,7 +89,7 @@ static inline struct page *try_get_compound_head(struct page *page, int refs)
 	 */
 	if (unlikely(compound_head(page) != head)) {
 		put_page_refs(head, refs);
-		return NULL;
+		goto retry;
 	}
 
 	return head;
--
2.34.1