Message-ID: <c7ca4374-4389-8bb5-ad0d-8d2e8c0784e2@nvidia.com>
Date: Mon, 10 Jan 2022 19:30:25 -0800
From: John Hubbard <jhubbard@...dia.com>
To: "Matthew Wilcox (Oracle)" <willy@...radead.org>, linux-mm@...ck.org
Cc: Christoph Hellwig <hch@...radead.org>,
William Kucharski <william.kucharski@...cle.com>,
linux-kernel@...r.kernel.org, Jason Gunthorpe <jgg@...pe.ca>
Subject: Re: [PATCH v2 08/28] gup: Handle page split race more efficiently
On 1/9/22 20:23, Matthew Wilcox (Oracle) wrote:
> If we hit the page split race, the current code returns NULL which will
> presumably trigger a retry under the mmap_lock. This isn't necessary;
> we can just retry the compound_head() lookup. This is a very minor
> optimisation of an unlikely path, but conceptually it matches (eg)
> the page cache RCU-protected lookup.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> ---
> mm/gup.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
Reviewed-by: John Hubbard <jhubbard@...dia.com>
thanks,
--
John Hubbard
NVIDIA
>
> diff --git a/mm/gup.c b/mm/gup.c
> index afb638a30e44..dbb1b54d0def 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -68,7 +68,10 @@ static void put_page_refs(struct page *page, int refs)
> */
> static inline struct page *try_get_compound_head(struct page *page, int refs)
> {
> - struct page *head = compound_head(page);
> + struct page *head;
> +
> +retry:
> + head = compound_head(page);
>
> if (WARN_ON_ONCE(page_ref_count(head) < 0))
> return NULL;
> @@ -86,7 +89,7 @@ static inline struct page *try_get_compound_head(struct page *page, int refs)
> */
> if (unlikely(compound_head(page) != head)) {
> put_page_refs(head, refs);
> - return NULL;
> + goto retry;
> }
>
> return head;
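
(Not part of the review proper, but for anyone following along: the change
is an instance of the usual speculative-lookup pattern -- read the head
pointer, take references, then recheck and retry on mismatch. Below is a
rough standalone sketch of that pattern in plain C11 atomics; the names
(page_like, ref_add_unless_zero, try_get_head) are hypothetical and only
approximate what compound_head() / page_cache_add_speculative() /
put_page_refs() do in the real kernel code.)

#include <stdatomic.h>
#include <stddef.h>

struct page_like {
	_Atomic int refcount;
	struct page_like *_Atomic head;	/* points to self if not compound */
};

/* Take @refs references unless the refcount has already dropped to zero. */
static int ref_add_unless_zero(struct page_like *p, int refs)
{
	int old = atomic_load(&p->refcount);

	do {
		if (old <= 0)
			return 0;	/* being freed; cannot pin */
	} while (!atomic_compare_exchange_weak(&p->refcount, &old,
					       old + refs));
	return 1;
}

/* Pin the head of the group @p belongs to, retrying on a split race. */
static struct page_like *try_get_head(struct page_like *p, int refs)
{
	struct page_like *head;

retry:
	head = atomic_load(&p->head);

	if (!ref_add_unless_zero(head, refs))
		return NULL;		/* genuine failure: head is dying */

	/*
	 * If @p was split away from @head between the lookup and the
	 * refcount bump, drop the refs and redo the lookup -- the
	 * "goto retry" the patch adds -- rather than returning NULL
	 * and pushing the caller onto the slow path.
	 */
	if (atomic_load(&p->head) != head) {
		atomic_fetch_sub(&head->refcount, refs);
		goto retry;
	}

	return head;
}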