Message-ID: <a35eb904-aa1d-8f70-c868-8e50b791118b@nvidia.com>
Date: Mon, 10 Jan 2022 22:47:23 -0800
From: John Hubbard <jhubbard@...dia.com>
To: "Matthew Wilcox (Oracle)" <willy@...radead.org>, linux-mm@...ck.org
Cc: Christoph Hellwig <hch@...radead.org>,
William Kucharski <william.kucharski@...cle.com>,
linux-kernel@...r.kernel.org, Jason Gunthorpe <jgg@...pe.ca>
Subject: Re: [PATCH v2 18/28] hugetlb: Use try_grab_folio() instead of
try_grab_compound_head()
On 1/9/22 20:23, Matthew Wilcox (Oracle) wrote:
> follow_hugetlb_page() only cares about success or failure, so it doesn't
> need to know the type of the returned pointer, only whether it's NULL
> or not.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> ---
> include/linux/mm.h | 3 ---
> mm/gup.c | 2 +-
> mm/hugetlb.c | 7 +++----
> 3 files changed, 4 insertions(+), 8 deletions(-)
>
Reviewed-by: John Hubbard <jhubbard@...dia.com>
thanks,
--
John Hubbard
NVIDIA
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b249156f7cf1..c103c6401ecd 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1195,9 +1195,6 @@ static inline void get_page(struct page *page)
> }
>
> bool __must_check try_grab_page(struct page *page, unsigned int flags);
> -struct page *try_grab_compound_head(struct page *page, int refs,
> - unsigned int flags);
> -
>
> static inline __must_check bool try_get_page(struct page *page)
> {
> diff --git a/mm/gup.c b/mm/gup.c
> index 719252fa0402..20703de2f107 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -146,7 +146,7 @@ struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
> return NULL;
> }
>
> -struct page *try_grab_compound_head(struct page *page,
> +static inline struct page *try_grab_compound_head(struct page *page,
> int refs, unsigned int flags)
> {
> return &try_grab_folio(page, refs, flags)->page;
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index abcd1785c629..ab67b13c4a71 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6072,7 +6072,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>
> if (pages) {
> /*
> - * try_grab_compound_head() should always succeed here,
> + * try_grab_folio() should always succeed here,
> * because: a) we hold the ptl lock, and b) we've just
> * checked that the huge page is present in the page
> * tables. If the huge page is present, then the tail
> @@ -6081,9 +6081,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
> * any way. So this page must be available at this
> * point, unless the page refcount overflowed:
> */
> - if (WARN_ON_ONCE(!try_grab_compound_head(pages[i],
> - refs,
> - flags))) {
> + if (WARN_ON_ONCE(!try_grab_folio(pages[i], refs,
> + flags))) {
> spin_unlock(ptl);
> remainder = 0;
> err = -ENOMEM;