Message-ID: <CAMZfGtU7ktsz9VnQ5A0j+yx9irXyRLqyAkMxrjxnG=A62=WHaA@mail.gmail.com>
Date: Thu, 8 Dec 2022 12:41:36 +0800
From: Muchun Song <songmuchun@...edance.com>
To: John Hubbard <jhubbard@...dia.com>
Cc: Sidhartha Kumar <sidhartha.kumar@...cle.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>, willy@...radead.org,
tsahu@...ux.ibm.com, David Hildenbrand <david@...hat.com>,
muchun.song@...ux.dev
Subject: Re: [PATCH mm-unstable] mm: clarify folio_set_compound_order() zero support
On Thu, Dec 8, 2022 at 10:27 AM John Hubbard <jhubbard@...dia.com> wrote:
>
> On 12/7/22 17:42, Sidhartha Kumar wrote:
> >> Wouldn't it be better to instead just create a new function for that
> >> case, such as:
> >>
> >> dissolve_large_folio()
> >>
> >
> > Prior to the folio conversion, the helper function __destroy_compound_gigantic_page() did:
> >
> > set_compound_order(page, 0);
> > #ifdef CONFIG_64BIT
> > page[1].compound_nr = 0;
> > #endif
> >
> > as part of dissolving the page. My goal for this patch was to create a function that encapsulates that segment of code in a single call to folio_set_compound_order(folio, 0). set_compound_order() does not set compound_nr to 0 when 0 is passed as the order argument, so setting it explicitly is required. I don't think a separate dissolve_large_folio() function is needed for the hugetlb case, as __destroy_compound_gigantic_folio() is pretty concise as it is.
> >
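For reference, the mm-unstable helper being discussed looks roughly like
this (paraphrased from memory, so the exact body may differ slightly):

static inline void folio_set_compound_order(struct folio *folio,
		unsigned int order)
{
	folio->_folio_order = order;
#ifdef CONFIG_64BIT
	/* Order 0 is special-cased so that _folio_nr_pages is cleared. */
	folio->_folio_nr_pages = order ? 1U << order : 0;
#endif
}

That special case is the "zero support" in $subject: a plain
"1U << order" would leave _folio_nr_pages at 1 when order is 0.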
>
> Instead of "this is abusing function X()" comments, we should prefer
> well-named functions that do something understandable. And you can get
> that by noticing that folio_set_compound_order() collapses down to
> nearly nothing in the special "order 0" case. So just inline that code
> directly into __destroy_compound_gigantic_folio(), taking a moment to
> fill in and consolidate the missing CONFIG_64BIT parts in mm.h.
>
> And now you can get rid of this cruft and "abuse" comment, and instead
> just end up with two simple lines of code that are crystal clear--as
> they should be, in a "__destroy" function. Like this:
>
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 105878936485..cf227ed00945 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1754,6 +1754,7 @@ static inline void set_page_links(struct page *page, enum zone_type zone,
> #endif
> }
>
> +#ifdef CONFIG_64BIT
> /**
> * folio_nr_pages - The number of pages in the folio.
> * @folio: The folio.
> @@ -1764,13 +1765,32 @@ static inline long folio_nr_pages(struct folio *folio)
> {
> if (!folio_test_large(folio))
> return 1;
> -#ifdef CONFIG_64BIT
> return folio->_folio_nr_pages;
> +}
> +
> +static inline void folio_set_nr_pages(struct folio *folio, long nr_pages)
I like this approach and the helper name, since it is more consistent
with folio_nr_pages().
> +{
> + folio->_folio_nr_pages = nr_pages;
> +}
> #else
> +/**
> + * folio_nr_pages - The number of pages in the folio.
> + * @folio: The folio.
> + *
> + * Return: A positive power of two.
> + */
> +static inline long folio_nr_pages(struct folio *folio)
> +{
> + if (!folio_test_large(folio))
> + return 1;
> return 1L << folio->_folio_order;
> -#endif
> }
>
> +static inline void folio_set_nr_pages(struct folio *folio, long nr_pages)
> +{
> +}
> +#endif
> +
> /**
> * folio_next - Move to the next physical folio.
> * @folio: The folio we're currently operating on.
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index e3500c087893..b507a98063e6 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1344,7 +1344,8 @@ static void __destroy_compound_gigantic_folio(struct folio *folio,
> set_page_refcounted(p);
> }
>
> - folio_set_compound_order(folio, 0);
> + folio->_folio_order = 0;
I suggest not touching ->_folio_order directly; we could introduce
another helper like folio_set_order() to set ->_folio_order, pairing
with folio_order().
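Something like the sketch below is what I have in mind (illustrative
only, the exact name and placement are of course up for discussion):

static inline void folio_set_order(struct folio *folio, unsigned int order)
{
	/* Pairs with folio_order(): wraps the raw ->_folio_order write. */
	folio->_folio_order = order;
}

Then these two lines in __destroy_compound_gigantic_folio() would read:

	folio_set_order(folio, 0);
	folio_set_nr_pages(folio, 0);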
Thanks.
> + folio_set_nr_pages(folio, 0);
> __folio_clear_head(folio);
> }
>
>
> Yes?
>
> thanks,
> --
> John Hubbard
> NVIDIA