Message-Id: <BAD34D59-187B-4BB3-B139-7983A8B62648@linux.dev>
Date: Wed, 7 Dec 2022 11:34:31 +0800
From: Muchun Song <muchun.song@...ux.dev>
To: Sidhartha Kumar <sidhartha.kumar@...cle.com>
Cc: linux-kernel@...r.kernel.org,
Linux Memory Management List <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <songmuchun@...edance.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Matthew Wilcox <willy@...radead.org>,
Mina Almasry <almasrymina@...gle.com>,
Miaohe Lin <linmiaohe@...wei.com>, hughd@...gle.com,
tsahu@...ux.ibm.com, jhubbard@...dia.com,
David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH mm-unstable v5 01/10] mm: add folio dtor and order setter functions
> On Nov 30, 2022, at 06:50, Sidhartha Kumar <sidhartha.kumar@...cle.com> wrote:
>
> Add folio equivalents for set_compound_order() and set_compound_page_dtor().
>
> Also remove extra new-lines introduced by mm/hugetlb: convert
> move_hugetlb_state() to folios and mm/hugetlb_cgroup: convert
> hugetlb_cgroup_uncharge_page() to folios.
>
> Suggested-by: Mike Kravetz <mike.kravetz@...cle.com>
> Suggested-by: Muchun Song <songmuchun@...edance.com>
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@...cle.com>
> ---
> include/linux/mm.h | 16 ++++++++++++++++
> mm/hugetlb.c | 4 +---
> 2 files changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a48c5ad16a5e..2bdef8a5298a 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -972,6 +972,13 @@ static inline void set_compound_page_dtor(struct page *page,
> page[1].compound_dtor = compound_dtor;
> }
>
> +static inline void folio_set_compound_dtor(struct folio *folio,
> + enum compound_dtor_id compound_dtor)
> +{
> + VM_BUG_ON_FOLIO(compound_dtor >= NR_COMPOUND_DTORS, folio);
> + folio->_folio_dtor = compound_dtor;
> +}
> +
> void destroy_large_folio(struct folio *folio);
>
> static inline int head_compound_pincount(struct page *head)
> @@ -987,6 +994,15 @@ static inline void set_compound_order(struct page *page, unsigned int order)
> #endif
> }
>
> +static inline void folio_set_compound_order(struct folio *folio,
> + unsigned int order)
> +{
> + folio->_folio_order = order;
> +#ifdef CONFIG_64BIT
> + folio->_folio_nr_pages = order ? 1U << order : 0;
It seems you expect callers to be able to pass 0 as the order. However, the
->_folio_nr_pages and ->_folio_order fields are invalid for order-0 pages,
so they should not be touched at all in that case. This should instead be:
static inline void folio_set_compound_order(struct folio *folio,
unsigned int order)
{
if (!folio_test_large(folio))
return;
folio->_folio_order = order;
#ifdef CONFIG_64BIT
folio->_folio_nr_pages = 1U << order;
#endif
}
If we can ensure that all users of folio_set_compound_order() pass a
non-order-0 folio (which is true for now), then I suggest adding a
VM_BUG_ON_FOLIO() here to catch unexpected users:
static inline void folio_set_compound_order(struct folio *folio,
unsigned int order)
{
VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
folio->_folio_order = order;
#ifdef CONFIG_64BIT
folio->_folio_nr_pages = 1U << order;
#endif
}
Thanks.
> +#endif
> +}
> +