Message-ID: <YxDsu8Ol/yOg7sMV@monkey>
Date: Thu, 1 Sep 2022 10:32:43 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Sidhartha Kumar <sidhartha.kumar@...cle.com>, willy@...radead.org
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, songmuchun@...edance.com,
vbabka@...e.cz, william.kucharski@...cle.com, dhowells@...hat.com,
peterx@...hat.com, arnd@...db.de, ccross@...gle.com,
hughd@...gle.com, ebiederm@...ssion.com
Subject: Re: [PATCH 2/7] mm: add private field of first tail to struct page
and struct folio
On 08/29/22 16:00, Sidhartha Kumar wrote:
> Allows struct folio to store hugetlb metadata that is contained in the
> private field of the first tail page.
>
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@...cle.com>
> ---
> include/linux/mm_types.h | 15 +++++++++++++++
> 1 file changed, 15 insertions(+)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 8a9ee9d24973..726c5304172c 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -144,6 +144,7 @@ struct page {
> #ifdef CONFIG_64BIT
> unsigned int compound_nr; /* 1 << compound_order */
> #endif
> + unsigned long _private_1;
> };
> struct { /* Second tail page of compound page */
> unsigned long _compound_pad_1; /* compound_head */
> @@ -251,6 +252,7 @@ struct page {
> * @_total_mapcount: Do not use directly, call folio_entire_mapcount().
> * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
> * @_folio_nr_pages: Do not use directly, call folio_nr_pages().
> + * @_private_1: Do not use directly, call folio_get_private_1().
> *
> * A folio is a physically, virtually and logically contiguous set
> * of bytes. It is a power-of-two in size, and it is aligned to that
Not really an issue with this patch, but it made me read more of this
comment about folios. It goes on to say ...
* same power-of-two. It is at least as large as %PAGE_SIZE. If it is
* in the page cache, it is at a file offset which is a multiple of that
* power-of-two. It may be mapped into userspace at an address which is
* at an arbitrary page offset, but its kernel virtual address is aligned
* to its size.
*/
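In other words, something like this should always hold (an
illustrative check only, using folio_address() and folio_size(); not
suggesting we add it anywhere):

static inline bool folio_kva_is_aligned(struct folio *folio)
{
	/* a folio's kernel virtual address is aligned to its size */
	return IS_ALIGNED((unsigned long)folio_address(folio),
			  folio_size(folio));
}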
This series begins converting hugetlb code to folios. Just want to
note that 'hugetlb folios' have specific user space alignment
restrictions, so I do not think the comment about an arbitrary page
offset applies to hugetlb.
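To make that concrete: user space can only map a hugetlb page at an
address that is a multiple of the huge page size. A minimal example
(assuming a 2MB huge page size; illustrative only):

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t sz = 2UL << 20;	/* one 2MB huge page */
	void *addr;

	/*
	 * The kernel only places MAP_HUGETLB mappings at addresses
	 * that are a multiple of the huge page size, so a hugetlb
	 * folio is never mapped at an arbitrary page offset.
	 */
	addr = mmap(NULL, sz, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (addr != MAP_FAILED)
		printf("aligned: %d\n", (uintptr_t)addr % sz == 0);
	return 0;
}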
Matthew, should we note that hugetlb is special in the comment? Or is
it not worth updating?
Also, folio_get_private_1 will be used for the hugetlb subpool
pointer, which resides in page[1].private; that happens in the next
patch of this series. I'm sure you are aware that hugetlb also uses
page private in subpages 2 and 3. Can/will/should this method of
accessing private in subpages be expanded to cover those as well?
Expansion can happen later, but if it cannot be expanded, perhaps we
should come up with another scheme.
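For instance, a generic helper along these lines (a rough sketch with
made-up names, built on folio_page(); not part of this patch) would
cover the subpool pointer (n == 1) as well as the private fields in
subpages 2 and 3:

static inline unsigned long folio_get_private_n(struct folio *folio,
						unsigned int n)
{
	/* read the private field of the nth subpage */
	return folio_page(folio, n)->private;
}

static inline void folio_set_private_n(struct folio *folio,
				       unsigned int n,
				       unsigned long private)
{
	folio_page(folio, n)->private = private;
}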
--
Mike Kravetz
> @@ -298,6 +300,8 @@ struct folio {
> #ifdef CONFIG_64BIT
> unsigned int _folio_nr_pages;
> #endif
> + unsigned long _private_1;
> +
> };
>
> #define FOLIO_MATCH(pg, fl) \
> @@ -325,6 +329,7 @@ FOLIO_MATCH(compound_mapcount, _total_mapcount);
> FOLIO_MATCH(compound_pincount, _pincount);
> #ifdef CONFIG_64BIT
> FOLIO_MATCH(compound_nr, _folio_nr_pages);
> +FOLIO_MATCH(_private_1, _private_1);
> #endif
> #undef FOLIO_MATCH
>
> @@ -370,6 +375,16 @@ static inline void *folio_get_private(struct folio *folio)
> return folio->private;
> }
>
> +static inline void folio_set_private_1(struct folio *folio, unsigned long private)
> +{
> + folio->_private_1 = private;
> +}
> +
> +static inline unsigned long folio_get_private_1(struct folio *folio)
> +{
> + return folio->_private_1;
> +}
> +
> struct page_frag_cache {
> void * va;
> #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
> --
> 2.31.1
>