Message-ID: <ZlBVRXfZxDJjIIBd@kernel.org>
Date: Fri, 24 May 2024 11:52:21 +0300
From: Mike Rapoport <rppt@...nel.org>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Minchan Kim <minchan@...nel.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>,
Hyeonggon Yoo <42.hyeyoo@...il.com>
Subject: Re: [PATCH RFC 3/6] mm/zsmalloc: use a proper page type
On Thu, May 23, 2024 at 04:55:44PM +0200, David Hildenbrand wrote:
> On 22.05.24 23:03, David Hildenbrand wrote:
> > Let's clean it up: use a proper page type and store our data (offset
> > into a page) in the lower 16 bit as documented.
> >
> > We'll have to restrict ourselves to <= 64KB base page size (so the offset
> > fits into 16 bit), which sounds reasonable. Unfortunately, we don't have
> > any space to store it elsewhere for now.
> >
> > Based on this, we should do a proper "struct zsdesc" conversion, as
> > proposed in [1].
> >
> > This removes the last _mapcount/page_type offender.
> >
> > [1] https://lore.kernel.org/all/20231130101242.2590384-1-42.hyeyoo@gmail.com/
> >
> > Cc: Hyeonggon Yoo <42.hyeyoo@...il.com>
> > Signed-off-by: David Hildenbrand <david@...hat.com>
> > ---
> > include/linux/page-flags.h | 3 +++
> > mm/Kconfig | 1 +
> > mm/zsmalloc.c | 23 +++++++++++++++++++----
> > 3 files changed, 23 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> > index ed9ac4b5233d..ccaf16656de9 100644
> > --- a/include/linux/page-flags.h
> > +++ b/include/linux/page-flags.h
> > @@ -959,6 +959,7 @@ PAGEFLAG_FALSE(HasHWPoisoned, has_hwpoisoned)
> > #define PG_guard 0x00080000
> > #define PG_hugetlb 0x00100000
> > #define PG_slab 0x00200000
> > +#define PG_zsmalloc 0x00400000
> > #define PageType(page, flag) \
> > ((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
> > @@ -1073,6 +1074,8 @@ FOLIO_TYPE_OPS(hugetlb, hugetlb)
> > FOLIO_TEST_FLAG_FALSE(hugetlb)
> > #endif
> > +PAGE_TYPE_OPS(Zsmalloc, zsmalloc, zsmalloc)
> > +
> > /**
> > * PageHuge - Determine if the page belongs to hugetlbfs
> > * @page: The page to test.
> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index b4cb45255a54..0371d79b1b75 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -190,6 +190,7 @@ config ZSMALLOC
> > tristate
> > prompt "N:1 compression allocator (zsmalloc)" if ZSWAP
> > depends on MMU
> > + depends on PAGE_SIZE_LESS_THAN_256KB # we want <= 64KB
> > help
> > zsmalloc is a slab-based memory allocator designed to store
> > pages of various compression levels efficiently. It achieves
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index b42d3545ca85..6f0032e06242 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -20,7 +20,8 @@
> > * page->index: links together all component pages of a zspage
> > * For the huge page, this is always 0, so we use this field
> > * to store handle.
> > - * page->page_type: first object offset in a subpage of zspage
> > + * page->page_type: PG_zsmalloc, lower 16 bit locate the first object
> > + * offset in a subpage of a zspage
> > *
> > * Usage of struct page flags:
> > * PG_private: identifies the first component page
> > @@ -450,14 +451,22 @@ static inline struct page *get_first_page(struct zspage *zspage)
> > return first_page;
> > }
> > +static inline void reset_first_obj_offset(struct page *page)
> > +{
> > + page->page_type |= 0xffff;
> > +}
> > +
> > static inline unsigned int get_first_obj_offset(struct page *page)
> > {
> > - return page->page_type;
> > + return page->page_type & 0xffff;
> > }
> > static inline void set_first_obj_offset(struct page *page, unsigned int offset)
> > {
> > - page->page_type = offset;
> > + BUILD_BUG_ON(PAGE_SIZE & ~0xffff);
>
> Buildbot is correctly complaining with PAGE_SIZE=64KiB.
>
> We must check BUILD_BUG_ON((PAGE_SIZE -1) & ~0xffff);
Won't
BUILD_BUG_ON(PAGE_SIZE > SZ_64K)
be clearer?
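
For reference, a quick untested sketch of how the three variants behave
with a 64KB base page (PAGE_SIZE == 0x10000). The setter body below is
only my guess at the mask-and-store, since the quoted hunk is trimmed
right after the BUILD_BUG_ON line:

/*
 * With PAGE_SIZE == 0x10000 and SZ_64K == 0x10000:
 *
 *   BUILD_BUG_ON(PAGE_SIZE & ~0xffff);        0x10000 & ~0xffff == 0x10000 -> fires (false positive)
 *   BUILD_BUG_ON((PAGE_SIZE - 1) & ~0xffff);  0xffff  & ~0xffff == 0       -> ok
 *   BUILD_BUG_ON(PAGE_SIZE > SZ_64K);         0x10000 > 0x10000 is false   -> ok, and states the limit
 */
static inline void set_first_obj_offset(struct page *page, unsigned int offset)
{
	/* The largest first-object offset is PAGE_SIZE - 1, i.e. at most 0xffff. */
	BUILD_BUG_ON(PAGE_SIZE > SZ_64K);
	VM_WARN_ON_ONCE(offset & ~0xffff);

	/* Keep PG_zsmalloc in the upper bits, store the offset in the lower 16. */
	page->page_type &= ~0xffff;
	page->page_type |= offset & 0xffff;
}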
> --
> Cheers,
>
> David / dhildenb
>
>
--
Sincerely yours,
Mike.