Message-ID: <20240814060354.GC8686@google.com>
Date: Wed, 14 Aug 2024 15:03:54 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Matthew Wilcox <willy@...radead.org>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
	Yosry Ahmed <yosryahmed@...gle.com>,
	Andrew Morton <akpm@...ux-foundation.org>, alexs@...nel.org,
	Vitaly Wool <vitaly.wool@...sulko.com>,
	Miaohe Lin <linmiaohe@...wei.com>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, minchan@...nel.org, david@...hat.com,
	42.hyeyoo@...il.com, nphamcs@...il.com
Subject: Re: [PATCH v5 00/21] mm/zsmalloc: add zpdesc memory descriptor for
 zswap.zpool

On (24/08/08 04:37), Matthew Wilcox wrote:
[..]
>
> I don't know if it's _your_ problem.  It's _our_ problem.  The arguments
> for (at least attempting) to shrink struct page seem quite compelling.
> We have a plan for most of the users of struct page, in greater or
> lesser detail.  I don't think we have a plan for zsmalloc.  Or at least
> if there is a plan, I don't know what it is.

Got it, thanks.  And sorry for the very delayed reply.

> > > Do you allocate a per-page struct zpdesc, and have each one pointing
> > > to a zspage?
> > 
> > I'm not very knowledgeable when it comes to memdesc, excuse my
> > ignorance, and please feel free to educate me.
> 
> I've written about it here:
> https://kernelnewbies.org/MatthewWilcox/Memdescs
> https://kernelnewbies.org/MatthewWilcox/FolioAlloc
> https://kernelnewbies.org/MatthewWilcox/Memdescs/Path

Thanks a lot!

> > So I guess if we have something
> > 
> > struct zspage {
> > 	..
> > 	struct zpdesc *first_desc;
> > 	..
> > }
> > 
> > and we "chain" zpdesc-s to form a zspage, and make each of them point to
> > a corresponding struct page (memdesc -> *page), then it'll resemble the
> > current zsmalloc design and should work for everyone?  I also assume that
> > for zpdesc-s zsmalloc will need to maintain a dedicated kmem_cache?
> 
> Right, we could do that.  Each memdesc has to be a multiple of 16 bytes,
> so we'd be doing something like allocating 32 bytes for each page.
> Is there really 32 bytes of information that we want to store for
> each page?  Or could we store all of the information in (a somewhat
> larger) zspage?  Assuming we allocate 3 pages per zspage, if we allocate
> an extra 64 bytes in the zspage, we've saved 32 bytes per zspage.

I certainly like (and appreciate) the approach that saves us some
bytes here and there.  A zspage can consist of anywhere from 1 up to
CONFIG_ZSMALLOC_CHAIN_SIZE (max 16) physical pages.  I'm trying to
understand (in pseudo-C code) what a "somewhat larger zspage" means.
A fixed-size array per zspage, given that we know the max number of
physical pages?
