Date:   Thu, 20 Jul 2023 00:54:28 -0700
From:   Yosry Ahmed <yosryahmed@...gle.com>
To:     Sergey Senozhatsky <senozhatsky@...omium.org>
Cc:     Hyeonggon Yoo <42.hyeyoo@...il.com>,
        Minchan Kim <minchan@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Matthew Wilcox <willy@...radead.org>,
        Mike Rapoport <rppt@...nel.org>
Subject: Re: [RFC PATCH v2 00/21] mm/zsmalloc: Split zsdesc from struct page

On Thu, Jul 20, 2023 at 12:18 AM Sergey Senozhatsky
<senozhatsky@...omium.org> wrote:
>
> On (23/07/13 13:20), Hyeonggon Yoo wrote:
> > The purpose of this series is to define own memory descriptor for zsmalloc,
> > instead of re-using various fields of struct page. This is a part of the
> > effort to reduce the size of struct page to unsigned long and enable
> > dynamic allocation of memory descriptors.
> >
> > While [1] outlines this ultimate objective, the current use of struct page
> > is highly dependent on its definition, making it challenging to separately
> > allocate memory descriptors.
>
> I glanced through the series and it all looks pretty straightforward to
> me. I'll have a closer look. And we definitely need Minchan to ACK it.
>
> > Therefore, this series introduces new descriptor for zsmalloc, called
> > zsdesc. It overlays struct page for now, but will eventually be allocated
> > independently in the future.
>
> So I don't expect zsmalloc memory usage increase. On one hand for each
> physical page that zspage consists of we will allocate zsdesc (extra bytes),
> but at the same time struct page gets slimmer. So we should be even, or
> am I wrong?

Well, it depends. Here is my understanding (which may be completely wrong):

The end goal would be to have an 8-byte memdesc for each order-0 page,
and then allocate a specialized struct per-folio according to the use
case. In this case, we would have a memdesc and a zsdesc for each
order-0 page. If sizeof(zsdesc) is 64 bytes (on 64-bit), then it's a
net loss. The savings only start kicking in with higher order folios.
As of now, zsmalloc only uses order-0 pages as far as I can tell, so
the usage would increase if I understand correctly.

It seems to me, though, that sizeof(zsdesc) is actually 56 bytes (on
64-bit), so sizeof(zsdesc) + sizeof(memdesc) would equal the current
size of struct page. If that's true, then there is no loss, and there's
a potential gain if we start using higher-order folios in zsmalloc in
the future.

(That is of course unless we want to maintain cache line alignment for
the zsdescs, then we might end up using 64 bytes anyway).
