Message-ID: <CAF8kJuPymUX+DwouwgH6og0BO6ZYheGXsk=GYqYuMjKMz-Xqbw@mail.gmail.com>
Date:   Fri, 17 Nov 2023 15:47:49 -0800
From:   Chris Li <chrisl@...nel.org>
To:     Zhongkun He <hezhongkun.hzk@...edance.com>
Cc:     Yosry Ahmed <yosryahmed@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Nhat Pham <nphamcs@...il.com>,
        Seth Jennings <sjenning@...hat.com>,
        Dan Streetman <ddstreet@...e.org>,
        Vitaly Wool <vitaly.wool@...sulko.com>,
        linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>, Ying <ying.huang@...el.com>
Subject: Re: [External] Re: [PATCH] mm:zswap: fix zswap entry reclamation
 failure in two scenarios

On Fri, Nov 17, 2023 at 1:56 AM Zhongkun He
<hezhongkun.hzk@...edance.com> wrote:
> Hi Chris, thanks for your feedback.  I have the same concerns,
> maybe we should just move the zswap_invalidate() out of batches,
> as Yosry mentioned above.

As I replied in the previous email, I just want to understand the
other side effects of the change better.

To me, this patch is really about freeing memory that does not
require an actual page IO write from zswap, which means the memory
comes from some kind of cache. It would be better not to complicate
the writeback path further. Instead, we can drop that memory from the
respective cache when needed. I assume those caches are doing
something useful in the common case; if not, we should have a patch
to remove those caches instead. I am not sure how big a mess it would
be to implement separate writeback and drop-cache paths.
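
For illustration only, here is a rough userspace-style sketch of what
I mean by separating the two paths; every name in it is invented and
this is not actual kernel code:

    /*
     * Sketch: classify an entry as droppable (pure cache, no IO
     * needed) vs. requiring writeback. Names are hypothetical.
     */
    #include <stdbool.h>
    #include <stdio.h>

    enum reclaim_action { RECLAIM_WRITEBACK, RECLAIM_DROP };

    /* If an identical copy of the data already exists on the swap
     * device, the compressed copy is pure cache and can be dropped
     * without IO; otherwise it must take the writeback path. */
    static enum reclaim_action classify(bool copy_on_swap_device)
    {
        return copy_on_swap_device ? RECLAIM_DROP : RECLAIM_WRITEBACK;
    }

    int main(void)
    {
        printf("clean entry -> %s\n",
               classify(true) == RECLAIM_DROP ? "drop" : "writeback");
        printf("dirty entry -> %s\n",
               classify(false) == RECLAIM_DROP ? "drop" : "writeback");
        return 0;
    }

The point of the split is that the drop path never touches the IO
machinery, so the writeback path stays as it is today.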

While you are here, I have some questions for you.

Can you help me understand how much memory you can free from this
patch? For example, are we talking about a few pages or a few GB?

Where does the freed memory come from?
If the memory comes from the zswap entry structs, then due to slab
allocator fragmentation it would take a lot of zswap entries to
reclaim meaningful memory from the slab allocator.
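
A back-of-envelope number to make the scale concrete; the 72-byte
figure is my guess for sizeof(struct zswap_entry) on a 64-bit kernel,
and the real value depends on the config:

    /* Back-of-envelope only: entry_size is an assumption. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long entry_size = 72;   /* assumed entry size */
        unsigned long page_size  = 4096;

        printf("~%lu entries per slab page\n", page_size / entry_size);
        /* Slab pages only return to the system when every object on
         * them has been freed, so this is the best case: */
        printf("~%lu entries freed per MiB reclaimed\n",
               (1UL << 20) / entry_size);
        return 0;
    }

That is roughly 56 entries per slab page and on the order of 14,000
entries per MiB, and only if whole slab pages actually empty out.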

If the memory comes from swap cache pages, that would be much more
meaningful. But that is not what this patch is doing, right?

Chris
