Message-ID: <CAKEwX=Mz_tmU1Qjm8ExfnmCVvkNcd2cYpcLQLZBBx0QCXJvOpA@mail.gmail.com>
Date: Thu, 29 Aug 2024 17:06:43 -0700
From: Nhat Pham <nphamcs@...il.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>, linux-kernel@...r.kernel.org, 
	linux-mm@...ck.org, hannes@...xchg.org, chengming.zhou@...ux.dev, 
	usamaarif642@...il.com, ryan.roberts@....com, ying.huang@...el.com, 
	21cnbao@...il.com, akpm@...ux-foundation.org, nanhai.zou@...el.com, 
	wajdi.k.feghali@...el.com, vinodh.gopal@...el.com
Subject: Re: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios

On Thu, Aug 29, 2024 at 4:55 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> On Thu, Aug 29, 2024 at 4:45 PM Nhat Pham <nphamcs@...il.com> wrote:
> I think it's also the fact that the processes exit right after they
> are done allocating the memory. So I think in the case of SSD, when we
> stall waiting for IO some processes get to exit and free up memory, so
> we need to do less swapping out in general because the processes are
> more serialized. With zswap, all processes try to access memory at the
> same time so the required amount of memory at any given point is
> higher, leading to more thrashing.
>
> I suggested keeping the memory allocated for a long time to even the
> playing field, or we can make the processes keep looping and accessing
> the memory (or part of it) for a while.
>
> That being said, I think this may be a signal that the memory.high
> throttling is not performing as expected in the zswap case. Not sure
> tbh, but I don't think SSD swap should perform better than zswap in
> that case.
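
(As an illustrative aside: a minimal sketch of the suggested looping
workload might look like the below. The per-process size and round
count are arbitrary placeholders, not values from the actual
benchmark.)

/* Allocate a buffer, then keep looping over it so the memory stays
 * resident instead of being freed as soon as allocation finishes. */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUF_SIZE (512UL << 20)	/* 512 MiB per process, arbitrary */
#define ROUNDS   100		/* keep the memory hot for a while */

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	unsigned char *buf = malloc(BUF_SIZE);

	if (!buf)
		return 1;

	memset(buf, 0xaa, BUF_SIZE);	/* fault everything in */

	/* Touch one byte per page, repeatedly, so reclaim keeps
	 * seeing the pages in use and swap-out gets exercised. */
	for (int r = 0; r < ROUNDS; r++)
		for (size_t i = 0; i < BUF_SIZE; i += page)
			buf[i]++;

	free(buf);
	return 0;
}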

Yeah, something is fishy there. That said, the benchmarking in v4 is wack:

1. We use lz4, which has a really poor compression ratio.

2. The swapfile is really small, so we occasionally run into swap
allocation failures.

Both of these factors affect benchmarking validity and stability a
lot. I think in this version's benchmarks, with zstd as the software
compressor + a much larger swapfile (albeit on top of a ZRAM block
device), we no longer see memory.high violations, even at a lower
memory.high value...? The performance numbers are wack indeed - not a
lot of data points in the case 2 section.
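
(Again as an aside: whether memory.high was actually breached can be
confirmed after each run by reading the workload cgroup's
memory.events "high" counter. A rough sketch, assuming cgroup v2; the
"/sys/fs/cgroup/bench" path is a made-up placeholder:)

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/sys/fs/cgroup/bench/memory.events", "r");
	char key[32];
	unsigned long long val;

	if (!f) {
		perror("fopen");
		return 1;
	}

	/* memory.events is a list of "<event> <count>" lines; "high"
	 * counts how many times usage went over memory.high. */
	while (fscanf(f, "%31s %llu", key, &val) == 2) {
		if (!strcmp(key, "high"))
			printf("memory.high breaches: %llu\n", val);
	}

	fclose(f);
	return 0;
}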
