Message-ID: <CAJD7tkY9SQ3NOukRY3Zh9ML4yyN-zC0krNkpoUzeCd5tyE1Zgw@mail.gmail.com>
Date: Thu, 29 Aug 2024 17:14:15 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Nhat Pham <nphamcs@...il.com>
Cc: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>, linux-kernel@...r.kernel.org, 
	linux-mm@...ck.org, hannes@...xchg.org, chengming.zhou@...ux.dev, 
	usamaarif642@...il.com, ryan.roberts@....com, ying.huang@...el.com, 
	21cnbao@...il.com, akpm@...ux-foundation.org, nanhai.zou@...el.com, 
	wajdi.k.feghali@...el.com, vinodh.gopal@...el.com
Subject: Re: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios

On Thu, Aug 29, 2024 at 5:06 PM Nhat Pham <nphamcs@...il.com> wrote:
>
> On Thu, Aug 29, 2024 at 4:55 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
> >
> > On Thu, Aug 29, 2024 at 4:45 PM Nhat Pham <nphamcs@...il.com> wrote:
> > I think it's also the fact that the processes exit right after they
> > are done allocating the memory. So I think in the SSD case, when we
> > stall waiting for IO, some processes get to exit and free up memory,
> > so we need to do less swapping out overall because the processes are
> > more serialized. With zswap, all the processes try to access memory
> > at the same time, so the amount of memory required at any given point
> > is higher, leading to more thrashing.
> >
> > I suggested keeping the memory allocated for a long time to level the
> > playing field, or we can make the processes keep looping and accessing
> > the memory (or part of it) for a while.
> >
> > That being said, I think this may be a signal that the memory.high
> > throttling is not performing as expected in the zswap case. Not sure
> > tbh, but I don't think SSD swap should perform better than zswap in
> > that case.
>
> Yeah, something is fishy there. That said, the benchmarking in v4 is wack:
>
> 1. We use lz4, which has a really poor compression ratio.
>
> 2. The swapfile is really small, so we occasionally run into swap
> allocation failures.
>
> Both of these factors affect benchmarking validity and stability a
> lot. I think in this version's benchmarks, with zstd as the software
> compressor + a much larger swapfile (albeit on top of a ZRAM block
> device), we no longer see memory.high violations, even at a lower
> memory.high value...? The performance numbers are wack indeed - there
> are not a lot of values in the case 2 section.

But when we use zram, we are essentially comparing two swap mechanisms
that compress mTHPs page by page, with the only difference being that
zram does not account the memory to the cgroup. For this comparison to
have any value, IMO it should be done on an SSD, so that it at least
serves as the practical sanity check you mentioned earlier. In its
current form I don't think it's providing any value.
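
To make the looping suggestion above concrete, here is a rough sketch
of the kind of workload I have in mind (hypothetical, not the driver
used in these benchmarks; BUF_SIZE and LOOP_SECONDS are arbitrary
illustrative values):

/*
 * Each process allocates a buffer, faults every page in, and then
 * keeps re-touching one byte per page for a while, so its working
 * set stays resident (and swappable) instead of the process exiting
 * as soon as allocation finishes.
 */
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE     (1UL << 30)  /* 1 GiB per process; illustrative */
#define PAGE_SZ      4096UL
#define LOOP_SECONDS 60

int main(void)
{
        char *buf = malloc(BUF_SIZE);
        time_t start;

        if (!buf)
                return 1;

        /* Fault every page in so the allocation is actually backed. */
        memset(buf, 0xaa, BUF_SIZE);

        /* Keep the working set hot instead of exiting right away. */
        start = time(NULL);
        while (time(NULL) - start < LOOP_SECONDS) {
                for (size_t off = 0; off < BUF_SIZE; off += PAGE_SZ)
                        buf[off]++;
        }

        free(buf);
        return 0;
}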
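
And on the compressor point: assuming the zstd crypto module is
available, switching zswap from lz4 to zstd is just a write to the
module parameter. A minimal sketch of doing that programmatically
(needs root; same effect as echoing "zstd" into the sysfs file):

#include <stdio.h>

int main(void)
{
        /* Same effect as:
         *   echo zstd > /sys/module/zswap/parameters/compressor
         */
        FILE *f = fopen("/sys/module/zswap/parameters/compressor", "w");

        if (!f) {
                perror("zswap compressor param");
                return 1;
        }
        fputs("zstd", f);
        return fclose(f) ? 1 : 0;
}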
