Message-ID: <SJ0PR11MB567819D1B7D112512778D0D6C96C2@SJ0PR11MB5678.namprd11.prod.outlook.com>
Date: Fri, 20 Sep 2024 02:22:20 +0000
From: "Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
To: Yosry Ahmed <yosryahmed@...gle.com>, Nhat Pham <nphamcs@...il.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>, "hannes@...xchg.org"
	<hannes@...xchg.org>, "chengming.zhou@...ux.dev" <chengming.zhou@...ux.dev>,
	"usamaarif642@...il.com" <usamaarif642@...il.com>, "ryan.roberts@....com"
	<ryan.roberts@....com>, "Huang, Ying" <ying.huang@...el.com>,
	"21cnbao@...il.com" <21cnbao@...il.com>, "akpm@...ux-foundation.org"
	<akpm@...ux-foundation.org>, "Zou, Nanhai" <nanhai.zou@...el.com>, "Feghali,
 Wajdi K" <wajdi.k.feghali@...el.com>, "Gopal, Vinodh"
	<vinodh.gopal@...el.com>, "Sridhar, Kanchana P"
	<kanchana.p.sridhar@...el.com>
Subject: RE: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios


> -----Original Message-----
> From: Yosry Ahmed <yosryahmed@...gle.com>
> Sent: Thursday, August 29, 2024 4:55 PM
> To: Nhat Pham <nphamcs@...il.com>
> Cc: Sridhar, Kanchana P <kanchana.p.sridhar@...el.com>; linux-
> kernel@...r.kernel.org; linux-mm@...ck.org; hannes@...xchg.org;
> chengming.zhou@...ux.dev; usamaarif642@...il.com;
> ryan.roberts@....com; Huang, Ying <ying.huang@...el.com>;
> 21cnbao@...il.com; akpm@...ux-foundation.org; Zou, Nanhai
> <nanhai.zou@...el.com>; Feghali, Wajdi K <wajdi.k.feghali@...el.com>;
> Gopal, Vinodh <vinodh.gopal@...el.com>
> Subject: Re: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios
> 
> On Thu, Aug 29, 2024 at 4:45 PM Nhat Pham <nphamcs@...il.com> wrote:
> >
> > On Thu, Aug 29, 2024 at 3:49 PM Yosry Ahmed <yosryahmed@...gle.com>
> wrote:
> > >
> > > On Thu, Aug 29, 2024 at 2:27 PM Kanchana P Sridhar
> > > <kanchana.p.sridhar@...el.com> wrote:
> > >
> > > We are basically comparing zram with zswap in this case, and it's not
> > > fair because, as you mentioned, the zswap compressed data is being
> > > accounted for while the zram compressed data isn't. I am not really
> > > sure how valuable these test results are. Even if we remove the cgroup
> > > accounting from zswap, we won't see an improvement; we should expect
> > > performance similar to zram's.
> > >
> > > I think the test results that are really valuable are those for case
> > > 1, where zswap users currently disable CONFIG_THP_SWAP and get to
> > > enable it after this series.
> >
> > Ah, this is a good point.
> >
> > I think the point of comparing mTHP zswap vs. mTHP (SSD) swap is more
> > of a sanity check. IOW, if mTHP SSD swap outperforms mTHP zswap, then
> > something is wrong (otherwise why would anyone enable zswap - might
> > as well just use SSD swap, since SSD swap with mTHP >>> zswap with
> > mTHP >>> zswap without mTHP).
> 
> Yeah, good point, but as you mention below..
> 
> >
> > That said, I don't think this benchmark can show it anyway. The access
> > pattern here is such that all the allocated memory is really cold,
> > so swapping to disk (or to zram, which does not account memory usage
> > towards the cgroup) is better by definition... And Kanchana does not
> > seem to have access to a setup with larger SSD swapfiles? :)
> 
> I think it's also the fact that the processes exit right after they
> are done allocating the memory. So I think in the case of SSD, when we
> stall waiting for IO, some processes get to exit and free up memory, so
> we need to do less swapping out in general because the processes are
> more serialized. With zswap, all processes try to access memory at the
> same time, so the required amount of memory at any given point is
> higher, leading to more thrashing.
> 
> I suggested keeping the memory allocated for a long time to even the
> playing field, or we can make the processes keep looping and accessing
> the memory (or part of it) for a while.

Thanks for the suggestion, Yosry. I have shared the data in my earlier
response today, which seems to confirm your hypothesis. Please do let
me know if you have any other suggestions.

We generally see better usemem throughput with zswap-mTHP than with
SSD-mTHP.
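
For reference, the "keep looping over the memory" pattern suggested
above could look roughly like the sketch below. This is a standalone
illustration, not the actual usemem code; the buffer size and iteration
count are placeholders, not the parameters used in these runs.

/*
 * Allocate a buffer, fault it in, then keep re-reading it so the
 * process does not exit as soon as allocation finishes and its pages
 * stay in the working set while the other processes run.
 */
#include <stdlib.h>
#include <string.h>

#define BUF_SIZE   (1UL << 30)   /* 1 GiB, placeholder */
#define ITERATIONS 100           /* placeholder */

int main(void)
{
	unsigned char *buf = malloc(BUF_SIZE);

	if (!buf)
		return 1;

	/* Fault in every page once. */
	memset(buf, 0xa5, BUF_SIZE);

	/* Re-touch one byte per page on every pass. */
	for (int i = 0; i < ITERATIONS; i++) {
		volatile unsigned long sum = 0;

		for (size_t off = 0; off < BUF_SIZE; off += 4096)
			sum += buf[off];
	}

	free(buf);
	return 0;
}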

Thanks,
Kanchana

> 
> That being said, I think this may be a signal that the memory.high
> throttling is not performing as expected in the zswap case. Not sure
> tbh, but I don't think SSD swap should perform better than zswap in
> that case.
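
(For context, memory.high here is the cgroup v2 limit the benchmark
runs are contained under. A minimal sketch of how a test harness might
apply such a cap is below; the cgroup path and the 8G value are
illustrative assumptions, not the settings used in the reported runs,
and the "test" cgroup is assumed to already exist.)

#include <stdio.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fprintf(f, "%s", val);
	return fclose(f);
}

int main(void)
{
	char pid[16];

	/* Above memory.high the kernel reclaims aggressively and throttles. */
	if (write_str("/sys/fs/cgroup/test/memory.high", "8G"))
		return 1;

	/* Move this process into the group before running the workload. */
	snprintf(pid, sizeof(pid), "%d", (int)getpid());
	if (write_str("/sys/fs/cgroup/test/cgroup.procs", pid))
		return 1;

	/* ... exec the usemem-style workload from here ... */
	return 0;
}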
