Message-ID: <SJ0PR11MB567808506EEBB2CF5ECC6931C96C2@SJ0PR11MB5678.namprd11.prod.outlook.com>
Date: Fri, 20 Sep 2024 02:26:56 +0000
From: "Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
To: Nhat Pham <nphamcs@...il.com>, Yosry Ahmed <yosryahmed@...gle.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, "hannes@...xchg.org"
<hannes@...xchg.org>, "chengming.zhou@...ux.dev" <chengming.zhou@...ux.dev>,
"usamaarif642@...il.com" <usamaarif642@...il.com>, "ryan.roberts@....com"
<ryan.roberts@....com>, "Huang, Ying" <ying.huang@...el.com>,
"21cnbao@...il.com" <21cnbao@...il.com>, "akpm@...ux-foundation.org"
<akpm@...ux-foundation.org>, "Zou, Nanhai" <nanhai.zou@...el.com>, "Feghali,
Wajdi K" <wajdi.k.feghali@...el.com>, "Gopal, Vinodh"
<vinodh.gopal@...el.com>, "Sridhar, Kanchana P"
<kanchana.p.sridhar@...el.com>
Subject: RE: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios
> -----Original Message-----
> From: Nhat Pham <nphamcs@...il.com>
> Sent: Thursday, August 29, 2024 5:07 PM
> To: Yosry Ahmed <yosryahmed@...gle.com>
> Cc: Sridhar, Kanchana P <kanchana.p.sridhar@...el.com>; linux-
> kernel@...r.kernel.org; linux-mm@...ck.org; hannes@...xchg.org;
> chengming.zhou@...ux.dev; usamaarif642@...il.com;
> ryan.roberts@....com; Huang, Ying <ying.huang@...el.com>;
> 21cnbao@...il.com; akpm@...ux-foundation.org; Zou, Nanhai
> <nanhai.zou@...el.com>; Feghali, Wajdi K <wajdi.k.feghali@...el.com>;
> Gopal, Vinodh <vinodh.gopal@...el.com>
> Subject: Re: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios
>
> On Thu, Aug 29, 2024 at 4:55 PM Yosry Ahmed <yosryahmed@...gle.com>
> wrote:
> >
> > On Thu, Aug 29, 2024 at 4:45 PM Nhat Pham <nphamcs@...il.com>
> wrote:
> > I think it's also the fact that the processes exit right after they
> > are done allocating the memory. In the SSD case, when we stall waiting
> > for IO, some processes get to exit and free up memory, so we need to do
> > less swapping out overall because the processes are more serialized.
> > With zswap, all processes try to access memory at the same time, so the
> > required amount of memory at any given point is higher, leading to more
> > thrashing.
> >
> > I suggested keeping the memory allocated for a long time to level the
> > playing field, or we can make the processes keep looping and accessing
> > the memory (or part of it) for a while.
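
To make the above concrete: this is essentially what the reruns mentioned
at the end of this mail do via usemem's --sleep option. A rough sketch,
with the allocation size and process count as placeholders rather than the
values from the posted data:

  # Each usemem instance maps and touches 1g of anonymous memory, then
  # sleeps for 10 seconds before exiting, so the allocations from all
  # processes stay resident and overlap instead of being freed as soon
  # as each process finishes.
  for i in $(seq 1 30); do
          usemem --sleep 10 1g &
  done
  wait
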
> >
> > That being said, I think this may be a signal that the memory.high
> > throttling is not performing as expected in the zswap case. Not sure
> > tbh, but I don't think SSD swap should perform better than zswap in
> > that case.
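
For reference, memory.high here is the cgroup-v2 limit on the test cgroup.
A minimal sketch of the setup and of where the throttling shows up (the
cgroup path and the limit value are illustrative, not the ones used in the
runs):

  mkdir -p /sys/fs/cgroup/test
  echo 40G > /sys/fs/cgroup/test/memory.high    # reclaim/throttle above this
  echo $$  > /sys/fs/cgroup/test/cgroup.procs   # run the workload from this shell
  # After a run, the "high" counter shows how often the limit was breached:
  grep high /sys/fs/cgroup/test/memory.events
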
>
> Yeah, something is fishy there. That said, the benchmarking in v4 is wack:
>
> 1. We use lz4, which has a really poor compression factor.
>
> 2. The swapfile is really small, so we occasionally see problems with
> swap allocation failure.
>
> Both of these factors affect benchmarking validity and stability a
> lot. I think in this version's benchmarks, with zstd as the software
> compressor + a much larger swapfile (albeit on top of a ZRAM block
> device), we no longer see memory.high violations, even at a lower
> memory.high value...? The performance numbers are wack indeed - not a
> lot of values in the case 2 section.
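
For reference, the "zstd + much larger swapfile on a ZRAM block device"
configuration described above boils down to something like the following,
using the zram device directly as swap (the disksize below is a
placeholder, not the size used in those benchmarks):

  echo zstd > /sys/module/zswap/parameters/compressor
  echo Y > /sys/module/zswap/parameters/enabled
  modprobe zram                          # creates /dev/zram0 by default
  echo 64G > /sys/block/zram0/disksize   # placeholder backing-store size
  mkswap /dev/zram0
  swapon /dev/zram0
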
Hopefully the latest data from the two sets of experiments (4G SSD with
usemem --sleep 10, and 179G SSD) makes better sense?
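
In case it helps with reproducing the runs, the SSD swap in those two
configurations is a plain swap area of the listed size; a swapfile-based
equivalent would look something like this (4G shown, 179G for the larger
configuration):

  fallocate -l 4G /swapfile
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile
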
Thanks,
Kanchana