Message-ID: <SJ0PR11MB5678FC506E95506EF883EB71C9962@SJ0PR11MB5678.namprd11.prod.outlook.com>
Date: Thu, 29 Aug 2024 02:38:04 +0000
From: "Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
To: Barry Song <21cnbao@...il.com>
CC: "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"baolin.wang@...ux.alibaba.com" <baolin.wang@...ux.alibaba.com>,
"chrisl@...nel.org" <chrisl@...nel.org>, "david@...hat.com"
<david@...hat.com>, "hanchuanhua@...o.com" <hanchuanhua@...o.com>,
"hannes@...xchg.org" <hannes@...xchg.org>, "hch@...radead.org"
<hch@...radead.org>, "hughd@...gle.com" <hughd@...gle.com>,
"kaleshsingh@...gle.com" <kaleshsingh@...gle.com>, "kasong@...cent.com"
<kasong@...cent.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "linux-mm@...ck.org" <linux-mm@...ck.org>,
"mhocko@...e.com" <mhocko@...e.com>, "minchan@...nel.org"
<minchan@...nel.org>, "nphamcs@...il.com" <nphamcs@...il.com>,
"ryan.roberts@....com" <ryan.roberts@....com>, "ryncsn@...il.com"
<ryncsn@...il.com>, "senozhatsky@...omium.org" <senozhatsky@...omium.org>,
"shakeel.butt@...ux.dev" <shakeel.butt@...ux.dev>, "shy828301@...il.com"
<shy828301@...il.com>, "surenb@...gle.com" <surenb@...gle.com>,
"v-songbaohua@...o.com" <v-songbaohua@...o.com>, "willy@...radead.org"
<willy@...radead.org>, "xiang@...nel.org" <xiang@...nel.org>, "Huang, Ying"
<ying.huang@...el.com>, "yosryahmed@...gle.com" <yosryahmed@...gle.com>,
"zhengtangquan@...o.com" <zhengtangquan@...o.com>, "Feghali, Wajdi K"
<wajdi.k.feghali@...el.com>, "Gopal, Vinodh" <vinodh.gopal@...el.com>,
"Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
Subject: RE: [PATCH v7 2/2] mm: support large folios swap-in for sync io
devices

Hi Barry,
> -----Original Message-----
> From: Barry Song <21cnbao@...il.com>
> Sent: Wednesday, August 28, 2024 7:25 PM
> To: Sridhar, Kanchana P <kanchana.p.sridhar@...el.com>
> Cc: akpm@...ux-foundation.org; baolin.wang@...ux.alibaba.com;
> chrisl@...nel.org; david@...hat.com; hanchuanhua@...o.com;
> hannes@...xchg.org; hch@...radead.org; hughd@...gle.com;
> kaleshsingh@...gle.com; kasong@...cent.com; linux-
> kernel@...r.kernel.org; linux-mm@...ck.org; mhocko@...e.com;
> minchan@...nel.org; nphamcs@...il.com; ryan.roberts@....com;
> ryncsn@...il.com; senozhatsky@...omium.org; shakeel.butt@...ux.dev;
> shy828301@...il.com; surenb@...gle.com; v-songbaohua@...o.com;
> willy@...radead.org; xiang@...nel.org; Huang, Ying
> <ying.huang@...el.com>; yosryahmed@...gle.com;
> zhengtangquan@...o.com; Feghali, Wajdi K <wajdi.k.feghali@...el.com>;
> Gopal, Vinodh <vinodh.gopal@...el.com>
> Subject: Re: [PATCH v7 2/2] mm: support large folios swap-in for sync io
> devices
>
> On Thu, Aug 29, 2024 at 1:01 PM Kanchana P Sridhar
> <kanchana.p.sridhar@...el.com> wrote:
> >
> > Hi Shakeel,
> >
> > We submitted an RFC patchset [1] with the Intel In-Memory Analytics
> > Accelerator (Intel IAA) sometime back. This introduces a new 'canned-by_n'
> > compression algorithm in the IAA crypto driver.
> >
> > Relative to software compressors, we could get a 10X improvement in zram
> > write latency and 7X improvement in zram read latency.
> >
> > [1] https://lore.kernel.org/all/cover.1714581792.git.andre.glover@linux.intel.com/
>
> Hi Kanchana,
> Thanks for sharing. I understand you’ll need this mTHP swap-in series
> to leverage your IAA for parallel decompression, right? Without mTHP
> swap-in, you won't get this 7X improvement, right?
Yes, that is correct.
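To spell out why: a 4 KiB swap-in hands the decompressor a single page per
fault, so there is nothing to run in parallel, whereas an mTHP swap-in brings
in a whole large folio, and all of its sub-pages can be decompressed as one
batch -- that batch is what a parallel engine like IAA needs in order to show
the gain. Purely as an illustration of the batching idea (a userspace sketch
with made-up sizes and pthreads standing in for the hardware queues, not
kernel or IAA driver code; compile with cc -pthread):

/*
 * Illustrative only: one "fault" either decompresses a single 4 KiB page
 * or submits all sub-pages of a large folio as a concurrent batch.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define SUBPAGES 16     /* e.g. a 64 KiB mTHP = 16 x 4 KiB sub-pages */

/* Stand-in for decompressing one 4 KiB page; real hardware would do this. */
static void *decompress_page(void *arg)
{
        (void)arg;
        usleep(1000);   /* pretend each page costs ~1 ms */
        return NULL;
}

int main(void)
{
        pthread_t tid[SUBPAGES];
        int i;

        /* 4 KiB swap-in: one fault = one page, nothing to overlap. */
        decompress_page(NULL);

        /*
         * mTHP swap-in: one fault covers SUBPAGES pages, so the per-page
         * decompression jobs can be issued together and overlap -- this
         * is where a parallel (de)compressor can cut the read latency.
         */
        for (i = 0; i < SUBPAGES; i++)
                pthread_create(&tid[i], NULL, decompress_page, NULL);
        for (i = 0; i < SUBPAGES; i++)
                pthread_join(tid[i], NULL);

        printf("batched %d sub-page decompressions for one large-folio fault\n",
               SUBPAGES);
        return 0;
}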
>
> This is another important use case for the mTHP swap-in series,
> highlighting the strong need to start the work from the sync IO device.
Sure, this makes sense!
>
> I’ll try to find some time to review your patch and explore how we can
> better support both software and hardware improvements in zsmalloc/zram
> with a more compatible approach.
> Also, I have a talk [1] at LPC2024 -- would you mind if I include a
> description of your use case?
Sure, this sounds good.
Thanks,
Kanchana
>
> [1] https://lpc.events/event/18/contributions/1780/
>
> >
> > Thanks,
> > Kanchana
>
> Thanks
> Barry