Message-ID: <SA3PR11MB8120AD2AD0A9208BDA861580C9CF2@SA3PR11MB8120.namprd11.prod.outlook.com>
Date: Sat, 1 Mar 2025 01:09:22 +0000
From: "Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, "hannes@...xchg.org"
<hannes@...xchg.org>, "yosry.ahmed@...ux.dev" <yosry.ahmed@...ux.dev>,
"nphamcs@...il.com" <nphamcs@...il.com>, "chengming.zhou@...ux.dev"
<chengming.zhou@...ux.dev>, "usamaarif642@...il.com"
<usamaarif642@...il.com>, "ryan.roberts@....com" <ryan.roberts@....com>,
"21cnbao@...il.com" <21cnbao@...il.com>, "ying.huang@...ux.alibaba.com"
<ying.huang@...ux.alibaba.com>, "akpm@...ux-foundation.org"
<akpm@...ux-foundation.org>, "linux-crypto@...r.kernel.org"
<linux-crypto@...r.kernel.org>, "herbert@...dor.apana.org.au"
<herbert@...dor.apana.org.au>, "davem@...emloft.net" <davem@...emloft.net>,
"clabbe@...libre.com" <clabbe@...libre.com>, "ardb@...nel.org"
<ardb@...nel.org>, "ebiggers@...gle.com" <ebiggers@...gle.com>,
"surenb@...gle.com" <surenb@...gle.com>, "Accardi, Kristen C"
<kristen.c.accardi@...el.com>, "Sridhar, Kanchana P"
<kanchana.p.sridhar@...el.com>
CC: "Feghali, Wajdi K" <wajdi.k.feghali@...el.com>, "Gopal, Vinodh"
<vinodh.gopal@...el.com>
Subject: RE: [PATCH v7 00/15] zswap IAA compress batching
Hi All,
> Performance testing (Kernel compilation, allmodconfig):
> =======================================================
>
> The experiments with kernel compilation test, 32 threads, in tmpfs use the
> "allmodconfig" that takes ~12 minutes, and has considerable swapout/swapin
> activity. The cgroup's memory.max is set to 2G.
>
>
> 64K folios: Kernel compilation/allmodconfig:
> ============================================
>
> -------------------------------------------------------------------------------
> mm-unstable v7 mm-unstable v7
> -------------------------------------------------------------------------------
> zswap compressor deflate-iaa deflate-iaa zstd zstd
> -------------------------------------------------------------------------------
> real_sec 775.83 765.90 769.39 772.63
> user_sec 15,659.10 15,659.14 15,666.28 15,665.98
> sys_sec 4,209.69 4,040.44 5,277.86 5,358.61
> -------------------------------------------------------------------------------
> Max_Res_Set_Size_KB 1,871,116 1,874,128 1,873,200 1,873,488
> -------------------------------------------------------------------------------
> memcg_high 0 0 0 0
> memcg_swap_fail 0 0 0 0
> zswpout 107,305,181 106,985,511 86,621,912 89,355,274
> zswpin 32,418,991 32,184,517 25,337,514 26,522,042
> pswpout 272 80 94 16
> pswpin 274 69 54 16
> thp_swpout 0 0 0 0
> thp_swpout_fallback 0 0 0 0
> 64kB_swpout_fallback 494 0 0 0
> pgmajfault 34,577,545 34,333,290 26,892,991 28,132,682
> ZSWPOUT-64kB 3,498,796 3,460,751 2,737,544 2,823,211
> SWPOUT-64kB 17 4 4 1
> -------------------------------------------------------------------------------
>
> [...]
>
> Summary:
> ========
> The performance testing data with usemem 30 processes and kernel
> compilation test show 61%-73% throughput gains and 27%-37% sys time
> reduction (usemem30) and 4% sys time reduction (kernel compilation) with
> zswap_store() large folios using IAA compress batching as compared to
> IAA sequential. There is no performance regression for zstd/usemem30 and a
> slight 1.5% sys time zstd regression with kernel compilation allmod
> config.
I think I know why kernel compilation with zstd shows a regression whereas
usemem30 does not: in v7, I lock/unlock the acomp_ctx mutex once per folio.
Decomp jobs can then wait on the mutex for the duration of a whole folio's
compression, which can cause more compressions, and this repeats. Kernel
compilation has 25M+ decomps with zstd, whereas usemem30 has practically no
decomps but is compression-intensive, because of which it benefits from the
once-per-folio lock acquire/release.
I am testing a fix that returns zswap_compress() to doing the mutex
lock/unlock, and I expect to post v8 by end of the day. I would appreciate it
if you could hold off on reviewing only the zswap patches [14, 15] in my v7
and instead review the v8 versions of these two patches.
Thanks!
Kanchana