Message-Id: <20240621-zsmalloc-lock-mm-everything-v2-0-d30e9cd2b793@linux.dev>
Date: Fri, 21 Jun 2024 15:15:08 +0800
From: Chengming Zhou <chengming.zhou@...ux.dev>
To: Minchan Kim <minchan@...nel.org>, 
 Sergey Senozhatsky <senozhatsky@...omium.org>, 
 Andrew Morton <akpm@...ux-foundation.org>, 
 Johannes Weiner <hannes@...xchg.org>, Yosry Ahmed <yosryahmed@...gle.com>, 
 Nhat Pham <nphamcs@...il.com>
Cc: Yu Zhao <yuzhao@...gle.com>, Takero Funaki <flintglass@...il.com>, 
 Chengming Zhou <zhouchengming@...edance.com>, 
 Dan Carpenter <dan.carpenter@...aro.org>, linux-mm@...ck.org, 
 linux-kernel@...r.kernel.org, Chengming Zhou <chengming.zhou@...ux.dev>
Subject: [PATCH v2 0/2] mm/zsmalloc: change back to per-size_class lock

Changes in v2:
- Fix error handling in zswap_pool_create(), thanks Dan Carpenter.
- Add Reviewed-by tag from Nhat, thanks.
- Improve the changelog's explanation of the other backends, per Yu Zhao.
- Link to v1: https://lore.kernel.org/r/20240617-zsmalloc-lock-mm-everything-v1-0-5e5081ea11b3@linux.dev

Commit c0547d0b6a4b ("zsmalloc: consolidate zs_pool's migrate_lock and
size_class's locks") changed the per-size_class locks to a single pool
spinlock to prepare for reclaim support in zsmalloc. Reclaim support in
zsmalloc was later dropped in favor of LRU reclaim in zswap, but the
locking change was left in place.

Obviously, a single pool spinlock scales worse than per-size_class
locks. Zswap works around this by using 32 pools, which avoids the
scalability problem but brings its own issues, such as memory waste and
more memory fragmentation.
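
The 32-pool workaround roughly works like the sketch below: entries are
hashed to one of N backend pools, so unrelated allocations end up on
different pool locks. The names and the exact hashing here are
illustrative, not the precise zswap code:

	#include <linux/hash.h>		/* hash_ptr() */
	#include <linux/log2.h>		/* ilog2() */
	#include <linux/zpool.h>	/* struct zpool */

	#define NR_BACKEND_POOLS 32	/* mirrors the 32-pool workaround */

	struct multi_pool {
		struct zpool *zpools[NR_BACKEND_POOLS];
	};

	/* Pick a backend pool for an entry by hashing its pointer, so
	 * concurrent stores spread across NR_BACKEND_POOLS locks. */
	static struct zpool *pick_zpool(struct multi_pool *mp, void *entry)
	{
		return mp->zpools[hash_ptr(entry, ilog2(NR_BACKEND_POOLS))];
	}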

So this series changes back to the per-size_class lock and uses test
data from a heavily stressed workload to verify that zswap can use just
one pool. Note that we only test and care about the zsmalloc backend,
which makes sense now that zsmalloc has become far more popular than the
other backends.

Testing: kernel build (make bzImage -j32) on tmpfs with memory.max=1GB,
with the zswap shrinker enabled and a 10GB swapfile on ext4.

                             real (s)   user (s)   sys (s)
6.10.0-rc3                    138.18    1241.38    1452.73
6.10.0-rc3-onepool            149.45    1240.45    1844.69
6.10.0-rc3-onepool-perclass   138.23    1242.37    1469.71

We can see from the "sys" column that per-size_class locking with only
one pool in zswap performs nearly as well as the current 32 pools.

Signed-off-by: Chengming Zhou <chengming.zhou@...ux.dev>
---
Chengming Zhou (2):
      mm/zsmalloc: change back to per-size_class lock
      mm/zswap: use only one pool in zswap

 mm/zsmalloc.c | 85 +++++++++++++++++++++++++++++++++++------------------------
 mm/zswap.c    | 60 +++++++++++++----------------------------
 2 files changed, 69 insertions(+), 76 deletions(-)
---
base-commit: 7c4c5a2ebbcea9031dbb130bb529c8eba025b16a
change-id: 20240617-zsmalloc-lock-mm-everything-387ada6e3ac9

Best regards,
-- 
Chengming Zhou <chengming.zhou@...ux.dev>

