Message-Id: <20251204123124.1822965-1-chenridong@huaweicloud.com>
Date: Thu,  4 Dec 2025 12:31:22 +0000
From: Chen Ridong <chenridong@...weicloud.com>
To: akpm@...ux-foundation.org,
	axelrasmussen@...gle.com,
	yuanchu@...gle.com,
	weixugc@...gle.com,
	david@...nel.org,
	lorenzo.stoakes@...cle.com,
	Liam.Howlett@...cle.com,
	vbabka@...e.cz,
	rppt@...nel.org,
	surenb@...gle.com,
	mhocko@...e.com,
	corbet@....net,
	hannes@...xchg.org,
	roman.gushchin@...ux.dev,
	shakeel.butt@...ux.dev,
	muchun.song@...ux.dev,
	yuzhao@...gle.com,
	zhengqi.arch@...edance.com
Cc: linux-mm@...ck.org,
	linux-doc@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	cgroups@...r.kernel.org,
	lujialin4@...wei.com,
	chenridong@...wei.com
Subject: [RFC PATCH -next 0/2]  mm/mglru: remove memcg lru

From: Chen Ridong <chenridong@...wei.com>

The memcg LRU was introduced for global reclaim to improve scalability,
but its implementation has grown complex. Moreover, it can cause
performance regressions when creating a large number of memory cgroups [1].

This series switches shrink_many() to walk memcgs with mem_cgroup_iter()
and a reclaim cookie for global reclaim, following the pattern already
established in shrink_node_memcgs(), as suggested by Johannes [1]. Because
the cookie preserves iteration state between reclaim passes, each pass
resumes where the previous one left off, which keeps reclaim pressure fair
across cgroups.
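
For reference, the cookie-based iteration looks roughly like the sketch
below. It is simplified from the existing shrink_node_memcgs() loop in
mm/vmscan.c and reuses the MGLRU per-lruvec helper shrink_one(); the
actual shrink_many() in patch 1 may differ in detail:

	/* Sketch only -- not the actual patch. */
	static void shrink_many(pg_data_t *pgdat, struct scan_control *sc)
	{
		struct mem_cgroup *target = sc->target_mem_cgroup;
		/*
		 * The cookie records where the previous walk stopped on
		 * this node, so the next reclaim pass resumes there
		 * instead of restarting from the first cgroup.
		 */
		struct mem_cgroup_reclaim_cookie reclaim = {
			.pgdat = pgdat,
		};
		struct mem_cgroup *memcg;

		/* For global reclaim, target is NULL: walk the whole tree. */
		memcg = mem_cgroup_iter(target, NULL, &reclaim);
		do {
			struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);

			/* Existing MGLRU aging/eviction for one lruvec. */
			shrink_one(lruvec, sc);

			if (sc->nr_reclaimed >= sc->nr_to_reclaim) {
				/* Drop the iterator's css reference. */
				mem_cgroup_iter_break(target, memcg);
				break;
			}
		} while ((memcg = mem_cgroup_iter(target, memcg, &reclaim)));
	}

The iterator state behind the cookie is kept per root memcg and node, so
consecutive reclaim passes pick up where the previous one stopped rather
than hitting the same cgroups first every time.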

Testing was performed using the original stress test from Yu Zhao [2] on a
1 TB, 4-node NUMA system. The results show:

                                            before         after
    stddev(pgsteal) / mean(pgsteal)            91.2%         75.7%
    sum(pgsteal) / sum(requested)             216.4%        230.5%

The new implementation lowers stddev(pgsteal)/mean(pgsteal) by 15.5
percentage points, indicating that reclaim is distributed more fairly
across cgroups. The total pages reclaimed increased from 85,086,871 to
90,633,890 (a 6.5% increase), which accounts for the higher ratio of
actual to requested reclaim.
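
Spelled out from the raw numbers:

    fairness:    91.2% - 75.7%            = 15.5 percentage points
    throughput:  90,633,890 / 85,086,871 ~= 1.065, i.e. ~6.5% more pages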

To simplify review:
- Patch 1 uses mem_cgroup_iter with reclaim cookie in shrink_many()
- Patch 2 removes the now-unused memcg LRU code

[1] https://lore.kernel.org/r/20251126171513.GC135004@cmpxchg.org
[2] https://lore.kernel.org/r/20221222041905.2431096-7-yuzhao@google.com

Chen Ridong (2):
  mm/mglru: use mem_cgroup_iter for global reclaim
  mm/mglru: remove memcg lru

 Documentation/mm/multigen_lru.rst |  30 ----
 include/linux/mmzone.h            |  89 ----------
 mm/memcontrol-v1.c                |   6 -
 mm/memcontrol.c                   |   4 -
 mm/mm_init.c                      |   1 -
 mm/vmscan.c                       | 270 ++++--------------------------
 6 files changed, 37 insertions(+), 363 deletions(-)

-- 
2.34.1

