Message-ID: <CAMgjq7DQmdoQKZeFjpnYQ4wgMx3j-Lu7na+Ghs_Dh=Rq36MDOw@mail.gmail.com>
Date: Sun, 28 Dec 2025 01:49:50 +0800
From: Kairui Song <ryncsn@...il.com>
To: Chen Ridong <chenridong@...weicloud.com>
Cc: akpm@...ux-foundation.org, axelrasmussen@...gle.com, yuanchu@...gle.com,
weixugc@...gle.com, david@...nel.org, lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com, vbabka@...e.cz, rppt@...nel.org, surenb@...gle.com,
mhocko@...e.com, corbet@....net, hannes@...xchg.org, roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev, muchun.song@...ux.dev, zhengqi.arch@...edance.com,
mkoutny@...e.com, linux-mm@...ck.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org, lujialin4@...wei.com
Subject: Re: [PATCH -next v2 0/7] mm/mglru: remove memcg lru
On Wed, Dec 24, 2025 at 3:56 PM Chen Ridong <chenridong@...weicloud.com> wrote:
>
> From: Chen Ridong <chenridong@...wei.com>
>
> The memcg LRU was introduced to improve scalability in global reclaim,
> but its implementation has grown complex and can cause performance
> regressions when creating many memory cgroups [1].
>
> This series implements mem_cgroup_iter with a reclaim cookie in
> shrink_many() for global reclaim, following the pattern already used in
> shrink_node_memcgs(), an approach suggested by Johannes [1]. The new
> design maintains good fairness across cgroups by preserving iteration
> state between reclaim passes.
>
> Testing was performed using the original stress test from Yu Zhao [2] on a
> 1 TB, 4-node NUMA system. The results show:
>
> pgsteal:
>                                     memcg LRU    memcg iter
>   stddev(pgsteal) / mean(pgsteal)     106.03%        93.20%
>   sum(pgsteal) / sum(requested)        98.10%        99.28%
>
> workingset_refault_anon:
>                                     memcg LRU    memcg iter
>   stddev(refault) / mean(refault)     193.97%       134.67%
>   sum(refault)                      1,963,229     2,027,567
Hi Ridong,
Thanks for helping simplify the code; I would also like to see it get simpler.
But refault isn't what the memcg LRU is trying to prevent; the memcg
LRU is intended to reduce the overhead of reclaim. If there are many
memcgs running, the memcg LRU helps reclaim scale by picking the least
recently reclaimed memcg first, and hence reduces the total system
time spent on eviction.
The test you used was only posted to show that the memcg LRU is
effective. The scalability tests were posted elsewhere, both by Yu:
https://lore.kernel.org/all/20221220214923.1229538-1-yuzhao@google.com/
https://lore.kernel.org/all/20221221000748.1374772-1-yuzhao@google.com/
I'm not entirely sure what the performance impact of this series is on
those workloads, but I don't think the test posted here really proves
it either way. Just my two cents.