Message-ID: <YmGHYNuAp8957ouq@carbon>
Date: Thu, 21 Apr 2022 09:33:36 -0700
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Waiman Long <longman@...hat.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Shakeel Butt <shakeelb@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
linux-mm@...ck.org, Muchun Song <songmuchun@...edance.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Yang Shi <shy828301@...il.com>,
Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH] mm/memcg: Free percpu stats memory of dying memcg's
On Thu, Apr 21, 2022 at 10:58:45AM -0400, Waiman Long wrote:
> For systems with a large number of CPUs, the majority of the memory
> consumed by the mem_cgroup structure is actually the percpu stats
> memory. When a large number of memory cgroups are continuously created
> and destroyed (like in a container host), more and more mem_cgroup
> structures can remain in the dying state, holding up an increasing
> amount of percpu memory.
>
> We can't free up the memory of the dying mem_cgroup structure due to
> active references in some other places. However, the percpu stats memory
> allocated to that mem_cgroup is a different story.
>
> This patch adds a new percpu_stats_disabled variable to keep track of
> the state of the percpu stats memory. If the variable is set, percpu
> stats updates will be disabled for that particular memcg. All stats
> updates will be forwarded to its parent instead, and reads of its
> percpu stats will return 0.
>
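
As a minimal sketch of the update and read paths this describes (the
patch itself isn't quoted here, so memcg_stat_update(), memcg_stat_read()
and the percpu_stats_disabled field are illustrative assumptions, not
the actual code):

static void memcg_stat_update(struct mem_cgroup *memcg, int idx, int val)
{
	/* Redirect updates for a dying memcg to its nearest live ancestor. */
	while (memcg && READ_ONCE(memcg->percpu_stats_disabled))
		memcg = parent_mem_cgroup(memcg);

	if (memcg)
		this_cpu_add(memcg->vmstats_percpu->state[idx], val);
}

static long memcg_stat_read(struct mem_cgroup *memcg, int idx)
{
	long sum = 0;
	int cpu;

	/* Percpu stats of a dying memcg simply read as 0. */
	if (READ_ONCE(memcg->percpu_stats_disabled))
		return 0;

	for_each_possible_cpu(cpu)
		sum += per_cpu_ptr(memcg->vmstats_percpu, cpu)->state[idx];
	return sum;
}
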
> Flushing and freeing the percpu stats memory is a multi-step process.
> The percpu_stats_disabled variable is set when the memcg is taken
> offline. After an RCU grace period, the percpu stats data are flushed
> and then freed.
>
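
A rough sketch of that teardown sequence (again illustrative only: the
percpu_stats_rcu field and memcg_flush_stats_to_parent() are assumed
names standing in for whatever the patch actually uses):

static void percpu_stats_free_rcu(struct rcu_head *rcu)
{
	struct mem_cgroup *memcg = container_of(rcu, struct mem_cgroup,
						percpu_stats_rcu);

	/* Fold any residual percpu counts into the parent, then release. */
	memcg_flush_stats_to_parent(memcg);
	free_percpu(memcg->vmstats_percpu);
	memcg->vmstats_percpu = NULL;
}

static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
{
	struct mem_cgroup *memcg = mem_cgroup_from_css(css);

	/* New updaters will see the flag and redirect to the parent. */
	WRITE_ONCE(memcg->percpu_stats_disabled, true);

	/* Defer the flush/free until a grace period has elapsed. */
	call_rcu(&memcg->percpu_stats_rcu, percpu_stats_free_rcu);
}

The grace period is what makes the free safe: any updater that sampled
the flag as clear and is still touching the percpu area must have
finished before the callback runs.
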
> This will greatly reduce the amount of memory held up by dying memory
> cgroups.
>
> By running a simple container management tool 2000 times per test run,
> below are the resulting increases in percpu memory (as reported in
> /proc/meminfo) and in nr_dying_descendants in the root's cgroup.stat.
Hi Waiman!
I proposed the same idea some time ago:
https://lore.kernel.org/all/20190312223404.28665-7-guro@fb.com/T/ .
However, I eventually dropped it, reasoning that with the many other
fixes preventing the accumulation of dying cgroups, it wasn't worth the
added complexity and the potential CPU overhead.
I think it ultimately comes down to the number of dying cgroups. If it's
low, the memory savings are not worth the CPU overhead; if it's high,
they are. Long-term I hope to drive that number down significantly (with
LRU page reparenting being the first major milestone), but it might take
a while.
I don't have a strong opinion either way; I just wanted to dump my
thoughts on this.
Thanks!