Message-ID: <112a4d7f-bc53-6e59-7bb8-6fecb65d045d@redhat.com>
Date:   Thu, 21 Apr 2022 13:28:20 -0400
From:   Waiman Long <longman@...hat.com>
To:     Roman Gushchin <roman.gushchin@...ux.dev>
Cc:     Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Shakeel Butt <shakeelb@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
        linux-mm@...ck.org, Muchun Song <songmuchun@...edance.com>,
        "Matthew Wilcox (Oracle)" <willy@...radead.org>,
        Yang Shi <shy828301@...il.com>,
        Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH] mm/memcg: Free percpu stats memory of dying memcg's


On 4/21/22 12:33, Roman Gushchin wrote:
> On Thu, Apr 21, 2022 at 10:58:45AM -0400, Waiman Long wrote:
>> For systems with a large number of CPUs, the majority of the memory
>> consumed by the mem_cgroup structure is actually the percpu stats
>> memory. When a large number of memory cgroups are continuously created
>> and destroyed (like in a container host), it is possible that more
>> and more mem_cgroup structures remain in the dying state, holding up
>> an increasing amount of percpu memory.
>>
>> We can't free up the memory of the dying mem_cgroup structure due to
>> active references in some other places. However, the percpu stats memory
>> allocated to that mem_cgroup is a different story.
>>
>> This patch adds a new percpu_stats_disabled variable to keep track of
>> the state of the percpu stats memory. If the variable is set, percpu
>> stats updates will be disabled for that particular memcg. All stats
>> updates will be forwarded to its parent instead. Reading its percpu
>> stats will return 0.
>>
>> The flushing and freeing of the percpu stats memory is a multi-step
>> process. The percpu_stats_disabled variable is set when the memcg is
>> taken offline. After an RCU grace period, the percpu stats data are
>> flushed and then freed.
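
(As an aside for reviewers: the offline sequencing amounts to roughly
the following. This is only an illustrative sketch, not the actual
diff; memcg_flush_offline_percpu(), the embedded rcu_head and the
field names are simplified placeholders.)

static void memcg_free_percpu_rcu(struct rcu_head *rcu)
{
	struct mem_cgroup *memcg = container_of(rcu, struct mem_cgroup, rcu);

	/* Every updater now sees percpu_stats_disabled and goes to the parent. */
	memcg_flush_offline_percpu(memcg);	/* fold percpu counts into the parent */
	free_percpu(memcg->vmstats_percpu);
	memcg->vmstats_percpu = NULL;
}

	/* in the css_offline path */
	WRITE_ONCE(memcg->percpu_stats_disabled, true);
	call_rcu(&memcg->rcu, memcg_free_percpu_rcu);
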
>>
>> This will greatly reduce the amount of memory held up by dying memory
>> cgroups.
>>
>> By running a simple container management tool 2000 times per test
>> run, below are the results of the increases in percpu memory (as
>> reported in /proc/meminfo) and nr_dying_descendants in root's cgroup.stat.
> Hi Waiman!
>
> I've been proposing the same idea some time ago:
> https://lore.kernel.org/all/20190312223404.28665-7-guro@fb.com/T/ .
>
> However, I dropped it, thinking that with many other fixes preventing
> the accumulation of dying cgroups it was not worth the added
> complexity and potential cpu overhead.
>
> I think it ultimately comes down to the number of dying cgroups. If it's low,
> the memory savings are not worth the cpu overhead. If it's high, they are.
> I hope to drive it down significantly in the long term (with lru-pages
> reparenting being the first major milestone), but it might take a while.
>
> I don't have a strong opinion either way, just want to dump my thoughts
> on this.

I have quite a number of customer cases complaining about increasing
percpu memory usage. The number of dying memcg's can reach tens of
thousands. From my own investigation, I believe those dying memcg's
are not freed because they are pinned down by references in the page
structure. I am aware that we support the use of objcg in the page
structure, which allows easy reparenting, but most pages don't use
objcg, the conversion is not easy, and it may take quite a while to
complete.

In terms of overhead, it is mostly one more memory read from the
mem_cgroup structure in the update path. I don't expect there will be
that many updates while the memcg is offline, and those updates will
be a bit slower in that case. Freeing the dying memcg will take a bit
longer, but its impact on overall system performance should still be
negligible.
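
For reference, the hot-path change is basically just that flag test,
something like the sketch below (simplified; the helper and field
names may not match the actual patch):

static inline void memcg_stat_add(struct mem_cgroup *memcg, int idx, int val)
{
	/*
	 * One extra load per update: walk up until we find a memcg that
	 * still has its percpu stats (the root never loses them).
	 */
	while (unlikely(READ_ONCE(memcg->percpu_stats_disabled)))
		memcg = parent_mem_cgroup(memcg);

	this_cpu_add(memcg->vmstats_percpu->state[idx], val);
}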

I am also thinking about using a static_key to turn this on only for
systems with more than, say, 20 CPUs, as the percpu memory overhead
increases linearly with the number of possible CPUs.
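
A rough sketch of that, with the key name and the 20-CPU cutoff being
placeholders only:

static DEFINE_STATIC_KEY_FALSE(memcg_lazy_percpu_free);

static int __init memcg_lazy_percpu_free_init(void)
{
	/* Only bother on systems where the percpu stats memory is sizable. */
	if (num_possible_cpus() > 20)
		static_branch_enable(&memcg_lazy_percpu_free);
	return 0;
}
subsys_initcall(memcg_lazy_percpu_free_init);

	/* in the offline path: skip the whole thing on small systems */
	if (static_branch_unlikely(&memcg_lazy_percpu_free))
		WRITE_ONCE(memcg->percpu_stats_disabled, true);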

Any other suggestions and improvements are welcome.

Cheers,
Longman
