Message-ID: <CAMZfGtVYX=SoHsqRPFeqY4JK=M3cq2VuXJrkns=Q2rQGVZnCnA@mail.gmail.com>
Date: Wed, 12 Jan 2022 12:48:00 +0800
From: Muchun Song <songmuchun@...edance.com>
To: Roman Gushchin <guro@...com>
Cc: Matthew Wilcox <willy@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Shakeel Butt <shakeelb@...gle.com>,
Yang Shi <shy828301@...il.com>, Alex Shi <alexs@...nel.org>,
Wei Yang <richard.weiyang@...il.com>,
Dave Chinner <david@...morbit.com>,
trond.myklebust@...merspace.com, anna.schumaker@...app.com,
jaegeuk@...nel.org, chao@...nel.org,
Kari Argillander <kari.argillander@...il.com>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-nfs@...r.kernel.org, Qi Zheng <zhengqi.arch@...edance.com>,
Xiongchun duan <duanxiongchun@...edance.com>,
Fam Zheng <fam.zheng@...edance.com>,
Muchun Song <smuchun@...il.com>
Subject: Re: [PATCH v5 10/16] mm: list_lru: allocate list_lru_one only when needed
On Wed, Jan 12, 2022 at 4:00 AM Roman Gushchin <guro@...com> wrote:
>
> On Mon, Dec 20, 2021 at 04:56:43PM +0800, Muchun Song wrote:
> > On one of our servers, we found a suspected memory leak: the
> > kmalloc-32 slab cache consumes more than 6GB of memory, while other
> > kmem_caches consume less than 2GB of memory.
> >
> > After in-depth analysis, we found that the memory consumed by the
> > kmalloc-32 slab cache comes from list_lru_one allocations.
> >
> > crash> p memcg_nr_cache_ids
> > memcg_nr_cache_ids = $2 = 24574
> >
> > memcg_nr_cache_ids is very large, and the memory consumption of each
> > list_lru can be calculated with the following formula:
> >
> > num_numa_node * memcg_nr_cache_ids * 32 (kmalloc-32)
> >
> > There are 4 NUMA nodes in our system, so each list_lru consumes ~3MB.
> >
> > crash> list super_blocks | wc -l
> > 952
> >
> > Every mount registers 2 list_lrus: one for inodes and another for
> > dentries. There are 952 super_blocks, so the total memory is
> > 952 * 2 * 3 MB (~5.6GB). But the number of memory cgroups is less
> > than 500, so I guess more than 12286 containers have been deployed on
> > this machine (I do not know why there are so many containers; it may
> > be a user bug, or the user really wants to do that), and
> > memcg_nr_cache_ids has not been reduced to a suitable value. This can
> > waste a lot of memory.
>
> But on the other side, you increase the size of struct list_lru_per_memcg,
> so if the number of cgroups is close to memcg_nr_cache_ids, we can
> actually waste more memory.
The saving comes from the fact that we currently allocate, for every
memcg, the capacity to be tracked on every superblock instantiated in
the system, regardless of whether that superblock is even accessible to
that memcg. In theory, the increase in the size of struct
list_lru_per_memcg is not significant; most of the savings come from
decreasing the number of allocations of struct list_lru_per_memcg.
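
To illustrate the shape of the change, here is a simplified userspace
sketch (not the kernel code itself; the structure layout and names are
made up for illustration, and the fixed pointer array stands in for
whatever indexing the series actually uses):

#include <stdlib.h>

#define MEMCG_NR_CACHE_IDS 24574       /* from the crash dump above */

struct list_lru_one {
        long nr_items;                 /* list head etc. elided */
};

struct list_lru_memcg {
        /* one slot per memcg id; NULL until that memcg touches this lru */
        struct list_lru_one *lru[MEMCG_NR_CACHE_IDS];
};

/* Old scheme: allocate a list_lru_one for every memcg id up front,
 * on every superblock, whether or not it is ever used. */
static int lru_init_eager(struct list_lru_memcg *m)
{
        for (long i = 0; i < MEMCG_NR_CACHE_IDS; i++) {
                m->lru[i] = calloc(1, sizeof(struct list_lru_one));
                if (!m->lru[i])
                        return -1;
        }
        return 0;
}

/* New scheme: allocate only on first use by a given memcg. */
static struct list_lru_one *lru_get_lazy(struct list_lru_memcg *m, long id)
{
        if (!m->lru[id])
                m->lru[id] = calloc(1, sizeof(struct list_lru_one));
        return m->lru[id];
}

With ~24k memcg ids and 952 superblocks, the eager scheme pays for
every (memcg, superblock) pair up front; the lazy scheme only pays for
pairs that are actually used.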
> I'm not saying the change is not worth it, but it would be
> nice to add some real-world numbers.
OK. I will do a test.
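
In the meantime, the back-of-the-envelope figures quoted above can be
reproduced with a trivial standalone program (all numbers are taken
from the crash output earlier in the thread; this is arithmetic only,
not a measurement):

#include <stdio.h>

int main(void)
{
        long nr_numa_nodes = 4;           /* nodes in the reported system */
        long memcg_nr_cache_ids = 24574;  /* crash> p memcg_nr_cache_ids */
        long obj_size = 32;               /* kmalloc-32 object size */
        long nr_super_blocks = 952;       /* crash> list super_blocks | wc -l */
        long lrus_per_sb = 2;             /* inode lru + dentry lru */

        long per_lru = nr_numa_nodes * memcg_nr_cache_ids * obj_size;
        long total = nr_super_blocks * lrus_per_sb * per_lru;

        printf("per list_lru: %.1f MB\n", per_lru / (1024.0 * 1024.0));
        printf("total:        %.2f GB\n", total / (1024.0 * 1024.0 * 1024.0));
        return 0;
}

This prints ~3.0 MB per list_lru and ~5.6 GB in total, matching the
figures in the commit message.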
>
> Or is it all irrelevant and done as a preparation for the conversion to xarray?
Right. It's also a preparation for the conversion to xarray.
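
(For readers of the archive: "conversion to xarray" means replacing the
fixed per-memcg-id array with an xarray indexed by the memcg's kmem id,
so that slots exist only for memcgs that have actually used the lru.
Roughly, and only as a hedged sketch rather than the actual patch, with
illustrative field and function names:

struct list_lru {
        struct xarray xa;  /* memcg kmem id -> struct list_lru_one * */
        /* ... */
};

static struct list_lru_one *
list_lru_from_memcg_idx(struct list_lru *lru, int idx)
{
        return xa_load(&lru->xa, idx);  /* NULL if never allocated */
}

xa_load() and xa_store() are the standard xarray accessors.)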
> If so, please, make it clear.
Will do.
Thanks.