Message-Id: <20210428094949.43579-1-songmuchun@bytedance.com>
Date:   Wed, 28 Apr 2021 17:49:40 +0800
From:   Muchun Song <songmuchun@...edance.com>
To:     willy@...radead.org, akpm@...ux-foundation.org, hannes@...xchg.org,
        mhocko@...nel.org, vdavydov.dev@...il.com, shakeelb@...gle.com,
        guro@...com, shy828301@...il.com, alexs@...nel.org,
        alexander.h.duyck@...ux.intel.com, richard.weiyang@...il.com
Cc:     linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, Muchun Song <songmuchun@...edance.com>
Subject: [PATCH 0/9] Shrink the list lru size on memory cgroup removal

On one of our servers, we found a suspected memory leak: the kmalloc-32
slab cache consumes more than 6GB of memory, while every other
kmem_cache consumes less than 2GB.

An in-depth analysis showed that the kmalloc-32 consumption comes from
list_lru_one allocations.

  crash> p memcg_nr_cache_ids
  memcg_nr_cache_ids = $2 = 24574

memcg_nr_cache_ids is very large, and the memory consumption of each
list_lru can be calculated with the following formula:

  num_numa_node * memcg_nr_cache_ids * 32 (kmalloc-32)

There are 4 NUMA nodes in our system, so each list_lru consumes
about 3 MB:
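
  4 * 24574 * 32 bytes = 3,145,472 bytes (~3 MB)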

  crash> list super_blocks | wc -l
  952

Every mount registers 2 list lrus: one for inodes and one for
dentries. There are 952 super_blocks, so the total memory is
952 * 2 * 3 MB (~5.6GB). However, there are fewer than 500 memory
cgroups on the system right now, so I guess that more than 12286
containers must have been deployed on this machine at some point (I do
not know why there were so many; it may be a user bug, or the user may
really have wanted to do that). Although fewer than 500 containers
remain, memcg_nr_cache_ids has never been reduced back to a suitable
value, which wastes a lot of memory. Currently, the only way to reduce
memcg_nr_cache_ids is to reboot the server, and that is not what we
want.

This patchset therefore adjusts memcg_nr_cache_ids dynamically to keep
memory consumption healthy: the list lru size is shrunk when memory
cgroups are removed. With it, we may be able to restore a healthy
environment even after a user has created tens of thousands of memory
cgroups and then destroyed them. The patchset also contains some code
simplifications.
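
To illustrate the idea, a rough sketch only (not the code in the
patches): ida_max() is the helper introduced in patch 7 below,
memcg_cache_ida is assumed to be the IDA backing the kmemcg ids,
memcg_update_all_list_lrus() is assumed to shrink as well as grow the
per-memcg arrays after patch 6, and all locking is omitted. The shrink
path on memory cgroup removal could then look roughly like this:

  static void memcg_shrink_cache_ids(void)
  {
          /* Highest ID still allocated, or negative if none. */
          int max_id = ida_max(&memcg_cache_ida);
          int new_nr = max_id < 0 ? 0 : max_id + 1;

          if (new_nr >= memcg_nr_cache_ids)
                  return;         /* nothing to shrink */

          /*
           * Resize the per-memcg arrays of every registered
           * list_lru, then publish the smaller ceiling.
           */
          memcg_update_all_list_lrus(new_nr);
          memcg_nr_cache_ids = new_nr;
  }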

Muchun Song (9):
  mm: list_lru: fix list_lru_count_one() return value
  mm: memcontrol: remove kmemcg_id reparenting
  mm: list_lru: rename memcg_drain_all_list_lrus to
    memcg_reparent_list_lrus
  mm: memcontrol: remove the kmem states
  mm: memcontrol: move memcg_online_kmem() to mem_cgroup_css_online()
  mm: list_lru: support for shrinking list lru
  ida: introduce ida_max() to return the maximum allocated ID
  mm: memcontrol: shrink the list lru size
  mm: memcontrol: rename memcg_{get,put}_cache_ids to
    memcg_list_lru_resize_{lock,unlock}

 include/linux/idr.h        |   1 +
 include/linux/list_lru.h   |   2 +-
 include/linux/memcontrol.h |  15 ++----
 lib/idr.c                  |  40 +++++++++++++++
 mm/list_lru.c              |  89 +++++++++++++++++++++++++--------
 mm/memcontrol.c            | 121 +++++++++++++++++++++++++--------------------
 6 files changed, 183 insertions(+), 85 deletions(-)

-- 
2.11.0
