Message-ID: <20201110010615.1273043-1-guro@fb.com>
Date: Mon, 9 Nov 2020 17:06:15 -0800
From: Roman Gushchin <guro@...com>
To: Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>
CC: Shakeel Butt <shakeelb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
<linux-kernel@...r.kernel.org>, <kernel-team@...com>,
Roman Gushchin <guro@...com>
Subject: [PATCH] mm: memcg/slab: enable slab memory accounting atomically

Many kernel memory accounting paths are guarded by the
memcg_kmem_enabled_key static key. It changes its state during
the onlining of the first non-root cgroup. However, this doesn't
happen atomically: before all call sites are patched,
some charges/uncharges can be skipped, resulting in an unbalanced
charge. The problem is mostly theoretical and unlikely
to have a noticeable impact in real life.

Before the rework of the slab controller we relied on setting
kmemcg_id after enabling the memcg_kmem_enabled_key static key.
Now we can use the setting of memcg->objcg to enable slab
memory accounting atomically.

The patch also removes obsolete comments related to the already deleted
members of kmem_cache->memcg_params.

Signed-off-by: Roman Gushchin <guro@...com>
Fixes: 10befea91b61 ("mm: memcg/slab: use a single set of kmem_caches for all allocations")
---
include/linux/memcontrol.h | 6 ++----
mm/memcontrol.c | 7 ++++---
2 files changed, 6 insertions(+), 7 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 20108e426f84..01099dfa839c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -310,7 +310,6 @@ struct mem_cgroup {
int tcpmem_pressure;
#ifdef CONFIG_MEMCG_KMEM
- /* Index in the kmem_cache->memcg_params.memcg_caches array */
int kmemcg_id;
enum memcg_kmem_state kmem_state;
struct obj_cgroup __rcu *objcg;
@@ -1641,9 +1640,8 @@ static inline void memcg_kmem_uncharge_page(struct page *page, int order)
}
/*
- * helper for accessing a memcg's index. It will be used as an index in the
- * child cache array in kmem_cache, and also to derive its name. This function
- * will return -1 when this is not a kmem-limited memcg.
+ * A helper for accessing memcg's kmem_id, used for getting
+ * corresponding LRU lists.
*/
static inline int memcg_cache_id(struct mem_cgroup *memcg)
{
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 69a2893a6455..267cc68fba05 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3675,17 +3675,18 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
memcg_free_cache_id(memcg_id);
return -ENOMEM;
}
- objcg->memcg = memcg;
- rcu_assign_pointer(memcg->objcg, objcg);
static_branch_enable(&memcg_kmem_enabled_key);
/*
* A memory cgroup is considered kmem-online as soon as it gets
- * kmemcg_id. Setting the id after enabling static branching will
+ * objcg. Setting the objcg after enabling static branching will
* guarantee no one starts accounting before all call sites are
* patched.
*/
+ objcg->memcg = memcg;
+ rcu_assign_pointer(memcg->objcg, objcg);
+
memcg->kmemcg_id = memcg_id;
memcg->kmem_state = KMEM_ONLINE;
--
2.26.2