Message-Id: <20201208095132.79383-1-songmuchun@bytedance.com>
Date: Tue, 8 Dec 2020 17:51:32 +0800
From: Muchun Song <songmuchun@...edance.com>
To: hannes@...xchg.org, mhocko@...nel.org, vdavydov.dev@...il.com,
akpm@...ux-foundation.org, shakeelb@...gle.com, guro@...com,
sfr@...b.auug.org.au, chris@...isdown.name, laoar.shao@...il.com,
richard.weiyang@...il.com
Cc: linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
linux-mm@...ck.org, Muchun Song <songmuchun@...edance.com>
Subject: [PATCH v2] mm: memcontrol: optimize per-lruvec stats counter memory usage
The vmstat threshold is 32 (MEMCG_CHARGE_BATCH), so an s32 is wide enough
for the per-cpu counters in lruvec_stat_cpu. Introduce struct
per_cpu_lruvec_stat, which uses s32 counters, to reduce memory usage.
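As a quick bound check (assuming the common 4 KiB page size, an assumption
for this illustration): the BUILD_BUG_ON added below asserts
MEMCG_CHARGE_BATCH <= S32_MAX / PAGE_SIZE, i.e. 32 * 4096 = 131072, far
below S32_MAX (2147483647), so a batched per-cpu delta fits in an s32
before it is flushed.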
The size of struct lruvec_stat is 304 bytes on a 64-bit system, and it is
allocated per-cpu. With this patch we can therefore save 304 / 2 * ncpu
bytes per memcg per node, where ncpu is the number of possible CPUs. If
there are c memory cgroups (including dying cgroups) and n NUMA nodes in
the system, the total saving is (152 * ncpu * c * n) bytes.
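For illustration only (not part of the patch): a minimal user-space sketch
of the arithmetic above, where NR_VM_NODE_STAT_ITEMS == 38 and the
ncpu/c/n values are hypothetical example sizes:

  #include <stdio.h>
  #include <stdint.h>

  #define NR_VM_NODE_STAT_ITEMS 38        /* assumption for this kernel version */

  struct lruvec_stat {                    /* old per-cpu layout: long counters */
          long count[NR_VM_NODE_STAT_ITEMS];
  };

  struct per_cpu_lruvec_stat {            /* new per-cpu layout: s32 counters */
          int32_t count[NR_VM_NODE_STAT_ITEMS];
  };

  int main(void)
  {
          unsigned long saved = sizeof(struct lruvec_stat) -
                                sizeof(struct per_cpu_lruvec_stat); /* 304 - 152 */
          unsigned long ncpu = 64, c = 1000, n = 2;  /* hypothetical system */

          printf("saved per cpu per memcg per node: %lu bytes\n", saved);
          printf("total saving: %lu bytes\n", saved * ncpu * c * n);
          return 0;
  }

With those example values the saving is 152 * 64 * 1000 * 2 bytes, roughly
18.5 MiB.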
Signed-off-by: Muchun Song <songmuchun@...edance.com>
---
Changes in v1 -> v2:
- Update the commit log to point out how many bytes we can save.
include/linux/memcontrol.h | 6 +++++-
mm/memcontrol.c | 10 +++++++++-
2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3febf64d1b80..290d6ec8535a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -92,6 +92,10 @@ struct lruvec_stat {
long count[NR_VM_NODE_STAT_ITEMS];
};
+struct per_cpu_lruvec_stat {
+ s32 count[NR_VM_NODE_STAT_ITEMS];
+};
+
/*
* Bitmap of shrinker::id corresponding to memcg-aware shrinkers,
* which have elements charged to this memcg.
@@ -111,7 +115,7 @@ struct mem_cgroup_per_node {
struct lruvec_stat __percpu *lruvec_stat_local;
/* Subtree VM stats (batched updates) */
- struct lruvec_stat __percpu *lruvec_stat_cpu;
+ struct per_cpu_lruvec_stat __percpu *lruvec_stat_cpu;
atomic_long_t lruvec_stat[NR_VM_NODE_STAT_ITEMS];
unsigned long lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index eec44918d373..da6dc6ca388d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5198,7 +5198,7 @@ static int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
return 1;
}
- pn->lruvec_stat_cpu = alloc_percpu_gfp(struct lruvec_stat,
+ pn->lruvec_stat_cpu = alloc_percpu_gfp(struct per_cpu_lruvec_stat,
GFP_KERNEL_ACCOUNT);
if (!pn->lruvec_stat_cpu) {
free_percpu(pn->lruvec_stat_local);
@@ -7089,6 +7089,14 @@ static int __init mem_cgroup_init(void)
{
int cpu, node;
+ /*
+ * Currently an s32 (see struct per_cpu_lruvec_stat) is used for
+ * per-memcg-per-cpu caching of per-node statistics. For this to work
+ * correctly, make sure that the overfill threshold cannot exceed
+ * S32_MAX / PAGE_SIZE.
+ */
+ BUILD_BUG_ON(MEMCG_CHARGE_BATCH > S32_MAX / PAGE_SIZE);
+
cpuhp_setup_state_nocalls(CPUHP_MM_MEMCQ_DEAD, "mm/memctrl:dead", NULL,
memcg_hotplug_cpu_dead);
--
2.11.0