Message-ID: <20240723171244.747521-1-roman.gushchin@linux.dev>
Date: Tue, 23 Jul 2024 17:12:44 +0000
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Andrew Morton <akpm@...ux-foundation.org>,
Shakeel Butt <shakeel.butt@...ux.dev>
Cc: linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Muchun Song <muchun.song@...ux.dev>,
Roman Gushchin <roman.gushchin@...ux.dev>,
kernel test robot <oliver.sang@...el.com>
Subject: [PATCH] mm: memcg: add cacheline padding after lruvec in mem_cgroup_per_node

Oliver Sang reported a performance regression caused by
commit 98c9daf5ae6b ("mm: memcg: guard memcg1-specific members of struct
mem_cgroup_per_node"), which puts some fields of the
mem_cgroup_per_node structure under the CONFIG_MEMCG_V1 config option.
Apparently it causes false cache line sharing between the lruvec and
lru_zone_size members of the structure. Fix it by adding explicit
padding after the lruvec member.
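
To illustrate the effect outside the kernel, here is a minimal
user-space sketch (illustrative only; the names and sizes are made up,
this is not kernel code): two counters updated by different threads
run noticeably slower when they share a cache line than when explicit
padding separates them. Build with gcc -O2 -pthread, and add -DPADDED
to insert the padding.

#include <pthread.h>
#include <stdio.h>

#define CACHE_LINE 64
#define ITERS (1UL << 26)

struct counters {
	volatile unsigned long hot_a;	/* written by thread A */
#ifdef PADDED
	char pad[CACHE_LINE];		/* push hot_b onto its own line */
#endif
	volatile unsigned long hot_b;	/* written by thread B */
};

static struct counters c;

static void *bump_a(void *arg)
{
	(void)arg;
	for (unsigned long i = 0; i < ITERS; i++)
		c.hot_a++;
	return NULL;
}

static void *bump_b(void *arg)
{
	(void)arg;
	for (unsigned long i = 0; i < ITERS; i++)
		c.hot_b++;
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, bump_a, NULL);
	pthread_create(&b, NULL, bump_b, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("%lu %lu\n", c.hot_a, c.hot_b);
	return 0;
}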

Even though the padding is not required with CONFIG_MEMCG_V1 set,
the introduced memory overhead does not seem significant enough to
warrant another divergence in the mem_cgroup_per_node layout, so the
padding is added unconditionally.
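
For reference, CACHELINE_PADDING() is defined in include/linux/cache.h
roughly as below: on SMP it expands to a zero-size, cacheline-aligned
member that pushes the following field onto a fresh cache line, and on
!SMP it compiles away entirely, so the unconditional padding is free on
UP kernels.

#if defined(CONFIG_SMP)
struct cacheline_padding {
	char x[0];
} ____cacheline_internodealigned_in_smp;
#define CACHELINE_PADDING(name)	struct cacheline_padding name
#else
#define CACHELINE_PADDING(name)
#endif
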
Fixes: 98c9daf5ae6b ("mm: memcg: guard memcg1-specific members of struct mem_cgroup_per_node")
Reported-by: kernel test robot <oliver.sang@...el.com>
Closes: https://lore.kernel.org/oe-lkp/202407121335.31a10cb6-oliver.sang@intel.com
Tested-by: Oliver Sang <oliver.sang@...el.com>
Signed-off-by: Roman Gushchin <roman.gushchin@...ux.dev>
---
include/linux/memcontrol.h | 1 +
1 file changed, 1 insertion(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 7e2eb091049a..0e5bf25d324f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -109,6 +109,7 @@ struct mem_cgroup_per_node {
 	/* Fields which get updated often at the end. */
 	struct lruvec		lruvec;
+	CACHELINE_PADDING(_pad2_);
 	unsigned long		lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS];
 	struct mem_cgroup_reclaim_iter	iter;
 };
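
As a quick sanity check of the resulting layout, pahole can print each
member's offset together with the cache line boundaries:

  $ pahole -C mem_cgroup_per_node vmlinux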
--
2.45.2.1089.g2a221341d9-goog