Message-ID: <20240203044612.1234216-1-yosryahmed@google.com>
Date: Sat, 3 Feb 2024 04:46:12 +0000
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>, cgroups@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Yosry Ahmed <yosryahmed@...gle.com>,
Greg Thelen <gthelen@...gle.com>
Subject: [PATCH mm-hotfixes-unstable v2] mm: memcg: fix struct
memcg_vmstats_percpu size and alignment

Commit da10d7e14019 ("mm: memcg: optimize parent iteration in
memcg_rstat_updated()") added two pointers to the end of
struct memcg_vmstats_percpu, preceded by CACHELINE_PADDING() to put them
in a separate cacheline. This grew the struct from 1200 to 1280 bytes on
my config (80 extra bytes instead of the expected 16).
Upon revisiting, the relevant struct members do not need to be on a
separate cacheline; they only need to fit together in a single one. This
is a percpu struct, so there shouldn't be any contention on that
cacheline anyway. Move the members to the beginning of the struct and
make sure the struct itself is cacheline aligned. Add a comment about
the members that need to fit together in a cacheline.
The struct size is now 1216 on my config with this change.
Fixes: da10d7e14019 ("mm: memcg: optimize parent iteration in memcg_rstat_updated()")
Reported-by: Greg Thelen <gthelen@...gle.com>
Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
---
v1 -> v2:
- Moved ____cacheline_aligned to the end of the struct definition as
recommended by Shakeel.
v1: https://lore.kernel.org/lkml/20240203003414.1067730-1-yosryahmed@google.com/
---
mm/memcontrol.c | 21 ++++++++++-----------
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d9ca0fdbe4ab0..1ed40f9d3a277 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -621,6 +621,15 @@ static inline int memcg_events_index(enum vm_event_item idx)
}
struct memcg_vmstats_percpu {
+ /* Stats updates since the last flush */
+ unsigned int stats_updates;
+
+ /* Cached pointers for fast iteration in memcg_rstat_updated() */
+ struct memcg_vmstats_percpu *parent;
+ struct memcg_vmstats *vmstats;
+
+ /* The above should fit a single cacheline for memcg_rstat_updated() */
+
/* Local (CPU and cgroup) page state & events */
long state[MEMCG_NR_STAT];
unsigned long events[NR_MEMCG_EVENTS];
@@ -632,17 +641,7 @@ struct memcg_vmstats_percpu {
/* Cgroup1: threshold notifications & softlimit tree updates */
unsigned long nr_page_events;
unsigned long targets[MEM_CGROUP_NTARGETS];
-
- /* Fit members below in a single cacheline for memcg_rstat_updated() */
- CACHELINE_PADDING(_pad1_);
-
- /* Stats updates since the last flush */
- unsigned int stats_updates;
-
- /* Cached pointers for fast iteration in memcg_rstat_updated() */
- struct memcg_vmstats_percpu *parent;
- struct memcg_vmstats *vmstats;
-};
+} ____cacheline_aligned;
struct memcg_vmstats {
/* Aggregated (CPU and subtree) page state & events */
--
2.43.0.594.gd9cf4e227d-goog