Message-ID: <20230911075437.74027-1-zeil@nebius.com>
Date: Mon, 11 Sep 2023 07:55:09 +0000
From: "Yakunin, Dmitry (Nebius)" <zeil@...ius.com>
To: "cgroups@...r.kernel.org" <cgroups@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
CC: NB-Core Team <NB-CoreTeam@...ius.com>,
"tj@...nel.org" <tj@...nel.org>,
"hannes@...xchg.org" <hannes@...xchg.org>,
"mhocko@...nel.org" <mhocko@...nel.org>,
"Yakunin, Dmitry (Nebius)" <zeil@...ius.com>
Subject: [RFC PATCH 0/3] Helpers for debugging dying cgroups

This patch series is mostly based on Konstantin's patches, which he sent
years ago [1].

This functionality still seems very useful for debugging discrepancies
between the entities visible in cgroupfs and the counters in /proc/cgroups,
e.g. for finding files whose page cache prevents a memcg from being destroyed.

I saw the comments in the original thread but didn't understand Tejun's
comment about using a file handle instead of an inode number. I also kept
the original debugfs output format, with extra counters added. We can rework
this format in the future, but for now it seems straightforward to filter
with command-line utilities.
[1] https://lore.kernel.org/lkml/153414348591.737150.14229960913953276515.stgit@buzz/
Dmitry Yakunin (3):
cgroup: list all subsystem states in debugfs files
proc/kpagecgroup: report also inode numbers of offline cgroups
tools/mm/page-types: add flag for showing inodes of offline cgroups
fs/proc/page.c | 24 ++++++++-
include/linux/cgroup-defs.h | 1 +
include/linux/memcontrol.h | 2 +-
kernel/cgroup/cgroup.c | 101 ++++++++++++++++++++++++++++++++++++
mm/memcontrol.c | 19 ++++++-
mm/memory-failure.c | 2 +-
tools/mm/page-types.c | 18 ++++++-
7 files changed, 159 insertions(+), 8 deletions(-)
--
2.25.1