Message-Id: <20220518223815.809858-1-vaibhav@linux.ibm.com>
Date: Thu, 19 May 2022 04:08:15 +0530
From: Vaibhav Jain <vaibhav@...ux.ibm.com>
To: cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc: Vaibhav Jain <vaibhav@...ux.ibm.com>, Tejun Heo <tj@...nel.org>,
Zefan Li <lizefan.x@...edance.com>,
Johannes Weiner <hannes@...xchg.org>,
Jonathan Corbet <corbet@....net>,
Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Shakeel Butt <shakeelb@...gle.com>,
Yosry Ahmed <yosryahmed@...gle.com>
Subject: [PATCH] memcg: provide reclaim stats via 'memory.reclaim'
[1] provides a way for user-space to trigger proactive reclaim by introducing
a write-only memcg file 'memory.reclaim'. However, reclaim stats such as the
number of pages scanned and reclaimed are still not directly available to
user-space.
This patch proposes to extend [1] by making the memcg file 'memory.reclaim'
readable: reading it returns the number of pages scanned and reclaimed during
the reclaim process, taken from the 'struct vmpressure' associated with each
memcg. This lets user-space assess how successful a proactive reclaim
triggered via 'memory.reclaim' was.
With this patch, the following command flow is expected:
# echo "1M" > memory.reclaim
# cat memory.reclaim
scanned 76
reclaimed 32
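As an illustration of how user-space could consume this output, here is a
minimal shell sketch (not part of the patch; the sample values are taken from
the command flow above) that parses the two nested keys and computes a
reclaim-efficiency figure:

```shell
# Sample output in the format produced by 'cat memory.reclaim';
# the values mirror the command flow in this patch description.
stats=$(printf 'scanned 76\nreclaimed 32\n')

# Pull out the two nested keys.
scanned=$(printf '%s\n' "$stats" | awk '$1 == "scanned" {print $2}')
reclaimed=$(printf '%s\n' "$stats" | awk '$1 == "reclaimed" {print $2}')

# Reclaim efficiency: fraction of scanned pages actually reclaimed.
echo "efficiency: $(( reclaimed * 100 / scanned ))%"
```

On a live system the two reads would come from the cgroup's own
memory.reclaim file after a proactive-reclaim write.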
[1]: https://lore.kernel.org/r/20220425190040.2475377-1-yosryahmed@google.com
Cc: Shakeel Butt <shakeelb@...gle.com>
Cc: Yosry Ahmed <yosryahmed@...gle.com>
Signed-off-by: Vaibhav Jain <vaibhav@...ux.ibm.com>
---
Documentation/admin-guide/cgroup-v2.rst | 15 ++++++++++++---
mm/memcontrol.c | 14 ++++++++++++++
2 files changed, 26 insertions(+), 3 deletions(-)
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 27ebef2485a3..44610165261d 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1209,18 +1209,27 @@ PAGE_SIZE multiple when read back.
utility is limited to providing the final safety net.
memory.reclaim
- A write-only nested-keyed file which exists for all cgroups.
+ A nested-keyed file which exists for all cgroups.
- This is a simple interface to trigger memory reclaim in the
- target cgroup.
+ This is a simple interface to trigger memory reclaim and retrieve
+ reclaim stats in the target cgroup.
This file accepts a single key, the number of bytes to reclaim.
No nested keys are currently supported.
+ Reading the file returns the number of pages scanned and the
+ number of pages reclaimed from the memcg. This information is
+ fetched from the vmpressure info associated with each cgroup.
+
Example::
echo "1G" > memory.reclaim
+ cat memory.reclaim
+
+ scanned 78
+ reclaimed 30
+
The interface can be later extended with nested keys to
configure the reclaim behavior. For example, specify the
type of memory to reclaim from (anon, file, ..).
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2e2bfbed4717..9e43580a8726 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6423,6 +6423,19 @@ static ssize_t memory_oom_group_write(struct kernfs_open_file *of,
return nbytes;
}
+static int memory_reclaim_show(struct seq_file *m, void *v)
+{
+ struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
+ struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
+
+ spin_lock(&vmpr->sr_lock);
+ seq_printf(m, "scanned %lu\nreclaimed %lu\n",
+ vmpr->scanned, vmpr->reclaimed);
+ spin_unlock(&vmpr->sr_lock);
+
+ return 0;
+}
+
static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
size_t nbytes, loff_t off)
{
@@ -6525,6 +6538,7 @@ static struct cftype memory_files[] = {
.name = "reclaim",
.flags = CFTYPE_NS_DELEGATABLE,
.write = memory_reclaim,
+ .seq_show = memory_reclaim_show,
},
{ } /* terminate */
};
--
2.35.1