Message-Id: <20240829101918.3454840-1-hezhongkun.hzk@bytedance.com>
Date: Thu, 29 Aug 2024 18:19:16 +0800
From: Zhongkun He <hezhongkun.hzk@...edance.com>
To: akpm@...ux-foundation.org,
hannes@...xchg.org,
mhocko@...nel.org
Cc: roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev,
muchun.song@...ux.dev,
lizefan.x@...edance.com,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org,
Zhongkun He <hezhongkun.hzk@...edance.com>
Subject: [RFC PATCH 0/2] Add disable_unmap_file arg to memory.reclaim
This patch series proposes augmenting the memory.reclaim interface with a
disable_unmap_file argument that makes the reclaim attempt skip mapped
file pages.
For example:
echo "2M disable_unmap_file" > /sys/fs/cgroup/test/memory.reclaim
will try to reclaim 2M from the test cgroup without reclaiming any
mapped file pages.
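As a rough sketch of how this might be wired up on the kernel side (the
token enum, flag name and parsing shown here are illustrative only, not
the actual diff), the keyword could be recognized alongside the existing
"swappiness=" token in memory_reclaim():

/* Illustrative only: hypothetical token and flag names. */
enum {
	MEMORY_RECLAIM_SWAPPINESS = 0,
	MEMORY_RECLAIM_DISABLE_UNMAP_FILE,	/* hypothetical token */
	MEMORY_RECLAIM_NULL,
};

static const match_table_t tokens = {
	{ MEMORY_RECLAIM_SWAPPINESS,		"swappiness=%d" },
	{ MEMORY_RECLAIM_DISABLE_UNMAP_FILE,	"disable_unmap_file" },
	{ MEMORY_RECLAIM_NULL,			NULL },
};

	/* in the option-parsing loop of memory_reclaim(): */
	case MEMORY_RECLAIM_DISABLE_UNMAP_FILE:
		/* hypothetical flag, forwarded down into scan_control */
		reclaim_options |= MEMCG_RECLAIM_NO_UNMAP_FILE;
		break;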
memory.reclaim is a useful interface: it lets user space carry out
proactive memory reclaim, which can improve memory utilization.
In real-world usage we found that, even when there are plenty of
anonymous pages, mapped file pages, despite making up only a small
share of memory, are still reclaimed. This tends to increase refaults
and task latency, because mapped file pages usually hold important
executable code, data and shared libraries. In our tests, skipping
these pages during reclaim reduced task latency.
IMO, it is difficult for the kernel to balance the priorities of the
various page types; there are too many scenarios to consider. For
proactive memory reclaim driven from user space, however, user space
can make this simple policy decision itself.
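To make the intent concrete, here is a rough idea of the gating the two
patches listed below aim at; the bit names match the patch titles, but
the helper itself is hypothetical and only illustrates the direction,
not the actual diff:

#define UNMAP_ANON	0x1
#define UNMAP_FILE	0x2

/*
 * Hypothetical helper: a mapped folio is only eligible for unmapping
 * when the matching bit of scan_control.may_unmap is set, so clearing
 * UNMAP_FILE leaves mapped file pages untouched during this reclaim.
 */
static bool may_unmap_folio(struct scan_control *sc, struct folio *folio)
{
	if (!folio_mapped(folio))
		return true;
	if (folio_test_anon(folio))
		return sc->may_unmap & UNMAP_ANON;
	return sc->may_unmap & UNMAP_FILE;
}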
Zhongkun He (2):
mm: vmscan: modify the semantics of scan_control.may_unmap to
UNMAP_ANON and UNMAP_FILE
mm: memcg: add disable_unmap_file arg to memory.reclaim
include/linux/swap.h | 1 +
mm/memcontrol.c | 9 ++++--
mm/vmscan.c | 65 ++++++++++++++++++++++++++++++++++----------
3 files changed, 59 insertions(+), 16 deletions(-)
--
2.20.1