Message-Id: <20250628165144.55528-9-sj@kernel.org>
Date: Sat, 28 Jun 2025 09:51:41 -0700
From: SeongJae Park <sj@...nel.org>
To:
Cc: SeongJae Park <sj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
damon@...ts.linux.dev,
kernel-team@...a.com,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: [RFC PATCH 08/11] mm/damon/lru_sort: support active:inactive memory ratio based auto-tuning

Doing DAMOS_LRU_[DE]PRIO with DAMOS_QUOTA_[IN]ACTIVE_MEM_BP based quota
auto-tuning can be useful.  For example, users can ask DAMON to "find
hot and cold pages, and activate and deactivate those, aiming for a
50:50 active:inactive memory size ratio".  But DAMON_LRU_SORT provides
no interface for doing so.  Add a module parameter for setting the
target active memory ratio.
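
For example, assuming the parameter is exposed under the usual
/sys/module path for module parameters, asking for the 50:50 ratio
could look like below.  This is only a sketch; the path and the
'enabled' parameter follow the existing DAMON_LRU_SORT usage.

    # echo 5000 > /sys/module/damon_lru_sort/parameters/active_mem_bp
    # echo Y > /sys/module/damon_lru_sort/parameters/enabled

With the above, the hot scheme's quota is auto-tuned aiming at 50%
active memory, while the cold scheme's quota is auto-tuned aiming at
50.02% inactive memory.  The small (0.02%) overlap of the two goals is
intentional, to keep ping-pong between the two schemes small.
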
Signed-off-by: SeongJae Park <sj@...nel.org>
---
mm/damon/lru_sort.c | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)

diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
index 3ccde23a8566..99c5a22cf3f2 100644
--- a/mm/damon/lru_sort.c
+++ b/mm/damon/lru_sort.c
@@ -41,6 +41,20 @@ static bool enabled __read_mostly;
static bool commit_inputs __read_mostly;
module_param(commit_inputs, bool, 0600);

+/*
+ * Desired ratio of active memory to the active plus inactive memory, in bp
+ * (1/10,000).
+ *
+ * While respecting the caps set by other quotas, DAMON_LRU_SORT
+ * automatically raises and lowers the effective quota levels, aiming for LRU
+ * [de]prioritizations of hot and cold memory that result in this active
+ * memory ratio.  Value zero means this auto-tuning is disabled.
+ *
+ * Disabled by default.
+ */
+static unsigned long active_mem_bp __read_mostly;
+module_param(active_mem_bp, ulong, 0600);
+
/*
* Filter [none-]young pages accordingly for LRU [de]prioritizations.
*
@@ -201,6 +215,26 @@ static struct damos *damon_lru_sort_new_cold_scheme(unsigned int cold_thres)
	return damon_lru_sort_new_scheme(&pattern, DAMOS_LRU_DEPRIO);
}

+static int damon_lru_sort_add_quota_goals(struct damos *hot_scheme,
+		struct damos *cold_scheme)
+{
+	struct damos_quota_goal *goal;
+
+	if (!active_mem_bp)
+		return 0;
+	goal = damos_new_quota_goal(DAMOS_QUOTA_ACTIVE_MEM_BP, active_mem_bp);
+	if (!goal)
+		return -ENOMEM;
+	damos_add_quota_goal(&hot_scheme->quota, goal);
+	/* aim 0.02 % goal conflict, to keep ping-pong small */
+	goal = damos_new_quota_goal(DAMOS_QUOTA_INACTIVE_MEM_BP,
+			10000 - active_mem_bp + 2);
+	if (!goal)
+		return -ENOMEM;
+	damos_add_quota_goal(&cold_scheme->quota, goal);
+	return 0;
+}
+
static int damon_lru_sort_add_filters(struct damos *hot_scheme,
struct damos *cold_scheme)
{
@@ -256,6 +290,9 @@ static int damon_lru_sort_apply_parameters(void)
	damon_set_schemes(param_ctx, &hot_scheme, 1);
	damon_add_scheme(param_ctx, cold_scheme);
+	err = damon_lru_sort_add_quota_goals(hot_scheme, cold_scheme);
+	if (err)
+		goto out;
	err = damon_lru_sort_add_filters(hot_scheme, cold_scheme);
	if (err)
		goto out;
--
2.39.5