Message-Id: <20250628165144.55528-11-sj@kernel.org>
Date: Sat, 28 Jun 2025 09:51:43 -0700
From: SeongJae Park <sj@...nel.org>
To:
Cc: SeongJae Park <sj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
damon@...ts.linux.dev,
kernel-team@...a.com,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: [RFC PATCH 10/11] mm/damon/lru_sort: add monitoring intervals auto-tuning parameter
Tuning DAMON monitoring intervals has been crucial for every DAMON use
case.  There are now a tuning guideline and an automated intervals
tuning feature, but DAMON_LRU_SORT still requires manual control of
the intervals.  Add a module parameter for utilizing the auto-tuning
feature with suggested auto-tuning parameters.
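
Not part of this patch, but for illustration: assuming DAMON_LRU_SORT
keeps its usual "damon_lru_sort." module parameter prefix and the
existing "enabled" parameter, the auto-tuning could presumably be
turned on at runtime or from the kernel command line like below:

    # runtime: enable the intervals auto-tuning (assumed sysfs path)
    echo Y > /sys/module/damon_lru_sort/parameters/autotune_monitoring_intervals

    # boot time: enable DAMON_LRU_SORT together with the auto-tuning
    damon_lru_sort.enabled=Y damon_lru_sort.autotune_monitoring_intervals=Y
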
Signed-off-by: SeongJae Park <sj@...nel.org>
---
mm/damon/lru_sort.c | 30 +++++++++++++++++++++++++++---
1 file changed, 27 insertions(+), 3 deletions(-)
diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
index 99c5a22cf3f2..1e9e19f15b33 100644
--- a/mm/damon/lru_sort.c
+++ b/mm/damon/lru_sort.c
@@ -55,6 +55,20 @@ module_param(commit_inputs, bool, 0600);
 static unsigned long active_mem_bp __read_mostly;
 module_param(active_mem_bp, ulong, 0600);
 
+/*
+ * Auto-tune monitoring intervals.
+ *
+ * If this parameter is set as ``Y``, DAMON_LRU_SORT automatically tunes
+ * DAMON's sampling and aggregation intervals.  The auto-tuning aims to
+ * capture a meaningful amount of access events in each DAMON snapshot,
+ * while keeping the sampling interval between 5 milliseconds and 10
+ * seconds.  Setting this as ``N`` disables the auto-tuning.
+ *
+ * Disabled by default.
+ */
+static bool autotune_monitoring_intervals __read_mostly;
+module_param(autotune_monitoring_intervals, bool, 0600);
+
 /*
  * Filter [none-]young pages accordingly for LRU [de]prioritizations.
  *
@@ -261,6 +275,7 @@ static int damon_lru_sort_apply_parameters(void)
 {
 	struct damon_ctx *param_ctx;
 	struct damon_target *param_target;
+	struct damon_attrs attrs;
 	struct damos *hot_scheme, *cold_scheme;
 	unsigned int hot_thres, cold_thres;
 	int err;
@@ -269,18 +284,27 @@ static int damon_lru_sort_apply_parameters(void)
 	if (err)
 		return err;
 
-	err = damon_set_attrs(ctx, &damon_lru_sort_mon_attrs);
+	attrs = damon_lru_sort_mon_attrs;
+	if (autotune_monitoring_intervals) {
+		attrs.sample_interval = 5000;
+		attrs.aggr_interval = 100000;
+		attrs.intervals_goal.access_bp = 40;
+		attrs.intervals_goal.aggrs = 3;
+		attrs.intervals_goal.min_sample_us = 5000;
+		attrs.intervals_goal.max_sample_us = 10 * 1000 * 1000;
+	}
+	err = damon_set_attrs(ctx, &attrs);
 	if (err)
 		goto out;
 
 	err = -ENOMEM;
-	hot_thres = damon_max_nr_accesses(&damon_lru_sort_mon_attrs) *
+	hot_thres = damon_max_nr_accesses(&attrs) *
 		hot_thres_access_freq / 1000;
 	hot_scheme = damon_lru_sort_new_hot_scheme(hot_thres);
 	if (!hot_scheme)
 		goto out;
 
-	cold_thres = cold_min_age / damon_lru_sort_mon_attrs.aggr_interval;
+	cold_thres = cold_min_age / attrs.aggr_interval;
 	cold_scheme = damon_lru_sort_new_cold_scheme(cold_thres);
 	if (!cold_scheme) {
 		damon_destroy_scheme(hot_scheme);
--
2.39.5