Message-Id: <20250628165144.55528-5-sj@kernel.org>
Date: Sat, 28 Jun 2025 09:51:37 -0700
From: SeongJae Park <sj@...nel.org>
To:
Cc: SeongJae Park <sj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
damon@...ts.linux.dev,
kernel-team@...a.com,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: [RFC PATCH 04/11] mm/damon/paddr: activate DAMOS_LRU_PRIO targets instead of marking accessed

DAMOS_LRU_DEPRIO directly deactivates the pages, while DAMOS_LRU_PRIO
calls folio_mark_accessed(), which does only incremental activation.
The incremental activation was assumed to be useful for making sure
the pages of the hot memory region are really hot.  Since the
introduction of DAMOS_LRU_PRIO, however, the young page filter has
been added.  Users can use the young page filter to make sure a page
is eligible to be activated.  Meanwhile, the asymmetric behavior of
DAMOS_LRU_[DE]PRIO can confuse users.
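
For reference, the incremental promotion that folio_mark_accessed()
does on the classic LRU looks roughly like the sketch below.  This is
a simplified illustration only; the real function in mm/swap.c
additionally handles MGLRU, unevictable folios, folios that are still
in per-CPU batches, and the idle flag.

	if (!folio_test_referenced(folio)) {
		/* first call: only set the referenced flag */
		folio_set_referenced(folio);
	} else if (!folio_test_active(folio)) {
		/* second call: move the folio to the active LRU list */
		folio_activate(folio);
		folio_clear_referenced(folio);
	}

That is, a folio that does not yet have the referenced flag set needs
at least two DAMOS_LRU_PRIO applications before it actually reaches
the active list.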

Make DAMOS_LRU_PRIO directly activate the given pages, to eliminate
the unnecessary incremental activation steps and to be symmetric with
DAMOS_LRU_DEPRIO, for easier usage.

Signed-off-by: SeongJae Park <sj@...nel.org>
---
mm/damon/paddr.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index d4261da48b0a..34d4b3017043 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -474,8 +474,8 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s,
return applied * PAGE_SIZE;
}
-static inline unsigned long damon_pa_mark_accessed_or_deactivate(
- struct damon_region *r, struct damos *s, bool mark_accessed,
+static inline unsigned long damon_pa_de_activate(
+ struct damon_region *r, struct damos *s, bool activate,
unsigned long *sz_filter_passed)
{
unsigned long addr, applied = 0;
@@ -494,8 +494,8 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
else
*sz_filter_passed += folio_size(folio);
- if (mark_accessed)
- folio_mark_accessed(folio);
+ if (activate)
+ folio_activate(folio);
else
folio_deactivate(folio);
applied += folio_nr_pages(folio);
@@ -507,18 +507,16 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
return applied * PAGE_SIZE;
}
-static unsigned long damon_pa_mark_accessed(struct damon_region *r,
+static unsigned long damon_pa_activate_pages(struct damon_region *r,
struct damos *s, unsigned long *sz_filter_passed)
{
- return damon_pa_mark_accessed_or_deactivate(r, s, true,
- sz_filter_passed);
+ return damon_pa_de_activate(r, s, true, sz_filter_passed);
}
static unsigned long damon_pa_deactivate_pages(struct damon_region *r,
struct damos *s, unsigned long *sz_filter_passed)
{
- return damon_pa_mark_accessed_or_deactivate(r, s, false,
- sz_filter_passed);
+ return damon_pa_de_activate(r, s, false, sz_filter_passed);
}
static unsigned int __damon_pa_migrate_folio_list(
@@ -803,7 +801,7 @@ static unsigned long damon_pa_apply_scheme(struct damon_ctx *ctx,
case DAMOS_PAGEOUT:
return damon_pa_pageout(r, scheme, sz_filter_passed);
case DAMOS_LRU_PRIO:
- return damon_pa_mark_accessed(r, scheme, sz_filter_passed);
+ return damon_pa_activate_pages(r, scheme, sz_filter_passed);
case DAMOS_LRU_DEPRIO:
return damon_pa_deactivate_pages(r, scheme, sz_filter_passed);
case DAMOS_MIGRATE_HOT:
--
2.39.5