Message-ID: <20250814153307.1553061-5-raghavendra.kt@amd.com>
Date: Thu, 14 Aug 2025 15:32:54 +0000
From: Raghavendra K T <raghavendra.kt@....com>
To: <raghavendra.kt@....com>
CC: <AneeshKumar.KizhakeVeetil@....com>, <Michael.Day@....com>,
<akpm@...ux-foundation.org>, <bharata@....com>, <dave.hansen@...el.com>,
<david@...hat.com>, <dongjoo.linux.dev@...il.com>, <feng.tang@...el.com>,
<gourry@...rry.net>, <hannes@...xchg.org>, <honggyu.kim@...com>,
<hughd@...gle.com>, <jhubbard@...dia.com>, <jon.grimm@....com>,
<k.shutemov@...il.com>, <kbusch@...a.com>, <kmanaouil.dev@...il.com>,
<leesuyeon0506@...il.com>, <leillc@...gle.com>, <liam.howlett@...cle.com>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<mgorman@...hsingularity.net>, <mingo@...hat.com>, <nadav.amit@...il.com>,
<nphamcs@...il.com>, <peterz@...radead.org>, <riel@...riel.com>,
<rientjes@...gle.com>, <rppt@...nel.org>, <santosh.shukla@....com>,
<shivankg@....com>, <shy828301@...il.com>, <sj@...nel.org>, <vbabka@...e.cz>,
<weixugc@...gle.com>, <willy@...radead.org>, <ying.huang@...ux.alibaba.com>,
<ziy@...dia.com>, <Jonathan.Cameron@...wei.com>, <dave@...olabs.net>,
<yuanchu@...gle.com>, <kinseyho@...gle.com>, <hdanton@...a.com>,
<harry.yoo@...cle.com>
Subject: [RFC PATCH V3 04/17] mm/kscand: Add only hot pages to migration list
Previously, all pages that were accessed at least once were added to the
migration list. Improve this by adding only those pages that are accessed
a second time, i.e. found non-idle on successive scans. This logic is
closer to the current NUMAB implementation of identifying hot pages.
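
For illustration only, below is a minimal userspace sketch (not kernel
code) of the check introduced here. struct sim_page, scan_page() and the
idle/accessed fields are made-up stand-ins for the folio idle flag and
the PTE accessed bit:

#include <stdbool.h>
#include <stdio.h>

struct sim_page {
        bool idle;      /* stand-in for the folio idle flag   */
        bool accessed;  /* stand-in for the PTE accessed bit  */
};

/* One scan pass over a single page; returns true if the page is "hot". */
static bool scan_page(struct sim_page *p)
{
        bool prev_idle = p->idle;       /* idle state before this scan          */

        p->idle = true;                 /* mirrors folio_set_idle()             */
        if (p->accessed) {              /* mirrors page_idle_clear_pte_refs()   */
                p->idle = false;
                p->accessed = false;    /* accessed bit consumed for next scan  */
        }

        /* Hot only if referenced in this interval and also before it. */
        return !p->idle && !prev_idle;
}

int main(void)
{
        /* Assume an earlier scan already marked the page idle. */
        struct sim_page p = { .idle = true, .accessed = true };

        printf("scan 1: hot=%d\n", scan_page(&p));      /* 0: first observed access    */

        p.accessed = true;                              /* touched again before scan 2 */
        printf("scan 2: hot=%d\n", scan_page(&p));      /* 1: accessed a second time   */

        printf("scan 3: hot=%d\n", scan_page(&p));      /* 0: no access this interval  */
        return 0;
}

With this change a single, isolated access no longer queues a page for
migration; only pages referenced in consecutive scan intervals are
treated as hot.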
Signed-off-by: Raghavendra K T <raghavendra.kt@....com>
---
mm/kscand.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/kscand.c b/mm/kscand.c
index 1d883d411664..7552ce32beea 100644
--- a/mm/kscand.c
+++ b/mm/kscand.c
@@ -196,6 +196,7 @@ static int hot_vma_idle_pte_entry(pte_t *pte,
 	struct kscand_migrate_info *info;
 	struct kscand_scanctrl *scanctrl = walk->private;
 	int srcnid;
+	bool prev_idle;
 
 	scanctrl->address = addr;
 	pte_t pteval = ptep_get(pte);
@@ -219,6 +220,7 @@ static int hot_vma_idle_pte_entry(pte_t *pte,
 		folio_put(folio);
 		return 0;
 	}
+	prev_idle = folio_test_idle(folio);
 	folio_set_idle(folio);
 	page_idle_clear_pte_refs(page, pte, walk);
 	srcnid = folio_nid(folio);
@@ -233,7 +235,7 @@ static int hot_vma_idle_pte_entry(pte_t *pte,
 		folio_put(folio);
 		return 0;
 	}
-	if (!folio_test_idle(folio) &&
+	if (!folio_test_idle(folio) && !prev_idle &&
 	    (folio_test_young(folio) || folio_test_referenced(folio))) {
 		/* XXX: Leaking memory. TBD: consume info */
--
2.34.1