Message-Id: <20250703001011.57220-1-sj@kernel.org>
Date: Wed, 2 Jul 2025 17:10:11 -0700
From: SeongJae Park <sj@...nel.org>
To: SeongJae Park <sj@...nel.org>
Cc: Bijan Tabatabai <bijan311@...il.com>,
damon@...ts.linux.dev,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org,
akpm@...ux-foundation.org,
corbet@....net,
joshua.hahnjy@...il.com,
bijantabatab@...ron.com,
venkataravis@...ron.com,
emirakhur@...ron.com,
ajayjoshi@...ron.com,
vtavarespetr@...ron.com,
Ravi Shankar Jonnalagadda <ravis.opensrc@...ron.com>
Subject: Re: [RFC PATCH v3 09/13] mm/damon/vaddr: Add vaddr versions of migrate_{hot,cold}
On Wed, 2 Jul 2025 16:51:38 -0700 SeongJae Park <sj@...nel.org> wrote:
> On Wed, 2 Jul 2025 15:13:32 -0500 Bijan Tabatabai <bijan311@...il.com> wrote:
>
> > From: Bijan Tabatabai <bijantabatab@...ron.com>
> >
> > migrate_{hot,cold} are paddr schemes that are used to migrate hot/cold
> > data to a specified node. However, these schemes are only available when
> > doing physical address monitoring. This patch adds an implementation of
> > them for virtual address monitoring as well.
> >
> > Co-developed-by: Ravi Shankar Jonnalagadda <ravis.opensrc@...ron.com>
> > Signed-off-by: Ravi Shankar Jonnalagadda <ravis.opensrc@...ron.com>
> > Signed-off-by: Bijan Tabatabai <bijantabatab@...ron.com>
> > ---
> > mm/damon/vaddr.c | 102 +++++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 102 insertions(+)
> >
> > diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
> > index 46554e49a478..5cdfdc47c5ff 100644
> > --- a/mm/damon/vaddr.c
> > +++ b/mm/damon/vaddr.c
> > @@ -15,6 +15,7 @@
> > #include <linux/pagewalk.h>
> > #include <linux/sched/mm.h>
> >
> > +#include "../internal.h"
> > #include "ops-common.h"
> >
> > #ifdef CONFIG_DAMON_VADDR_KUNIT_TEST
> > @@ -610,6 +611,65 @@ static unsigned int damon_va_check_accesses(struct damon_ctx *ctx)
> > return max_nr_accesses;
> > }
> >
> > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > +static int damos_va_migrate_pmd_entry(pmd_t *pmd, unsigned long addr,
> > + unsigned long next, struct mm_walk *walk)
>
> I'd suggest putting the CONFIG_TRANSPARENT_HUGEPAGE check into the body of
> this function and handling both pmd and pte here, consistent with
> damon_young_pmd_entry().
Ah, unlike damon_young_pmd_entry(), which checks a single address, this walks
the whole range of a given DAMON region, and hence should have a separate pte
entry function.  Please ignore the above comment.
[...]
> > +static int damos_va_migrate_pte_entry(pte_t *pte, unsigned long addr,
> > + unsigned long enxt, struct mm_walk *walk)
>
> Nit. s/enxt/next/ ?
>
> > +{
> > + struct list_head *migration_list = walk->private;
> > + struct folio *folio;
> > + pte_t ptent;
> > +
> > + ptent = ptep_get(pte);
> > + if (pte_none(*pte) || !pte_present(*pte))
> > + return 0;
>
> Shouldn't we use the cached pte value (ptent) instead of *pte?  I'd suggest
> merging this into damos_va_migrate_pmd_entry(), consistent with
> damon_young_pmd_entry().
Again, I overlooked the fact that this walks not just a single address but a
whole range.  Please ignore the latter suggestion.
Thanks,
SJ
[...]