[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <0cb3d5a5-683b-4dba-90a8-b45ab83eec53@redhat.com>
Date: Tue, 29 Jul 2025 16:11:32 +0200
From: David Hildenbrand <david@...hat.com>
To: Yueyang Pan <pyyjason@...il.com>, SeongJae Park <sj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Usama Arif <usamaarif642@...il.com>
Cc: damon@...ts.linux.dev, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 2/2] mm/damon: Add damos_stat support for vaddr
On 29.07.25 15:53, Yueyang Pan wrote:
> From: PanJason <pyyjason@...il.com>
>
> This patch adds support for damos_stat in virtual address space.
> It leverages the walk_page_range to walk the page table and gets
> the folio from page table. The last folio scanned is stored in
> damos->last_applied to prevent double counting.
> ---
> mm/damon/vaddr.c | 113 ++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 112 insertions(+), 1 deletion(-)
>
> diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
> index 87e825349bdf..3e319b51cfd4 100644
> --- a/mm/damon/vaddr.c
> +++ b/mm/damon/vaddr.c
> @@ -890,6 +890,117 @@ static unsigned long damos_va_migrate(struct damon_target *target,
> return applied * PAGE_SIZE;
> }
>
> +struct damos_va_stat_private {
> + struct damos *scheme;
> + unsigned long *sz_filter_passed;
> +};
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +static int damos_va_stat_pmd_entry(pmd_t *pmd, unsigned long addr,
> + unsigned long next, struct mm_walk *walk)
> +{
> + struct damos_va_stat_private *priv = walk->private;
> + struct damos *s = priv->scheme;
> + unsigned long *sz_filter_passed = priv->sz_filter_passed;
> + struct folio *folio;
> + spinlock_t *ptl;
> + pmd_t pmde;
> +
> + ptl = pmd_lock(walk->mm, pmd);
> + pmde = pmdp_get(pmd);
> +
> + if (!pmd_present(pmde) || !pmd_trans_huge(pmde))
> + goto unlock;
> +
> + /* Tell page walk code to not split the PMD */
> + walk->action = ACTION_CONTINUE;
> +
> + folio = damon_get_folio(pmd_pfn(pmde));
> + if (!folio)
> + goto unlock;
> +
> + if (damon_invalid_damos_folio(folio, s))
> + goto update_last_applied;
> +
> + if (!damos_va_filter_out(s, folio, walk->vma, addr, NULL, pmd)){
> + *sz_filter_passed += folio_size(folio);
See my comment below regarding vm_normal_page and folio references.
But this split into two handlers is fairly odd. Usually we only have a
pmd_entry callback (see madvise_cold_or_pageout_pte_range as an
example), and handle !CONFIG_TRANSPARENT_HUGEPAGE in there.
Then, there is also no need to mess with ACTION_CONTINUE
> + }
> +
> + folio_put(folio);
> +update_last_applied:
> + s->last_applied = folio;
> +unlock:
> + spin_unlock(ptl);
> + return 0;
> +}
> +#else
> +#define damon_va_stat_pmd_entry NULL
> +#endif
> +
> +static int damos_va_stat_pte_entry(pte_t *pte, unsigned long addr,
> + unsigned long next, struct mm_walk *walk)
> +{
> + struct damos_va_stat_private *priv = walk->private;
> + struct damos *s = priv->scheme;
> + unsigned long *sz_filter_passed = priv->sz_filter_passed;
> + struct folio *folio;
> + pte_t ptent;
> +
> + ptent = ptep_get(pte);
> + if (pte_none(ptent) || !pte_present(ptent))
> + return 0;
> +
> + folio = damon_get_folio(pte_pfn(ptent));
> + if (!folio)
> + return 0;
We have vm_normal_folio() and friends for a reason -- so you don't have
to do pte_pfn() manually.
... and now I am confused. We are holding the PTL, so why would you have
to grab+put a folio reference here *at all*.
--
Cheers,
David / dhildenb
Powered by blists - more mailing lists