Message-ID: <20251209045629.77914-1-sj@kernel.org>
Date: Mon, 8 Dec 2025 20:56:28 -0800
From: SeongJae Park <sj@...nel.org>
To: "David Hildenbrand (Red Hat)" <david@...nel.org>
Cc: SeongJae Park <sj@...nel.org>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jann Horn <jannh@...gle.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Michal Hocko <mhocko@...e.com>,
Mike Rapoport <rppt@...nel.org>,
Pedro Falcato <pfalcato@...e.de>,
Suren Baghdasaryan <surenb@...gle.com>,
Vlastimil Babka <vbabka@...e.cz>,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
damon@...ts.linux.dev
Subject: Re: [RFC PATCH v3 05/37] mm/{mprotect,memory}: (no upstream-aimed hack) implement MM_CP_DAMON
+ damon@
On Mon, 8 Dec 2025 12:19:41 +0100 "David Hildenbrand (Red Hat)" <david@...nel.org> wrote:
> On 12/8/25 07:29, SeongJae Park wrote:
> > Note that this is not upstreamable as-is. This is only for helping
> > discussion of other changes of its series.
> >
> > DAMON uses the Accessed bits of page table entries as its major
> > source of access information. Those bits lack additional context,
> > such as which CPU made the access. Page faults could be another
> > source of such additional information.
> >
> > Implement another change_protection() flag for such use cases, namely
> > MM_CP_DAMON. DAMON will install PAGE_NONE protections using the flag.
> > To avoid interfering with NUMA_BALANCING, which also uses PAGE_NONE
> > protection, pass the faults to DAMON only when NUMA_BALANCING is
> > disabled.
> >
> > Again, this is not upstreamable as-is. There were comments about this
> > on the previous version, and I was unable to find time to address
> > them, so this version does not address any of those previous
> > comments. I'm sending it anyway to help discussion of the other
> > patches of the series. Please forgive me for adding this to your
> > inbox without addressing your comments, and feel free to ignore it;
> > I will start a separate discussion for this part later.
> >
> > Signed-off-by: SeongJae Park <sj@...nel.org>
> > ---
[...]
> > @@ -6363,8 +6415,12 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
> > return 0;
> > }
> > if (pmd_trans_huge(vmf.orig_pmd)) {
> > - if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma))
> > + if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma)) {
> > + if (sysctl_numa_balancing_mode ==
> > + NUMA_BALANCING_DISABLED)
> > + return do_damon_page(&vmf, true);
> > return do_huge_pmd_numa_page(&vmf);
> > + }
>
> I recall that we had a similar discussion already. Ah, it was around
> some arm MTE tag storage reuse [1].
>
> The idea was to let do_*_numa_page() handle the restoring so we don't
> end up with such duplicated code.
Thank you for sharing this, David! As I mentioned in the commit message, I
will revisit this part to make it more upstreamable after LPC, and this
previous conversation will be really useful when preparing the next
iteration. Thank you again!
>
> [1]
> https://lore.kernel.org/all/20240125164256.4147-1-alexandru.elisei@arm.com/
Thanks,
SJ
[...]