Message-ID: <CAGsJ_4zAxhSDqvN5tzSeHR-kifaxm_GbPhHCeSkYsRJk=uHN-Q@mail.gmail.com>
Date: Fri, 16 Jan 2026 23:14:01 +0800
From: Barry Song <21cnbao@...il.com>
To: Dev Jain <dev.jain@....com>
Cc: Wei Yang <richard.weiyang@...il.com>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
akpm@...ux-foundation.org, david@...nel.org, catalin.marinas@....com,
will@...nel.org, lorenzo.stoakes@...cle.com, ryan.roberts@....com,
Liam.Howlett@...cle.com, vbabka@...e.cz, rppt@...nel.org, surenb@...gle.com,
mhocko@...e.com, riel@...riel.com, harry.yoo@...cle.com, jannh@...gle.com,
willy@...radead.org, linux-mm@...ck.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 5/5] mm: rmap: support batched unmapping for file large folios
> >
> > I mean maybe we can skip it in try_to_unmap_one(), for example:
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index 9e5bd4834481..ea1afec7c802 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -2250,6 +2250,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >  		 */
> >  		if (nr_pages == folio_nr_pages(folio))
> >  			goto walk_done;
> > +		else {
> > +			pvmw.address += PAGE_SIZE * (nr_pages - 1);
> > +			pvmw.pte += nr_pages - 1;
> > +		}
> >  		continue;
> >  walk_abort:
> >  		ret = false;
>
> I am of the opinion that we should do something like this. In the internal pvmw code,
> we keep skipping ptes for as long as they are none. With my proposed uffd-fix [1], if the old
> ptes were uffd-wp armed, pte_install_uffd_wp_if_needed will convert all ptes from none
> to not none, and we will lose the batching effect. I also plan to extend support to
> anonymous folios (therefore generalizing for all types of memory) which will set a
I posted an RFC on anon folios quite some time ago [1].
It’s great to hear that you’re interested in taking this over.
[1] https://lore.kernel.org/all/20250513084620.58231-1-21cnbao@gmail.com/
> batch of ptes as swap, and the internal pvmw code won't be able to skip through the
> batch.
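
For context, the skip being defeated here is (roughly) the next_pte
fast-forward in page_vma_mapped_walk(). The snippet below is a
simplified paraphrase of mm/page_vma_mapped.c rather than the exact
code; the real loop also handles page-table-boundary crossings and PTL
relocking, and the details vary by kernel version:

	/*
	 * Simplified sketch: the walk only fast-forwards while the
	 * next pte is none.
	 */
	do {
		pvmw->address += PAGE_SIZE;
		if (pvmw->address >= end)
			return not_found(pvmw);
		pvmw->pte++;
	} while (pte_none(ptep_get(pvmw->pte)));

Because the loop condition is pte_none(), any non-none entry (a swap
entry left behind for an already-unmapped anon batch, or a uffd-wp pte
marker installed by pte_install_uffd_wp_if_needed()) stops the
fast-forward, and each remaining pte in the batch is handed back to
check_pte() one by one.
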
Interesting. I didn't catch this issue in the earlier RFC. Back then,
we only supported nr == 1 and nr == folio_nr_pages(folio), and in the
latter case we would break out of the page_vma_mapped_walk() loop
entirely (walk_done). With Lance's commit ddd05742b45b08, an arbitrary
nr in [1, folio_nr_pages(folio)] is now supported, which means we have
to handle all the complexity. :-)
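
Putting the quoted diff together with the above, here is a rough sketch
(an illustration only, not mainline code) of how the tail of the
batched path in try_to_unmap_one() could handle any nr_pages in
[1, folio_nr_pages(folio)]; the else-if guard and the comments are
illustrative additions:

		if (nr_pages == folio_nr_pages(folio)) {
			/* The whole folio was unmapped: stop walking this VMA. */
			goto walk_done;
		} else if (nr_pages > 1) {
			/*
			 * Partial batch: fast-forward the walk past the ptes
			 * we already handled, because they may now hold swap
			 * entries or uffd-wp markers that the pte_none()
			 * skip will not jump over.
			 */
			pvmw.address += PAGE_SIZE * (nr_pages - 1);
			pvmw.pte += nr_pages - 1;
		}
		continue;

The sketch assumes the batch never extends past the page table that
pvmw.pte points into, which the real patch would need to guarantee.
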
Thanks
Barry