Message-ID: <Z_gNBRY_1UVe2-ax@casper.infradead.org>
Date: Thu, 10 Apr 2025 19:25:09 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Zi Yan <ziy@...dia.com>
Cc: nifan.cxl@...il.com, mcgrof@...nel.org, a.manzanares@...sung.com,
dave@...olabs.net, akpm@...ux-foundation.org, david@...hat.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org, will@...nel.org,
aneesh.kumar@...nel.org, hca@...ux.ibm.com, gor@...ux.ibm.com,
linux-s390@...r.kernel.org, Fan Ni <fan.ni@...sung.com>
Subject: Re: [PATCH] mm: Introduce free_folio_and_swap_cache() to replace
free_page_and_swap_cache()
On Thu, Apr 10, 2025 at 02:16:09PM -0400, Zi Yan wrote:
> > @@ -49,7 +49,7 @@ static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
> > {
> > VM_WARN_ON_ONCE(delay_rmap);
> >
> > - free_page_and_swap_cache(page);
> > + free_folio_and_swap_cache(page_folio(page));
> > return false;
> > }
>
> __tlb_remove_page_size() is ruining the fun of the conversion. But it will be
> converted to use folio eventually.
Well, hm, I'm not sure. I haven't looked into this in detail.
We have a __tlb_remove_folio_pages() which removes N pages but they must
all be within the same folio:
VM_WARN_ON_ONCE(page_folio(page) != page_folio(page + nr_pages - 1));
but would we be better off just passing in the folio which contains the
page and always flush all pages in the folio? It'd certainly simplify
the "encoded pages" stuff since we'd no longer need to pass (page,
length) tuples. But then, what happens if the folio is split between
being added to the batch and the flush actually happening?