Message-ID: <83e8b1b3-fc73-4a49-9f6c-36489c3f39d6@redhat.com>
Date: Thu, 10 Apr 2025 20:36:34 +0200
From: David Hildenbrand <david@...hat.com>
To: Matthew Wilcox <willy@...radead.org>, Zi Yan <ziy@...dia.com>
Cc: nifan.cxl@...il.com, mcgrof@...nel.org, a.manzanares@...sung.com,
dave@...olabs.net, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, will@...nel.org, aneesh.kumar@...nel.org,
hca@...ux.ibm.com, gor@...ux.ibm.com, linux-s390@...r.kernel.org,
Fan Ni <fan.ni@...sung.com>
Subject: Re: [PATCH] mm: Introduce free_folio_and_swap_cache() to replace
free_page_and_swap_cache()
On 10.04.25 20:25, Matthew Wilcox wrote:
> On Thu, Apr 10, 2025 at 02:16:09PM -0400, Zi Yan wrote:
>>> @@ -49,7 +49,7 @@ static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
>>> {
>>> VM_WARN_ON_ONCE(delay_rmap);
>>>
>>> - free_page_and_swap_cache(page);
>>> + free_folio_and_swap_cache(page_folio(page));
>>> return false;
>>> }
>>
>> __tlb_remove_page_size() is ruining the fun of the conversion. But it will be
>> converted to use folios eventually.
>
> Well, hm, I'm not sure. I haven't looked into this in detail.
> We have a __tlb_remove_folio_pages() which removes N pages but they must
> all be within the same folio:
>
> VM_WARN_ON_ONCE(page_folio(page) != page_folio(page + nr_pages - 1));
>
> but would we be better off just passing in the folio which contains the
> page and always flush all pages in the folio?
The delay_rmap handling needs the precise pages, so we cannot easily
switch to folio + nr_refs.

Once the per-page mapcounts are gone for good, we might no longer need
page + nr_pages; folio + nr_refs would then work.
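
To make that concrete, a rough sketch of the contrast (the second
signature is purely hypothetical, made up here just to illustrate the
folio + nr_refs idea; only __tlb_remove_folio_pages() is an existing
helper, as quoted above):

/*
 * Existing form: the precise pages are passed in because delayed rmap
 * handling must know exactly which pages are being removed.
 */
bool __tlb_remove_folio_pages(struct mmu_gather *tlb, struct page *page,
			      unsigned int nr_pages, bool delay_rmap);

/*
 * Hypothetical folio-based form: only workable once delay_rmap no
 * longer needs the precise pages, e.g. after the per-page mapcounts
 * are gone for good.
 */
bool __tlb_remove_folio_refs(struct mmu_gather *tlb, struct folio *folio,
			     unsigned int nr_refs);
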
--
Cheers,
David / dhildenb