Message-ID: <17a11d85-9e27-4d9d-8109-302ef9cfb8ec@redhat.com>
Date: Thu, 25 Sep 2025 12:57:26 +0200
From: David Hildenbrand <david@...hat.com>
To: Dev Jain <dev.jain@....com>, akpm@...ux-foundation.org
Cc: lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, vbabka@...e.cz,
rppt@...nel.org, surenb@...gle.com, mhocko@...e.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Lokesh Gidra <lokeshgidra@...gle.com>
Subject: Re: [PATCH] mm: move rmap of mTHP upon CoW reuse
On 25.09.25 12:50, Dev Jain wrote:
>
> On 25/09/25 4:07 pm, David Hildenbrand wrote:
>> On 25.09.25 12:33, Dev Jain wrote:
>>>
>>> On 25/09/25 2:46 pm, David Hildenbrand wrote:
>>>> On 25.09.25 10:54, Dev Jain wrote:
>>>>> At wp-fault time, when we find that a folio is exclusively mapped,
>>>>> we move folio->mapping to the faulting VMA's anon_vma to reduce
>>>>> rmap overhead. This is currently done for small folios (base pages)
>>>>> and PMD-mapped THPs. Do this for mTHP too.
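>>>>>
>>>>> For reference, the "move" here is folio_move_anon_rmap(); a condensed
>>>>> sketch of the upstream mm/rmap.c helper (modulo tree drift):
>>>>>
>>>>> 	void folio_move_anon_rmap(struct folio *folio,
>>>>> 				  struct vm_area_struct *vma)
>>>>> 	{
>>>>> 		void *anon_vma = vma->anon_vma;
>>>>>
>>>>> 		VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
>>>>> 		VM_BUG_ON_VMA(!anon_vma, vma);
>>>>>
>>>>> 		/* anon folios keep the anon_vma in ->mapping, tagged ANON */
>>>>> 		anon_vma += PAGE_MAPPING_ANON;
>>>>> 		WRITE_ONCE(folio->mapping, anon_vma);
>>>>> 	}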
>>>>
>>>> I deliberately didn't add this back then because I could not easily
>>>> convince myself that it is okay in all corner cases. So this needs
>>>> some thought.
>>>
>>> Thanks for your detailed reply.
>>>
>>>
>>>>
>>>>
>>>> We know that the folio is exclusively mapped into a single MM and that
>>>> there are no unexpected references from others (GUP pins, etc.).
>>>>
>>>> But a large folio might be
>>>>
>>>> (a) mapped into multiple VMAs (e.g., partial mprotect()) in the same MM
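>>>>
>>>> A minimal userspace sketch of (a), assuming THP/mTHP is enabled and
>>>> the first write gets backed by one large folio (alignment permitting;
>>>> the sizes are illustrative only):
>>>>
>>>> 	#include <sys/mman.h>
>>>>
>>>> 	int main(void)
>>>> 	{
>>>> 		size_t sz = 2UL << 20;
>>>> 		char *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
>>>> 			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>>>>
>>>> 		if (p == MAP_FAILED)
>>>> 			return 1;
>>>> 		p[0] = 1;	/* fault: may allocate a large anon folio */
>>>> 		/* split the VMA; the same folio is now mapped by two VMAs */
>>>> 		mprotect(p, sz / 2, PROT_READ);
>>>> 		return 0;
>>>> 	}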
>>>
>>> I think we then have the same problem for PMD-THPs? I see that
>>> vma_adjust_trans_huge() only does a PMD split, not a folio split.
>>
>> Sure, we can end up in this reuse function for any large anon
>> folio, including PMD-sized ones after a PMD->PTE remapping.
>
> Ah alright, I was thinking that something might go wrong via
> folio_move_anon_rmap() in do_huge_pmd_wp_page(), but
Right, there we have a single VMA and a single PTL.
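
For reference, the PMD path does that move with the folio locked and
vmf->ptl held; condensed from do_huge_pmd_wp_page() as I remember it
upstream (exact details may differ in the tree under discussion):

	/* folio locked, vmf->ptl held, all refs/mappings accounted for */
	if (folio_ref_count(folio) == 1) {
		pmd_t entry;

		folio_move_anon_rmap(folio, vma);
		SetPageAnonExclusive(page);
		folio_unlock(folio);

		entry = pmd_mkyoung(orig_pmd);
		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
			update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
		spin_unlock(vmf->ptl);
		return 0;
	}
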
--
Cheers
David / dhildenb