Message-ID: <26ca0d6b-5fd4-42f9-b985-936d9a72d307@arm.com>
Date: Thu, 25 Sep 2025 16:20:50 +0530
From: Dev Jain <dev.jain@....com>
To: David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org
Cc: lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, vbabka@...e.cz,
rppt@...nel.org, surenb@...gle.com, mhocko@...e.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Lokesh Gidra <lokeshgidra@...gle.com>
Subject: Re: [PATCH] mm: move rmap of mTHP upon CoW reuse
On 25/09/25 4:07 pm, David Hildenbrand wrote:
> On 25.09.25 12:33, Dev Jain wrote:
>>
>> On 25/09/25 2:46 pm, David Hildenbrand wrote:
>>> On 25.09.25 10:54, Dev Jain wrote:
>>>> At wp-fault time, when we find that a folio is exclusively mapped,
>>>> we move folio->mapping to the faulting VMA's anon_vma, reducing
>>>> rmap overhead. This is currently done for small folios (base pages)
>>>> and PMD-mapped THPs. Do this for mTHP too.
>>>
>>> I deliberately didn't add this back then because I was not able to
>>> convince myself easily that it is ok in all corner cases. So this
>>> needs some thought.
>>
>> Thanks for your detailed reply.
>>
>>
>>>
>>>
>>> We know that the folio is exclusively mapped to a single MM and that
>>> there are no unexpected references from others (GUP pins, whatsoever).
>>>
>>> But a large folio might be
>>>
>>> (a) mapped into multiple VMAs (e.g., partial mprotect()) in the same MM
>>
>> I think we have the same problem then for PMD-THPs? I see that
>> vma_adjust_trans_huge() only does a PMD split and not a folio split.
>
> Sure, we can end up in this reuse function here for any large anon
> folio, including PMD ones after a PMD->PTE remapping.
Ah alright, I was thinking that something may go wrong through
folio_move_anon_rmap() in do_huge_pmd_wp_page(), but in that case the
PMD will *not* have been split, which guarantees that the folio lies
entirely within the same VMA. Interesting.
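
For my own understanding, a rough sketch of the rebinding that
folio_move_anon_rmap() performs once the reuse path has established
that the folio is exclusive (sketch_move_anon_rmap() is a made-up name
here, and the exact checks in mm/rmap.c may differ):

	/*
	 * Minimal sketch: rebind an exclusively mapped anon folio to the
	 * faulting VMA's anon_vma. Caller holds the folio lock and has
	 * already ruled out unexpected references (GUP pins, KSM, ...).
	 */
	static void sketch_move_anon_rmap(struct folio *folio,
					  struct vm_area_struct *vma)
	{
		void *anon_vma = vma->anon_vma;

		VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);

		/*
		 * Publish the new anon_vma together with the
		 * PAGE_MAPPING_ANON tag in a single store, so a concurrent
		 * rmap walker sees either the old or the new binding.
		 */
		WRITE_ONCE(folio->mapping, anon_vma + PAGE_MAPPING_ANON);
	}

IIUC, for PTE-mapped large folios the open question is exactly your
(a): whether the faulting VMA's anon_vma is a safe target for the whole
folio when it is mapped into multiple VMAs in the same MM.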