Open Source and information security mailing list archives
Message-ID: <072b8684-47fe-4a2a-bf69-f6d348f0489b@redhat.com>
Date: Thu, 25 Sep 2025 11:16:22 +0200
From: David Hildenbrand <david@...hat.com>
To: Dev Jain <dev.jain@....com>, akpm@...ux-foundation.org
Cc: lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com, vbabka@...e.cz,
 rppt@...nel.org, surenb@...gle.com, mhocko@...e.com, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, Lokesh Gidra <lokeshgidra@...gle.com>
Subject: Re: [PATCH] mm: move rmap of mTHP upon CoW reuse

On 25.09.25 10:54, Dev Jain wrote:
> At wp-fault time, when we find that a folio is exclusively mapped, we move
> folio->mapping to the faulting VMA's anon_vma, so that rmap overhead
> reduces. This is currently done for small folios (base pages) and
> PMD-mapped THPs. Do this for mTHP too.

I deliberately didn't add this back then because I was not able to 
convince myself easily that it is ok in all corner cases. So this needs 
some thought.


We know that the folio is exclusively mapped to a single MM and that 
there are no unexpected references from others (GUP pins or the like).

But a large folio might be

(a) mapped into multiple VMAs (e.g., partial mprotect()) in the same MM
(b) mapped into multiple page tables (e.g., mremap()) in the same MM

Regarding (a), are we 100% sure that "vma->anon_vma" will be the same 
for all VMAs? I would hope so, because we can only end up this way by 
splitting a VMA whose anon_vma was already set up, and the split 
fragments inherit it.

One scenario I was concerned about is VM_DONTCOPY, where we don't end 
up calling anon_vma_fork() for a VMA (but maybe for another split 
fragment of it). But that case is likely fine, because we don't 
actually copy the PTEs for the VM_DONTCOPY VMA in the first place.


Regarding (b), we could have a page table walker walk over the folio 
(possibly inspecting folio->mapping) through a different page table.

I think the problem I foresaw with that was regarding RMAP walks that 
don't hold the folio lock: that problem might be solved with [1]. Not 
sure if there is anybody else depending on folio->mapping not changing 
while holding the PTL.

[1] https://lkml.kernel.org/r/20250918055135.2881413-2-lokeshgidra@google.com


Regarding (b), I think I was also concerned about concurrent fork() at 
some point where a single page table lock would not be sufficient, but 
that cannot happen while we are processing a page fault, not even with 
the VMA lock held (we fixed this at some point).

If you take a look at dup_mmap(), we have this:

for_each_vma(vmi, mpnt) {
	...
	vma_start_write(mpnt);


So we allow concurrent page faults for not-yet-processed VMAs, IIRC. 
Maybe that's a problem, maybe not (my intuition told me it's a 
problem). To handle it, if required, we would just write-lock all VMAs 
upfront, before doing any copying. (I suggested that in the past; 
there is some small overhead involved in iterating the VMAs twice.)


Long story short: you should do all of that investigation and understand 
why it is okay and document it in a patch. :)

> 
> Signed-off-by: Dev Jain <dev.jain@....com>
> ---
> mm-selftests pass.
> 
>   mm/memory.c | 5 +++++
>   1 file changed, 5 insertions(+)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 7e32eb79ba99..ec04d2cec6b1 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4014,6 +4014,11 @@ static bool __wp_can_reuse_large_anon_folio(struct folio *folio,
>   	 * an additional folio reference and never ended up here.
>   	 */
>   	exclusive = true;
> +
> +	if (folio_trylock(folio)) {

We should likely do that only if the folio->mapping is not already 
properly set up, not for each reuse of a page.

> +		folio_move_anon_rmap(folio, vma);
> +		folio_unlock(folio);
> +	}
>   unlock:
>   	folio_unlock_large_mapcount(folio);
>   	return exclusive;


-- 
Cheers

David / dhildenb

