Message-ID: <aFFOTjLtPNp7S8sP@hyeyoo>
Date: Tue, 17 Jun 2025 20:15:52 +0900
From: Harry Yoo <harry.yoo@...cle.com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>, Jann Horn <jannh@...gle.com>,
        "Liam R . Howlett" <Liam.Howlett@...cle.com>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Matthew Wilcox <willy@...radead.org>,
        David Hildenbrand <david@...hat.com>, Pedro Falcato <pfalcato@...e.de>,
        Rik van Riel <riel@...riel.com>, Zi Yan <ziy@...dia.com>,
        Baolin Wang <baolin.wang@...ux.alibaba.com>,
        Nico Pache <npache@...hat.com>, Ryan Roberts <ryan.roberts@....com>,
        Dev Jain <dev.jain@....com>, Jakub Matena <matenajakub@...il.com>,
        Wei Yang <richard.weiyang@...il.com>, Barry Song <baohua@...nel.org>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 01/11] mm/mremap: introduce more mergeable mremap via
 MREMAP_RELOCATE_ANON

On Mon, Jun 09, 2025 at 02:26:35PM +0100, Lorenzo Stoakes wrote:
> When mremap() moves a mapping around in memory, it goes to great lengths to
> avoid having to walk page tables as this is expensive and
> time-consuming.
> 
> Rather, if the VMA was faulted (that is, vma->anon_vma != NULL), the virtual
> page offset stored in the VMA at vma->vm_pgoff will remain the same, as will
> the indexes of all the folios pointing at the associated anon_vma object.
> 
> This means the VMA and page tables can simply be moved, and this alone
> effects the change (and if we can move page tables at a higher page table
> level, this is even faster).
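
(Side note, illustrative only and not part of the patch: the reason no folio
needs touching on this fast path is that keeping vma->vm_pgoff preserves the
anon rmap invariant

	folio->index == linear_page_index(vma, addr)
	             == vma->vm_pgoff + ((addr - vma->vm_start) >> PAGE_SHIFT)

so rmap walks still resolve correctly after the move.)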
> 
> While this is efficient, it does lead to big problems with VMA merging - in
> essence it causes faulted anonymous VMAs to not be mergeable under many
> circumstances once moved.
> 
> This is limiting: it leads to a proliferation of unreclaimable, unmovable
> kernel metadata (VMAs, anon_vma's, anon_vma_chain's) and impacts further use
> of mremap(), which requires that the range being moved (which can also be a
> partial range within a VMA) span only a single VMA.
> 
> This in effect makes the mergeability (or not) of VMAs a uAPI concern.
> 
> In some use cases, users may wish to accept the overhead of actually going
> to the trouble of updating VMAs and folios to effect mremap() moves. Let's
> provide them with the choice.
> 
> This patch adds a new MREMAP_RELOCATE_ANON flag to do just that, which
> attempts to perform such an operation. If it is unable to do so, it cleanly
> falls back to the usual method.
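
(For readers following along, usage would presumably look something like the
sketch below -- illustrative only, assuming the uapi header exports the new
flag and that it combines with MREMAP_MAYMOVE as usual:

	#define _GNU_SOURCE
	#include <stddef.h>
	#include <sys/mman.h>

	/* Move an anonymous mapping of length len, opting in to relocating
	 * its anon folios so that the moved mapping can merge with its new
	 * neighbours; the kernel falls back to the ordinary move if the
	 * relocation cannot be performed. */
	static void *move_with_relocate(void *old_addr, size_t len)
	{
		return mremap(old_addr, len, len,
			      MREMAP_MAYMOVE | MREMAP_RELOCATE_ANON);
	}

so the fallback case should behave exactly as MREMAP_MAYMOVE does today.)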
> 
> It carefully takes the rmap locks such that at no time will a racing rmap
> user encounter incorrect or missing VMAs.
> 
> It is also designed to interact cleanly with the existing mremap() error
> fallback mechanism (inverting the remap should the page table move fail).
> 
> Also, if we can merge cleanly without such a change, we do so, avoiding the
> overhead of the operation when it is not required.
> 
> In the case that no merge can occur when the move is performed, we still
> perform the folio and VMA updates to ensure that future mremap() or
> mprotect() calls will result in merges.
> 
> In this implementation, we simply give up if we encounter large folios. A
> subsequent commit will extend the functionality to allow for these cases.
> 
> We restrict this flag to purely anonymous memory only.
> 
> We separate out the vma_had_uncowed_parents() helper function for use in
> should_relocate_anon(), and introduce a new function
> vma_maybe_has_shared_anon_folios() which combines this check with one
> against any forked child anon_vma's.
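
(To check my understanding, I imagine the combined check has roughly the
shape below -- purely a hypothetical sketch on my part, not the patch code,
and the forked-children test in particular is a guess:

	static bool vma_maybe_has_shared_anon_folios(struct vm_area_struct *vma)
	{
		/* Folios may still be mapped by a parent we were forked from
		 * and have not yet CoWed away from... */
		if (vma_had_uncowed_parents(vma))
			return true;

		/* ...or by children forked from us. */
		return vma->anon_vma && READ_ONCE(vma->anon_vma->num_children);
	}

-- is that the intent?)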
> 
> We carefully check for pinned folios, in case a caller holding a pin makes
> assumptions about the index and mapping fields which we are about to
> manipulate.
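
(Presumably that check is something along the lines of the below --
illustrative only, the actual test in the patch may well differ:

	/* A caller holding a pin may rely on folio->index / folio->mapping,
	 * both of which we are about to rewrite, so leave pinned folios
	 * alone and fall back to the ordinary move. */
	if (folio_maybe_dma_pinned(folio))
		goto out;

i.e. treating a possible pin as "do not relocate".)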
> 
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> ---
>  include/linux/rmap.h             |   4 +
>  include/uapi/linux/mman.h        |   1 +
>  mm/internal.h                    |   1 +
>  mm/mremap.c                      | 403 +++++++++++++++++++++++++++++--
>  mm/vma.c                         |  77 ++++--
>  mm/vma.h                         |  36 ++-
>  tools/testing/vma/vma.c          |   5 +-
>  tools/testing/vma/vma_internal.h |  38 +++
>  8 files changed, 520 insertions(+), 45 deletions(-)

[...snip...]

> @@ -754,6 +797,209 @@ static unsigned long pmc_progress(struct pagetable_move_control *pmc)
>  	return old_addr < orig_old_addr ? 0 : old_addr - orig_old_addr;
>  }
>  
> +/*
> + * If the folio mapped at the specified pte entry can have its index and mapping
> + * relocated, then do so.
> + *
> + * Returns the number of pages we have traversed, or 0 if the operation failed.
> + */
> +static unsigned long relocate_anon_pte(struct pagetable_move_control *pmc,
> +		struct pte_state *state, bool undo)
> +{
> +	struct folio *folio;
> +	struct vm_area_struct *old, *new;
> +	pgoff_t new_index;
> +	pte_t pte;
> +	unsigned long ret = 1;
> +	unsigned long old_addr = state->old_addr;
> +	unsigned long new_addr = state->new_addr;
> +
> +	old = pmc->old;
> +	new = pmc->new;
> +
> +	pte = ptep_get(state->ptep);
> +
> +	/* Ensure we have truly got an anon folio. */
> +	folio = vm_normal_folio(old, old_addr, pte);
> +	if (!folio)
> +		return ret;
> +
> +	folio_lock(folio);
> +
> +	/* No-op. */
> +	if (!folio_test_anon(folio) || folio_test_ksm(folio))
> +		goto out;

I think the kernel should not observe any KSM pages during mremap, because
KSM pages are already broken in prep_move_vma()?
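
For reference (going from memory, so the exact location may have moved): the
move path has long done something like

	err = ksm_madvise(vma, old_addr, old_addr + old_len,
			  MADV_UNMERGEABLE, &vm_flags);

up front, which unmerges any KSM pages in the range before anything is moved,
so the folio_test_ksm() check above looks unreachable in practice.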

-- 
Cheers,
Harry / Hyeonggon
