lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <f698fccc-f94a-e7d9-29de-56a90b57c4a4@google.com>
Date: Wed, 22 Jan 2025 02:34:45 -0800 (PST)
From: Hugh Dickins <hughd@...gle.com>
To: Roman Gushchin <roman.gushchin@...ux.dev>
cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, 
    Andrew Morton <akpm@...ux-foundation.org>, Jann Horn <jannh@...gle.com>, 
    Peter Zijlstra <peterz@...radead.org>, Will Deacon <will@...nel.org>, 
    "Aneesh Kumar K.V" <aneesh.kumar@...nel.org>, 
    Nick Piggin <npiggin@...il.com>, Hugh Dickins <hughd@...gle.com>, 
    linux-arch@...r.kernel.org
Subject: Re: [PATCH] mmu_gather: move tlb flush for VM_PFNMAP/VM_MIXEDMAP
 vmas into free_pgtables()

On Tue, 21 Jan 2025, Roman Gushchin wrote:

> Commit b67fbebd4cf9 ("mmu_gather: Force tlb-flush VM_PFNMAP vmas")
> added a forced tlbflush to tlb_vma_end(),

Yes, I think that was a poor way of fixing the bug in question.

> which is required to avoid a
> race between munmap() and unmap_mapping_range(). However it added some
> overhead to other paths where tlb_vma_end() is used, but vmas are not
> removed, e.g. madvise(MADV_DONTNEED).

Right.

> 
> Fix this by moving the tlb flush out of tlb_end_vma() into
> free_pgtables(), somewhat similar to the stable version of the
> original commit: e.g. stable commit 895428ee124a ("mm: Force TLB flush
> for PFNMAP mappings before unlink_file_vma()").

Something like this patch will be a good improvement:
but not this version of the patch.

Because the mmu_gather may be gathering across many vmas,
but tlb_start_vma(), well, its "tlb_update_vma_flags()", says
	tlb->vma_pfn  = !!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP));
so a following vma may reset vma_pfn too soon: more care is needed.

But probably vma_pfn should be reset to 0 somewhere, to avoid an
extra TLB flush in free_pgtables() when it has already been done.

Perhaps vma_pfn should follow the same pattern of initialization,
setting and clearing as cleared_ptes etc, instead of following
vma_huge and vma_exec.  Perhaps, but it is something different,
and I've not yet checked enough to be sure: tlb.h is still a maze
too twisty for me.

Hugh (after power outage interrupted reply)

> 
> Note, that if tlb->fullmm is set, no flush is required, as the whole
> mm is about to be destroyed.
> 
> Suggested-by: Jann Horn <jannh@...gle.com>
> Signed-off-by: Roman Gushchin <roman.gushchin@...ux.dev>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Will Deacon <will@...nel.org>
> Cc: "Aneesh Kumar K.V" <aneesh.kumar@...nel.org>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Nick Piggin <npiggin@...il.com>
> Cc: Hugh Dickins <hughd@...gle.com>
> Cc: linux-arch@...r.kernel.org
> Cc: linux-mm@...ck.org
> ---
>  include/asm-generic/tlb.h | 16 ++++------------
>  mm/memory.c               |  7 +++++++
>  2 files changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 709830274b75..411daa96f57a 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -549,22 +549,14 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
>  
>  static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
>  {
> -	if (tlb->fullmm)
> +	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
>  		return;
>  
>  	/*
> -	 * VM_PFNMAP is more fragile because the core mm will not track the
> -	 * page mapcount -- there might not be page-frames for these PFNs after
> -	 * all. Force flush TLBs for such ranges to avoid munmap() vs
> -	 * unmap_mapping_range() races.
> +	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> +	 * the ranges growing with the unused space between consecutive VMAs.
>  	 */
> -	if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
> -		/*
> -		 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> -		 * the ranges growing with the unused space between consecutive VMAs.
> -		 */
> -		tlb_flush_mmu_tlbonly(tlb);
> -	}
> +	tlb_flush_mmu_tlbonly(tlb);
>  }
>  
>  /*
> diff --git a/mm/memory.c b/mm/memory.c
> index 398c031be9ba..2071415f68dd 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -365,6 +365,13 @@ void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
>  {
>  	struct unlink_vma_file_batch vb;
>  
> +	/*
> +	 * Ensure we have no stale TLB entries by the time this mapping is
> +	 * removed from the rmap.
> +	 */
> +	if (tlb->vma_pfn && !tlb->fullmm)
> +		tlb_flush_mmu(tlb);
> +
>  	do {
>  		unsigned long addr = vma->vm_start;
>  		struct vm_area_struct *next;
> -- 
> 2.48.0.rc2.279.g1de40edade-goog

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ