Date:   Fri, 8 Jul 2022 16:03:17 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Will Deacon <will@...nel.org>
Cc:     Jann Horn <jannh@...gle.com>,
        Linus Torvalds <torvalds@...uxfoundation.org>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Dave Airlie <airlied@...ux.ie>,
        Daniel Vetter <daniel@...ll.ch>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Guo Ren <guoren@...nel.org>, David Miller <davem@...emloft.net>
Subject: Re: [PATCH 4/4] mmu_gather: Force tlb-flush VM_PFNMAP vmas

On Fri, Jul 08, 2022 at 02:36:06PM +0100, Will Deacon wrote:
> On Fri, Jul 08, 2022 at 09:18:06AM +0200, Peter Zijlstra wrote:

> > @@ -507,16 +502,22 @@ static inline void tlb_start_vma(struct
> >  
> >  static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
> >  {
> > -	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
> > +	if (tlb->fullmm)
> >  		return;
> >  
> >  	/*
> > -	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> > -	 * the ranges growing with the unused space between consecutive VMAs,
> > -	 * but also the mmu_gather::vma_* flags from tlb_start_vma() rely on
> > -	 * this.
> > +	 * VM_PFNMAP is more fragile because the core mm will not track the
> > +	 * page mapcount -- there might not be page-frames for these PFNs after
> > +	 * all. Force flush TLBs for such ranges to avoid munmap() vs
> > +	 * unmap_mapping_range() races.
> >  	 */
> > -	tlb_flush_mmu_tlbonly(tlb);
> > +	if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
> > +		/*
> > +		 * Do a TLB flush and reset the range at VMA boundaries; this avoids
> > +		 * the ranges growing with the unused space between consecutive VMAs.
> > +		 */
> > +		tlb_flush_mmu_tlbonly(tlb);
> > +	}
> 
> We already have the vma here, so I'm not sure how much the new 'vma_pfn'
> field really buys us over checking the 'vm_flags', but perhaps that's
> cleanup for another day.

Duh, that's just me being daft again. For some reason I was convinced
(and failed to check) that we only had the vma in tlb_start_vma().

I can easily respin this to not need the extra variable.

How's this then?

---
Subject: mmu_gather: Force tlb-flush VM_PFNMAP vmas
From: Peter Zijlstra <peterz@...radead.org>
Date: Thu Jul 7 11:51:16 CEST 2022

Jann reported a race between munmap() and unmap_mapping_range(), where
unmap_mapping_range() will no-op once unmap_vmas() has unlinked the
VMA; however munmap() will not yet have invalidated the TLBs.

Therefore unmap_mapping_range() will complete while there are still
(stale) TLB entries for the specified range.

Mitigate this by force flushing TLBs for VM_PFNMAP ranges.
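
For illustration, the interleaving looks roughly like this (a schematic
timeline of the race described above; the internals of munmap() are
simplified):

	CPU 0: munmap()                     CPU 1: unmap_mapping_range()
	---------------                     ----------------------------
	unmap_vmas()
	  clear PTEs, unlink the VMA
	                                    walk the mapping, find no VMA
	                                    covering the range, return
	                                    (caller believes the PFNs are
	                                    unmapped; stale TLB entries
	                                    still reference them)
	tlb_finish_mmu()
	  deferred TLB invalidate finally happens

Forcing the flush in tlb_end_vma() for VM_PFNMAP VMAs means the
invalidate is no longer deferred all the way to tlb_finish_mmu(), so
the stale entries are gone by the time unmap_mapping_range() can no-op.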

Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 include/asm-generic/tlb.h |   18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -507,16 +507,22 @@ static inline void tlb_start_vma(struct
 
 static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
-	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
+	if (tlb->fullmm)
 		return;
 
 	/*
-	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
-	 * the ranges growing with the unused space between consecutive VMAs,
-	 * but also the mmu_gather::vma_* flags from tlb_start_vma() rely on
-	 * this.
+	 * VM_PFNMAP is more fragile because the core mm will not track the
+	 * page mapcount -- there might not be page-frames for these PFNs after
+	 * all. Force flush TLBs for such ranges to avoid munmap() vs
+	 * unmap_mapping_range() races.
 	 */
-	tlb_flush_mmu_tlbonly(tlb);
+	if ((vma->vm_flags & VM_PFNMAP) || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS)) {
+		/*
+		 * Do a TLB flush and reset the range at VMA boundaries; this avoids
+		 * the ranges growing with the unused space between consecutive VMAs.
+		 */
+		tlb_flush_mmu_tlbonly(tlb);
+	}
 }
 
 /*
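
For context, tlb_start_vma() and tlb_end_vma() bracket each VMA's
teardown inside the generic unmap path, so the new check fires once per
VMA as it is torn down. A simplified sketch of that shape (helper names
suffixed _sketch are hypothetical stand-ins, not the real mm/memory.c
code):

	static void unmap_page_range_sketch(struct mmu_gather *tlb,
					    struct vm_area_struct *vma,
					    unsigned long addr, unsigned long end)
	{
		tlb_start_vma(tlb, vma);	/* per-VMA setup */
		/* clear PTEs and batch up the pages/ranges to free */
		zap_pte_entries_sketch(tlb, vma, addr, end);
		tlb_end_vma(tlb, vma);		/* with this patch: force the
						 * TLB flush here for VM_PFNMAP,
						 * instead of deferring it to
						 * tlb_finish_mmu() */
	}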
