Date:   Mon, 11 Jul 2022 17:04:02 +0200
From:   Jann Horn <jannh@...gle.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Linus Torvalds <torvalds@...uxfoundation.org>,
        Will Deacon <will@...nel.org>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, Dave Airlie <airlied@...ux.ie>,
        Daniel Vetter <daniel@...ll.ch>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Guo Ren <guoren@...nel.org>, David Miller <davem@...emloft.net>
Subject: Re: [PATCH 4/4] mmu_gather: Force tlb-flush VM_PFNMAP vmas

On Sat, Jul 9, 2022 at 10:38 AM Peter Zijlstra <peterz@...radead.org> wrote:
> On Fri, Jul 08, 2022 at 04:04:38PM +0200, Jann Horn wrote:
> > On Fri, Jul 8, 2022 at 9:19 AM Peter Zijlstra <peterz@...radead.org> wrote:
> > > @@ -507,16 +502,22 @@ static inline void tlb_start_vma(struct
> > >
> > >  static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
> > >  {
> > > -       if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
> > > +       if (tlb->fullmm)
> > >                 return;
> >
> > Is this correct, or would there still be a race between MM teardown
> > (which sets ->fullmm, see exit_mmap()->tlb_gather_mmu_fullmm()) and
> > unmap_mapping_range()? My understanding is that ->fullmm only
> > guarantees a flush at tlb_finish_mmu(), but here we're trying to
> > ensure a flush before unlink_file_vma().
>
> fullmm is when the last user of the mm goes away; there should not be

(FWIW, there also seems to be an error path in write_ldt ->
free_ldt_pgtables -> tlb_gather_mmu_fullmm where ->fullmm can be set
for a TLB shootdown in a live process, but that's irrelevant for this
patch.)

> any races on the address space at that time. Also see the comment with
> tlb_gather_mmu_fullmm() and its users.

Ah, right. Aside from the LDT weirdness, fullmm is only used in
exit_mmap(), and at that point there can be no more parallel access
to the MM except for remote memory reaping (which exit_mmap() excludes
by taking mmap_write_lock()) and rmap walks...
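
(For reference, the teardown ordering I'm relying on here is roughly
the following; this is a hand-written sketch of the exit_mmap() flow
as I understand it, not the actual source, with the arguments elided:)

    mmap_write_lock(mm);              /* excludes remote reaping         */
    tlb_gather_mmu_fullmm(&tlb, mm);  /* sets tlb->fullmm                */
    unmap_vmas(&tlb, ...);            /* tear down PTEs, defer the flush */
    free_pgtables(&tlb, ...);         /* unlink_file_vma() happens here  */
    tlb_finish_mmu(&tlb);             /* the one guaranteed TLB flush    */
    mmap_write_unlock(mm);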

> Subject: mmu_gather: Force TLB-flush VM_PFNMAP|VM_MIXEDMAP vmas
> From: Peter Zijlstra <peterz@...radead.org>
> Date: Thu Jul 7 11:51:16 CEST 2022
>
> Jann reported a race between munmap() and unmap_mapping_range(), where
> unmap_mapping_range() will no-op once unmap_vmas() has unlinked the
> VMA; however munmap() will not yet have invalidated the TLBs.
>
> Therefore unmap_mapping_range() will complete while there are still
> (stale) TLB entries for the specified range.
>
> Mitigate this by force flushing TLBs for VM_PFNMAP ranges.
>
> Reported-by: Jann Horn <jannh@...gle.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>

Looks good to me.
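
(In case it helps anyone reading along, the shape I expect
tlb_end_vma() to end up with is sketched below. Only the fullmm hunk
is quoted above, so the tlb->vma_pfn flag, presumably set for
VM_PFNMAP|VM_MIXEDMAP vmas when the gather starts on a VMA, is my
reading of the rest of the patch rather than the exact code:)

    static inline void
    tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
    {
            if (tlb->fullmm)
                    return;

            /*
             * Force a TLB flush at VMA boundaries for PFNMAP/MIXEDMAP
             * mappings: the flush happens before free_pgtables() calls
             * unlink_file_vma(), so a racing unmap_mapping_range()
             * cannot find the VMA already unlinked while stale TLB
             * entries for the range still exist.
             */
            if (tlb->vma_pfn || !IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
                    tlb_flush_mmu_tlbonly(tlb);
    }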
