Message-ID: <CAJuCfpFejoBUSknS=VTEK0gAhOHG3vUe751pxccT-cBGcBquAw@mail.gmail.com>
Date:   Wed, 5 Jul 2023 10:14:38 -0700
From:   Suren Baghdasaryan <surenb@...gle.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     akpm@...ux-foundation.org, jirislaby@...nel.org,
        jacobly.alt@...il.com, holger@...lied-asynchrony.com,
        hdegoede@...hat.com, michel@...pinasse.org, jglisse@...gle.com,
        mhocko@...e.com, vbabka@...e.cz, hannes@...xchg.org,
        mgorman@...hsingularity.net, dave@...olabs.net,
        willy@...radead.org, liam.howlett@...cle.com, peterz@...radead.org,
        ldufour@...ux.ibm.com, paulmck@...nel.org, mingo@...hat.com,
        will@...nel.org, luto@...nel.org, songliubraving@...com,
        peterx@...hat.com, dhowells@...hat.com, hughd@...gle.com,
        bigeasy@...utronix.de, kent.overstreet@...ux.dev,
        punit.agrawal@...edance.com, lstoakes@...il.com,
        peterjung1337@...il.com, rientjes@...gle.com, chriscli@...gle.com,
        axelrasmussen@...gle.com, joelaf@...gle.com, minchan@...gle.com,
        rppt@...nel.org, jannh@...gle.com, shakeelb@...gle.com,
        tatashin@...gle.com, edumazet@...gle.com, gthelen@...gle.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        stable@...r.kernel.org
Subject: Re: [PATCH v2 1/2] fork: lock VMAs of the parent process when forking

On Wed, Jul 5, 2023 at 9:10 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Wed, Jul 5, 2023 at 1:08 AM David Hildenbrand <david@...hat.com> wrote:
> >
> > On 05.07.23 08:37, Suren Baghdasaryan wrote:
> > > When forking a child process, the parent write-protects an anonymous page
> > > and COW-shares it with the child being forked using copy_present_pte().
> > > The parent's TLB is flushed right before we drop the parent's mmap_lock in
> > > dup_mmap(). If the parent takes a write fault before that TLB flush and
> > > ends up replacing that anonymous page in do_wp_page() (because it is
> > > COW-shared with the child), this can leave stale writable TLB entries
> > > targeting the wrong (old) page.
> > > A similar issue happened in the past with userfaultfd (see the
> > > flush_tlb_page() call inside do_wp_page()).
> > > Lock the VMAs of the parent process when forking a child, which prevents
> > > concurrent page faults during the fork operation and avoids this issue.
> > > This fix can potentially regress some fork-heavy workloads. Kernel build
> > > time showed no noticeable regression on a 56-core machine, while a stress
> > > test mapping 10000 VMAs and forking 5000 times in a tight loop showed a
> > > ~5% regression. If such a fork-time regression is unacceptable, disabling
> > > CONFIG_PER_VMA_LOCK should restore the original performance. Further
> > > optimizations are possible if this regression proves to be problematic.
> >
> > Out of interest, did you also populate page tables / pages for some of these
> > VMAs, or is this primarily looping over 10000 VMAs that don't actually copy any
> > page tables?
>
> I did not populate the page tables, therefore this represents the
> worst case scenario (the share of time used to lock the VMAs is
> maximized).
>
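Roughly, the stress test amounts to something like the sketch below (illustrative,
not the exact program; the alternating protections are only there so neighbouring
mappings don't get merged into a single VMA):

/*
 * Hypothetical stress test along the lines described above: map many
 * small anonymous VMAs without ever touching them, then fork and reap
 * a child in a tight loop, so fork time is dominated by per-VMA work
 * rather than by page-table copies.
 */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define NR_VMAS         10000
#define NR_FORKS        5000

int main(void)
{
        long page = sysconf(_SC_PAGESIZE);
        struct timespec start, end;

        /*
         * Alternate protections so neighbouring mappings cannot be
         * merged into one VMA. The memory is never accessed, so no
         * page tables are populated.
         */
        for (int i = 0; i < NR_VMAS; i++) {
                int prot = (i & 1) ? PROT_READ : PROT_READ | PROT_WRITE;

                if (mmap(NULL, page, prot,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0) == MAP_FAILED) {
                        perror("mmap");
                        return 1;
                }
        }

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < NR_FORKS; i++) {
                pid_t pid = fork();

                if (pid < 0) {
                        perror("fork");
                        return 1;
                }
                if (pid == 0)
                        _exit(0);       /* child exits immediately */
                waitpid(pid, NULL, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("%d forks over %d VMAs took %.3f s\n", NR_FORKS, NR_VMAS,
               (end.tv_sec - start.tv_sec) +
               (end.tv_nsec - start.tv_nsec) / 1e9);
        return 0;
}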
> >
> > >
> > > Suggested-by: David Hildenbrand <david@...hat.com>
> > > Reported-by: Jiri Slaby <jirislaby@...nel.org>
> > > Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/
> > > Reported-by: Holger Hoffstätte <holger@...lied-asynchrony.com>
> > > Closes: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@applied-asynchrony.com/
> > > Reported-by: Jacob Young <jacobly.alt@...il.com>
> > > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624
> > > Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first")
> > > Cc: stable@...r.kernel.org
> > > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > > ---
> > >   kernel/fork.c | 1 +
> > >   1 file changed, 1 insertion(+)
> > >
> > > diff --git a/kernel/fork.c b/kernel/fork.c
> > > index b85814e614a5..d2e12b6d2b18 100644
> > > --- a/kernel/fork.c
> > > +++ b/kernel/fork.c
> > > @@ -686,6 +686,7 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
> > >       for_each_vma(old_vmi, mpnt) {
> > >               struct file *file;
> > >
> > > +             vma_start_write(mpnt);
> > >               if (mpnt->vm_flags & VM_DONTCOPY) {
> > >                       vm_stat_account(mm, mpnt->vm_flags, -vma_pages(mpnt));
> > >                       continue;
> >
> > After the mmap_write_lock_killable(), there will still be a period where page
> > faults can happen. Essentially, page faults can happen for a VMA until we lock that VMA.
> >
> > I cannot immediately name something that breaks by allowing that, and this change
> > should fix the issue at hand, but exotic things like
> >
> >         flush_cache_dup_mm(oldmm);
> >
> > make me wonder whether we really want to allow it, or whether there is some other
> > corner case in fork() handling that really doesn't expect concurrent page faults
> > (and, thereby, page table modifications) during fork.
> >
> > For example, Documentation/core-api/cachetlb.rst says
> >
> > 2) ``void flush_cache_dup_mm(struct mm_struct *mm)``
> >
> >         This interface flushes an entire user address space from
> >         the caches.  That is, after running, there will be no cache
> >         lines associated with 'mm'.
> >
> >         This interface is used to handle whole address space
> >         page table operations such as what happens during fork.
> >
> >         This option is separate from flush_cache_mm to allow some
> >         optimizations for VIPT caches.
> >
>
> I see. So, we really need to lock all VMAs before
> flush_cache_dup_mm(). Makes sense. I'll post an update to this patch
> shortly.

v3 of the patchset with this fix is posted at
https://lore.kernel.org/all/20230705171213.2843068-1-surenb@google.com/
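
For reference, the v3 ordering is essentially the alternative David sketches
further down: write-lock every parent VMA in a first walk, before
flush_cache_dup_mm() and the main copy loop. Condensed, and purely
illustrative rather than the literal v3 diff, the shape is:

        /*
         * Illustrative only: lock all parent VMAs up front so no page
         * fault can race with anything that follows in dup_mmap().
         */
        if (IS_ENABLED(CONFIG_PER_VMA_LOCK)) {
                for_each_vma(old_vmi, mpnt)
                        vma_start_write(mpnt);
                vma_iter_init(&old_vmi, oldmm, 0);  /* rewind for the copy loop */
        }
        flush_cache_dup_mm(oldmm);      /* now runs with no concurrent faults */
        uprobe_dup_mmap(oldmm, mm);
        /* ... per-VMA copy loop follows ... */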

> Thanks,
> Suren.
>
> >
> > An alternative that requires another VMA walk would be
> >
> > diff --git a/kernel/fork.c b/kernel/fork.c
> > index 41c964104b58..0f182d3f049b 100644
> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -662,6 +662,13 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
> >                 retval = -EINTR;
> >                 goto fail_uprobe_end;
> >         }
> > +
> > +       /* Disallow any page faults early by locking all VMAs. */
> > +       if (IS_ENABLED(CONFIG_PER_VMA_LOCK)) {
> > +               for_each_vma(old_vmi, mpnt)
> > +                       vma_start_write(mpnt);
> > +               vma_iter_init(&old_vmi, oldmm, 0);
> > +       }
> >         flush_cache_dup_mm(oldmm);
> >         uprobe_dup_mmap(oldmm, mm);
> >         /*
> > --
> > 2.41.0
> >
> >
> > Unless there are other thoughts, I guess your change is fine regarding the problem
> > at hand. Not so sure regarding other corner cases; that's why I'm spelling it out.
> >
> >
> > Acked-by: David Hildenbrand <david@...hat.com>
> >
> > --
> > Cheers,
> >
> > David / dhildenb
> >
