Message-ID: <CAHbLzkqGdnwY4P8jKQR0ojm6QV6b3dBi5pwrC1UJ4dqi3EqS4w@mail.gmail.com>
Date:   Fri, 2 Sep 2022 10:45:20 -0700
From:   Yang Shi <shy828301@...il.com>
To:     Peter Xu <peterx@...hat.com>
Cc:     david@...hat.com, kirill.shutemov@...ux.intel.com,
        jhubbard@...dia.com, jgg@...dia.com, hughd@...gle.com,
        akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: gup: fix the fast GUP race against THP collapse

On Fri, Sep 2, 2022 at 9:00 AM Peter Xu <peterx@...hat.com> wrote:
>
> On Thu, Sep 01, 2022 at 04:50:45PM -0700, Yang Shi wrote:
> > On Thu, Sep 1, 2022 at 4:26 PM Peter Xu <peterx@...hat.com> wrote:
> > >
> > > Hi, Yang,
> > >
> > > On Thu, Sep 01, 2022 at 03:27:07PM -0700, Yang Shi wrote:
> > > > Since general RCU GUP fast was introduced in commit 2667f50e8b81 ("mm:
> > > > introduce a general RCU get_user_pages_fast()"), a TLB flush is no longer
> > > > sufficient to handle concurrent GUP-fast in all cases; it only handles
> > > > traditional IPI-based GUP-fast correctly.
> > >
> > > If TLB flush (or, IPI broadcasts) used to work to protect against gup-fast,
> > > I'm kind of confused why it's not sufficient even with RCU gup?  Won't
> > > that keep working as long as interrupts are disabled (which the current
> > > fast-gup still does)?
> >
> > Actually the wording was copied from David's commit log for his
> > PageAnonExclusive fix. My understanding is the IPI broadcast still
> > works, but it may not be supported by all architectures and is not
> > preferred anymore. So we should avoid depending on the IPI broadcast IIUC.
> >
> > >
> > > IIUC the issue is you suspect not all archs correctly implemented
> > > pmdp_collapse_flush(), or am I wrong?
> >
> > This is a possible fix; please see below for details.
> >
> > >
> > > > On architectures that send
> > > > an IPI broadcast on TLB flush, it works as expected.  But on the
> > > > architectures that do not use an IPI to broadcast the TLB flush, the
> > > > below race may happen:
> > > >
> > > >    CPU A                                          CPU B
> > > > THP collapse                                     fast GUP
> > > >                                               gup_pmd_range() <-- see valid pmd
> > > >                                                   gup_pte_range() <-- work on pte
> > > > pmdp_collapse_flush() <-- clear pmd and flush
> > > > __collapse_huge_page_isolate()
> > > >     check page pinned <-- before GUP bump refcount
> > > >                                                       pin the page
> > > >                                                       check PTE <-- no change
> > > > __collapse_huge_page_copy()
> > > >     copy data to huge page
> > > >     ptep_clear()
> > > > install huge pmd for the huge page
> > > >                                                       return the stale page
> > > > discard the stale page
> > > >
> > > > The race can be fixed by checking whether the PMD has changed after
> > > > taking the page pin in fast GUP, just like it already does for the PTE.
> > > > If the PMD has changed, it means there may be a parallel THP collapse,
> > > > so GUP should back off.
> > >
> > > Could the race also be fixed by implementing pmdp_collapse_flush()
> > > correctly for the archs where it is missing? Do you know which arch(s)
> > > are broken with it?
> >
> > Yes, and this was suggested by me in the first place, but per the
> > suggestion from John and David, this is not the preferred way. I think
> > that is because:
> >
> > Firstly, using an IPI to serialize against fast GUP is not recommended
> > anymore, since fast GUP already checks the PTE and then backs off, so we
> > should avoid it.
> > Secondly, if checking the PMD and then backing off solves the problem,
> > why would we still need the broadcast IPI? It doesn't sound performant.
> >
> > >
> > > It's just not clear to me whether this patch is an optimization or a fix,
> > > and if it's a fix, whether the IPI broadcast in the ppc
> > > pmdp_collapse_flush() would still be needed.
> >
> > It is a fix, and the fix will make the IPI broadcast unnecessary.
>
> How about another patch to remove the ppc impl too?  Then it can be a
> two-patch series.

BTW, I don't think we could remove the ppc implementation since it is
different from the generic pmdp_collapse_flush(), particularly for the
hash part IIUC.

The generic version calls flush_tlb_range() -> hash__flush_tlb_range()
for hash, but the hash call is actually a no-op. The ppc version calls
hash__pmdp_collapse_flush() -> flush_tlb_pmd_range(), which does
something useful.

But, as I mentioned in another thread, we should be able to remove the
IPI call, which just calls a dummy function.
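
To make that concrete, the generic helper does roughly the below
(paraphrased from my reading of mm/pgtable-generic.c, not a verbatim
quote):

pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
			  pmd_t *pmdp)
{
	pmd_t pmd;

	/* Clear the pmd so concurrent page table walkers see pmd_none() */
	pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
	/*
	 * Then flush the range covered by the pmd.  On ppc with the hash
	 * MMU this flush ends up in hash__flush_tlb_range(), which does
	 * nothing, hence the separate ppc implementation that goes through
	 * hash__pmdp_collapse_flush() instead.
	 */
	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
	return pmd;
}

So the clear-then-flush shape is the same; the ppc difference is in what
happens on the flush side (plus the IPI call mentioned above).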

>
> So that the ppc developers can be copied, and maybe it helps to have the
> ppc people look at the current approach too.
>
> Then the last piece of it is the s390 pmdp_collapse_flush().  I'm wondering
> whether the generic pmdp_collapse_flush() would be good enough, since the
> only addition compared to the s390 one will be flush_tlb_range() (which is
> a further __tlb_flush_mm_lazy).  David may have some thoughts.
>
> The patch itself looks good to me; one trivial nit below.
>
> >
> > >
> > > Thanks,
> > >
> > > >
> > > > Also update the stale comment about serializing against fast GUP in
> > > > khugepaged.
> > > >
> > > > Fixes: 2667f50e8b81 ("mm: introduce a general RCU get_user_pages_fast()")
> > > > Signed-off-by: Yang Shi <shy828301@...il.com>
> > > > ---
> > > >  mm/gup.c        | 30 ++++++++++++++++++++++++------
> > > >  mm/khugepaged.c | 10 ++++++----
> > > >  2 files changed, 30 insertions(+), 10 deletions(-)
> > > >
> > > > diff --git a/mm/gup.c b/mm/gup.c
> > > > index f3fc1f08d90c..4365b2811269 100644
> > > > --- a/mm/gup.c
> > > > +++ b/mm/gup.c
> > > > @@ -2380,8 +2380,9 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
> > > >  }
> > > >
> > > >  #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
> > > > -static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> > > > -                      unsigned int flags, struct page **pages, int *nr)
> > > > +static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
> > > > +                      unsigned long end, unsigned int flags,
> > > > +                      struct page **pages, int *nr)
> > > >  {
> > > >       struct dev_pagemap *pgmap = NULL;
> > > >       int nr_start = *nr, ret = 0;
> > > > @@ -2423,7 +2424,23 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
> > > >                       goto pte_unmap;
> > > >               }
> > > >
> > > > -             if (unlikely(pte_val(pte) != pte_val(*ptep))) {
> > > > +             /*
> > > > +              * THP collapse conceptually does:
> > > > +              *   1. Clear and flush PMD
> > > > +              *   2. Check the base page refcount
> > > > +              *   3. Copy data to huge page
> > > > +              *   4. Clear PTE
> > > > +              *   5. Discard the base page
> > > > +              *
> > > > +              * So fast GUP may race with THP collapse then pin and
> > > > +              * return an old page since TLB flush is no longer sufficient
> > > > +              * to serialize against fast GUP.
> > > > +              *
> > > > +              * Check PMD, if it is changed just back off since it
> > > > +              * means there may be parallel THP collapse.
>
> Would you mind rewording this comment a bit?  It feels a bit weird to me
> to suddenly mention thp collapse, especially its details.
>
> Maybe some statement on the whole history of why we check the pte, and in
> what case the pmd check is needed (where the thp collapse example can be
> moved, imho)?
>
> One of my attempts, for reference:
>
>                 /*
>                  * Fast-gup relies on pte change detection to avoid
>                  * concurrent pgtable operations.
>                  *
>                  * To pin the page, fast-gup needs to do below in order:
>                  * (1) pin the page (by prefetching pte), then (2) check
>                  * pte not changed.
>                  *
>                  * For the rest of pgtable operations where pgtable updates
>                  * can be racy with fast-gup, we need to do (1) clear pte,
>                  * then (2) check whether page is pinned.
>                  *
>                  * Above will work for all pte-level operations, including
>                  * thp split.
>                  *
>                  * For thp collapse, it's a bit more complicated because
>                  * with RCU pgtable free fast-gup can be walking a pgtable
>                  * page that is being freed (so pte is still valid but pmd
>                  * can be cleared already).  To avoid race in such
>                  * condition, we need to also check pmd here to make sure
>                  * pmd doesn't change (corresponds to pmdp_collapse_flush()
>                  * in the thp collapse code path).
>                  */
>
> If you agree with the comment change, feel free to add:
>
> Acked-by: Peter Xu <peterx@...hat.com>
>
> Thanks,
>
> --
> Peter Xu
>
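
For completeness, since the quoted diff is cut off above: the idea is to
recheck both levels after the page has been pinned.  A rough sketch of what
the check in gup_pte_range() ends up looking like (the shape of the idea,
not necessarily the exact hunk):

		/*
		 * Recheck that neither the pte nor the pmd changed while
		 * the page was being pinned: the pte check catches
		 * pte-level races (e.g. THP split), the pmd check catches
		 * THP collapse, which clears the pmd via
		 * pmdp_collapse_flush() before checking the page refcount.
		 */
		if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
		    unlikely(pte_val(pte) != pte_val(*ptep))) {
			gup_put_folio(folio, 1, flags);
			goto pte_unmap;
		}

If either comparison trips, fast GUP drops the pin and bails out, and the
caller falls back to the slow GUP path, which takes the proper locks.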
