Message-ID: <CAHkRjk6cQTu7N+UanTspWm_LyABRhfPHQn1+PPdaHYrTC3PtfQ@mail.gmail.com>
Date:   Wed, 4 Sep 2019 15:22:03 +0100
From:   Catalin Marinas <catalin.marinas@....com>
To:     Anshuman Khandual <anshuman.khandual@....com>
Cc:     Jia He <justin.he@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Matthew Wilcox <willy@...radead.org>,
        Jérôme Glisse <jglisse@...hat.com>,
        Ralph Campbell <rcampbell@...dia.com>,
        Jason Gunthorpe <jgg@...pe.ca>,
        Peter Zijlstra <peterz@...radead.org>,
        Dave Airlie <airlied@...hat.com>,
        "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
        Thomas Hellstrom <thellstrom@...are.com>,
        Souptick Joarder <jrdr.linux@...il.com>,
        linux-mm <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: fix double page fault on arm64 if PTE_AF is cleared

On Wed, 4 Sep 2019 at 04:20, Anshuman Khandual
<anshuman.khandual@....com> wrote:
> On 09/04/2019 06:28 AM, Jia He wrote:
> > @@ -2152,20 +2153,30 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
> >        */
> >       if (unlikely(!src)) {
> >               void *kaddr = kmap_atomic(dst);
> > -             void __user *uaddr = (void __user *)(va & PAGE_MASK);
> > +             void __user *uaddr = (void __user *)(vmf->address & PAGE_MASK);
> > +             pte_t entry;
> >
> >               /*
> >                * This really shouldn't fail, because the page is there
> >                * in the page tables. But it might just be unreadable,
> >                * in which case we just give up and fill the result with
> > -              * zeroes.
> > +              * zeroes. If PTE_AF is cleared on arm64, it might
> > +              * cause a double page fault here, so make the pte young here
> >                */
> > +             if (!pte_young(vmf->orig_pte)) {
> > +                     entry = pte_mkyoung(vmf->orig_pte);
> > +                     if (ptep_set_access_flags(vmf->vma, vmf->address,
> > +                             vmf->pte, entry, vmf->flags & FAULT_FLAG_WRITE))
> > +                             update_mmu_cache(vmf->vma, vmf->address,
> > +                                             vmf->pte);
> > +             }
> > +
> >               if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE))
>
> Should not page faults be disabled when doing this?

Page faults are already disabled by kmap_atomic(). But that only
means you don't deadlock trying to take the mmap_sem again.
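
For reference, on a !CONFIG_HIGHMEM kernel of this era kmap_atomic()
is roughly the following (a simplified sketch, not the exact source):

	static inline void *kmap_atomic(struct page *page)
	{
		preempt_disable();
		/* faults taken during the copy are suppressed rather
		 * than handled, so the copy simply aborts early */
		pagefault_disable();
		return page_address(page);
	}

With faults suppressed, __copy_from_user_inatomic() returns the
number of bytes it could not copy instead of entering the fault
handler.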

> Ideally it should
> have also called access_ok() on the user address range first.

Not necessary: we've already got a vma, and the access against that
vma has been checked.
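
To illustrate, access_ok() is no more than a range check against the
user address ceiling; a simplified sketch (my_access_ok is a made-up
name, the real post-5.0 helper is access_ok(addr, size)):

	static inline bool my_access_ok(const void __user *addr,
					unsigned long size)
	{
		unsigned long a = (unsigned long)addr;

		return size <= TASK_SIZE && a <= TASK_SIZE - size;
	}

The vma we faulted on already guarantees vmf->address is a valid user
address, so repeating the check buys nothing.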

> The point
> is that the caller of __copy_from_user_inatomic() must make sure that
> there cannot be any page fault while doing the actual copy.

When you copy from a user address, that's not guaranteed in general;
it's more of a best effort.

> But also it
> should be done in a generic way, something like in access_ok(). The current
> proposal here seems very specific to the arm64 case.

The commit log didn't explain the problem properly. On arm64 without
the hardware Access Flag, copying from user fails because the pte is
old and cannot be marked young: with page faults disabled, the fault
needed to set the flag is suppressed, so the copy aborts. So we
always end up with a zeroed page after fork() + CoW for pfn mappings.
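
Concretely, the failure hits the existing fallback in cow_user_page()
(paraphrased from mm/memory.c of this period):

	if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
		/*
		 * The pte is old, hardware cannot set PTE_AF, and the
		 * software Access Flag fault is suppressed while page
		 * faults are disabled, so the copy aborts and we fall
		 * back to a zero-filled destination page.
		 */
		clear_page(kaddr);
	}

Making the pte young up front, as the patch does, means the copy no
longer needs a fault to set the Access Flag.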

-- 
Catalin
