Date:   Fri, 24 Sep 2021 12:26:21 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Yang Shi <shy828301@...il.com>
Cc:     HORIGUCHI NAOYA(堀口 直也) 
        <naoya.horiguchi@....com>, Hugh Dickins <hughd@...gle.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Matthew Wilcox <willy@...radead.org>,
        Peter Xu <peterx@...hat.com>,
        Oscar Salvador <osalvador@...e.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux MM <linux-mm@...ck.org>,
        Linux FS-devel Mailing List <linux-fsdevel@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [v2 PATCH 1/5] mm: filemap: check if THP has hwpoisoned subpage
 for PMD page fault

On Thu, Sep 23, 2021 at 01:39:49PM -0700, Yang Shi wrote:
> On Thu, Sep 23, 2021 at 10:15 AM Yang Shi <shy828301@...il.com> wrote:
> >
> > On Thu, Sep 23, 2021 at 7:39 AM Kirill A. Shutemov <kirill@...temov.name> wrote:
> > >
> > > On Wed, Sep 22, 2021 at 08:28:26PM -0700, Yang Shi wrote:
> > > > When handling a shmem page fault, a THP with a corrupted subpage could be
> > > > PMD mapped if certain conditions are satisfied.  But the kernel is supposed
> > > > to send SIGBUS when trying to map a hwpoisoned page.
> > > >
> > > > There are two paths which may do the PMD map: fault around and regular fault.
> > > >
> > > > Before commit f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
> > > > the situation was even worse in the fault around path.  The THP could be PMD
> > > > mapped as long as the VMA fit, regardless of which subpage was accessed and
> > > > corrupted.  After this commit, the THP could be PMD mapped as long as the
> > > > head page is not corrupted.
> > > >
> > > > In the regulat fault path the THP could be PMD mapped as long as the corrupted
> > >
> > > s/regulat/regular/
> > >
> > > > page is not accessed and the VMA fits.
> > > >
> > > > This loophole could be fixed by iterating over every subpage to check whether
> > > > any of them is hwpoisoned, but that is somewhat costly in the page fault path.
> > > >
> > > > So introduce a new page flag, called HasHWPoisoned, on the first tail page.
> > > > It indicates that the THP has hwpoisoned subpage(s).  It is set if any subpage
> > > > of the THP is found hwpoisoned by memory failure and cleared when the THP is
> > > > freed or split.
> > > >
> > > > Cc: <stable@...r.kernel.org>
> > > > Suggested-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> > > > Signed-off-by: Yang Shi <shy828301@...il.com>
> > > > ---
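
(As a rough illustration of the flag the commit message describes, and not code
from this patch as posted: the declaration in include/linux/page-flags.h could
plausibly use the PF_SECOND policy so the accessors operate on the first tail
page. The sketch assumes a PG_has_hwpoisoned bit is added to enum pageflags;
both the bit name and placement are illustrative.)

#if defined(CONFIG_MEMORY_FAILURE) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
/*
 * PageHasHWPoisoned: at least one subpage of this compound page is
 * hwpoisoned.  The PF_SECOND policy keeps the bit on the first tail page
 * and provides the PageHasHWPoisoned()/SetPageHasHWPoisoned()/
 * ClearPageHasHWPoisoned() accessors used elsewhere in this patch.
 */
PAGEFLAG(HasHWPoisoned, has_hwpoisoned, PF_SECOND)
#else
PAGEFLAG_FALSE(HasHWPoisoned)
#endif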
> > >
> > > ...
> > >
> > > > diff --git a/mm/filemap.c b/mm/filemap.c
> > > > index dae481293b5d..740b7afe159a 100644
> > > > --- a/mm/filemap.c
> > > > +++ b/mm/filemap.c
> > > > @@ -3195,12 +3195,14 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> > > >       }
> > > >
> > > >       if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
> > > > -         vm_fault_t ret = do_set_pmd(vmf, page);
> > > > -         if (!ret) {
> > > > -                 /* The page is mapped successfully, reference consumed. */
> > > > -                 unlock_page(page);
> > > > -                 return true;
> > > > -         }
> > > > +             vm_fault_t ret = do_set_pmd(vmf, page);
> > > > +             if (ret == VM_FAULT_FALLBACK)
> > > > +                     goto out;
> > >
> > > Hm.. What? I don't get it. Who will establish the page table in the PMD then?
> >
> > Aha, yeah. It should jump to the PMD populate section below. Will fix
> > it in the next version.
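
(To make the intent concrete, a minimal sketch of the filemap_map_pmd() hunk
in which the VM_FAULT_FALLBACK case simply falls through to the existing
pmd_none() populate branch instead of taking the goto; this illustrates what
Yang describes, it is not the next version of the patch.)

	if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
		vm_fault_t ret = do_set_pmd(vmf, page);
		if (!ret) {
			/* The page is mapped successfully, reference consumed. */
			unlock_page(page);
			return true;
		}
		/*
		 * ret == VM_FAULT_FALLBACK: do not bail out here.  Fall
		 * through so the pmd_none() branch below still populates a
		 * regular page table and the fault is handled with PTEs.
		 */
	}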
> >
> > >
> > > > +             if (!ret) {
> > > > +                     /* The page is mapped successfully, reference consumed. */
> > > > +                     unlock_page(page);
> > > > +                     return true;
> > > > +             }
> > > >       }
> > > >
> > > >       if (pmd_none(*vmf->pmd)) {
> > > > @@ -3220,6 +3222,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
> > > >               return true;
> > > >       }
> > > >
> > > > +out:
> > > >       return false;
> > > >  }
> > > >
> > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > index 5e9ef0fc261e..0574b1613714 100644
> > > > --- a/mm/huge_memory.c
> > > > +++ b/mm/huge_memory.c
> > > > @@ -2426,6 +2426,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> > > >       /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> > > >       lruvec = lock_page_lruvec(head);
> > > >
> > > > +     ClearPageHasHWPoisoned(head);
> > > > +
> > >
> > > Do we serialize the new flag with lock_page() or what? I mean, what
> > > prevents the flag from being set again after this point, but before
> > > ClearPageCompound()?
> >
> > No, not in this patch. But I think we could use the refcount. THP split
> > freezes the refcount, and the split is guaranteed to succeed after that
> > point, so the refcount can be checked in memory failure. The
> > SetPageHasHWPoisoned() call could be moved to __get_hwpoison_page(), to
> > run once get_page_unless_zero() has bumped the refcount successfully. If
> > the refcount is zero, it means the THP is under split or being freed; we
> > don't care about these two cases.
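
(A sketch of that idea follows. The body of __get_hwpoison_page() is heavily
simplified and its real shape is not shown in this thread; only the ordering
around get_page_unless_zero() reflects the suggestion above.)

static int __get_hwpoison_page(struct page *page)
{
	struct page *head = compound_head(page);

	/*
	 * Once the refcount is pinned, a THP split cannot freeze the
	 * refcount to zero, so __split_huge_page() cannot run under us and
	 * clear the flag concurrently.
	 */
	if (get_page_unless_zero(head)) {
		if (PageTransHuge(head))
			SetPageHasHWPoisoned(head);
		return 1;
	}

	/* Refcount was zero: the THP is being split or freed; nothing to do. */
	return 0;
}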
> 
> Setting the flag in __get_hwpoison_page() would make this patch depend
> on patch #3. However, this patch probably will be backported to older
> versions. To ease the backport, I'd like to have the refcount check in
> the same place where THP is checked. So, something like "if
> (PageTransHuge(hpage) && page_count(hpage) != 0)".
> 
> Then the call to set the flag could be moved to __get_hwpoison_page()
> in the following patch (after patch #3). Does this sound good to you?

Could you show the code? I'm not sure I follow. The page_count(hpage) check
looks racy to me. What if the split happens just after the check?
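
(For readers following the thread, a sketch of the check being proposed, with
a comment marking the window being asked about. The placement inside
memory_failure() and the surrounding code are assumed, not quoted.)

	/* Illustrative only: the check proposed above. */
	if (PageTransHuge(hpage) && page_count(hpage) != 0) {
		/*
		 * Race window: the refcount is read without holding a
		 * reference, so the THP may be split right after the check.
		 * __split_huge_page() would then have already done its
		 * ClearPageHasHWPoisoned(), and hpage may no longer be a
		 * compound page by the time the flag is set below.
		 */
		SetPageHasHWPoisoned(hpage);
	}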

-- 
 Kirill A. Shutemov
