Message-ID: <20220627070730.GD2159330@hori.linux.bs1.fc.nec.co.jp>
Date:   Mon, 27 Jun 2022 07:07:31 +0000
From:   HORIGUCHI NAOYA(堀口 直也) 
        <naoya.horiguchi@....com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
CC:     Naoya Horiguchi <nao.horiguchi@...il.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        David Hildenbrand <david@...hat.com>,
        Miaohe Lin <linmiaohe@...wei.com>,
        Liu Shixin <liushixin2@...wei.com>,
        Yang Shi <shy828301@...il.com>,
        Oscar Salvador <osalvador@...e.de>,
        Muchun Song <songmuchun@...edance.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 3/9] mm/hugetlb: make pud_huge() and huge_pud() aware
 of non-present pud entry

On Fri, Jun 24, 2022 at 05:02:01PM -0700, Mike Kravetz wrote:
> On 06/24/22 08:51, Naoya Horiguchi wrote:
> > From: Naoya Horiguchi <naoya.horiguchi@....com>
> > 
> > follow_pud_mask() does not support non-present pud entries now.  As far as
> > I tested on an x86_64 server, follow_pud_mask() still simply returns
> > no_page_table() for a non-present pud entry due to pud_bad(), so no severe
> > user-visible effect should happen.  But generally we should call
> > follow_huge_pud() for non-present pud entries of 1GB hugetlb pages.
> > 
> > Update pud_huge() and follow_huge_pud() to handle non-present pud entries.
> > The changes are similar to the previous work for pmd entries in commit
> > e66f17ff7177 ("mm/hugetlb: take page table lock in follow_huge_pmd()") and
> > commit cbef8478bee5 ("mm/hugetlb: pmd_huge() returns true for non-present
> > hugepage").
> > 
> > Signed-off-by: Naoya Horiguchi <naoya.horiguchi@....com>
> > ---
> >  arch/x86/mm/hugetlbpage.c |  3 ++-
> >  mm/hugetlb.c              | 26 +++++++++++++++++++++++++-
> >  2 files changed, 27 insertions(+), 2 deletions(-)
> 
> Thanks.  Overall this looks good with the typos you already noticed corrected.
> One question below.
> > 
> > diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
> > index a0d023cb4292..5fb86fb49ba8 100644
> > --- a/arch/x86/mm/hugetlbpage.c
> > +++ b/arch/x86/mm/hugetlbpage.c
> > @@ -70,7 +70,8 @@ int pmd_huge(pmd_t pmd)
> >  
> >  int pud_huge(pud_t pud)
> >  {
> > -	return !!(pud_val(pud) & _PAGE_PSE);
> > +	return !pud_none(pud) &&
> > +		(pud_val(pud) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
> >  }
> >  #endif
> >  
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index f59f43c06601..b7ae5f73f3b2 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -6946,10 +6946,34 @@ struct page * __weak
> >  follow_huge_pud(struct mm_struct *mm, unsigned long address,
> >  		pud_t *pud, int flags)
> >  {
> > +	struct page *page = NULL;
> > +	spinlock_t *ptl;
> > +	pte_t pte;
> > +
> >  	if (flags & (FOLL_GET | FOLL_PIN))
> >  		return NULL;
> >  
> > -	return pte_page(*(pte_t *)pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
> > +retry:
> > +	ptl = huge_pte_lock(hstate_sizelog(PUD_SHIFT), mm, (pte_t *)pud);
> > +	if (!pud_huge(*pud))
> > +		goto out;
> > +	pte = huge_ptep_get((pte_t *)pud);
> > +	if (pte_present(pte)) {
> > +		page = pud_page(*pud) + ((address & ~PUD_MASK) >> PAGE_SHIFT);
> > +		if (WARN_ON_ONCE(!try_grab_page(page, flags))) {
> 
> The call to try_grab_page() seems a bit strange, since flags will not include
> FOLL_GET | FOLL_PIN, as tested above.  Is try_grab_page() always going to be a
> no-op here?

Thanks, you're right.  The "flags & (FOLL_GET | FOLL_PIN)" check should be
updated.  It should probably be aligned with what follow_huge_pmd() checks
first; see the sketch below.
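
A minimal sketch of that alignment, assuming follow_huge_pmd()'s
FOLL_PIN-only WARN_ON_ONCE() entry check is the intended model (an
illustration only, not an actual follow-up patch):

-	if (flags & (FOLL_GET | FOLL_PIN))
+	/*
+	 * FOLL_PIN is not supported for follow_page(); ordinary GUP goes
+	 * through follow_hugetlb_page().  Rejecting only FOLL_PIN here,
+	 * as follow_huge_pmd() does, keeps FOLL_GET callers on this path,
+	 * so the try_grab_page() call below is no longer a no-op.
+	 */
+	if (WARN_ON_ONCE(flags & FOLL_PIN))
 		return NULL;

With a check like this, follow_huge_pud() would take the FOLL_GET
reference itself via try_grab_page(), matching follow_huge_pmd().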

Thanks,
Naoya Horiguchi

> 
> -- 
> Mike Kravetz
> 
> > +			page = NULL;
> > +			goto out;
> > +		}
> > +	} else {
> > +		if (is_hugetlb_entry_migration(pte)) {
> > +			spin_unlock(ptl);
> > +			__migration_entry_wait(mm, (pte_t *)pud, ptl);
> > +			goto retry;
> > +		}
> > +	}
> > +out:
> > +	spin_unlock(ptl);
> > +	return page;
> >  }
> >  
> >  struct page * __weak
> > -- 
> > 2.25.1
> > 
