Message-ID: <Yqtp2hA+5pRiFskC@monkey>
Date:   Thu, 16 Jun 2022 10:35:22 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Baoquan He <bhe@...hat.com>
Cc:     Baolin Wang <baolin.wang@...ux.alibaba.com>,
        songmuchun@...edance.com, akpm@...ux-foundation.org,
        catalin.marinas@....com, will@...nel.org,
        anshuman.khandual@....com, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] arm64/hugetlb: Implement arm64 specific
 hugetlb_mask_last_hp

On 06/16/22 20:05, Baoquan He wrote:
> On 06/16/22 at 11:34am, Baolin Wang wrote:
> > The HugeTLB address ranges are linearly scanned during fork, unmap and
> > remap operations, and the linear scan can skip to the end of the range
> > mapped by a page table page when it hits a non-present entry, which
> > helps speed up linear scanning of the HugeTLB address ranges.
> > 
> > So hugetlb_mask_last_hp() is introduced to update the address in the
> > HugeTLB linear-scan loop by returning the mask of the last huge page
> > mapped by the associated page table page [1] when a non-present entry
> > is encountered.
> > 
> > To cover the ARM64-specific cont-pte/pmd HugeTLB sizes, this patch
> > implements an ARM64-specific hugetlb_mask_last_hp() to handle that case.
> > 
> > [1] https://lore.kernel.org/linux-mm/20220527225849.284839-1-mike.kravetz@oracle.com/
> > 
> > Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> > ---
> > Note: this patch is based on the series: "hugetlb: speed up linear
> > address scanning" from Mike. Mike, please fold it into your series.
> > Thanks.
> > ---
> >  arch/arm64/mm/hugetlbpage.c | 20 ++++++++++++++++++++
> >  1 file changed, 20 insertions(+)
> > 
> > diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> > index e2a5ec9..958935c 100644
> > --- a/arch/arm64/mm/hugetlbpage.c
> > +++ b/arch/arm64/mm/hugetlbpage.c
> > @@ -368,6 +368,26 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
> >  	return NULL;
> >  }
> >  
> > +unsigned long hugetlb_mask_last_hp(struct hstate *h)
> > +{
> > +	unsigned long hp_size = huge_page_size(h);
> 
> hp_size may not be a good name; it reminds me of hotplug. I would name
> it hpage_size even though it adds a few more characters.
> 

How about just hugetlb_mask_last_page?  Since the routine is prefixed
with 'hugetlb' and we are passing in a pointer to a hstate, I think there
is enough context to know we are talking about a huge page mask as
opposed to a base page mask.

If OK, I will change the name in my patches and here.
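
Since the quoted diff above is snipped right after the hp_size
declaration, here is a purely illustrative sketch (untested, from my
reading of the description, using the proposed hugetlb_mask_last_page
name and the hpage_size variable name suggested above) of the kind of
mapping the arm64 version would need to cover the cont-pte/pmd sizes;
treat it as a sketch rather than the actual patch body:

/* Illustrative sketch only -- not the actual patch body. */
unsigned long hugetlb_mask_last_page(struct hstate *h)
{
	unsigned long hpage_size = huge_page_size(h);

	switch (hpage_size) {
#ifndef __PAGETABLE_PMD_FOLDED
	case PUD_SIZE:
		return PGDIR_SIZE - PUD_SIZE;
#endif
	case CONT_PMD_SIZE:
		return PUD_SIZE - CONT_PMD_SIZE;
	case PMD_SIZE:
		return PUD_SIZE - PMD_SIZE;
	case CONT_PTE_SIZE:
		return PMD_SIZE - CONT_PTE_SIZE;
	default:
		break;
	}

	/* Unknown size: no skipping, caller falls back to stepping by sz. */
	return 0UL;
}
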
-- 
Mike Kravetz
