Date:   Fri, 17 Jun 2022 15:41:45 +0800
From:   Muchun Song <songmuchun@...edance.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     Oscar Salvador <osalvador@...e.de>, akpm@...ux-foundation.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: hugetlb: remove minimum_order variable

On Thu, Jun 16, 2022 at 11:03:34AM -0700, Mike Kravetz wrote:
> On 06/16/22 11:38, Muchun Song wrote:
> > The following commit:
> > 
> >   commit 641844f5616d ("mm/hugetlb: introduce minimum hugepage order")
> > 
> > fixed a static checker warning by introducing the global variable
> > minimum_order.  However, the local variable in dissolve_free_huge_pages()
> > can instead be initialized to huge_page_order(&default_hstate), which
> > silences the warning equally well.  So remove minimum_order to simplify
> > the code.
> > 
> > Signed-off-by: Muchun Song <songmuchun@...edance.com>
> > ---
> >  mm/hugetlb.c | 18 +++++++-----------
> >  1 file changed, 7 insertions(+), 11 deletions(-)
> > 
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 8ea4e51d8186..405d1c7441c9 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -66,12 +66,6 @@ static bool hugetlb_cma_page(struct page *page, unsigned int order)
> >  #endif
> >  static unsigned long hugetlb_cma_size __initdata;
> >  
> > -/*
> > - * Minimum page order among possible hugepage sizes, set to a proper value
> > - * at boot time.
> > - */
> > -static unsigned int minimum_order __read_mostly = UINT_MAX;
> > -
> >  __initdata LIST_HEAD(huge_boot_pages);
> >  
> >  /* for command line parsing */
> > @@ -2161,11 +2155,17 @@ int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> >  	unsigned long pfn;
> >  	struct page *page;
> >  	int rc = 0;
> > +	unsigned int order;
> > +	struct hstate *h;
> >  
> >  	if (!hugepages_supported())
> >  		return rc;
> >  
> > -	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order) {
> > +	order = huge_page_order(&default_hstate);
> > +	for_each_hstate(h)
> > +		order = min(order, huge_page_order(h));
> 
> Since we will be traversing the array of hstates, I wonder if we should
> optimize this further?  We could:
> - Pass the node into dissolve_free_huge_pages
> - When traversing the hstate array, check free_huge_pages_node[node] in
>   each hstate.
> - If no free huge pages, no need to do the pfn scan.
> 
> Yes, the above is racy.  However, the code is already racy as hugetlb
> page state can change while performing this scan.  We only hold the hugetlb
> lock when checking an individual hugetlb page.  The change above may
> make the code a bit more racy.
>

Agree. Something like the untested sketch below is what I understand you
mean. The new 'node' parameter, the 'has_free' flag, and the early return
are all hypothetical here (callers would need updating, e.g. passing
pfn_to_nid(start_pfn)), and the per-node counters are read without the
hugetlb lock, so the check is only a heuristic:
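
int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn,
			     int node)
{
	unsigned long pfn;
	struct page *page;
	int rc = 0;
	unsigned int order;
	bool has_free = false;
	struct hstate *h;

	if (!hugepages_supported())
		return rc;

	/*
	 * While walking the hstates to compute the minimum order anyway,
	 * note whether this node has any free huge pages.  Checking
	 * nr_huge_pages_node[node] instead would be the less racy variant.
	 */
	order = huge_page_order(&default_hstate);
	for_each_hstate(h) {
		order = min(order, huge_page_order(h));
		if (h->free_huge_pages_node[node])
			has_free = true;
	}

	/* No free huge pages on this node, nothing to dissolve. */
	if (!has_free)
		return rc;

	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order) {
		page = pfn_to_page(pfn);
		rc = dissolve_free_huge_page(page);
		if (rc)
			break;
	}

	return rc;
}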
 
> If we think that is too racy, then we could at least check
> nr_huge_pages_node[node].  If there are no hugetlb pages on the node
> there is no need to scan.  And, I think we have isolated this pfn range
> so no new hugetlb pages can be created.
> 
> Not sure if the above optimizations are worth the effort.  IIUC, the
> pfn range is at most a memory block in size, which is not huge.
>

Right. It is not huge.
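
For scale (assuming x86-64 defaults, where a memory block is typically
128 MiB and PMD-sized huge pages are 2 MiB, i.e. order 9), the loop above
steps in 2 MiB increments and does at most 128 MiB / 2 MiB = 64 iterations
per block.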

I have no strong opinion. dissolve_free_huge_pages() is only called from
the memory offline path, which is not a hot path. If we think the
optimization is worthwhile, it should be a separate patch.

Thanks.
 
