Message-ID: <20231021193857.GA6451@monkey>
Date:   Sat, 21 Oct 2023 12:38:57 -0700
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Jane Chu <jane.chu@...cle.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 7/8] hugetlb: batch TLB flushes when freeing vmemmap

On 10/21/23 11:20, Jane Chu wrote:
> Hi, Mike,
> 
> On 10/18/2023 7:31 PM, Mike Kravetz wrote:
> > From: Joao Martins <joao.m.martins@...cle.com>
> > 
> > Now that a list of pages is deduplicated at once, the TLB
> > flush can be batched for all vmemmap pages that got remapped.
> > 
> [..]
> 
> > @@ -719,19 +737,28 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
> >   	list_for_each_entry(folio, folio_list, lru) {
> >   		int ret = __hugetlb_vmemmap_optimize(h, &folio->page,
> > -								&vmemmap_pages);
> > +						&vmemmap_pages,
> > +						VMEMMAP_REMAP_NO_TLB_FLUSH);
> >   		/*
> >   		 * Pages to be freed may have been accumulated.  If we
> >   		 * encounter an ENOMEM,  free what we have and try again.
> > +		 * This can occur when splitting fails halfway
> > +		 * through and head page allocation also fails. In this
> > +		 * case __hugetlb_vmemmap_optimize() would free memory,
> > +		 * allowing more vmemmap remaps to occur.
> >   		 */
> >   		if (ret == -ENOMEM && !list_empty(&vmemmap_pages)) {
> > +			flush_tlb_all();
> >   			free_vmemmap_page_list(&vmemmap_pages);
> >   			INIT_LIST_HEAD(&vmemmap_pages);
> > -			__hugetlb_vmemmap_optimize(h, &folio->page, &vmemmap_pages);
> > +			__hugetlb_vmemmap_optimize(h, &folio->page,
> > +						&vmemmap_pages,
> > +						VMEMMAP_REMAP_NO_TLB_FLUSH);
> >   		}
> >   	}
> > +	flush_tlb_all();
> 
> It seems that if folio_list is empty, we would still pay for a TLB flush
> here. Perhaps it's worth checking for an empty list up front and
> returning?

Good point.

hugetlb_vmemmap_optimize_folios is only called from
prep_and_add_allocated_folios and prep_and_add_bootmem_folios.  I
previously thought about adding a check like the following at the
beginning of those routines.

	if (list_empty(folio_list))
		return;

However, that seemed like over-optimizing at the time. But, as you point
out above, such a check would avoid the TLB flush as well as an
unnecessary hugetlb_lock lock/unlock cycle.
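
For illustration, in one of those callers the early return might sit
something like this (a sketch only; the rest of the function is elided
and this is not the exact code from the series):

	static void prep_and_add_allocated_folios(struct hstate *h,
						  struct list_head *folio_list)
	{
		/*
		 * Nothing to do.  Returning early also skips the
		 * flush_tlb_all() inside hugetlb_vmemmap_optimize_folios()
		 * and the hugetlb_lock lock/unlock cycle below.
		 */
		if (list_empty(folio_list))
			return;

		/* Send list for bulk vmemmap optimization processing */
		hugetlb_vmemmap_optimize_folios(h, folio_list);

		/* ... add the folios to the pool under hugetlb_lock ... */
	}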

We can add something like this as an optimization.  I am not too concerned
about it right now because these routines are generally called very
infrequently, as the result of a user request to change the size of hugetlb
pools.
-- 
Mike Kravetz
