Message-ID: <cbebac71-0be6-ae66-02b3-243d0f8c39e8@oracle.com>
Date:   Sat, 21 Oct 2023 11:20:13 -0700
From:   Jane Chu <jane.chu@...cle.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 7/8] hugetlb: batch TLB flushes when freeing vmemmap

Hi, Mike,

On 10/18/2023 7:31 PM, Mike Kravetz wrote:
> From: Joao Martins <joao.m.martins@...cle.com>
> 
> Now that a list of pages is deduplicated at once, the TLB
> flush can be batched for all vmemmap pages that got remapped.
> 
[..]

>   
> @@ -719,19 +737,28 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
>   
>   	list_for_each_entry(folio, folio_list, lru) {
>   		int ret = __hugetlb_vmemmap_optimize(h, &folio->page,
> -								&vmemmap_pages);
> +						&vmemmap_pages,
> +						VMEMMAP_REMAP_NO_TLB_FLUSH);
>   
>   		/*
>   		 * Pages to be freed may have been accumulated.  If we
>   		 * encounter an ENOMEM,  free what we have and try again.
> 		 * This can occur in the case that both splitting fails
> +		 * halfway and head page allocation also failed. In this
> +		 * case __hugetlb_vmemmap_optimize() would free memory
> +		 * allowing more vmemmap remaps to occur.
>   		 */
>   		if (ret == -ENOMEM && !list_empty(&vmemmap_pages)) {
> +			flush_tlb_all();
>   			free_vmemmap_page_list(&vmemmap_pages);
>   			INIT_LIST_HEAD(&vmemmap_pages);
> -			__hugetlb_vmemmap_optimize(h, &folio->page, &vmemmap_pages);
> +			__hugetlb_vmemmap_optimize(h, &folio->page,
> +						&vmemmap_pages,
> +						VMEMMAP_REMAP_NO_TLB_FLUSH);
>   		}
>   	}
>   
> +	flush_tlb_all();

It seems that if folio_list is empty, we would still pay for a TLB flush
here. Perhaps it's worth checking for an empty list up front and
returning early?

thanks,
-jane

>   	free_vmemmap_page_list(&vmemmap_pages);
>   }
>   
