Message-ID: <f350b114-c932-4516-98f6-caf3599208f8@lucifer.local>
Date: Wed, 25 Jun 2025 13:14:42 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Dev Jain <dev.jain@....com>
Cc: akpm@...ux-foundation.org, david@...hat.com, ziy@...dia.com,
        baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com,
        npache@...hat.com, ryan.roberts@....com, baohua@...nel.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] khugepaged: Optimize
 __collapse_huge_page_copy_succeeded() by PTE batching

You forgot the v2 here :) This breaks b4 shazam...

I managed to do this on the cover letter (but not the patches) of a series
before, so you're in good company... ;)

On Wed, Jun 25, 2025 at 11:28:04AM +0530, Dev Jain wrote:
> Use PTE batching to optimize __collapse_huge_page_copy_succeeded().
>
> On arm64, suppose khugepaged is scanning a pte-mapped 2MB THP for collapse.
> Then, calling ptep_clear() for every pte will cause a TLB flush for every
> contpte block. Instead, clear_full_ptes() does a
> contpte_try_unfold_partial(), which flushes the TLB only for the starting
> and ending contpte blocks (if any) that partially overlap with the range
> khugepaged is looking at.
>
> For all arches, there should be a benefit from batching the atomic
> operations on mapcounts via folio_remove_rmap_ptes().
>
> No issues were observed with mm-selftests.
>
> Signed-off-by: Dev Jain <dev.jain@....com>

Overall looking way way better! Just some nits below.
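
For anyone following along, the arm64 win is easier to see next to the
generic fallback: without an arch override, clear_full_ptes() is just a
per-PTE loop, roughly this (paraphrased from include/linux/pgtable.h, so
double-check the details against your tree):

	static inline void clear_full_ptes(struct mm_struct *mm,
			unsigned long addr, pte_t *ptep, unsigned int nr,
			int full)
	{
		for (;;) {
			/* One arch-level clear per PTE, no batching. */
			ptep_get_and_clear_full(mm, addr, ptep, full);
			if (--nr == 0)
				break;
			ptep++;
			addr += PAGE_SIZE;
		}
	}

arm64 provides its own implementation that clears whole contpte blocks at
once and only unfolds (and so flushes) the partially-covered blocks at
either end of the range.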

> ---
>  mm/khugepaged.c | 27 +++++++++++++++++++++------
>  1 file changed, 21 insertions(+), 6 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index d45d08b521f6..3944b112d452 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -700,12 +700,15 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>  						spinlock_t *ptl,
>  						struct list_head *compound_pagelist)
>  {
> +	unsigned long end = address + HPAGE_PMD_SIZE;
>  	struct folio *src, *tmp;
> -	pte_t *_pte;
>  	pte_t pteval;
> +	pte_t *_pte;
> +	int nr_ptes;
>
> -	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> -	     _pte++, address += PAGE_SIZE) {
> +	for (_pte = pte; _pte < pte + HPAGE_PMD_NR; _pte += nr_ptes,
> +	     address += nr_ptes * PAGE_SIZE) {

Thanks, this is much better.

> +		nr_ptes = 1;
>  		pteval = ptep_get(_pte);
>  		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
>  			add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
> @@ -719,21 +722,33 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>  				ksm_might_unmap_zero_page(vma->vm_mm, pteval);
>  			}
>  		} else {
> +			const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> +			int max_nr_ptes;
> +
>  			struct page *src_page = pte_page(pteval);
>
>  			src = page_folio(src_page);
>  			if (!folio_test_large(src))
>  				release_pte_folio(src);
> +
> +			max_nr_ptes = (end - address) >> PAGE_SHIFT;
> +			if (folio_test_large(src))
> +				nr_ptes = folio_pte_batch(src, address, _pte,
> +							  pteval, max_nr_ptes,
> +							  flags, NULL, NULL, NULL);

Nit, but max_nr_ptes is only used here, so you could declare and set it
inside this branch, e.g.:

			if (folio_test_large(src)) {
				int max_nr_ptes = (end - address) >> PAGE_SHIFT;

				nr_ptes = folio_pte_batch(src, address, _pte,
							  pteval, max_nr_ptes,
							  flags, NULL, NULL, NULL);
			}

BTW I think David raised it, but is there a way to wrap folio_pte_batch() so
we don't have to pass NULL, NULL, NULL here? :)
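
Something like this, perhaps (purely a sketch, and the name is made up):

	/*
	 * Hypothetical wrapper: callers that don't care about the
	 * any_writable/any_young/any_dirty outputs can skip the three
	 * trailing NULLs.
	 */
	static inline int folio_pte_batch_simple(struct folio *folio,
			unsigned long addr, pte_t *start_ptep, pte_t pte,
			int max_nr, fpb_t flags)
	{
		return folio_pte_batch(folio, addr, start_ptep, pte, max_nr,
				       flags, NULL, NULL, NULL);
	}

But that's a cleanup for a separate series.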


Oh, and if we take the max_nr_ptes suggestion above, we can also fold the
release_pte_folio() call into the same conditional:

			if (folio_test_large(src)) {
				int max_nr_ptes = (end - address) >> PAGE_SHIFT;

				nr_ptes = folio_pte_batch(src, address, _pte,
							  pteval, max_nr_ptes,
							  flags, NULL, NULL, NULL);
			} else {
				release_pte_folio(src);
			}

Which is neater.

> +
>  			/*
>  			 * ptl mostly unnecessary, but preempt has to
>  			 * be disabled to update the per-cpu stats
>  			 * inside folio_remove_rmap_pte().
>  			 */
>  			spin_lock(ptl);
> -			ptep_clear(vma->vm_mm, address, _pte);
> -			folio_remove_rmap_pte(src, src_page, vma);
> +			clear_full_ptes(vma->vm_mm, address, _pte, nr_ptes,
> +					/* full = */ false);
> +			folio_remove_rmap_ptes(src, src_page, nr_ptes, vma);
>  			spin_unlock(ptl);
> -			free_folio_and_swap_cache(src);
> +			free_swap_cache(src);
> +			folio_put_refs(src, nr_ptes);
>  		}
>  	}
>
> --
> 2.30.2
>
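
One more note on the free_folio_and_swap_cache() -> free_swap_cache() +
folio_put_refs() change, since the reason for the split is easy to miss:
if I'm reading it right, the combined helper only ever drops a single
reference, roughly:

	/* What free_folio_and_swap_cache(src) boils down to: */
	free_swap_cache(src);
	folio_put(src);		/* drops one reference */

whereas after batching we have cleared nr_ptes PTE mappings of src, so we
need to drop nr_ptes references, one per mapping:

	free_swap_cache(src);
	folio_put_refs(src, nr_ptes);

So the split is needed for correctness, not just style.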
