Message-ID: <b82b860c-8ad9-409a-8668-e3db11b9f7a5@lucifer.local>
Date: Tue, 15 Jul 2025 10:43:11 +0100
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Dev Jain <dev.jain@....com>
Cc: akpm@...ux-foundation.org, david@...hat.com, ziy@...dia.com,
        baolin.wang@...ux.alibaba.com, Liam.Howlett@...cle.com,
        npache@...hat.com, ryan.roberts@....com, baohua@...nel.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/3] khugepaged: Optimize collapse_pte_mapped_thp()
 for large folios by PTE batching

On Tue, Jul 15, 2025 at 12:04:56PM +0530, Dev Jain wrote:
>
> On 26/06/25 10:17 am, Lorenzo Stoakes wrote:
> > On Thu, Jun 26, 2025 at 09:18:47AM +0530, Dev Jain wrote:
> > > On 25/06/25 6:41 pm, Lorenzo Stoakes wrote:
> > > > On Wed, Jun 25, 2025 at 11:28:05AM +0530, Dev Jain wrote:
> > > > > Use PTE batching to optimize collapse_pte_mapped_thp().
> > > > >
> > > > > On arm64, suppose khugepaged is scanning a pte-mapped 2MB THP for collapse.
> > > > > Then, calling ptep_clear() for every pte will cause a TLB flush for every
> > > > > contpte block. Instead, clear_full_ptes() does a
> > > > > contpte_try_unfold_partial(), which will flush the TLB only for the
> > > > > starting and ending contpte blocks (if any), should they partially
> > > > > overlap with the range khugepaged is looking at.
> > > > >
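As a sketch of the contpte point above: a tiny userspace model of the
partial-overlap flush logic. CONT_PTES and the helper below are made up
for illustration, they are not the arm64 kernel API.

#include <stdio.h>

#define CONT_PTES 16	/* PTEs per contpte block (arm64, 4K pages) */

/*
 * Model: when clearing PTEs in [start, end), only a contpte block that
 * partially overlaps the range needs an eager flush; fully covered
 * blocks are simply cleared. ptep_clear() per PTE would instead have
 * flushed every contpte block the range touches.
 */
static void flush_partial_blocks(unsigned int start, unsigned int end)
{
	unsigned int first_block = start / CONT_PTES;
	unsigned int last_block = (end - 1) / CONT_PTES;

	/* Head block is partial iff the range starts mid-block. */
	if (start % CONT_PTES)
		printf("flush head contpte block %u\n", first_block);

	/* Tail block is partial iff the range ends mid-block. */
	if (end % CONT_PTES && last_block != first_block)
		printf("flush tail contpte block %u\n", last_block);
}

int main(void)
{
	/* A fully aligned 2MB THP range (512 PTEs): no eager flush at all. */
	flush_partial_blocks(0, 512);
	/* An unaligned sub-range: only its head and tail blocks get flushed. */
	flush_partial_blocks(5, 100);
	return 0;
}
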
> > > > > For all arches, there should be a benefit from batching the atomic
> > > > > operations on mapcounts via folio_remove_rmap_ptes().
> > > > >
> > > > > Note that we do not need to change the check
> > > > > "if (folio_page(folio, i) != page)": if the i'th page of the folio is
> > > > > equal to the first page of our batch, then pages i + 1, ...,
> > > > > i + nr_batch_ptes - 1 of the folio will be equal to the corresponding
> > > > > pages of our batch, since the batch maps consecutive pages.
> > > > >
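And a userspace model of the batching itself, with PFNs standing in for
PTEs; none of this is the real folio_pte_batch() interface, it just
shows why one batch needs one rmap update and why checking the batch's
first page against folio_page(folio, i) covers the whole batch.

#include <stdio.h>

/* Length of the run of consecutive PFNs starting at ptes[i]. */
static int pte_batch_len(const unsigned long *ptes, int i, int max_nr)
{
	int nr = 1;

	while (nr < max_nr && ptes[i + nr] == ptes[i] + nr)
		nr++;
	return nr;
}

int main(void)
{
	/* 8 PTEs: a 4-page run, a hole (0 == pte_none), a 3-page run. */
	unsigned long ptes[8] = { 100, 101, 102, 103, 0, 200, 201, 202 };
	int nr_mapped_ptes = 0;
	int i = 0;

	while (i < 8) {
		if (!ptes[i]) {		/* unmapped entry, skip it */
			i++;
			continue;
		}
		int nr = pte_batch_len(ptes, i, 8 - i);

		/*
		 * One batched rmap/mapcount update replaces nr single-PTE
		 * updates; since the batch maps consecutive PFNs, matching
		 * the first page implies the rest match too.
		 */
		printf("batch at pte %d: %d ptes, one rmap update\n", i, nr);
		nr_mapped_ptes += nr;
		i += nr;
	}
	printf("nr_mapped_ptes = %d\n", nr_mapped_ptes);
	return 0;
}
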
> > > > > No issues were observed with mm-selftests.
> > > > >
> > > > > Signed-off-by: Dev Jain <dev.jain@....com>
> > > > > ---
> > > > >    mm/khugepaged.c | 38 ++++++++++++++++++++++++++------------
> > > > >    1 file changed, 26 insertions(+), 12 deletions(-)
> > > > >
> > > > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > > > index 3944b112d452..4c8d33abfbd8 100644
> > > > > --- a/mm/khugepaged.c
> > > > > +++ b/mm/khugepaged.c
> > > > > @@ -1499,15 +1499,16 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
> > > > >    int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
> > > > >    			    bool install_pmd)
> > > > >    {
> > > > > +	int nr_mapped_ptes = 0, nr_batch_ptes, result = SCAN_FAIL;
> > > > >    	struct mmu_notifier_range range;
> > > > >    	bool notified = false;
> > > > >    	unsigned long haddr = addr & HPAGE_PMD_MASK;
> > > > > +	unsigned long end = haddr + HPAGE_PMD_SIZE;
> > > > >    	struct vm_area_struct *vma = vma_lookup(mm, haddr);
> > > > >    	struct folio *folio;
> > > > >    	pte_t *start_pte, *pte;
> > > > >    	pmd_t *pmd, pgt_pmd;
> > > > >    	spinlock_t *pml = NULL, *ptl;
> > > > > -	int nr_ptes = 0, result = SCAN_FAIL;
> > > > >    	int i;
> > > > >
> > > > >    	mmap_assert_locked(mm);
> > > > > @@ -1621,11 +1622,17 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
> > > > >    		goto abort;
> > > > >
> > > > >    	/* step 2: clear page table and adjust rmap */
> > > > > -	for (i = 0, addr = haddr, pte = start_pte;
> > > > > -	     i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE, pte++) {
> > > > > +	for (i = 0, addr = haddr, pte = start_pte; i < HPAGE_PMD_NR;
> > > > > +	     i += nr_batch_ptes, addr += nr_batch_ptes * PAGE_SIZE,
> > > > > +	     pte += nr_batch_ptes) {
> > > > > +		const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> > > > > +		int max_nr_batch_ptes = (end - addr) >> PAGE_SHIFT;
> > > > > +		struct folio *mapped_folio;
> > > > >    		struct page *page;
> > > > >    		pte_t ptent = ptep_get(pte);
> > > > >
> > > > > +		nr_batch_ptes = 1;
> > > > > +
> > > > >    		if (pte_none(ptent))
> > > > >    			continue;
> > > > >    		/*
> > > > > @@ -1639,26 +1646,33 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
> > > > >    			goto abort;
> > > > >    		}
> > > > >    		page = vm_normal_page(vma, addr, ptent);
> > > > > +		mapped_folio = page_folio(page);
> > > > > +
> > > > >    		if (folio_page(folio, i) != page)
> > > > >    			goto abort;
> > > > Isn't this asserting that folio == mapped_folio here? We're saying page is the
> > > > ith page of folio, so why do we need to look up mapped_folio?
> > > We need to check, for every PTE, whether it maps the right page; this may
> > > get disturbed by mremap and the like.
> > Right, but I'm saying mapped_folio == folio, right? You're literally asserting
> > it here. So there's no need to assign mapped_folio at all, just reference
> > folio, no?
> >
> > > > > +		mapped_folio = page_folio(page);
> > > > You're assigning this twice.
> > > Forgot to remove, thanks.
> > >
> > > > > +		nr_batch_ptes = folio_pte_batch(mapped_folio, addr, pte, ptent,
> > > > > +						max_nr_batch_ptes, flags,
> > > > > +						NULL, NULL, NULL);
> > > > > +
> > > > >    		/*
> > > > >    		 * Must clear entry, or a racing truncate may re-remove it.
> > > > >    		 * TLB flush can be left until pmdp_collapse_flush() does it.
> > > > >    		 * PTE dirty? Shmem page is already dirty; file is read-only.
> > > > >    		 */
> > > > > -		ptep_clear(mm, addr, pte);
> > > > > -		folio_remove_rmap_pte(folio, page, vma);
> > > > > -		nr_ptes++;
> > > > > +		clear_full_ptes(mm, addr, pte, nr_batch_ptes, /* full = */ false);
> > > > > +		folio_remove_rmap_ptes(folio, page, nr_batch_ptes, vma);
> > > > > +		nr_mapped_ptes += nr_batch_ptes;
> > > > >    	}
> > > > >
> > > > >    	if (!pml)
> > > > >    		spin_unlock(ptl);
> > > > >
> > > > >    	/* step 3: set proper refcount and mm_counters. */
> > > > > -	if (nr_ptes) {
> > > > > -		folio_ref_sub(folio, nr_ptes);
> > > > > -		add_mm_counter(mm, mm_counter_file(folio), -nr_ptes);
> > > > > +	if (nr_mapped_ptes) {
> > > > > +		folio_ref_sub(folio, nr_mapped_ptes);
> > > > > +		add_mm_counter(mm, mm_counter_file(folio), -nr_mapped_ptes);
> > > > >    	}
> > > > >
> > > > >    	/* step 4: remove empty page table */
> > > > > @@ -1691,10 +1705,10 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
> > > > >    			: SCAN_SUCCEED;
> > > > >    	goto drop_folio;
> > > > >    abort:
> > > > > -	if (nr_ptes) {
> > > > > +	if (nr_mapped_ptes) {
> > > > I know it's ironic coming from me :P but I'm not sure why we need to churn this
> > > > up by renaming?
> > > Because nr_ptes is an existing variable, and I need a new variable to
> > > advance the loop by the batch size at the end of each PTE batch.
> > I thought you eliminated nr_ptes as a variable here? Where else is it used?
> >
> > Oh how this code needs refactoring...
>
> If we retain nr_ptes, then the two variables will be nr_ptes and nr_mapped_ptes,
> which is confusing since the former name is unqualified while the latter carries
> the _mapped_ qualifier. So instead we now call them nr_batch_ptes and
> nr_mapped_ptes.
>

Sigh, this is still awful. But probably just existing awfulness. This whole
thing needs a tent thrown over it and fumigation... but again not your fault :)

I mean fine, this is fine then.

> >
> > > > >    		flush_tlb_mm(mm);
> > > > > -		folio_ref_sub(folio, nr_ptes);
> > > > > -		add_mm_counter(mm, mm_counter_file(folio), -nr_ptes);
> > > > > +		folio_ref_sub(folio, nr_mapped_ptes);
> > > > > +		add_mm_counter(mm, mm_counter_file(folio), -nr_mapped_ptes);
> > > > >    	}
> > > > >    unlock:
> > > > >    	if (start_pte)
> > > > > --
> > > > > 2.30.2
> > > > >
