Message-ID: <aMtp0-mV5_33AgYt@hyeyoo>
Date: Thu, 18 Sep 2025 11:09:23 +0900
From: Harry Yoo <harry.yoo@...cle.com>
To: Jinliang Zheng <alexjlzheng@...il.com>
Cc: Liam.Howlett@...cle.com, akpm@...ux-foundation.org,
        alexjlzheng@...cent.com, arnd@...db.de, bp@...en8.de,
        dave.hansen@...ux.intel.com, david@...hat.com, geert@...ux-m68k.org,
        hpa@...or.com, joro@...tes.org, jroedel@...e.de, kas@...nel.org,
        linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linux@...linux.org.uk, lorenzo.stoakes@...cle.com,
        mhocko@...e.com, mingo@...hat.com, rppt@...nel.org, surenb@...gle.com,
        tglx@...utronix.de, thuth@...hat.com, urezki@...il.com, vbabka@...e.cz,
        vincenzo.frascino@....com, x86@...nel.org
Subject: Re: [PATCH] mm: introduce ARCH_PAGE_TABLE_SYNC_MASK_VMALLOC to sync
 kernel mapping conditionally

On Thu, Sep 18, 2025 at 09:31:30AM +0800, Jinliang Zheng wrote:
> On Thu, 18 Sep 2025 01:41:04 +0900, harry.yoo@...cle.com wrote:
> > On Wed, Sep 17, 2025 at 11:48:29PM +0800, alexjlzheng@...il.com wrote:
> > > From: Jinliang Zheng <alexjlzheng@...cent.com>
> > > 
> > > After commit 6eb82f994026 ("x86/mm: Pre-allocate P4D/PUD pages for
> > > vmalloc area"), we don't need to synchronize kernel mappings in the
> > > vmalloc area on x86_64.
> > 
> > Right.
> > 
> > > And commit 58a18fe95e83 ("x86/mm/64: Do not sync vmalloc/ioremap
> > > mappings") actually does this.
> > 
> > Right.
> > 
> > > But commit 6659d0279980 ("x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK
> > > and arch_sync_kernel_mappings()") breaks this.
> > 
> > Good point.
> > 
> > > This patch introduces ARCH_PAGE_TABLE_SYNC_MASK_VMALLOC to avoid
> > > unnecessary kernel mappings synchronization of the vmalloc area.
> > > 
> > > Fixes: 6659d0279980 ("x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings()")
> > 
> > The commit is getting backported to -stable kernels.
> > 
> > Do you think this can cause a visible performance regression from the
> > user's point of view, or is it just a nice optimization to have?
> > (And do you have any data to support it?)
> 
> Haha, when I woke up in bed this morning, I suddenly realized that I
> might have pushed a worthless patch and wasted everyone's precious time.
> 
> Sorry for that. :-(

It's okay!

> After commit 6eb82f994026 ("x86/mm: Pre-allocate P4D/PUD pages for vmalloc area"),
> pgd_alloc_track()/p4d_alloc_track() in vmalloc() and apply_to_page_range() should
> always return a mask that does not contain PGTBL_PGD_MODIFIED (5-level page tables)
> or PGTBL_P4D_MODIFIED (4-level page tables), thereby bypassing the call to
> arch_sync_kernel_mappings(). Right?

Yeah, I was confused about it too ;)

I think you're right: because the vmalloc area is already populated,
p4d_alloc_track() / pud_alloc_track() won't set
PGTBL_PGD_MODIFIED or PGTBL_P4D_MODIFIED in the mask.
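
From memory, the helper in include/linux/pgalloc_track.h looks roughly like
this (just a sketch, the exact code may differ slightly):

static inline p4d_t *p4d_alloc_track(struct mm_struct *mm, pgd_t *pgd,
				     unsigned long address,
				     pgtbl_mod_mask *mod_mask)
{
	if (unlikely(pgd_none(*pgd))) {
		/*
		 * Only reached when the PGD entry is still empty; the
		 * pre-allocation from commit 6eb82f994026 makes sure this
		 * never happens for the vmalloc range.
		 */
		if (__p4d_alloc(mm, pgd, address))
			return NULL;
		*mod_mask |= PGTBL_PGD_MODIFIED;
	}

	return p4d_offset(pgd, address);
}

So the mask bit is only set when a new upper-level table actually has to be
allocated, and pud_alloc_track() does the same for PGTBL_P4D_MODIFIED.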

> thanks,
> Jinliang Zheng. :)
> 
> > 
> > > Signed-off-by: Jinliang Zheng <alexjlzheng@...cent.com>
> > > ---
> > >  arch/arm/include/asm/page.h                 | 3 ++-
> > >  arch/x86/include/asm/pgtable-2level_types.h | 3 ++-
> > >  arch/x86/include/asm/pgtable-3level_types.h | 3 ++-
> > >  include/linux/pgtable.h                     | 4 ++++
> > >  mm/memory.c                                 | 2 +-
> > >  mm/vmalloc.c                                | 6 +++---
> > >  6 files changed, 14 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 0ba4f6b71847..cd2488043f8f 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -3170,7 +3170,7 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
> > >  			break;
> > >  	} while (pgd++, addr = next, addr != end);
> > >  
> > > -	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
> > > +	if (mask & ARCH_PAGE_TABLE_SYNC_MASK_VMALLOC)
> > >  		arch_sync_kernel_mappings(start, start + size);
> > 
> > But vmalloc is not the only user of apply_to_page_range()?
> > 
> > -- 
> > Cheers,
> > Harry / Hyeonggon

-- 
Cheers,
Harry / Hyeonggon
