Message-ID: <aMrkoBhIcP37YgyS@hyeyoo>
Date: Thu, 18 Sep 2025 01:41:04 +0900
From: Harry Yoo <harry.yoo@...cle.com>
To: alexjlzheng@...il.com
Cc: mingo@...hat.com, tglx@...utronix.de, jroedel@...e.de,
linux@...linux.org.uk, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, akpm@...ux-foundation.org,
david@...hat.com, lorenzo.stoakes@...cle.com, Liam.Howlett@...cle.com,
vbabka@...e.cz, rppt@...nel.org, surenb@...gle.com, mhocko@...e.com,
urezki@...il.com, arnd@...db.de, vincenzo.frascino@....com,
geert@...ux-m68k.org, thuth@...hat.com, kas@...nel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, joro@...tes.org,
Jinliang Zheng <alexjlzheng@...cent.com>
Subject: Re: [PATCH] mm: introduce ARCH_PAGE_TABLE_SYNC_MASK_VMALLOC to sync
kernel mapping conditionally
On Wed, Sep 17, 2025 at 11:48:29PM +0800, alexjlzheng@...il.com wrote:
> From: Jinliang Zheng <alexjlzheng@...cent.com>
>
> After commit 6eb82f994026 ("x86/mm: Pre-allocate P4D/PUD pages for
> vmalloc area"), we don't need to synchronize kernel mappings in the
> vmalloc area on x86_64.
Right.
> And commit 58a18fe95e83 ("x86/mm/64: Do not sync vmalloc/ioremap
> mappings") actually does this.
Right.
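
For reference, after that commit x86_64 fell back to the generic default in
include/linux/pgtable.h, which (roughly, quoting from memory rather than an
exact tree, so treat this as a sketch) looks like:

/*
 * Architectures that need the generic page-table walkers to call
 * arch_sync_kernel_mappings() set this to a combination of the
 * PGTBL_*_MODIFIED flags; the generic fallback is 0, i.e. never sync.
 */
#ifndef ARCH_PAGE_TABLE_SYNC_MASK
#define ARCH_PAGE_TABLE_SYNC_MASK 0
#endif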
> But commit 6659d0279980 ("x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK
> and arch_sync_kernel_mappings()") breaks this.
Good point.
> This patch introduces ARCH_PAGE_TABLE_SYNC_MASK_VMALLOC to avoid
> unnecessary kernel mappings synchronization of the vmalloc area.
>
> Fixes: 6659d0279980 ("x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings()")
The commit is getting backported to -stable kernels.
Do you think this can cause a visible performance regression from a user's
point of view, or is it just a nice optimization to have?
(and is there any data to support that?)
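
If I'm reading the diffstat right, the idea is a second, vmalloc-only mask
that x86_64 can override to opt out of vmalloc syncs. A rough sketch of what
I'd expect in include/linux/pgtable.h (my guess at the shape of it, not the
exact patch -- including whether the fallback is the existing mask or 0):

#ifndef ARCH_PAGE_TABLE_SYNC_MASK_VMALLOC
#define ARCH_PAGE_TABLE_SYNC_MASK_VMALLOC ARCH_PAGE_TABLE_SYNC_MASK
#endif

with x86_64 then defining ARCH_PAGE_TABLE_SYNC_MASK_VMALLOC to 0, since the
P4D/PUD pages for the vmalloc area are pre-allocated there.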
> Signed-off-by: Jinliang Zheng <alexjlzheng@...cent.com>
> ---
> arch/arm/include/asm/page.h | 3 ++-
> arch/x86/include/asm/pgtable-2level_types.h | 3 ++-
> arch/x86/include/asm/pgtable-3level_types.h | 3 ++-
> include/linux/pgtable.h | 4 ++++
> mm/memory.c | 2 +-
> mm/vmalloc.c | 6 +++---
> 6 files changed, 14 insertions(+), 7 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 0ba4f6b71847..cd2488043f8f 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3170,7 +3170,7 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
> break;
> } while (pgd++, addr = next, addr != end);
>
> - if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
> + if (mask & ARCH_PAGE_TABLE_SYNC_MASK_VMALLOC)
> arch_sync_kernel_mappings(start, start + size);
But vmalloc is not the only user of apply_to_page_range(), is it?
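
To make the concern concrete, a made-up (hypothetical) caller along these
lines would still allocate intermediate tables through
__apply_to_page_range() and set PGTBL_*_MODIFIED bits in the mask, yet the
check above would no longer trigger the sync for it:

#include <linux/mm.h>

/* hypothetical non-vmalloc user of apply_to_page_range() */
static int touch_pte(pte_t *pte, unsigned long addr, void *data)
{
        /* a real caller would initialize *pte for this kernel address here */
        return 0;
}

static int map_some_kernel_range(unsigned long start, unsigned long size)
{
        /*
         * Any P4D/PUD/PMD allocated on the way down is reported via the
         * mask, but the sync at the end of __apply_to_page_range() would
         * now only fire for ARCH_PAGE_TABLE_SYNC_MASK_VMALLOC bits.
         */
        return apply_to_page_range(&init_mm, start, size, touch_pte, NULL);
}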
--
Cheers,
Harry / Hyeonggon