Message-ID: <46dcddec-88a9-4b22-920c-8a3edeb2f027@arm.com>
Date: Tue, 16 Sep 2025 09:24:45 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Huang Ying <ying.huang@...ux.alibaba.com>,
 Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
 Andrew Morton <akpm@...ux-foundation.org>,
 David Hildenbrand <david@...hat.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
 Vlastimil Babka <vbabka@...e.cz>, Zi Yan <ziy@...dia.com>,
 Baolin Wang <baolin.wang@...ux.alibaba.com>,
 Yang Shi <yang@...amperecomputing.com>,
 "Christoph Lameter (Ampere)" <cl@...two.org>, Dev Jain <dev.jain@....com>,
 Barry Song <baohua@...nel.org>, Anshuman Khandual
 <anshuman.khandual@....com>, Yicong Yang <yangyicong@...ilicon.com>,
 Kefeng Wang <wangkefeng.wang@...wei.com>,
 Kevin Brodsky <kevin.brodsky@....com>,
 Yin Fengwei <fengwei_yin@...ux.alibaba.com>,
 linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
 linux-mm@...ck.org
Subject: Re: [RFC PATCH 2/2] arm64, tlbflush: don't TLBI broadcast if page
 reused in write fault

On 15/09/2025 04:29, Huang Ying wrote:
> A multi-threaded customer workload with a large memory footprint uses
> fork()/exec() to run some external programs every tens of seconds.  When
> running the workload on an arm64 server machine, a significant share of
> CPU cycles is spent in the TLB flushing functions; when running it on an
> x86_64 server machine, this is not observed.  As a result, performance
> on arm64 is much worse than on x86_64.
> 
> During the workload run, after fork()/exec() write-protects all
> pages in the parent process, memory writes in the parent process
> cause write protection faults.  The page fault handler then makes
> the PTE/PDE writable if the page can be reused, which is almost
> always the case in this workload.  On arm64, to avoid the write
> protection fault on other CPUs, the page fault handler flushes the TLB
> globally with TLBI broadcast after changing the PTE/PDE.  However, this
> isn't always necessary.  Firstly, it's safe to leave some stale
> read-only TLB entries as long as they are eventually flushed.
> Secondly, it's quite possible that the original read-only PTE/PDEs
> aren't cached in remote TLBs at all if the memory footprint is large.
> In fact, on x86_64, the page fault handler doesn't flush the remote
> TLBs in this situation, which benefits performance considerably.
> 
> To improve performance on arm64, make the write protection fault
> handler flush the TLB locally instead of globally via TLBI broadcast
> after making the PTE/PDE writable.  If stale read-only TLB entries
> remain on remote CPUs, the page fault handler on those CPUs will
> treat the resulting page fault as spurious and flush the stale entries.
> 
> To test the patchset, usemem.c from vm-scalability
> (https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git)
> was modified to support calling fork()/exec() periodically.  To mimic
> the behavior of the customer workload, usemem is run with 4 threads,
> accessing 100GB of memory, and calling fork()/exec() every 40 seconds.
> Test results show that with the patchset the usemem score improves by
> ~40.6%, and the cycles% of the TLB flush functions drops from ~50.5%
> to ~0.3% in the perf profile.

Overall, this looks like a simple and useful performance optimization - thanks!
I'm running it through our performance regression suite to see if it spots any
workloads where the change slows things down, and will report back once we have
the results.

> 
> Signed-off-by: Huang Ying <ying.huang@...ux.alibaba.com>
> Cc: Catalin Marinas <catalin.marinas@....com>
> Cc: Will Deacon <will@...nel.org>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: David Hildenbrand <david@...hat.com>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
> Cc: Vlastimil Babka <vbabka@...e.cz>
> Cc: Zi Yan <ziy@...dia.com>
> Cc: Baolin Wang <baolin.wang@...ux.alibaba.com>
> Cc: Ryan Roberts <ryan.roberts@....com>
> Cc: Yang Shi <yang@...amperecomputing.com>
> Cc: "Christoph Lameter (Ampere)" <cl@...two.org>
> Cc: Dev Jain <dev.jain@....com>
> Cc: Barry Song <baohua@...nel.org>
> Cc: Anshuman Khandual <anshuman.khandual@....com>
> Cc: Yicong Yang <yangyicong@...ilicon.com>
> Cc: Kefeng Wang <wangkefeng.wang@...wei.com>
> Cc: Kevin Brodsky <kevin.brodsky@....com>
> Cc: Yin Fengwei <fengwei_yin@...ux.alibaba.com>
> Cc: linux-arm-kernel@...ts.infradead.org
> Cc: linux-kernel@...r.kernel.org
> Cc: linux-mm@...ck.org
> ---
>  arch/arm64/include/asm/pgtable.h  | 14 ++++++++-----
>  arch/arm64/include/asm/tlbflush.h | 33 +++++++++++++++++++++++++++++++
>  arch/arm64/mm/fault.c             |  2 +-
>  3 files changed, 43 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index abd2dee416b3..a9ed8c9d2c33 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -130,12 +130,16 @@ static inline void arch_leave_lazy_mmu_mode(void)
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>  
>  /*
> - * Outside of a few very special situations (e.g. hibernation), we always
> - * use broadcast TLB invalidation instructions, therefore a spurious page
> - * fault on one CPU which has been handled concurrently by another CPU
> - * does not need to perform additional invalidation.
> + * We use a local TLB invalidation instruction when reusing a page in
> + * the write protection fault handler, to avoid TLBI broadcast in the
> + * hot path.  This can cause spurious page faults if stale read-only
> + * TLB entries exist on other CPUs.
>   */
> -#define flush_tlb_fix_spurious_fault(vma, address, ptep) do { } while (0)
> +#define flush_tlb_fix_spurious_fault(vma, address, ptep)	\
> +	local_flush_tlb_page_nonotify(vma, address)
> +
> +#define flush_tlb_fix_spurious_fault_pmd(vma, address, pmdp)	\
> +	local_flush_tlb_page_nonotify(vma, address)

It's not really clear to me how important the local tlb flushes for pmds are
to the performance improvement. I'm guessing most of the win comes from the pte
level? I suspect you have only added spurious pmd fault handling because the
arm64 implementation of __ptep_set_access_flags() happens to handle both the
pte and pmd levels?

Given the core kernel didn't previously have support for pmd spurious faults, I
wonder if it would be simpler to drop the first patch and rejig
__ptep_set_access_flags() so that it has a pgsize parameter and can
differentiate based on that? I'm on the fence...

If you do end up taking this approach, there is a style I introduced for
hugetlb, where the function is suffixed with _anysz and it takes a pgsize param:

int __ptep_set_access_flags_anysz(struct vm_area_struct *vma,
				  unsigned long address, pte_t *ptep,
				  pte_t entry, int dirty, unsigned long pgsize);
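
For illustration, the pmd-level caller would then just pass the size
explicitly; a rough sketch, modelled on how pmdp_set_access_flags() already
forwards to __ptep_set_access_flags() (untested, names as proposed above):

static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
					unsigned long address, pmd_t *pmdp,
					pmd_t entry, int dirty)
{
	/* pmd level: pass PMD_SIZE so the flush can target level 2 */
	return __ptep_set_access_flags_anysz(vma, address, (pte_t *)pmdp,
					     pmd_pte(entry), dirty, PMD_SIZE);
}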

>  
>  /*
>   * ZERO_PAGE is a global shared page that is always zero: used
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 18a5dc0c9a54..607b67d8f61b 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -282,6 +282,39 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
>  }
>  
> +static inline void __local_flush_tlb_page_nonotify_nosync(
> +	struct mm_struct *mm, unsigned long uaddr)
> +{
> +	unsigned long addr;
> +
> +	dsb(nshst);
> +	addr = __TLBI_VADDR(uaddr, ASID(mm));
> +	__tlbi(vale1, addr);
> +	__tlbi_user(vale1, addr);
> +}
> +
> +static inline void local_flush_tlb_page_nonotify(
> +	struct vm_area_struct *vma, unsigned long uaddr)
> +{
> +	__local_flush_tlb_page_nonotify_nosync(vma->vm_mm, uaddr);
> +	dsb(nsh);
> +}
> +
> +static inline void __local_flush_tlb_page_nosync(
> +	struct mm_struct *mm, unsigned long uaddr)
> +{
> +	__local_flush_tlb_page_nonotify_nosync(mm, uaddr);
> +	mmu_notifier_arch_invalidate_secondary_tlbs(mm, uaddr & PAGE_MASK,
> +						(uaddr & PAGE_MASK) + PAGE_SIZE);
> +}
> +
> +static inline void local_flush_tlb_page(struct vm_area_struct *vma,
> +					unsigned long uaddr)
> +{
> +	__local_flush_tlb_page_nosync(vma->vm_mm, uaddr);
> +	dsb(nsh);
> +}
> +

You're introducing more variants than you're actually using here. I think you
just need local_flush_tlb_page() and local_flush_tlb_page_nonotify(); you could
keep __local_flush_tlb_page_nonotify_nosync() and drop
__local_flush_tlb_page_nosync() since it's not really adding much?

But I'm also wondering if we should tidy up this API in general; we have local
vs broadcast, sync vs nosync, notify vs nonotify. And page is really just a
special case of a range. So perhaps it is better to rework the range API to take
some flags, and then we can tidy up all of this. I know Will also posted an RFC
to convert a lot of this to C functions, which should also be factored in. Not a
blocker for this change, I don't think, but there should definitely be some
follow-up work to tidy it up. (I'm happy to take it on.)
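
For the sake of discussion, something along these lines - all names here are
made up, purely to illustrate the shape:

/* hypothetical: one range primitive, behaviour selected by flags */
#define TLBF_LOCAL	(1UL << 0)	/* this CPU only, no TLBI broadcast */
#define TLBF_NOSYNC	(1UL << 1)	/* caller issues the trailing DSB */
#define TLBF_NONOTIFY	(1UL << 2)	/* skip mmu_notifier invalidation */

void flush_tlb_range_flags(struct vm_area_struct *vma, unsigned long start,
			   unsigned long end, unsigned long stride,
			   int tlb_level, unsigned long flags);

/* e.g. local_flush_tlb_page_nonotify(vma, uaddr) would then be spelled: */
flush_tlb_range_flags(vma, uaddr, uaddr + PAGE_SIZE, PAGE_SIZE, 3,
		      TLBF_LOCAL | TLBF_NONOTIFY);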

>  static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
>  					   unsigned long uaddr)
>  {
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index d816ff44faff..22f54f5afe3f 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -235,7 +235,7 @@ int __ptep_set_access_flags(struct vm_area_struct *vma,
>  
>  	/* Invalidate a stale read-only entry */
>  	if (dirty)
> -		flush_tlb_page(vma, address);
> +		local_flush_tlb_page(vma, address);

Given this is called for both pmds and ptes, it's a bit of an abuse to flush a
*page*. Yes it is architecturally safe, but it's not exactly self-documenting.
If we pass in the pgsize (as per above) it could be optimized, since we'd know
the level and only want to invalidate last-level entries. e.g. the local
equivalent of:

__flush_tlb_range(vma, address, address + PMD_SIZE, PMD_SIZE, true, 2);

or

__flush_tlb_range(vma, address, address + PAGE_SIZE, PAGE_SIZE, true, 3);

Again though, perhaps that is part of some follow up tidying work?
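
For concreteness, once pgsize is available the dirty path could pick the level
directly; a sketch only - local_flush_tlb_range() is hypothetical here, i.e. a
non-broadcast variant of __flush_tlb_range():

	/* Invalidate a stale read-only entry at the correct level */
	if (dirty) {
		int tlb_level = (pgsize == PMD_SIZE) ? 2 : 3;

		local_flush_tlb_range(vma, address, address + pgsize,
				      pgsize, true, tlb_level);
	}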


contpte_ptep_set_access_flags() currently does a (broadcast) __flush_tlb_range()
on the (pte_write(orig_pte) == pte_write(entry)) path. I think that should be
changed to a local range invalidation to be consistent with this change.
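
i.e. roughly, reusing the hypothetical local range variant from above
(CONT_PTE_SIZE being the extent of the contpte block):

		if (dirty)
			local_flush_tlb_range(vma, start_addr,
					      start_addr + CONT_PTE_SIZE,
					      PAGE_SIZE, true, 3);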

Thanks,
Ryan

>  	return 1;
>  }
>  

