Message-ID: <0ba9820d-f498-42b0-a663-6f4dca5374b4@arm.com>
Date: Fri, 13 Jun 2025 09:40:58 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Alexander Gordeev <agordeev@...ux.ibm.com>,
 Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
 sparclinux@...r.kernel.org, xen-devel@...ts.xenproject.org,
 linuxppc-dev@...ts.ozlabs.org, linux-s390@...r.kernel.org,
 Hugh Dickins <hughd@...gle.com>, Nicholas Piggin <npiggin@...il.com>,
 Guenter Roeck <linux@...ck-us.net>, Juergen Gross <jgross@...e.com>,
 Jeremy Fitzhardinge <jeremy@...p.org>
Subject: Re: [PATCH 4/6] sparc/mm: Do not disable preemption in lazy MMU mode

On 12/06/2025 18:36, Alexander Gordeev wrote:
> Commit a1d416bf9faf ("sparc/mm: disable preemption in lazy mmu mode")
> is no longer necessary: the lazy MMU mode is entered with a spinlock
> held, and sparc does not support PREEMPT_RT, so preemption is already
> disabled by the time the lazy mode is entered.

Surely, given that sparc knows it doesn't support PREEMPT_RT, it is better for
its implementation to explicitly disable preemption rather than rely on the
spinlock to do it? Relying on the spinlock penalizes other arches
unnecessarily, and it also prevents multiple CPUs from updating (different
areas of) kernel pgtables in parallel. The property sparc needs is for the
task to stay on the same CPU without interruption, right? Same goes for
powerpc.

> 
> Signed-off-by: Alexander Gordeev <agordeev@...ux.ibm.com>
> ---
>  arch/sparc/include/asm/tlbflush_64.h |  2 +-
>  arch/sparc/mm/tlb.c                  | 12 ++++++++----
>  2 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/sparc/include/asm/tlbflush_64.h b/arch/sparc/include/asm/tlbflush_64.h
> index 8b8cdaa69272..a6d8068fb211 100644
> --- a/arch/sparc/include/asm/tlbflush_64.h
> +++ b/arch/sparc/include/asm/tlbflush_64.h
> @@ -44,7 +44,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end);
>  void flush_tlb_pending(void);
>  void arch_enter_lazy_mmu_mode(void);
>  void arch_leave_lazy_mmu_mode(void);
> -#define arch_flush_lazy_mmu_mode()      do {} while (0)
> +void arch_flush_lazy_mmu_mode(void);
>  
>  /* Local cpu only.  */
>  void __flush_tlb_all(void);
> diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
> index a35ddcca5e76..e46dfd5f2583 100644
> --- a/arch/sparc/mm/tlb.c
> +++ b/arch/sparc/mm/tlb.c
> @@ -52,10 +52,9 @@ void flush_tlb_pending(void)
>  
>  void arch_enter_lazy_mmu_mode(void)
>  {
> -	struct tlb_batch *tb;
> +	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
>  
> -	preempt_disable();
> -	tb = this_cpu_ptr(&tlb_batch);
> +	VM_WARN_ON_ONCE(preemptible());
>  	tb->active = 1;
>  }
>  
> @@ -63,10 +62,15 @@ void arch_leave_lazy_mmu_mode(void)
>  {
>  	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
>  
> +	VM_WARN_ON_ONCE(preemptible());
>  	if (tb->tlb_nr)
>  		flush_tlb_pending();
>  	tb->active = 0;
> -	preempt_enable();
> +}
> +
> +void arch_flush_lazy_mmu_mode(void)
> +{
> +	VM_WARN_ON_ONCE(preemptible());
>  }
>  
>  static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,

