Message-ID: <87qzud42n1.ritesh.list@gmail.com>
Date: Wed, 05 Nov 2025 08:16:58 +0530
From: Ritesh Harjani (IBM) <ritesh.list@...il.com>
To: Kevin Brodsky <kevin.brodsky@....com>, linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, Kevin Brodsky <kevin.brodsky@....com>,
Alexander Gordeev <agordeev@...ux.ibm.com>, Andreas Larsson <andreas@...sler.com>,
Andrew Morton <akpm@...ux-foundation.org>, Boris Ostrovsky <boris.ostrovsky@...cle.com>,
Borislav Petkov <bp@...en8.de>, Catalin Marinas <catalin.marinas@....com>,
Christophe Leroy <christophe.leroy@...roup.eu>, Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>, "David S. Miller" <davem@...emloft.net>,
David Woodhouse <dwmw2@...radead.org>, "H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
Jann Horn <jannh@...gle.com>, Juergen Gross <jgross@...e.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Madhavan Srinivasan <maddy@...ux.ibm.com>, Michael Ellerman <mpe@...erman.id.au>, Michal Hocko <mhocko@...e.com>,
Mike Rapoport <rppt@...nel.org>, Nicholas Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>, Ryan Roberts <ryan.roberts@....com>,
Suren Baghdasaryan <surenb@...gle.com>, Thomas Gleixner <tglx@...utronix.de>, Vlastimil Babka <vbabka@...e.cz>,
Will Deacon <will@...nel.org>, Yeoreum Yun <yeoreum.yun@....com>,
linux-arm-kernel@...ts.infradead.org, linuxppc-dev@...ts.ozlabs.org,
sparclinux@...r.kernel.org, xen-devel@...ts.xenproject.org, x86@...nel.org,
Venkat Rao Bagalkote <venkat88@...ux.ibm.com>
Subject: Re: [PATCH v4 01/12] powerpc/64s: Do not re-activate batched TLB flush

Kevin Brodsky <kevin.brodsky@....com> writes:
> From: Alexander Gordeev <agordeev@...ux.ibm.com>
>
> Since commit b9ef323ea168 ("powerpc/64s: Disable preemption in hash
> lazy mmu mode") a task can not be preempted while in lazy MMU mode.
> Therefore, the batch re-activation code is never called, so remove it.
>
> Signed-off-by: Alexander Gordeev <agordeev@...ux.ibm.com>
> Signed-off-by: Kevin Brodsky <kevin.brodsky@....com>
> ---
> arch/powerpc/include/asm/thread_info.h | 2 --
> arch/powerpc/kernel/process.c | 25 -------------------------
> 2 files changed, 27 deletions(-)
>
Since the commit referenced above disables preemption in
arch_enter_lazy_mmu_mode(), the expectation is that a task can never be
context switched while in lazy MMU mode, and hence the code in
__switch_to() around __flush_tlb_pending() should never be reached.
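
For reference, here is roughly what the hash lazy MMU enter/leave
helpers look like after commit b9ef323ea168 (a sketch from memory and
abbreviated, so see tlbflush-hash.h for the exact code); the
preempt_disable()/preempt_enable() pair is what rules out a context
switch while the batch is active:

static inline void arch_enter_lazy_mmu_mode(void)
{
        struct ppc64_tlb_batch *batch;

        if (radix_enabled())
                return;
        /* No context switch can happen until the matching leave() */
        preempt_disable();
        batch = this_cpu_ptr(&ppc64_tlb_batch);
        batch->active = 1;
}

static inline void arch_leave_lazy_mmu_mode(void)
{
        struct ppc64_tlb_batch *batch;

        if (radix_enabled())
                return;
        batch = this_cpu_ptr(&ppc64_tlb_batch);

        /* Flush anything still batched before leaving lazy mode */
        if (batch->index)
                __flush_tlb_pending(batch);
        batch->active = 0;
        preempt_enable();
}

With batch->active only ever set inside that non-preemptible section,
the batch->active/_TLF_LAZY_MMU handling removed from __switch_to()
below can indeed never trigger.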
With this analysis the patch looks good to me. I will also give the
entire series a try on Power HW with the Hash MMU (which uses lazy MMU
mode) and let you know the results!
For this patch please feel free to add:
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@...il.com>
Cc: Venkat, who also runs CI on Linux Power HW for upstream testing :)
-ritesh
> diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
> index b0f200aba2b3..97f35f9b1a96 100644
> --- a/arch/powerpc/include/asm/thread_info.h
> +++ b/arch/powerpc/include/asm/thread_info.h
> @@ -154,12 +154,10 @@ void arch_setup_new_exec(void);
> /* Don't move TLF_NAPPING without adjusting the code in entry_32.S */
> #define TLF_NAPPING 0 /* idle thread enabled NAP mode */
> #define TLF_SLEEPING 1 /* suspend code enabled SLEEP mode */
> -#define TLF_LAZY_MMU 3 /* tlb_batch is active */
> #define TLF_RUNLATCH 4 /* Is the runlatch enabled? */
>
> #define _TLF_NAPPING (1 << TLF_NAPPING)
> #define _TLF_SLEEPING (1 << TLF_SLEEPING)
> -#define _TLF_LAZY_MMU (1 << TLF_LAZY_MMU)
> #define _TLF_RUNLATCH (1 << TLF_RUNLATCH)
>
> #ifndef __ASSEMBLER__
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index eb23966ac0a9..9237dcbeee4a 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -1281,9 +1281,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
> {
> struct thread_struct *new_thread, *old_thread;
> struct task_struct *last;
> -#ifdef CONFIG_PPC_64S_HASH_MMU
> - struct ppc64_tlb_batch *batch;
> -#endif
>
> new_thread = &new->thread;
> old_thread = &current->thread;
> @@ -1291,14 +1288,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
> WARN_ON(!irqs_disabled());
>
> #ifdef CONFIG_PPC_64S_HASH_MMU
> - batch = this_cpu_ptr(&ppc64_tlb_batch);
> - if (batch->active) {
> - current_thread_info()->local_flags |= _TLF_LAZY_MMU;
> - if (batch->index)
> - __flush_tlb_pending(batch);
> - batch->active = 0;
> - }
> -
> /*
> * On POWER9 the copy-paste buffer can only paste into
> * foreign real addresses, so unprivileged processes can not
> @@ -1369,20 +1358,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
> */
>
> #ifdef CONFIG_PPC_BOOK3S_64
> -#ifdef CONFIG_PPC_64S_HASH_MMU
> - /*
> - * This applies to a process that was context switched while inside
> - * arch_enter_lazy_mmu_mode(), to re-activate the batch that was
> - * deactivated above, before _switch(). This will never be the case
> - * for new tasks.
> - */
> - if (current_thread_info()->local_flags & _TLF_LAZY_MMU) {
> - current_thread_info()->local_flags &= ~_TLF_LAZY_MMU;
> - batch = this_cpu_ptr(&ppc64_tlb_batch);
> - batch->active = 1;
> - }
> -#endif
> -
> /*
> * Math facilities are masked out of the child MSR in copy_thread.
> * A new task does not need to restore_math because it will
> --
> 2.47.0