Message-ID: <87pl9x41c5.ritesh.list@gmail.com>
Date: Wed, 05 Nov 2025 08:45:06 +0530
From: Ritesh Harjani (IBM) <ritesh.list@...il.com>
To: Kevin Brodsky <kevin.brodsky@....com>, linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, Kevin Brodsky <kevin.brodsky@....com>,
Alexander Gordeev <agordeev@...ux.ibm.com>, Andreas Larsson <andreas@...sler.com>,
Andrew Morton <akpm@...ux-foundation.org>, Boris Ostrovsky <boris.ostrovsky@...cle.com>,
Borislav Petkov <bp@...en8.de>, Catalin Marinas <catalin.marinas@....com>,
Christophe Leroy <christophe.leroy@...roup.eu>, Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>, "David S. Miller" <davem@...emloft.net>,
David Woodhouse <dwmw2@...radead.org>, "H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
Jann Horn <jannh@...gle.com>, Juergen Gross <jgross@...e.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Madhavan Srinivasan <maddy@...ux.ibm.com>, Michael Ellerman <mpe@...erman.id.au>, Michal Hocko <mhocko@...e.com>,
Mike Rapoport <rppt@...nel.org>, Nicholas Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>, Ryan Roberts <ryan.roberts@....com>,
Suren Baghdasaryan <surenb@...gle.com>, Thomas Gleixner <tglx@...utronix.de>, Vlastimil Babka <vbabka@...e.cz>,
Will Deacon <will@...nel.org>, Yeoreum Yun <yeoreum.yun@....com>,
linux-arm-kernel@...ts.infradead.org, linuxppc-dev@...ts.ozlabs.org,
sparclinux@...r.kernel.org, xen-devel@...ts.xenproject.org, x86@...nel.org
Subject: Re: [PATCH v4 03/12] powerpc/mm: implement arch_flush_lazy_mmu_mode()

Kevin Brodsky <kevin.brodsky@....com> writes:
> Upcoming changes to the lazy_mmu API will cause
> arch_flush_lazy_mmu_mode() to be called when leaving a nested
> lazy_mmu section.
>
> Move the relevant logic from arch_leave_lazy_mmu_mode() to
> arch_flush_lazy_mmu_mode() and have the former call the latter.
>
> Note: the additional this_cpu_ptr() on the
> arch_leave_lazy_mmu_mode() path will be removed in a subsequent
> patch.
>
> Signed-off-by: Kevin Brodsky <kevin.brodsky@....com>
> ---
> .../powerpc/include/asm/book3s/64/tlbflush-hash.h | 15 +++++++++++----
> 1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> index 146287d9580f..7704dbe8e88d 100644
> --- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> @@ -41,6 +41,16 @@ static inline void arch_enter_lazy_mmu_mode(void)
> batch->active = 1;
> }
>
> +static inline void arch_flush_lazy_mmu_mode(void)
> +{
> + struct ppc64_tlb_batch *batch;
> +
> + batch = this_cpu_ptr(&ppc64_tlb_batch);
> +
> + if (batch->index)
> + __flush_tlb_pending(batch);
> +}
> +
This looks a bit scary, since arch_flush_lazy_mmu_mode() gets called from
several places in later patches.

Although I think arch_flush_lazy_mmu_mode() should only ever be called
within a lazy mmu section (i.e. the nested case), right?

Do you think we can add a VM_BUG_ON(radix_enabled()); in the above to make
sure it never gets called in the radix_enabled() case?

I am still going over the patch series, but while reviewing this I
wanted to get your opinion.

Ohh wait.. there is no way of knowing whether arch_enter_lazy_mmu_mode()
returned early (it returns void). I think you might need a similar check in
arch_flush_lazy_mmu_mode() too, i.e. return early if radix_enabled() is
true. Something like the sketch below.
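Untested, just to illustrate the early return I have in mind; the rest
mirrors your new helper from this patch:

static inline void arch_flush_lazy_mmu_mode(void)
{
	struct ppc64_tlb_batch *batch;

	/* Nothing is batched under radix, so bail out early there too. */
	if (radix_enabled())
		return;

	batch = this_cpu_ptr(&ppc64_tlb_batch);

	if (batch->index)
		__flush_tlb_pending(batch);
}
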
-ritesh
> static inline void arch_leave_lazy_mmu_mode(void)
> {
> struct ppc64_tlb_batch *batch;
> @@ -49,14 +59,11 @@ static inline void arch_leave_lazy_mmu_mode(void)
> return;
> batch = this_cpu_ptr(&ppc64_tlb_batch);
>
> - if (batch->index)
> - __flush_tlb_pending(batch);
> + arch_flush_lazy_mmu_mode();
> batch->active = 0;
> preempt_enable();
> }
>
> -#define arch_flush_lazy_mmu_mode() do {} while (0)
> -
> extern void hash__tlbiel_all(unsigned int action);
>
> extern void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize,
> --
> 2.47.0