Message-ID: <0276c749-9418-47ea-85f1-0b0ab93b0225@arm.com>
Date: Thu, 6 Nov 2025 10:31:46 +0000
From: Kevin Brodsky <kevin.brodsky@....com>
To: "Ritesh Harjani (IBM)" <ritesh.list@...il.com>, linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, Alexander Gordeev <agordeev@...ux.ibm.com>,
Andreas Larsson <andreas@...sler.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>, Borislav Petkov
<bp@...en8.de>, Catalin Marinas <catalin.marinas@....com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Dave Hansen <dave.hansen@...ux.intel.com>,
David Hildenbrand <david@...hat.com>, "David S. Miller"
<davem@...emloft.net>, David Woodhouse <dwmw2@...radead.org>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
Jann Horn <jannh@...gle.com>, Juergen Gross <jgross@...e.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Madhavan Srinivasan <maddy@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>, Michal Hocko <mhocko@...e.com>,
Mike Rapoport <rppt@...nel.org>, Nicholas Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>, Ryan Roberts <ryan.roberts@....com>,
Suren Baghdasaryan <surenb@...gle.com>, Thomas Gleixner
<tglx@...utronix.de>, Vlastimil Babka <vbabka@...e.cz>,
Will Deacon <will@...nel.org>, Yeoreum Yun <yeoreum.yun@....com>,
linux-arm-kernel@...ts.infradead.org, linuxppc-dev@...ts.ozlabs.org,
sparclinux@...r.kernel.org, xen-devel@...ts.xenproject.org, x86@...nel.org
Subject: Re: [PATCH v4 03/12] powerpc/mm: implement arch_flush_lazy_mmu_mode()
On 05/11/2025 09:49, Ritesh Harjani (IBM) wrote:
> Ritesh Harjani (IBM) <ritesh.list@...il.com> writes:
>
>> Kevin Brodsky <kevin.brodsky@....com> writes:
>>
>>> Upcoming changes to the lazy_mmu API will cause
>>> arch_flush_lazy_mmu_mode() to be called when leaving a nested
>>> lazy_mmu section.
>>>
>>> Move the relevant logic from arch_leave_lazy_mmu_mode() to
>>> arch_flush_lazy_mmu_mode() and have the former call the latter.
>>>
>>> Note: the additional this_cpu_ptr() on the
>>> arch_leave_lazy_mmu_mode() path will be removed in a subsequent
>>> patch.
>>>
>>> Signed-off-by: Kevin Brodsky <kevin.brodsky@....com>
>>> ---
>>> .../powerpc/include/asm/book3s/64/tlbflush-hash.h | 15 +++++++++++----
>>> 1 file changed, 11 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
>>> index 146287d9580f..7704dbe8e88d 100644
>>> --- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
>>> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
>>> @@ -41,6 +41,16 @@ static inline void arch_enter_lazy_mmu_mode(void)
>>> batch->active = 1;
>>> }
>>>
>>> +static inline void arch_flush_lazy_mmu_mode(void)
>>> +{
>>> + struct ppc64_tlb_batch *batch;
>>> +
>>> + batch = this_cpu_ptr(&ppc64_tlb_batch);
>>> +
>>> + if (batch->index)
>>> + __flush_tlb_pending(batch);
>>> +}
>>> +
>> This looks a bit scary, since arch_flush_lazy_mmu_mode() gets
>> called from several places in later patches.
>>
>> Although I think arch_flush_lazy_mmu_mode() will only ever be called
>> in the nested lazy mmu case, right?
>>
>> Do you think we could add a VM_BUG_ON(radix_enabled()); above to make
>> sure this never gets called in the radix_enabled() case?
>>
>> I am still going over the patch series, but while reviewing this I
>> wanted to get your opinion.
>>
>> Ohh wait.. there is no way of knowing whether arch_enter_lazy_mmu_mode()
>> returned early, so I think you need a similar check in
>> arch_flush_lazy_mmu_mode() too, i.e. return early if radix_enabled() is true.
>>
> Now that I have gone through the series, it seems plausible that, since
> lazy mmu mode supports nesting, arch_flush_lazy_mmu_mode() can get
> called while lazy mmu is active due to nesting.
>
> That means we should add the radix_enabled() check I was suggesting
> above, i.e.:
>
> @@ -38,6 +38,9 @@ static inline void arch_flush_lazy_mmu_mode(void)
> {
> struct ppc64_tlb_batch *batch;
>
> + if (radix_enabled())
> + return;
> +
> batch = this_cpu_ptr(&ppc64_tlb_batch);
>
> if (batch->index)
>
> Correct? That said, even without it there shouldn't be a problem,
> because batch->index is only valid for hash; but I still think we should
> add the check above so that we don't have to call this_cpu_ptr() to
> inspect batch->index every time the flush is called.
You're right! I missed this because v3 had an extra patch (13) that
turned all the lazy_mmu_mode_* helpers into no-ops if radix_enabled().
That optimisation didn't seem to be worth the noise, so I dropped it,
but it does mean that arch_flush() will now be called in the nested
case regardless of radix_enabled().
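
To make the nested case concrete: with the new API, leaving a nested
section no longer tears batching down, it only flushes. Roughly (an
illustrative sketch only -- the helper name and nesting field below are
made up, not the actual series code):

	static inline void lazy_mmu_mode_disable(void)
	{
		/* hypothetical per-task nesting counter */
		if (--current->lazy_mmu_nest_level == 0)
			arch_leave_lazy_mmu_mode();  /* outermost exit */
		else
			arch_flush_lazy_mmu_mode();  /* nested exit: flush only */
	}

so every nested exit ends up in arch_flush_lazy_mmu_mode(), on radix
just as on hash.
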
Will fix in v5, thanks!
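
For reference, here is roughly what I have in mind for v5 (the hunk
from this patch plus your suggested early return -- a sketch until the
actual respin):

	static inline void arch_flush_lazy_mmu_mode(void)
	{
		struct ppc64_tlb_batch *batch;

		/* Batching is only used with the hash MMU; nothing to do on radix. */
		if (radix_enabled())
			return;

		batch = this_cpu_ptr(&ppc64_tlb_batch);

		if (batch->index)
			__flush_tlb_pending(batch);
	}

As you point out, this also avoids the this_cpu_ptr() access on radix.
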
- Kevin