Message-ID: <b098b6ba-74b7-4e2e-bc3e-a417f921c782@arm.com>
Date: Mon, 5 Jan 2026 17:12:22 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Linu Cherian <linu.cherian@....com>
Cc: Will Deacon <will@...nel.org>, Ard Biesheuvel <ardb@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Mark Rutland <mark.rutland@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Oliver Upton <oliver.upton@...ux.dev>, Marc Zyngier <maz@...nel.org>,
Dev Jain <dev.jain@....com>, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 02/13] arm64: mm: Introduce a C wrapper for by-range TLB invalidation

On 05/01/2026 05:33, Linu Cherian wrote:
> Ryan,
>
> On Tue, Dec 16, 2025 at 02:45:47PM +0000, Ryan Roberts wrote:
>> As part of efforts to reduce our reliance on complex preprocessor macros
>> for TLB invalidation routines, introduce a new C wrapper for by-range
>> TLB invalidation which can be used instead of the __tlbi() macro and can
>> additionally be called from C code.
>>
>> Each specific tlbi range op is implemented as a C function and the
>> appropriate function pointer is passed to __tlbi_range(). Since
>> everything is declared inline and statically resolvable, the compiler
>> converts the indirect function call into a direct call and inlines it.
>>
>> Suggested-by: Linus Torvalds <torvalds@...ux-foundation.org>
>> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
>> ---
>> arch/arm64/include/asm/tlbflush.h | 33 ++++++++++++++++++++++++++++++-
>> 1 file changed, 32 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>> index 13a59cf28943..c5111d2afc66 100644
>> --- a/arch/arm64/include/asm/tlbflush.h
>> +++ b/arch/arm64/include/asm/tlbflush.h
>> @@ -459,6 +459,37 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
>> * operations can only span an even number of pages. We save this for last to
>> * ensure 64KB start alignment is maintained for the LPA2 case.
>> */
>> +static __always_inline void rvae1is(u64 arg)
>> +{
>> + __tlbi(rvae1is, arg);
>> +}
>> +
>> +static __always_inline void rvale1(u64 arg)
>> +{
>> + __tlbi(rvale1, arg);
>> + __tlbi_user(rvale1, arg);
>
> Should this __tlbi_user be added as part of patch 3 ?

Yes! I've screwed up a rebase: vale1/rvale1 only started being used in
v6.19-rc1, and my attempt to add the __tlbi_user() call incrementally has
clearly gone wrong. I'll fix it all in the next rev.

Thanks,
Ryan
>
>> +}
>> +
>> +static __always_inline void rvale1is(u64 arg)
>> +{
>> + __tlbi(rvale1is, arg);
>> +}
>> +
>> +static __always_inline void rvaale1is(u64 arg)
>> +{
>> + __tlbi(rvaale1is, arg);
>> +}
>> +
>> +static __always_inline void ripas2e1is(u64 arg)
>> +{
>> + __tlbi(ripas2e1is, arg);
>> +}
>> +
>> +static __always_inline void __tlbi_range(tlbi_op op, u64 arg)
>> +{
>> + op(arg);
>> +}
>> +
>> #define __flush_tlb_range_op(op, start, pages, stride, \
>> asid, tlb_level, tlbi_user, lpa2) \
>> do { \
>> @@ -486,7 +517,7 @@ do { \
>> if (num >= 0) { \
>> addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \
>> scale, num, tlb_level); \
>> - __tlbi(r##op, addr); \
>> + __tlbi_range(r##op, addr); \
>> if (tlbi_user) \
>> __tlbi_user(r##op, addr); \
>> __flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
>> --
>> 2.43.0
>>
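
For anyone skimming the thread who is unfamiliar with the pattern being
discussed: below is a minimal userspace sketch of the devirtualisation idea,
not the kernel code itself. fake_tlbi() and main() are hypothetical stand-ins
invented for illustration (the kernel emits the real instruction via the
__tlbi() asm macro), and the tlbi_op typedef is assumed to be introduced
earlier in this series, since the hunk above uses it without defining it.

#include <stdint.h>
#include <stdio.h>

typedef void (*tlbi_op)(uint64_t arg);

/* Hypothetical stand-in for the kernel's __tlbi() asm macro. */
static inline void fake_tlbi(const char *op, uint64_t arg)
{
	printf("tlbi %s, %#llx\n", op, (unsigned long long)arg);
}

/* One C function per range op, mirroring the patch. */
static inline void rvae1is(uint64_t arg)
{
	fake_tlbi("rvae1is", arg);
}

/*
 * The op pointer is a compile-time constant at every call site, so the
 * compiler resolves op(arg) to a direct call and can inline it.
 */
static inline void __tlbi_range(tlbi_op op, uint64_t arg)
{
	op(arg);
}

int main(void)
{
	__tlbi_range(rvae1is, 0x1234);	/* no indirect branch in the output */
	return 0;
}

Compiled at -O1 or above, the call through op collapses into a direct
(typically inlined) call to rvae1is(), which is why the wrapper adds no
runtime cost over using __tlbi() directly.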