Message-ID: <20250829153510.2401161-2-ryan.roberts@arm.com>
Date: Fri, 29 Aug 2025 16:35:07 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>,
James Morse <james.morse@....com>
Cc: Ryan Roberts <ryan.roberts@....com>,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: [RFC PATCH v1 1/2] arm64: tlbflush: Move invocation of __flush_tlb_range_op() to a macro
__flush_tlb_range_op() is a pre-processor macro that takes the TLBI
operation as a string, and builds the instruction from it. This prevents
passing the TLBI operation around as a variable. __flush_tlb_range_op()
also takes 7 other arguments.
Adding extra invocations for different TLB operations means duplicating
the whole thing, but those 7 extra arguments are the same each time.
Add an enum for the TLBI operations that __flush_tlb_range() uses, and a
macro to pass the operation name as a string to __flush_tlb_range_op(),
and the rest of the arguments using __VA_ARGS__.
The result makes it easier to add new TLBI operations, and to modify any
of the other arguments, as each now appears only once.
Suggested-by: James Morse <james.morse@....com>
Signed-off-by: Ryan Roberts <ryan.roberts@....com>
---
arch/arm64/include/asm/tlbflush.h | 30 ++++++++++++++++++++++++------
1 file changed, 24 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 18a5dc0c9a54..f66b8c4696d0 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -11,6 +11,7 @@
#ifndef __ASSEMBLY__
#include <linux/bitfield.h>
+#include <linux/build_bug.h>
#include <linux/mm_types.h>
#include <linux/sched.h>
#include <linux/mmu_notifier.h>
@@ -433,12 +434,32 @@ static inline bool __flush_tlb_range_limit_excess(unsigned long start,
return false;
}
+enum tlbi_op {
+ TLBI_VALE1IS,
+ TLBI_VAE1IS,
+};
+
+#define flush_tlb_range_op(op, ...) \
+do { \
+ switch (op) { \
+ case TLBI_VALE1IS: \
+ __flush_tlb_range_op(vale1is, __VA_ARGS__); \
+ break; \
+ case TLBI_VAE1IS: \
+ __flush_tlb_range_op(vae1is, __VA_ARGS__); \
+ break; \
+ default: \
+ BUILD_BUG_ON_MSG(1, "Unknown TLBI op"); \
+ } \
+} while (0)
+
static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
unsigned long start, unsigned long end,
unsigned long stride, bool last_level,
int tlb_level)
{
unsigned long asid, pages;
+ enum tlbi_op tlbi_op;
start = round_down(start, stride);
end = round_up(end, stride);
@@ -452,12 +473,9 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
dsb(ishst);
asid = ASID(mm);
- if (last_level)
- __flush_tlb_range_op(vale1is, start, pages, stride, asid,
- tlb_level, true, lpa2_is_enabled());
- else
- __flush_tlb_range_op(vae1is, start, pages, stride, asid,
- tlb_level, true, lpa2_is_enabled());
+ tlbi_op = last_level ? TLBI_VALE1IS : TLBI_VAE1IS;
+ flush_tlb_range_op(tlbi_op, start, pages, stride, asid, tlb_level,
+ true, lpa2_is_enabled());
mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
}
--
2.43.0