Message-ID: <20251216144601.2106412-8-ryan.roberts@arm.com>
Date: Tue, 16 Dec 2025 14:45:52 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Will Deacon <will@...nel.org>,
Ard Biesheuvel <ardb@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Mark Rutland <mark.rutland@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Oliver Upton <oliver.upton@...ux.dev>,
Marc Zyngier <maz@...nel.org>,
Dev Jain <dev.jain@....com>,
Linu Cherian <Linu.Cherian@....com>
Cc: Ryan Roberts <ryan.roberts@....com>,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: [PATCH v1 07/13] arm64: mm: Simplify __TLBI_RANGE_NUM() macro
From: Will Deacon <will@...nel.org>

Since commit e2768b798a19 ("arm64/mm: Modify range-based tlbi to
decrement scale"), we don't need to clamp the 'pages' argument to fit
the range for the specified 'scale': the loop now starts at the largest
scale and decrements, so any bits of 'pages' above the current scale's
range will already have been consumed by a prior iteration.

Drop the clamping and simplify the __TLBI_RANGE_NUM() macro.
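
To see why the clamp is dead, consider a standalone sketch of the
decrementing-scale walk (illustrative only: the function, variable
names and printed output are invented here, the TLBI instructions are
elided, and the (int) cast stands in for the kernel assigning the
macro's result to an int):

#include <stdio.h>

/* Arithmetic as in the kernel macros. */
#define __TLBI_RANGE_PAGES(num, scale) \
	((unsigned long)((num) + 1) << (5 * (scale) + 1))
#define __TLBI_RANGE_NUM(pages, scale) \
	((int)((pages) >> (5 * (scale) + 1)) - 1)

int main(void)
{
	unsigned long pages = 79225;	/* arbitrary example count */
	int scale;

	/* Walk from the largest scale down, as the kernel loop now does. */
	for (scale = 3; scale >= 0; scale--) {
		int num = __TLBI_RANGE_NUM(pages, scale);

		if (num < 0)	/* less than one block at this scale */
			continue;

		printf("scale=%d num=%d -> %lu pages\n",
		       scale, num, __TLBI_RANGE_PAGES(num, scale));
		pages -= __TLBI_RANGE_PAGES(num, scale);

		/*
		 * The subtraction clears every bit of 'pages' at or
		 * above this scale's shift, so at the next (smaller)
		 * scale the shifted value is at most 31: 'num' always
		 * fits the 5-bit NUM field and the min() clamp could
		 * never fire.
		 */
	}

	if (pages)	/* the kernel flushes a final odd page individually */
		printf("single-page TLBI for the last page\n");

	return 0;
}
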
Signed-off-by: Will Deacon <will@...nel.org>
Reviewed-by: Ryan Roberts <ryan.roberts@....com>
Reviewed-by: Dev Jain <dev.jain@....com>
Signed-off-by: Ryan Roberts <ryan.roberts@....com>
---
arch/arm64/include/asm/tlbflush.h | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index d2a144a09a8f..0e1902f66e01 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -208,11 +208,7 @@ static inline void __tlbi_level(tlbi_op op, u64 addr, u32 level)
  * range.
  */
 #define __TLBI_RANGE_NUM(pages, scale)					\
-	({								\
-		int __pages = min((pages),				\
-				  __TLBI_RANGE_PAGES(31, (scale)));	\
-		(__pages >> (5 * (scale) + 1)) - 1;			\
-	})
+	(((pages) >> (5 * (scale) + 1)) - 1)
 
 /*
  * TLB Invalidation
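
A quick sanity check of the simplified macro (a hypothetical userspace
snippet, not part of the patch; note that the kernel assigns the
result to an int, which is how -1 falls out when a scale is empty):

#include <assert.h>

#define __TLBI_RANGE_NUM(pages, scale)	(((pages) >> (5 * (scale) + 1)) - 1)

int main(void)
{
	/* 65536 pages at scale 3: shift = 5*3 + 1 = 16, num = 1 - 1 = 0. */
	int num = __TLBI_RANGE_NUM(65536UL, 3);
	assert(num == 0);

	/* Below one scale-3 block (2^16 pages): num = -1, scale skipped. */
	num = __TLBI_RANGE_NUM(65535UL, 3);
	assert(num == -1);

	return 0;
}
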
--
2.43.0