Message-Id: <20250711161732.384-7-will@kernel.org>
Date: Fri, 11 Jul 2025 17:17:28 +0100
From: Will Deacon <will@...nel.org>
To: linux-arm-kernel@...ts.infradead.org
Cc: linux-kernel@...r.kernel.org,
Will Deacon <will@...nel.org>,
Ard Biesheuvel <ardb@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Ryan Roberts <ryan.roberts@....com>,
Mark Rutland <mark.rutland@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Oliver Upton <oliver.upton@...ux.dev>,
Marc Zyngier <maz@...nel.org>
Subject: [PATCH 06/10] arm64: mm: Simplify __TLBI_RANGE_NUM() macro

Since commit e2768b798a19 ("arm64/mm: Modify range-based tlbi to
decrement scale"), we don't need to clamp the 'pages' argument to fit
the range for the specified 'scale' as we know that the upper bits will
have been processed in a prior iteration.

Drop the clamping and simplify the __TLBI_RANGE_NUM() macro.

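For illustration only (editor's sketch, not part of the patch): the
user-space model below mimics the scale-decrementing loop used by
__flush_tlb_range_op(), with min() and the __TLBI_RANGE_* macros
re-implemented locally, and asserts that the clamped and unclamped
forms of __TLBI_RANGE_NUM() return the same 'num' for every 'pages'
value the loop can actually see.

/*
 * Editor's user-space sketch, not kernel code: model of the
 * scale-decrementing loop in __flush_tlb_range_op(), checking that the
 * old (clamped) and new (unclamped) __TLBI_RANGE_NUM() definitions
 * agree for every 'pages' value the loop can observe.
 */
#include <assert.h>
#include <stdio.h>

#define min(a, b)			((a) < (b) ? (a) : (b))

#define __TLBI_RANGE_PAGES(num, scale)	((unsigned long)((num) + 1) << (5 * (scale) + 1))
#define MAX_TLBI_RANGE_PAGES		__TLBI_RANGE_PAGES(31, 3)

/* Old definition, with the min() clamp. */
#define __TLBI_RANGE_NUM_OLD(pages, scale)				\
	({								\
		unsigned long __pages = min((unsigned long)(pages),	\
					    __TLBI_RANGE_PAGES(31, (scale))); \
		(long)(__pages >> (5 * (scale) + 1)) - 1;		\
	})

/* New definition from this patch. */
#define __TLBI_RANGE_NUM(pages, scale)					\
	((long)((pages) >> (5 * (scale) + 1)) - 1)

int main(void)
{
	/* The kernel caller bails out above MAX_TLBI_RANGE_PAGES. */
	for (unsigned long total = 1; total <= MAX_TLBI_RANGE_PAGES; total++) {
		unsigned long pages = total;

		for (int scale = 3; scale >= 0 && pages > 1; scale--) {
			long num;

			/*
			 * Larger scales have already consumed the upper
			 * bits, so 'pages' always fits the current scale
			 * and the clamp can never fire.
			 */
			assert(pages <= __TLBI_RANGE_PAGES(31, scale));
			assert(__TLBI_RANGE_NUM(pages, scale) ==
			       __TLBI_RANGE_NUM_OLD(pages, scale));

			num = __TLBI_RANGE_NUM(pages, scale);
			if (num >= 0)
				pages -= __TLBI_RANGE_PAGES(num, scale);
		}
	}

	printf("clamped and unclamped __TLBI_RANGE_NUM() always agree\n");
	return 0;
}

The invariant holds because each pass at a larger scale strips the bits
above the reach of the next scale down, so the min() against
__TLBI_RANGE_PAGES(31, scale) can never change the value.
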
Signed-off-by: Will Deacon <will@...nel.org>
---
arch/arm64/include/asm/tlbflush.h | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index ddd77e92b268..a8d21e52ef3a 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -205,11 +205,7 @@ static __always_inline void __tlbi_level(const enum tlbi_op op, u64 addr, u32 le
  * range.
  */
 #define __TLBI_RANGE_NUM(pages, scale)					\
-	({								\
-		int __pages = min((pages),				\
-				  __TLBI_RANGE_PAGES(31, (scale)));	\
-		(__pages >> (5 * (scale) + 1)) - 1;			\
-	})
+	(((pages) >> (5 * (scale) + 1)) - 1)
 
 /*
  * TLB Invalidation
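(Editor's note, numbers chosen purely for illustration: with pages = 512
and scale = 1, the simplified macro gives (512 >> (5 * 1 + 1)) - 1 = 7,
and the matching range operation covers
__TLBI_RANGE_PAGES(7, 1) = (7 + 1) << 6 = 512 pages, i.e. exactly the
input.)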
--
2.50.0.727.gbf7dc18ff4-goog