Message-ID: <68852420-47fc-4dcc-b724-4cf13720b88c@arm.com>
Date: Fri, 2 Jan 2026 15:23:55 +0000
From: Ryan Roberts <ryan.roberts@....com>
To: Dev Jain <dev.jain@....com>, Will Deacon <will@...nel.org>,
Ard Biesheuvel <ardb@...nel.org>, Catalin Marinas <catalin.marinas@....com>,
Mark Rutland <mark.rutland@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Oliver Upton <oliver.upton@...ux.dev>, Marc Zyngier <maz@...nel.org>,
Linu Cherian <Linu.Cherian@....com>
Cc: linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 08/13] arm64: mm: Simplify
__flush_tlb_range_limit_excess()
On 17/12/2025 08:12, Dev Jain wrote:
>
> On 16/12/25 8:15 pm, Ryan Roberts wrote:
>> From: Will Deacon <will@...nel.org>
>>
>> __flush_tlb_range_limit_excess() is unnecessarily complicated:
>>
>> - It takes a 'start', 'end' and 'pages' argument, whereas it only
>> needs 'pages' (which the caller has computed from the other two
>> arguments!).
>>
>> - It erroneously compares 'pages' with MAX_TLBI_RANGE_PAGES when
>> the system doesn't support range-based invalidation but the range to
>> be invalidated would result in fewer than MAX_DVM_OPS invalidations.
>>
>> Simplify the function so that it no longer takes the 'start' and 'end'
>> arguments and only considers the MAX_TLBI_RANGE_PAGES threshold on
>> systems that implement range-based invalidation.
>>
>> Signed-off-by: Will Deacon <will@...nel.org>
>> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
>> ---
>> arch/arm64/include/asm/tlbflush.h | 20 ++++++--------------
>> 1 file changed, 6 insertions(+), 14 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>> index 0e1902f66e01..3b72a71feac0 100644
>> --- a/arch/arm64/include/asm/tlbflush.h
>> +++ b/arch/arm64/include/asm/tlbflush.h
>> @@ -527,21 +527,13 @@ static __always_inline void __flush_tlb_range_op(tlbi_op lop, tlbi_op rop,
>> #define __flush_s2_tlb_range_op(op, start, pages, stride, tlb_level) \
>> __flush_tlb_range_op(op, r##op, start, pages, stride, 0, tlb_level, kvm_lpa2_is_enabled())
>>
>> -static inline bool __flush_tlb_range_limit_excess(unsigned long start,
>> - unsigned long end, unsigned long pages, unsigned long stride)
>> +static inline bool __flush_tlb_range_limit_excess(unsigned long pages,
>> + unsigned long stride)
>> {
>> - /*
>> - * When the system does not support TLB range based flush
>> - * operation, (MAX_DVM_OPS - 1) pages can be handled. But
>> - * with TLB range based operation, MAX_TLBI_RANGE_PAGES
>> - * pages can be handled.
>> - */
>> - if ((!system_supports_tlb_range() &&
>> - (end - start) >= (MAX_DVM_OPS * stride)) ||
>> - pages > MAX_TLBI_RANGE_PAGES)
>> + if (system_supports_tlb_range() && pages > MAX_TLBI_RANGE_PAGES)
>> return true;
>>
>> - return false;
>> + return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
>
> The function will return true when tlb range is supported but
> ((MAX_DVM_OPS * stride) >> PAGE_SHIFT) < pages <= MAX_TLBI_RANGE_PAGES,
> even though a range-based flush could handle that case. So I think you
> need to do something like:
> https://lore.kernel.org/all/1b15b4f0-5490-4dac-8344-e716dd189751@arm.com/
I agree with your overall proposal, but I think a few of the details are not
quite correct.
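To make your case concrete (assuming MAX_DVM_OPS == PTRS_PER_PTE, i.e. 512
with 4K pages, and stride == PAGE_SIZE): for pages == 1024 the first check is
skipped (1024 <= MAX_TLBI_RANGE_PAGES) but the second returns true
(1024 >= 512), so we'd needlessly fall back to flush_tlb_mm() even though a
handful of range-based TLBIs would have covered it.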
I think the max number of DVM ops that could be issued by a single
__flush_tlb_range() call on a system with tlb-range is 20, not 4 as you suggest:
- 4 for the range operations (one per scale)
- 1 for the final single page
- 15 to align the start to a 64K boundary on systems with LPA2 (with 4K page size)
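That is, a worst case of 4 + 1 + 15 = 20 ops per call, comfortably below
MAX_DVM_OPS.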
But that doesn't really change your argument.
So I'm proposing to change it to this in the next version:
static inline bool __flush_tlb_range_limit_excess(unsigned long pages,
						  unsigned long stride)
{
	/*
	 * Assume that the worst case number of DVM ops required to flush a
	 * given range on a system that supports tlb-range is 20 (4 scales, 1
	 * final page, 15 for alignment on LPA2 systems), which is much smaller
	 * than MAX_DVM_OPS.
	 */
	if (system_supports_tlb_range())
		return pages > MAX_TLBI_RANGE_PAGES;

	return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
}
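
In case it's useful, here's a quick user-space sketch of the two regimes.
limit_excess() is a standalone copy of the proposed function; PAGE_SHIFT,
MAX_DVM_OPS and MAX_TLBI_RANGE_PAGES are illustrative stand-ins for the real
config-dependent values, assuming 4K pages:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT		12
#define MAX_DVM_OPS		512		/* assumed: PTRS_PER_PTE with 4K pages */
#define MAX_TLBI_RANGE_PAGES	(32UL << 16)	/* assumed: __TLBI_RANGE_PAGES(31, 3) */

static bool tlb_range;	/* stand-in for system_supports_tlb_range() */

static bool limit_excess(unsigned long pages, unsigned long stride)
{
	if (tlb_range)
		return pages > MAX_TLBI_RANGE_PAGES;

	return pages >= (MAX_DVM_OPS * stride) >> PAGE_SHIFT;
}

int main(void)
{
	unsigned long stride = 1UL << PAGE_SHIFT;

	tlb_range = true;
	printf("range, 1024 pages:    %d\n", limit_excess(1024, stride)); /* 0 */
	tlb_range = false;
	printf("no range, 1024 pages: %d\n", limit_excess(1024, stride)); /* 1 */
	return 0;
}

With range TLBI, 1024 pages is well within MAX_TLBI_RANGE_PAGES so we keep the
range flush; without it, the same range would exceed the DVM-op budget and
fall back to the full flush.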
Thanks,
Ryan
>
>> }
>>
>> static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
>> @@ -555,7 +547,7 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
>> end = round_up(end, stride);
>> pages = (end - start) >> PAGE_SHIFT;
>>
>> - if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
>> + if (__flush_tlb_range_limit_excess(pages, stride)) {
>> flush_tlb_mm(mm);
>> return;
>> }
>> @@ -619,7 +611,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
>> end = round_up(end, stride);
>> pages = (end - start) >> PAGE_SHIFT;
>>
>> - if (__flush_tlb_range_limit_excess(start, end, pages, stride)) {
>> + if (__flush_tlb_range_limit_excess(pages, stride)) {
>> flush_tlb_all();
>> return;
>> }