Message-ID: <aV5xVhE1sf4l0gRf@a079125.arm.com>
Date: Wed, 7 Jan 2026 20:14:38 +0530
From: Linu Cherian <linu.cherian@....com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Will Deacon <will@...nel.org>, Ard Biesheuvel <ardb@...nel.org>,
Catalin Marinas <catalin.marinas@....com>,
Mark Rutland <mark.rutland@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Oliver Upton <oliver.upton@...ux.dev>,
Marc Zyngier <maz@...nel.org>, Dev Jain <dev.jain@....com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 13/13] arm64: mm: Provide level hint for
flush_tlb_page()
On Tue, Dec 16, 2025 at 02:45:58PM +0000, Ryan Roberts wrote:
> Previously tlb invalidations issued by __flush_tlb_page() did not
> contain a level hint. But the function is clearly only ever targeting
> level 3 tlb entries and its documentation agrees:
>
> | this operation only invalidates a single, last-level page-table
> | entry and therefore does not affect any walk-caches
>
> However, it turns out that the function was actually being used to
> invalidate a level 2 mapping via flush_tlb_fix_spurious_fault_pmd(). The
> bug was benign: because the level hint was not set, the HW would still
> invalidate the PMD mapping; and because the TLBF_NONOTIFY flag was also
> set, the bounds of the mapping were never used for anything else.
>
> Now that we have the new and improved range-invalidation API, it is
> trivial to fix flush_tlb_fix_spurious_fault_pmd() to explicitly flush the
> whole range (locally, without notification and last level only). So
> let's do that, and then update __flush_tlb_page() to hint level 3.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@....com>
> ---
> arch/arm64/include/asm/pgtable.h | 5 +++--
> arch/arm64/include/asm/tlbflush.h | 2 +-
> 2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index b96a7ca465a1..61f57647361a 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -138,8 +138,9 @@ static inline void arch_leave_lazy_mmu_mode(void)
> #define flush_tlb_fix_spurious_fault(vma, address, ptep) \
> __flush_tlb_page(vma, address, TLBF_NOBROADCAST | TLBF_NONOTIFY)
>
> -#define flush_tlb_fix_spurious_fault_pmd(vma, address, pmdp) \
> - __flush_tlb_page(vma, address, TLBF_NOBROADCAST | TLBF_NONOTIFY)
> +#define flush_tlb_fix_spurious_fault_pmd(vma, address, pmdp) \
> + __flush_tlb_range(vma, address, address + PMD_SIZE, PMD_SIZE, 2, \
> + TLBF_NOBROADCAST | TLBF_NONOTIFY | TLBF_NOWALKCACHE)
>
> /*
> * ZERO_PAGE is a global shared page that is always zero: used
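
Just to confirm my reading of the new arguments: with stride == PMD_SIZE, the
range [address, address + PMD_SIZE) should collapse to a single block
invalidation carrying a level-2 hint. A stand-alone toy model of that below
(my own sketch, not the kernel helper; model_flush_range() and the 2M
PMD_SIZE value are illustrative assumptions only):

#include <stdio.h>

#define PMD_SIZE (1UL << 21)	/* assumes 4K pages -> 2M PMD blocks */

/* Toy model of a stride-based range flush: one op per stride. */
static void model_flush_range(unsigned long start, unsigned long end,
			      unsigned long stride, int level)
{
	unsigned long addr;

	for (addr = start; addr < end; addr += stride)
		printf("TLBI addr=0x%lx level=%d\n", addr, level);
}

int main(void)
{
	unsigned long address = 0x400000UL;

	/* Mirrors the new macro's arguments: exactly one op is emitted. */
	model_flush_range(address, address + PMD_SIZE, PMD_SIZE, 2);
	return 0;
}

So the HW still sees one invalidation covering the whole PMD, as with the old
__flush_tlb_page() call, just with the level and walk-cache behaviour now
explicit.
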
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index fa5aee990742..f24211b51df3 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -577,7 +577,7 @@ static inline void __flush_tlb_page(struct vm_area_struct *vma,
> unsigned long start = round_down(uaddr, PAGE_SIZE);
> unsigned long end = start + PAGE_SIZE;
>
> - ___flush_tlb_range(vma, start, end, PAGE_SIZE, TLBI_TTL_UNKNOWN,
> + ___flush_tlb_range(vma, start, end, PAGE_SIZE, 3,
> TLBF_NOWALKCACHE | flags);
> }
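
And for the level 3 hint in __flush_tlb_page(): my understanding of the
FEAT_TTL encoding from the Arm ARM is that, for a 4K granule, level 3 ends up
as TTL=0b0111 in bits [47:44] of the TLBI VA* operand. A rough user-space
model below (based on my reading of the spec, not the kernel's
__TLBI_VADDR/__tlbi_level helpers; tlbi_va_operand() is a made-up name), so
worth double-checking the field positions:

#include <stdint.h>
#include <stdio.h>

/*
 * Model of a TLBI VA* operand with a FEAT_TTL hint, per my reading of
 * the Arm ARM: [63:48] ASID, [47:44] TTL, [43:0] VA[55:12].
 * For a 4K granule, TTL = (0b01 << 2) | level, so level 3 -> 0b0111.
 */
static uint64_t tlbi_va_operand(uint64_t va, uint16_t asid, unsigned int level)
{
	uint64_t ttl = (0x1ULL << 2) | (level & 0x3);	/* 4K granule + level */

	return ((uint64_t)asid << 48) | (ttl << 44) |
	       ((va >> 12) & ((1ULL << 44) - 1));
}

int main(void)
{
	printf("operand = 0x%016llx\n",
	       (unsigned long long)tlbi_va_operand(0x400000ULL, 42, 3));
	return 0;
}

With that understanding, the change looks correct to me.
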
Reviewed-by: Linu Cherian <linu.cherian@....com>