Message-ID: <b5ff76a5-b068-4c6b-aa21-d932da42e1e9@arm.com>
Date: Wed, 5 Nov 2025 09:05:30 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: Mark Rutland <mark.rutland@....com>
Cc: linux-arm-kernel@...ts.infradead.org,
Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Ryan Roberts <ryan.roberts@....com>, Ard Biesheuvel <ardb@...nel.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/6] arm64/mm: Ensure correct 48 bit PA gets into
TTBRx_EL1
On 04/11/25 8:47 PM, Mark Rutland wrote:
> On Mon, Nov 03, 2025 at 05:26:16AM +0000, Anshuman Khandual wrote:
>> Even though the 48 bit PA representation in TTBRx_EL1 does not involve
>> shifting partial bits like the 52 bit variant does, it still needs to be
>> masked properly for correctness. Hence mask the 48 bit PA with
>> TTBRx_EL1_BADDR_MASK.
>
> There is no need for the address "to be masked properly for
> correctness".
>
> We added masking for 52-bit PAs due to the need to shuffle the bits
> around. There is no need for that when using 48-bit PAs, since the
> address must be below 2^48, and the address must be suitably aligned.
>
> If any bits are set outside of this mask, that is a bug in the caller.
>
> Mark.
Agreed - we probably should not mask out a wrong address from the caller just
to proceed with programming TTBRx_EL1, only to cause a problem further down
the line.
>
>> Cc: Catalin Marinas <catalin.marinas@....com>
>> Cc: Will Deacon <will@...nel.org>
>> Cc: linux-arm-kernel@...ts.infradead.org
>> Cc: linux-kernel@...r.kernel.org
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@....com>
>> ---
>> arch/arm64/include/asm/assembler.h | 1 +
>> arch/arm64/include/asm/pgtable.h | 2 +-
>> 2 files changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
>> index 23be85d93348..d5eb09fc5f8a 100644
>> --- a/arch/arm64/include/asm/assembler.h
>> +++ b/arch/arm64/include/asm/assembler.h
>> @@ -609,6 +609,7 @@ alternative_endif
>> and \ttbr, \ttbr, #TTBR_BADDR_MASK_52
>> #else
>> mov \ttbr, \phys
>> + and \ttbr, \ttbr, #TTBRx_EL1_BADDR_MASK
>> #endif
>> .endm
>>
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index 0944e296dd4a..c3110040c137 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -1604,7 +1604,7 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
>> #ifdef CONFIG_ARM64_PA_BITS_52
>> #define phys_to_ttbr(addr) (((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52)
>> #else
>> -#define phys_to_ttbr(addr) (addr)
>> +#define phys_to_ttbr(addr) (addr & TTBRx_EL1_BADDR_MASK)
>> #endif
>>
>> /*
>> --
>> 2.30.2
>>