Message-ID: <8ca00b05-d402-4359-9403-32dc714e3cb0@arm.com>
Date: Tue, 21 Oct 2025 14:45:55 +0100
From: Ben Horgan <ben.horgan@....com>
To: Anshuman Khandual <anshuman.khandual@....com>,
 linux-arm-kernel@...ts.infradead.org
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] arm64/mm: Add remaining TLBI_XXX_MASK macros

Hi Anshuman,

On 10/21/25 13:45, Anshuman Khandual wrote:
> 
> 
> On 21/10/25 2:30 PM, Ben Horgan wrote:
>> Hi Anshuman,
>>
>> On 10/21/25 06:20, Anshuman Khandual wrote:
>>> Add remaining TLBI_XXX_MASK macros and replace currently open coded fields.
>>> While here, replace hard coded page size based shifts with ones derived
>>> via ilog2(), thus adding some required context.
>>>
>>> Cc: Catalin Marinas <catalin.marinas@....com>
>>> Cc: Will Deacon <will@...nel.org>
>>> Cc: linux-arm-kernel@...ts.infradead.org
>>> Cc: linux-kernel@...r.kernel.org
>>> Signed-off-by: Anshuman Khandual <anshuman.khandual@....com>
>>> ---
>>>  arch/arm64/include/asm/tlbflush.h | 26 ++++++++++++++++++--------
>>>  1 file changed, 18 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>>> index 131096094f5b..cf75fc2a06c3 100644
>>> --- a/arch/arm64/include/asm/tlbflush.h
>>> +++ b/arch/arm64/include/asm/tlbflush.h
[...]
>>> @@ -100,8 +101,17 @@ static inline unsigned long get_trans_granule(void)
>>>   *
>>>   * For Stage-2 invalidation, use the level values provided to that effect
>>>   * in asm/stage2_pgtable.h.
>>> + *
>>> + * +----------+------+-------+--------------------------------------+
>>> + * |   ASID   |  TG  |  TTL  |                 BADDR                |
>>> + * +----------+------+-------+--------------------------------------+
>>> + * |63      48|47  46|45   44|43                                   0|
>>> + * +----------+------+-------+--------------------------------------+
>>>   */
>>> -#define TLBI_TTL_MASK		GENMASK_ULL(47, 44)
>>> +#define TLBI_ASID_MASK		GENMASK_ULL(63, 48)
>>> +#define TLBI_TG_MASK		GENMASK_ULL(47, 46)
>>> +#define TLBI_TTL_MASK		GENMASK_ULL(45, 44)
>>
>> The definition of TLBI_TTL_MASK changes here. This might be the correct
>> thing to do but it should be mentioned in the commit message and the
> 
> Sure, will update the commit message.
>> other user, arch/arm64/kvm/nested.c, needs to be updated in tandem.
> 
> Right, missed that one. Probably something like the following change
> might do it for KVM without much code churn.
> 
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -540,7 +540,7 @@ unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val)
>         unsigned long max_size;
>         u8 ttl;
> 
> -       ttl = FIELD_GET(TLBI_TTL_MASK, val);
> +       ttl = FIELD_GET(TLBI_TTL_MASK, val) | FIELD_GET(TLBI_TG_MASK, val);

This and the other changed lines are missing a shift, but otherwise this
seems reasonable.
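
For example, something along these lines might work (untested, and assuming
ttl_to_size() and the TTL-hint handling still expect the old 4-bit TG:TTL
encoding that FIELD_GET() returned with the previous GENMASK_ULL(47, 44)
mask):

	ttl = (FIELD_GET(TLBI_TG_MASK, val) << 2) |
	      FIELD_GET(TLBI_TTL_MASK, val);

i.e. TG needs to land in bits [3:2] above the two TTL bits rather than
being OR'ed into bits [1:0].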

> 
>         if (!ttl || !kvm_has_feat(kvm, ID_AA64MMFR2_EL1, TTL, IMP)) {
>                 /* No TTL, check the shadow S2 for a hint */
> @@ -963,7 +963,7 @@ static void compute_s1_tlbi_range(struct kvm_vcpu *vcpu, u32 inst, u64 val,
>         case OP_TLBI_VALE1ISNXS:
>         case OP_TLBI_VALE1OSNXS:
>                 scope->type = TLBI_VA;
> -               scope->size = ttl_to_size(FIELD_GET(TLBI_TTL_MASK, val));
> +               scope->size = ttl_to_size(FIELD_GET(TLBI_TTL_MASK, val) | FIELD_GET(TLBI_TG_MASK, val));
>                 if (!scope->size)
>                         scope->size = SZ_1G;
>                 scope->va = tlbi_va_s1_to_va(val) & ~(scope->size - 1);
> @@ -991,7 +991,7 @@ static void compute_s1_tlbi_range(struct kvm_vcpu *vcpu, u32 inst, u64 val,
>         case OP_TLBI_VAALE1ISNXS:
>         case OP_TLBI_VAALE1OSNXS:
>                 scope->type = TLBI_VAA;
> -               scope->size = ttl_to_size(FIELD_GET(TLBI_TTL_MASK, val));
> +               scope->size = ttl_to_size(FIELD_GET(TLBI_TTL_MASK, val) | FIELD_GET(TLBI_TG_MASK, val));
>                 if (!scope->size)
>                         scope->size = SZ_1G;
>                 scope->va = tlbi_va_s1_to_va(val) & ~(scope->size - 1);
> 
>>
>>> +#define TLBI_BADDR_MASK		GENMASK_ULL(43, 0)
>>>  
>>>  #define TLBI_TTL_UNKNOWN	INT_MAX
>>>  
>>
>> Thanks,
>>
>> Ben
>>
> 

Thanks,

Ben

