Message-ID: <Z4Fk58k8YptDkVgm@arm.com>
Date: Fri, 10 Jan 2025 18:20:23 +0000
From: Catalin Marinas <catalin.marinas@....com>
To: "Aneesh Kumar K.V (Arm)" <aneesh.kumar@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
kvmarm@...ts.linux.dev, Suzuki K Poulose <Suzuki.Poulose@....com>,
Steven Price <steven.price@....com>, Will Deacon <will@...nel.org>,
Marc Zyngier <maz@...nel.org>, Mark Rutland <mark.rutland@....com>,
Oliver Upton <oliver.upton@...ux.dev>,
Joey Gouly <joey.gouly@....com>, Zenghui Yu <yuzenghui@...wei.com>
Subject: Re: [PATCH v2 5/7] KVM: arm64: MTE: Use stage-2 NoTagAccess memory
attribute if supported
On Fri, Jan 10, 2025 at 04:30:21PM +0530, Aneesh Kumar K.V (Arm) wrote:
> Currently, the kernel won't start a guest if the MTE feature is enabled
> and the guest RAM is backed by memory which doesn't support access tags.
> Update this such that the kernel uses the NoTagAccess memory attribute
> while mapping pages from VMAs for which MTE is not allowed. The fault
> from accessing the access tags with such pages is forwarded to VMM so
> that VMM can decide to kill the guest or take any corrective actions
>
> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@...nel.org>
Mostly nitpicks below (apart from the slot registration). The decision
on whether that's the best approach lies with Oliver/Marc.
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index cf811009a33c..609ed6a5ffce 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -378,6 +378,11 @@ static inline bool kvm_vcpu_trap_is_exec_fault(const struct kvm_vcpu *vcpu)
> return kvm_vcpu_trap_is_iabt(vcpu) && !kvm_vcpu_abt_iss1tw(vcpu);
> }
>
> +static inline bool kvm_vcpu_trap_is_tagaccess(const struct kvm_vcpu *vcpu)
> +{
> + return !!(ESR_ELx_ISS2(kvm_vcpu_get_esr(vcpu)) & ESR_ELx_TagAccess);
> +}
The function's return type is already bool, so the "!!" is unnecessary.
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index eb8220a409e1..3610bea7607d 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1660,9 +1660,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>
> if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
> /* Check the VMM hasn't introduced a new disallowed VMA */
> - if (mte_allowed) {
> + if (mte_allowed)
> sanitise_mte_tags(kvm, pfn, vma_pagesize);
> - } else {
> + else if (kvm_has_mte_perm(kvm))
> + prot |= KVM_PGTABLE_PROT_NORMAL_NOTAGACCESS;
> + else {
> ret = -EFAULT;
> goto out_unlock;
> }
Don't remove the braces at the end of "if (mte_allowed) {" etc. The
coding style does require them when at least one of the branches is
multi-line (the last "else" here).
> @@ -2152,7 +2162,8 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
> if (!vma)
> break;
>
> - if (kvm_has_mte(kvm) && !kvm_vma_mte_allowed(vma)) {
> + if (kvm_has_mte(kvm) &&
> + !kvm_has_mte_perm(kvm) && !kvm_vma_mte_allowed(vma)) {
> ret = -EINVAL;
> break;
> }
I don't think we should change this, or at least not the way it's done
above (Suzuki raised a related issue internally about relaxing this for
VM_PFNMAP). For standard memory slots, we want to reject them upfront
rather than defer to the fault handler. An example here is a file
mmap() passed as standard RAM to the VM. It's an unnecessary change in
behaviour IMHO. I'd only relax this for VM_PFNMAP mappings further down
in this function (and move the VM_PFNMAP check above; see Suzuki's
internal patch, unless he has posted it publicly already).
--
Catalin