Message-ID: <019afc2d-b047-4e33-971c-7debbbaec84d@redhat.com>
Date: Wed, 19 Mar 2025 10:26:14 +1000
From: Gavin Shan <gshan@...hat.com>
To: Mark Brown <broonie@...nel.org>,
 Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
 Marc Zyngier <maz@...nel.org>, Oliver Upton <oliver.upton@...ux.dev>,
 Joey Gouly <joey.gouly@....com>, Suzuki K Poulose <suzuki.poulose@....com>,
 Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>
Cc: linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
 linux-kernel@...r.kernel.org, stable@...r.kernel.org,
 Mark Rutland <mark.rutland@....com>, Fuad Tabba <tabba@...gle.com>
Subject: Re: [PATCH 6.12 8/8] KVM: arm64: Eagerly switch ZCR_EL{1,2}

Hi Mark,

On 3/14/25 10:35 AM, Mark Brown wrote:
> From: Mark Rutland <mark.rutland@....com>
> 
> [ Upstream commit 59419f10045bc955d2229819c7cf7a8b0b9c5b59 ]
> 
> In non-protected KVM modes, while the guest FPSIMD/SVE/SME state is live on the
> CPU, the host's active SVE VL may differ from the guest's maximum SVE VL:
> 
> * For VHE hosts, when a VM uses NV, ZCR_EL2 contains a value constrained
>    by the guest hypervisor, which may be less than or equal to that
>    guest's maximum VL.
> 
>    Note: in this case the value of ZCR_EL1 is immaterial due to E2H.
> 
> * For nVHE/hVHE hosts, ZCR_EL1 contains a value written by the guest,
>    which may be less than or greater than the guest's maximum VL.
> 
>    Note: in this case hyp code traps host SVE usage and lazily restores
>    ZCR_EL2 to the host's maximum VL, which may be greater than the
>    guest's maximum VL.
> 
> This can be the case between exiting a guest and kvm_arch_vcpu_put_fp().
> If a softirq is taken during this period and the softirq handler tries
> to use kernel-mode NEON, then the kernel will fail to save the guest's
> FPSIMD/SVE state, and will pend a SIGKILL for the current thread.
> 
> This happens because kvm_arch_vcpu_ctxsync_fp() binds the guest's live
> FPSIMD/SVE state with the guest's maximum SVE VL, and
> fpsimd_save_user_state() verifies that the live SVE VL is as expected
> before attempting to save the register state:
> 
> | if (WARN_ON(sve_get_vl() != vl)) {
> |         force_signal_inject(SIGKILL, SI_KERNEL, 0, 0);
> |         return;
> | }
> 
> Fix this and make this a bit easier to reason about by always eagerly
> switching ZCR_EL{1,2} at hyp during guest<->host transitions. With this
> happening, there's no need to trap host SVE usage, and the nVHE/hVHE
> __deactivate_cptr_traps() logic can be simplified to enable host access
> to all present FPSIMD/SVE/SME features.
> 
> In protected nVHE/hVHE modes, the host's state is always saved/restored
> by hyp, and the guest's state is saved prior to exit to the host, so
> from the host's PoV the guest never has live FPSIMD/SVE/SME state, and
> the host's ZCR_EL1 is never clobbered by hyp.
> 
> Fixes: 8c8010d69c132273 ("KVM: arm64: Save/restore SVE state for nVHE")
> Fixes: 2e3cf82063a00ea0 ("KVM: arm64: nv: Ensure correct VL is loaded before saving SVE state")
> Signed-off-by: Mark Rutland <mark.rutland@....com>
> Reviewed-by: Mark Brown <broonie@...nel.org>
> Tested-by: Mark Brown <broonie@...nel.org>
> Cc: Catalin Marinas <catalin.marinas@....com>
> Cc: Fuad Tabba <tabba@...gle.com>
> Cc: Marc Zyngier <maz@...nel.org>
> Cc: Oliver Upton <oliver.upton@...ux.dev>
> Cc: Will Deacon <will@...nel.org>
> Reviewed-by: Oliver Upton <oliver.upton@...ux.dev>
> Link: https://lore.kernel.org/r/20250210195226.1215254-9-mark.rutland@arm.com
> Signed-off-by: Marc Zyngier <maz@...nel.org>
> Signed-off-by: Mark Brown <broonie@...nel.org>
> ---
>   arch/arm64/kvm/fpsimd.c                 | 30 -----------------
>   arch/arm64/kvm/hyp/entry.S              |  5 +++
>   arch/arm64/kvm/hyp/include/hyp/switch.h | 59 +++++++++++++++++++++++++++++++++
>   arch/arm64/kvm/hyp/nvhe/hyp-main.c      | 13 ++++----
>   arch/arm64/kvm/hyp/nvhe/switch.c        | 33 +++++++++++++++---
>   arch/arm64/kvm/hyp/vhe/switch.c         |  4 +++
>   6 files changed, 103 insertions(+), 41 deletions(-)
> 

[...]

> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> index 4e757a77322c9efc59cdff501745f7c80d452358..1c8e2ad32e8c396fc4b11d5fec2e86728f2829d9 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> @@ -5,6 +5,7 @@
>    */
>   
>   #include <hyp/adjust_pc.h>
> +#include <hyp/switch.h>
>   
>   #include <asm/pgtable-types.h>
>   #include <asm/kvm_asm.h>
> @@ -176,8 +177,12 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
>   		sync_hyp_vcpu(hyp_vcpu);
>   		pkvm_put_hyp_vcpu(hyp_vcpu);
>   	} else {
> +		struct kvm_vcpu *vcpu = kern_hyp_va(host_vcpu);
> +
>   		/* The host is fully trusted, run its vCPU directly. */
> -		ret = __kvm_vcpu_run(host_vcpu);
> +		fpsimd_lazy_switch_to_guest(vcpu);
> +		ret = __kvm_vcpu_run(vcpu);
> +		fpsimd_lazy_switch_to_host(vcpu);
>   	}
>   

@host_vcpu should already be a hypervisor linear mapping address in v6.12. It
looks incorrect to assume it's a kernel linear mapping address and convert it
(@host_vcpu) to a hypervisor linear address again, unless I'm missing something.

https://web.git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/kvm/hyp/nvhe/hyp-main.c?h=linux-6.12.y

Thanks,
Gavin

>   out:
> @@ -486,12 +491,6 @@ void handle_trap(struct kvm_cpu_context *host_ctxt)
>   	case ESR_ELx_EC_SMC64:
>   		handle_host_smc(host_ctxt);
>   		break;
> -	case ESR_ELx_EC_SVE:
> -		cpacr_clear_set(0, CPACR_ELx_ZEN);
> -		isb();
> -		sve_cond_update_zcr_vq(sve_vq_from_vl(kvm_host_sve_max_vl) - 1,
> -				       SYS_ZCR_EL2);
> -		break;
>   	case ESR_ELx_EC_IABT_LOW:
>   	case ESR_ELx_EC_DABT_LOW:
>   		handle_host_mem_abort(host_ctxt);
> diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
> index ee74006c47bc44ca1d9bdf1ce7d4d8a41cf8e494..a1245fa838319544f3770a05a58eeed5233f0324 100644
> --- a/arch/arm64/kvm/hyp/nvhe/switch.c
> +++ b/arch/arm64/kvm/hyp/nvhe/switch.c
> @@ -40,6 +40,9 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
>   {
>   	u64 val = CPTR_EL2_TAM;	/* Same bit irrespective of E2H */
>   
> +	if (!guest_owns_fp_regs())
> +		__activate_traps_fpsimd32(vcpu);
> +
>   	if (has_hvhe()) {
>   		val |= CPACR_ELx_TTA;
>   
> @@ -48,6 +51,8 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
>   			if (vcpu_has_sve(vcpu))
>   				val |= CPACR_ELx_ZEN;
>   		}
> +
> +		write_sysreg(val, cpacr_el1);
>   	} else {
>   		val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1;
>   
> @@ -62,12 +67,32 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
>   
>   		if (!guest_owns_fp_regs())
>   			val |= CPTR_EL2_TFP;
> +
> +		write_sysreg(val, cptr_el2);
>   	}
> +}
>   
> -	if (!guest_owns_fp_regs())
> -		__activate_traps_fpsimd32(vcpu);
> +static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
> +{
> +	if (has_hvhe()) {
> +		u64 val = CPACR_ELx_FPEN;
> +
> +		if (cpus_have_final_cap(ARM64_SVE))
> +			val |= CPACR_ELx_ZEN;
> +		if (cpus_have_final_cap(ARM64_SME))
> +			val |= CPACR_ELx_SMEN;
> +
> +		write_sysreg(val, cpacr_el1);
> +	} else {
> +		u64 val = CPTR_NVHE_EL2_RES1;
> +
> +		if (!cpus_have_final_cap(ARM64_SVE))
> +			val |= CPTR_EL2_TZ;
> +		if (!cpus_have_final_cap(ARM64_SME))
> +			val |= CPTR_EL2_TSM;
>   
> -	kvm_write_cptr_el2(val);
> +		write_sysreg(val, cptr_el2);
> +	}
>   }
>   
>   static void __activate_traps(struct kvm_vcpu *vcpu)
> @@ -120,7 +145,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
>   
>   	write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
>   
> -	kvm_reset_cptr_el2(vcpu);
> +	__deactivate_cptr_traps(vcpu);
>   	write_sysreg(__kvm_hyp_host_vector, vbar_el2);
>   }
>   
> diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> index 46c1f5caf007331cdbbc806a184e9b4721042fc0..496abfd3646b9858e95e06a79edec11eee3a5893 100644
> --- a/arch/arm64/kvm/hyp/vhe/switch.c
> +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> @@ -462,6 +462,8 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>   
>   	sysreg_save_host_state_vhe(host_ctxt);
>   
> +	fpsimd_lazy_switch_to_guest(vcpu);
> +
>   	/*
>   	 * Note that ARM erratum 1165522 requires us to configure both stage 1
>   	 * and stage 2 translation for the guest context before we clear
> @@ -486,6 +488,8 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
>   
>   	__deactivate_traps(vcpu);
>   
> +	fpsimd_lazy_switch_to_host(vcpu);
> +
>   	sysreg_restore_host_state_vhe(host_ctxt);
>   
>   	if (guest_owns_fp_regs())
> 

