Message-Id: <20251223-kvm-arm64-sme-v9-3-8be3867cb883@kernel.org>
Date: Tue, 23 Dec 2025 01:20:57 +0000
From: Mark Brown <broonie@...nel.org>
To: Marc Zyngier <maz@...nel.org>, Joey Gouly <joey.gouly@....com>,
Catalin Marinas <catalin.marinas@....com>,
Suzuki K Poulose <suzuki.poulose@....com>, Will Deacon <will@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>, Jonathan Corbet <corbet@....net>,
Shuah Khan <shuah@...nel.org>, Oliver Upton <oupton@...nel.org>
Cc: Dave Martin <Dave.Martin@....com>, Fuad Tabba <tabba@...gle.com>,
Mark Rutland <mark.rutland@....com>, Ben Horgan <ben.horgan@....com>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kselftest@...r.kernel.org,
Peter Maydell <peter.maydell@...aro.org>,
Eric Auger <eric.auger@...hat.com>, Mark Brown <broonie@...nel.org>
Subject: [PATCH v9 03/30] arm64/fpsimd: Decide to save ZT0 and streaming
mode FFR at bind time

Some parts of the SME state are optional: they are added by features
layered on top of the base FEAT_SME and are controlled by enable bits in
SMCR_ELx. These are the FFR register in streaming mode (FEAT_SME_FA64)
and ZT0 (FEAT_SME2). We unconditionally enable both for the host, but
for KVM we will allow the VMM to restrict the feature set exposed to
guests.
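
As a minimal standalone sketch of how these two enable bits gate the
optional state (illustrative only, not kernel code; the bit positions
follow the SMCR_ELx layout, FA64 at bit 31 and EZT0 at bit 30):

#include <stdint.h>
#include <stdio.h>

/* SMCR_ELx enable bits for the optional SME state. */
#define SMCR_ELx_EZT0	(UINT64_C(1) << 30)	/* FEAT_SME2: ZT0 */
#define SMCR_ELx_FA64	(UINT64_C(1) << 31)	/* FEAT_SME_FA64: streaming mode FFR */

int main(void)
{
	/* The host enables everything it supports; a VMM may clear bits. */
	uint64_t sme_features = SMCR_ELx_FA64 | SMCR_ELx_EZT0;

	printf("save streaming mode FFR: %s\n",
	       (sme_features & SMCR_ELx_FA64) ? "yes" : "no");
	printf("save ZT0: %s\n",
	       (sme_features & SMCR_ELx_EZT0) ? "yes" : "no");
	return 0;
}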

For non-protected guests we defer saving of guest floating point state
to the host kernel. We also want to avoid having to reconfigure the
guest floating point state if nothing used floating point while the
host was running. If the guest was running with the optional features
disabled then traps will be enabled for them, so the host kernel needs
to skip that state when saving the guest's state.

Support this by moving the decision about saving this state to the
point where we bind floating point state to the CPU, adding a new field
to struct cpu_fp_state which uses the enable bits in SMCR_ELx to flag
which features are enabled.
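
The shape of the change, as a self-contained mock (a userspace
illustration of the bind/save split; the system_supports_*() stubs
stand in for the kernel's cpufeature checks, and the should_save_*()
helper names are made up for illustration):

#include <stdbool.h>
#include <stdint.h>

#define SMCR_ELx_EZT0	(UINT64_C(1) << 30)
#define SMCR_ELx_FA64	(UINT64_C(1) << 31)

struct cpu_fp_state {
	uint64_t sme_features;
};

/* Stubs standing in for the kernel's cpufeature queries. */
static bool system_supports_fa64(void) { return true; }
static bool system_supports_sme2(void) { return true; }

/* Bind time: decide once which optional SME state is live. */
static void bind_fp_state(struct cpu_fp_state *last)
{
	last->sme_features = 0;
	if (system_supports_fa64())
		last->sme_features |= SMCR_ELx_FA64;
	if (system_supports_sme2())
		last->sme_features |= SMCR_ELx_EZT0;
}

/* Save time: consult the snapshot instead of re-querying. */
static bool should_save_ffr(const struct cpu_fp_state *last)
{
	return last->sme_features & SMCR_ELx_FA64;
}

static bool should_save_zt0(const struct cpu_fp_state *last)
{
	return last->sme_features & SMCR_ELx_EZT0;
}
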
Signed-off-by: Mark Brown <broonie@...nel.org>
---
arch/arm64/include/asm/fpsimd.h | 1 +
arch/arm64/kernel/fpsimd.c | 10 ++++++++--
arch/arm64/kvm/fpsimd.c | 1 +
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index ece65061dea0..146c1af55e22 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -87,6 +87,7 @@ struct cpu_fp_state {
 	void *sme_state;
 	u64 *svcr;
 	u64 *fpmr;
+	u64 sme_features;
 	unsigned int sve_vl;
 	unsigned int sme_vl;
 	enum fp_type *fp_type;
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index be4499ff6498..887fce177c92 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -490,12 +490,12 @@ static void fpsimd_save_user_state(void)
 
 		if (*svcr & SVCR_ZA_MASK)
 			sme_save_state(last->sme_state,
-				       system_supports_sme2());
+				       last->sme_features & SMCR_ELx_EZT0);
 
 		/* If we are in streaming mode override regular SVE. */
 		if (*svcr & SVCR_SM_MASK) {
 			save_sve_regs = true;
-			save_ffr = system_supports_fa64();
+			save_ffr = last->sme_features & SMCR_ELx_FA64;
 			vl = last->sme_vl;
 		}
 	}
@@ -1671,6 +1671,12 @@ static void fpsimd_bind_task_to_cpu(void)
 	last->to_save = FP_STATE_CURRENT;
 	current->thread.fpsimd_cpu = smp_processor_id();
 
+	last->sme_features = 0;
+	if (system_supports_fa64())
+		last->sme_features |= SMCR_ELx_FA64;
+	if (system_supports_sme2())
+		last->sme_features |= SMCR_ELx_EZT0;
+
 	/*
 	 * Toggle SVE and SME trapping for userspace if needed, these
 	 * are serialsied by ret_to_user().
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 15e17aca1dec..9158353d8be3 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -80,6 +80,7 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
 	fp_state.svcr = __ctxt_sys_reg(&vcpu->arch.ctxt, SVCR);
 	fp_state.fpmr = __ctxt_sys_reg(&vcpu->arch.ctxt, FPMR);
 	fp_state.fp_type = &vcpu->arch.fp_type;
+	fp_state.sme_features = 0;
 
 	if (vcpu_has_sve(vcpu))
 		fp_state.to_save = FP_STATE_SVE;
--
2.47.3