Message-Id: <20240910-kvm-arm64-limit-guest-vl-v1-1-54df40b95ffb@kernel.org>
Date: Tue, 10 Sep 2024 21:21:17 +0100
From: Mark Brown <broonie@...nel.org>
To: Marc Zyngier <maz@...nel.org>, Oliver Upton <oliver.upton@...ux.dev>,
James Morse <james.morse@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>,
Fuad Tabba <tabba@...gle.com>
Cc: Dave Martin <Dave.Martin@....com>, linux-arm-kernel@...ts.infradead.org,
kvmarm@...ts.linux.dev, linux-kernel@...r.kernel.org,
Mark Brown <broonie@...nel.org>
Subject: [PATCH] KVM: arm64: Constrain the host to the maximum shared SVE
VL with pKVM

When pKVM saves and restores the host floating point state on an SVE
system it programs the vector length in ZCR_EL2.LEN to be whatever the
maximum VL for the current PE is, but uses a buffer allocated with
kvm_host_sve_max_vl, the maximum VL shared by all PEs in the system.
This means that on a system where the maximum VLs are not consistent we
will overflow the buffer on PEs which support larger VLs.
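
To make the mismatch concrete: architecturally the SVE register file is
32 Z registers of VL bytes each, plus 16 predicate registers and FFR at
VL/8 bytes each, so the amount of state saved grows with the programmed
VL. A minimal user-space sketch of the resulting overflow (not kernel
code; sve_state_bytes() is a hypothetical helper, and the kernel's
actual buffer layout differs in detail):

  #include <stdio.h>

  /* Bytes of SVE state for a given VL (in bytes): 32 Z registers of
   * VL bytes, plus 16 predicate registers and FFR at VL/8 bytes each. */
  static unsigned int sve_state_bytes(unsigned int vl)
  {
          return 32 * vl + (16 + 1) * (vl / 8);
  }

  int main(void)
  {
          unsigned int shared_max_vl = 16; /* all PEs support 128 bits */
          unsigned int this_pe_vl = 32;    /* this PE supports 256 bits */

          printf("buffer sized for shared max VL: %u bytes\n",
                 sve_state_bytes(shared_max_vl)); /* 546 */
          printf("state saved at this PE's VL:    %u bytes\n",
                 sve_state_bytes(this_pe_vl));    /* 1092 */
          /* The save writes 1092 bytes into a 546 byte buffer. */
          return 0;
  }
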
Since the host will not currently attempt to make use of non-shared VLs,
fix this by explicitly setting the EL2 VL to be the maximum shared VL
when we save and restore the host state. This enforces the limit on host
VL usage. Should we wish to support asymmetric VLs, this code will need
to be updated along with the required changes for the host; patches have
previously been posted:

https://lore.kernel.org/r/20240730-kvm-arm64-fix-pkvm-sve-vl-v6-0-cae8a2e0bd66@kernel.org
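
For reference, the ZCR_ELx.LEN field holds the vector length in 128-bit
quadwords minus one (effective VL = (LEN + 1) x 128 bits), and the
kernel's sve_vq_from_vl() is VL / 16, which is why the writes below
program sve_vq_from_vl(kvm_host_sve_max_vl) - 1. A small sketch of the
encoding (SVE_VQ_BYTES is defined locally here for self-containment):

  #include <assert.h>

  #define SVE_VQ_BYTES 16 /* one 128-bit quadword */

  /* ZCR_ELx.LEN value requesting a given VL (in bytes): LEN = VQ - 1 */
  static unsigned int zcr_len_for_vl(unsigned int vl)
  {
          return vl / SVE_VQ_BYTES - 1;
  }

  int main(void)
  {
          assert(zcr_len_for_vl(16) == 0);   /* VL 128 bits  -> LEN 0  */
          assert(zcr_len_for_vl(32) == 1);   /* VL 256 bits  -> LEN 1  */
          assert(zcr_len_for_vl(256) == 15); /* VL 2048 bits -> LEN 15 */
          return 0;
  }
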
Fixes: b5b9955617bc ("KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM")
Signed-off-by: Mark Brown <broonie@...nel.org>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 7 ++++---
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index f59ccfe11ab9..ab1425baf0e9 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -339,7 +339,7 @@ static inline void __hyp_sve_save_host(void)
 	struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
 
 	sve_state->zcr_el1 = read_sysreg_el1(SYS_ZCR);
-	write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+	write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2);
 	__sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl),
 			 &sve_state->fpsr,
 			 true);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index f43d845f3c4e..90ff79950912 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -45,10 +45,11 @@ static void __hyp_sve_restore_host(void)
 	 * the host. The layout of the data when saving the sve state depends
 	 * on the VL, so use a consistent (i.e., the maximum) host VL.
 	 *
-	 * Setting ZCR_EL2 to ZCR_ELx_LEN_MASK sets the effective length
-	 * supported by the system (or limited at EL3).
+	 * Note that this constrains the PE to the maximum shared VL
+	 * that was discovered; if we wish to use larger VLs, this will
+	 * need to be revisited.
 	 */
-	write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+	write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2);
 	__sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl),
 			    &sve_state->fpsr,
 			    true);
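
As the comment in the second hunk notes, the layout of the save area
depends on the VL: offsets within it, like the FFR offset computed by
sve_ffr_offset() above, scale with the VL the state was saved at. A
sketch of that dependence (ffr_offset() here is a hypothetical stand-in
mirroring the Z-registers-then-predicates-then-FFR packing, not the
kernel's exact arithmetic):

  #include <stdio.h>

  /* Offset of FFR in a packed save area laid out for a given VL: it
   * follows the 32 Z registers (VL bytes each) and the 16 predicate
   * registers (VL/8 bytes each). */
  static unsigned int ffr_offset(unsigned int vl)
  {
          return 32 * vl + 16 * (vl / 8);
  }

  int main(void)
  {
          /* The same register sits at different offsets at different
           * VLs, so the VL programmed into ZCR_EL2 must match the VL
           * the buffer was laid out for. */
          printf("FFR offset at VL 16: %u\n", ffr_offset(16)); /* 544 */
          printf("FFR offset at VL 32: %u\n", ffr_offset(32)); /* 1088 */
          return 0;
  }
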
---
base-commit: 7c626ce4bae1ac14f60076d00eafe71af30450ba
change-id: 20240910-kvm-arm64-limit-guest-vl-d5fba0c7cc7b
Best regards,
--
Mark Brown <broonie@...nel.org>