Message-ID: <CA+EHjTwQ4fLBE1YXoB6M0eamSgGDW=nfLaC+-_surBfVbh3byQ@mail.gmail.com>
Date: Mon, 12 Jan 2026 17:59:17 +0000
From: Fuad Tabba <tabba@...gle.com>
To: Mark Brown <broonie@...nel.org>
Cc: Marc Zyngier <maz@...nel.org>, Joey Gouly <joey.gouly@....com>,
Catalin Marinas <catalin.marinas@....com>, Suzuki K Poulose <suzuki.poulose@....com>,
Will Deacon <will@...nel.org>, Paolo Bonzini <pbonzini@...hat.com>, Jonathan Corbet <corbet@....net>,
Shuah Khan <shuah@...nel.org>, Oliver Upton <oupton@...nel.org>, Dave Martin <Dave.Martin@....com>,
Mark Rutland <mark.rutland@....com>, Ben Horgan <ben.horgan@....com>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kselftest@...r.kernel.org, Peter Maydell <peter.maydell@...aro.org>,
Eric Auger <eric.auger@...hat.com>
Subject: Re: [PATCH v9 19/30] KVM: arm64: Provide assembly for SME register access
Hi Mark,
On Tue, 23 Dec 2025 at 01:22, Mark Brown <broonie@...nel.org> wrote:
>
> Provide versions of the SME state save and restore functions for the
> hypervisor to allow it to restore ZA and ZT for guests.
>
> Signed-off-by: Mark Brown <broonie@...nel.org>
> ---
> arch/arm64/include/asm/kvm_hyp.h | 3 +++
> arch/arm64/kvm/hyp/fpsimd.S | 26 ++++++++++++++++++++++++++
> 2 files changed, 29 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index 0317790dd3b7..1cef9991d238 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -116,6 +116,9 @@ void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
> void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
> void __sve_save_state(void *sve_pffr, u32 *fpsr, int save_ffr);
> void __sve_restore_state(void *sve_pffr, u32 *fpsr, int restore_ffr);
> +int __sve_get_vl(void);
> +void __sme_save_state(void const *state, bool restore_zt);
> +void __sme_restore_state(void const *state, bool restore_zt);
Would it be a good idea to pass the VL to these functions? Currently
they assume that the hardware's current VL matches the buffer's
intended layout, and if there is a mismatch between the guest's VL and
the current one, that could be difficult to debug. Passing the VL and
checking it against _sme_rdsvl would be an inexpensive way to catch
such mismatches.
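For illustration, an untested sketch (the extra argument and the
register allocation are hypothetical; here the caller passes the
expected VL/8 as a third argument in x2):

SYM_FUNC_START(__sme_save_state)
	_sme_rdsvl	3, 1		// x3 = current VL/8
	cmp	x3, x2			// x2 = expected VL/8 from caller
	b.eq	1f
	brk	#0			// VL mismatch: fail loudly
1:	sme_save_za	0, x3, 12
	...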
>
> u64 __guest_enter(struct kvm_vcpu *vcpu);
>
> diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S
> index 6e16cbfc5df2..44a1b0a483da 100644
> --- a/arch/arm64/kvm/hyp/fpsimd.S
> +++ b/arch/arm64/kvm/hyp/fpsimd.S
> @@ -29,3 +29,29 @@ SYM_FUNC_START(__sve_save_state)
> sve_save 0, x1, x2, 3
> ret
> SYM_FUNC_END(__sve_save_state)
> +
> +SYM_FUNC_START(__sve_get_vl)
> + _sve_rdvl 0, 1
> + ret
> +SYM_FUNC_END(__sve_get_vl)
Since this is just one instruction, would it be better to implement it
as inline assembly in the header file rather than as a full
SYM_FUNC_START, to avoid the call overhead?
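Roughly (a sketch, assuming the assembler accepts the RDVL mnemonic
via .arch_extension; note the current prototype returns int, but RDVL
writes an X register, so I've used u64 here):

static __always_inline u64 __sve_get_vl(void)
{
	u64 vl;

	asm volatile(
		".arch_extension sve\n"
		"rdvl %0, #1"		/* current VL in bytes */
		: "=r" (vl));

	return vl;
}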
> +
> +SYM_FUNC_START(__sme_save_state)
I think this needs an isb(). We need to ensure that SMCR updates are
visible here. Looking ahead to where you introduce
__hyp_sme_save_guest(), that doesn't have a barrier after updating
SMCR either. The alternative is to place the barrier at the call sites
where it's needed, but then make sure that this is well documented.
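i.e. something like (a minimal sketch, if the barrier ends up here
rather than at the callers):

SYM_FUNC_START(__sme_save_state)
	isb				// ensure prior SMCR_ELx writes have taken effect
	_sme_rdsvl	2, 1		// x2 = VL/8
	...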
> + _sme_rdsvl 2, 1 // x2 = VL/8
> + sme_save_za 0, x2, 12 // Leaves x0 pointing to the end of ZA
> +
> + cbz x1, 1f
> + _str_zt 0
> +1:
> + ret
> +SYM_FUNC_END(__sme_save_state)
> +
> +SYM_FUNC_START(__sme_restore_state)
Same as above.
Cheers,
/fuad
> + _sme_rdsvl 2, 1 // x2 = VL/8
> + sme_load_za 0, x2, 12 // Leaves x0 pointing to end of ZA
> +
> + cbz x1, 1f
> + _ldr_zt 0
> +
> +1:
> + ret
> +SYM_FUNC_END(__sme_restore_state)
>
> --
> 2.47.3
>