Message-ID: <ub4djdh4iqy5mhl4ea6gpalu2tpv5ymnw63wdkwehldzh477eq@frxtjt3umsqh>
Date: Fri, 26 Dec 2025 14:51:37 +0800
From: Yao Yuan <yaoyuan@...ux.alibaba.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org, seanjc@...gle.com,
x86@...nel.org, stable@...r.kernel.org
Subject: Re: [PATCH 1/5] x86, fpu: introduce fpu_load_guest_fpstate()
On Wed, Dec 24, 2025 at 01:12:45AM +0800, Paolo Bonzini wrote:
> Create a variant of fpregs_lock_and_load() that KVM can use in its
> vCPU entry code after preemption has been disabled. While basing
> it on the existing logic in vcpu_enter_guest(), ensure that
> fpregs_assert_state_consistent() always runs and sprinkle a few
> more assertions.
>
> Cc: stable@...r.kernel.org
> Fixes: 820a6ee944e7 ("kvm: x86: Add emulation for IA32_XFD", 2022-01-14)
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> ---
> arch/x86/include/asm/fpu/api.h | 1 +
> arch/x86/kernel/fpu/core.c | 17 +++++++++++++++++
> arch/x86/kvm/x86.c | 8 +-------
> 3 files changed, 19 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/include/asm/fpu/api.h b/arch/x86/include/asm/fpu/api.h
> index cd6f194a912b..0820b2621416 100644
> --- a/arch/x86/include/asm/fpu/api.h
> +++ b/arch/x86/include/asm/fpu/api.h
> @@ -147,6 +147,7 @@ extern void *get_xsave_addr(struct xregs_state *xsave, int xfeature_nr);
> /* KVM specific functions */
> extern bool fpu_alloc_guest_fpstate(struct fpu_guest *gfpu);
> extern void fpu_free_guest_fpstate(struct fpu_guest *gfpu);
> +extern void fpu_load_guest_fpstate(struct fpu_guest *gfpu);
> extern int fpu_swap_kvm_fpstate(struct fpu_guest *gfpu, bool enter_guest);
> extern int fpu_enable_guest_xfd_features(struct fpu_guest *guest_fpu, u64 xfeatures);
>
> diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
> index 3ab27fb86618..a480fa8c65d5 100644
> --- a/arch/x86/kernel/fpu/core.c
> +++ b/arch/x86/kernel/fpu/core.c
> @@ -878,6 +878,23 @@ void fpregs_lock_and_load(void)
> fpregs_assert_state_consistent();
> }
>
> +void fpu_load_guest_fpstate(struct fpu_guest *gfpu)
> +{
> +#ifdef CONFIG_X86_DEBUG_FPU
> + struct fpu *fpu = x86_task_fpu(current);
> + WARN_ON_ONCE(gfpu->fpstate != fpu->fpstate);
> +#endif
> +
> + lockdep_assert_preemption_disabled();
Hi Paolo,

Do we need to make sure interrupts are disabled here as well, e.g. with
lockdep? irq_fpu_usable() returns true for
!in_nmi() && in_hardirq() && !softirq_count(), so an interrupt that uses
the FPU could set TIF_NEED_FPU_LOAD again while interrupts are still
enabled.
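For illustration, the extra check I have in mind would look something like
this (just a sketch against the patched fpu_load_guest_fpstate(), assuming
callers are expected to reach this point with IRQs already off):

```c
void fpu_load_guest_fpstate(struct fpu_guest *gfpu)
{
	lockdep_assert_preemption_disabled();
	/*
	 * Suggested addition: a hardirq handler doing kernel_fpu_begin()
	 * can set TIF_NEED_FPU_LOAD again after it was handled below, so
	 * assert that interrupts are disabled too.
	 */
	lockdep_assert_irqs_disabled();

	if (test_thread_flag(TIF_NEED_FPU_LOAD))
		fpregs_restore_userregs();
	...
}
```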
> + if (test_thread_flag(TIF_NEED_FPU_LOAD))
> + fpregs_restore_userregs();
> +
> + fpregs_assert_state_consistent();
> + if (gfpu->xfd_err)
> + wrmsrq(MSR_IA32_XFD_ERR, gfpu->xfd_err);
> +}
> +EXPORT_SYMBOL_FOR_KVM(fpu_load_guest_fpstate);
> +
> #ifdef CONFIG_X86_DEBUG_FPU
> /*
> * If current FPU state according to its tracking (loaded FPU context on this
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index ff8812f3a129..01d95192dfc5 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -11300,13 +11300,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
> kvm_make_request(KVM_REQ_EVENT, vcpu);
> }
>
> - fpregs_assert_state_consistent();
> - if (test_thread_flag(TIF_NEED_FPU_LOAD))
> - switch_fpu_return();
> -
> - if (vcpu->arch.guest_fpu.xfd_err)
> - wrmsrq(MSR_IA32_XFD_ERR, vcpu->arch.guest_fpu.xfd_err);
> -
> + fpu_load_guest_fpstate(&vcpu->arch.guest_fpu);
> kvm_load_xfeatures(vcpu, true);
>
> if (unlikely(vcpu->arch.switch_db_regs &&
> --
> 2.52.0
>