Message-ID: <a46895fa-3ffa-e2d8-2841-625a1f791ece@linux.intel.com>
Date: Thu, 27 Apr 2023 11:46:08 +0800
From: Binbin Wu <binbin.wu@...ux.intel.com>
To: Yang Weijiang <weijiang.yang@...el.com>, seanjc@...gle.com,
pbonzini@...hat.com, peterz@...radead.org, john.allen@....com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: rick.p.edgecombe@...el.com,
Sean Christopherson <sean.j.christopherson@...el.com>
Subject: Re: [PATCH v2 09/21] KVM:x86: Load guest FPU state when accessing
xsaves-managed MSRs
On 4/21/2023 9:46 PM, Yang Weijiang wrote:
> From: Sean Christopherson <sean.j.christopherson@...el.com>
>
> Load the guest's FPU state if userspace is accessing MSRs whose values are
> managed by XSAVES so that the MSR helpers, e.g. kvm_{get,set}_xsave_msr(),
So far, kvm_{get,set}_xsave_msr() has not been introduced yet.
IMO, it is a bit hard to understand the whole picture without the
following patches.
Maybe add some description here or adjust the patch order?
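
For reference, from the changelog I'd expect those helpers to end up as
thin wrappers around RDMSR/WRMSR once the guest FPU is loaded, roughly
like the sketch below (hypothetical, the actual definitions only show up
in a later patch):

static void kvm_get_xsave_msr(struct msr_data *msr_info)
{
	/* Guest FPU is loaded, so RDMSR returns the guest's value. */
	rdmsrl(msr_info->index, msr_info->data);
}

static void kvm_set_xsave_msr(struct msr_data *msr_info)
{
	/* Guest FPU is loaded, so WRMSR updates the guest's value. */
	wrmsrl(msr_info->index, msr_info->data);
}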
> can simply do {RD,WR}MSR to access the guest's value.
>
> If new feature MSRs supported in XSS are passed through to the guest, they
> are saved and restored by XSAVES/XRSTORS, i.e. in the guest's FPU state.
>
> Because __msr_io() is also used for the KVM_GET_MSRS device ioctl(), explicitly check
> @vcpu is non-null before attempting to load guest state. The XSS supporting
> MSRs cannot be retrieved via the device ioctl() without loading guest FPU
> state (which doesn't exist).
>
> Note that guest_cpuid_has() is not queried as host userspace is allowed
> to access MSRs that have not been exposed to the guest, e.g. it might do
> KVM_SET_MSRS prior to KVM_SET_CPUID2.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
> Co-developed-by: Yang Weijiang <weijiang.yang@...el.com>
> Signed-off-by: Yang Weijiang <weijiang.yang@...el.com>
> ---
> arch/x86/kvm/x86.c | 29 ++++++++++++++++++++++++++++-
> 1 file changed, 28 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index d2975ca96ac5..7788646bbf1f 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -130,6 +130,9 @@ static int __set_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
> static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
>
> static DEFINE_MUTEX(vendor_module_lock);
> +static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
> +static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
> +
> struct kvm_x86_ops kvm_x86_ops __read_mostly;
>
> #define KVM_X86_OP(func) \
> @@ -4336,6 +4339,21 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> }
> EXPORT_SYMBOL_GPL(kvm_get_msr_common);
>
> +static const u32 xsave_msrs[] = {
> + MSR_IA32_U_CET, MSR_IA32_PL3_SSP,
> +};
> +
> +static bool is_xsaves_msr(u32 index)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(xsave_msrs); i++) {
> + if (index == xsave_msrs[i])
> + return true;
> + }
> + return false;
> +}
> +
> /*
> * Read or write a bunch of msrs. All parameters are kernel addresses.
> *
> @@ -4346,11 +4364,20 @@ static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
> int (*do_msr)(struct kvm_vcpu *vcpu,
> unsigned index, u64 *data))
> {
> + bool fpu_loaded = false;
> int i;
>
> - for (i = 0; i < msrs->nmsrs; ++i)
> + for (i = 0; i < msrs->nmsrs; ++i) {
> + if (vcpu && !fpu_loaded && kvm_caps.supported_xss &&
> + is_xsaves_msr(entries[i].index)) {
> + kvm_load_guest_fpu(vcpu);
> + fpu_loaded = true;
> + }
> if (do_msr(vcpu, entries[i].index, &entries[i].data))
> break;
> + }
> + if (fpu_loaded)
> + kvm_put_guest_fpu(vcpu);
>
> return i;
> }
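
For readers following along, the vCPU-less path the changelog refers to is
the system-scoped KVM_GET_MSRS ioctl on the /dev/kvm fd, e.g. something
like this minimal hypothetical example (error handling omitted); there is
no vCPU and hence no guest FPU state on this path, which is why the @vcpu
NULL check above is needed:

#include <fcntl.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	struct kvm_msrs *msrs = calloc(1, sizeof(*msrs) +
					  sizeof(struct kvm_msr_entry));

	msrs->nmsrs = 1;
	msrs->entries[0].index = 0x10a;	/* e.g. MSR_IA32_ARCH_CAPABILITIES */

	/* System ioctl: reads an MSR-based feature value, no vCPU involved. */
	ioctl(kvm, KVM_GET_MSRS, msrs);
	return 0;
}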