Message-ID: <10fd9a3e-1bc2-7d4d-0535-162854fc5e9d@intel.com>
Date: Fri, 3 Nov 2023 16:46:29 +0800
From: "Yang, Weijiang" <weijiang.yang@...el.com>
To: Sean Christopherson <seanjc@...gle.com>,
Maxim Levitsky <mlevitsk@...hat.com>
CC: <pbonzini@...hat.com>, <kvm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <dave.hansen@...el.com>,
<peterz@...radead.org>, <chao.gao@...el.com>,
<rick.p.edgecombe@...el.com>, <john.allen@....com>
Subject: Re: [PATCH v6 14/25] KVM: x86: Load guest FPU state when access
XSAVE-managed MSRs
On 11/2/2023 2:05 AM, Sean Christopherson wrote:
> On Tue, Oct 31, 2023, Maxim Levitsky wrote:
>> On Thu, 2023-09-14 at 02:33 -0400, Yang Weijiang wrote:
>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>> index 66edbed25db8..a091764bf1d2 100644
>>> --- a/arch/x86/kvm/x86.c
>>> +++ b/arch/x86/kvm/x86.c
>>> @@ -133,6 +133,9 @@ static int __set_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
>>> static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2);
>>>
>>> static DEFINE_MUTEX(vendor_module_lock);
>>> +static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
>>> +static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
>>> +
>>> struct kvm_x86_ops kvm_x86_ops __read_mostly;
>>>
>>> #define KVM_X86_OP(func) \
>>> @@ -4372,6 +4375,22 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>>> }
>>> EXPORT_SYMBOL_GPL(kvm_get_msr_common);
>>>
>>> +static const u32 xstate_msrs[] = {
>>> +	MSR_IA32_U_CET, MSR_IA32_PL0_SSP, MSR_IA32_PL1_SSP,
>>> +	MSR_IA32_PL2_SSP, MSR_IA32_PL3_SSP,
>>> +};
>>> +
>>> +static bool is_xstate_msr(u32 index)
>>> +{
>>> +	int i;
>>> +
>>> +	for (i = 0; i < ARRAY_SIZE(xstate_msrs); i++) {
>>> +		if (index == xstate_msrs[i])
>>> +			return true;
>>> +	}
>>> +	return false;
>>> +}
>> The name 'xstate_msr' IMHO is not clear.
>>
>> How about naming it 'guest_fpu_state_msrs', along with a comment like this:
> Maybe xstate_managed_msrs? I'd prefer not to include "guest" because the behavior
> is more a property of the architecture and/or the host kernel. I understand where
> you're coming from, but it's the MSR *values* that are part of guest state, whereas the
> check is a query on how KVM manages the MSR value, if that makes sense.
>
> And I really don't like "FPU". I get why the kernel uses the "FPU" terminology,
> but for this check in particular I want to tie the behavior back to the architecture,
> i.e. provide the hint that the reason why these MSRs are special is because Intel
> defined them to be context switched via XSTATE.
>
> Actually, this is unnecessary bikeshedding to some extent; using an array is silly.
> It's easier and likely far more performant (not that that matters in this case)
> to use a switch statement.
>
> Is this better?
The change looks good to me! Thanks!
> /*
>  * Returns true if the MSR in question is managed via XSTATE, i.e. is context
>  * switched with the rest of guest FPU state.
>  */
> static bool is_xstate_managed_msr(u32 index)
How about is_xfeature_msr()? "xfeature" stands for an XSAVE-supported feature, just to
align with the SDM convention. A quick sketch of the rename is below, after the quoted
function.
> {
> 	switch (index) {
> 	case MSR_IA32_U_CET:
> 	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
> 		return true;
> 	default:
> 		return false;
> 	}
> }
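
Just to make the suggestion concrete, here is a sketch of the rename (same switch body
as Sean's version above). The comment reflects my reading of the SDM on which CET MSRs
are actually XSAVE-managed, so please double check:

/*
 * Sketch only: per the SDM, IA32_U_CET and IA32_PL3_SSP live in the CET_U
 * xstate component and IA32_PL0_SSP..IA32_PL2_SSP in CET_S, while IA32_S_CET
 * and IA32_INT_SSP_TAB are not XSAVE-managed, hence not listed here.
 */
static bool is_xfeature_msr(u32 index)
{
	switch (index) {
	case MSR_IA32_U_CET:
	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
		return true;
	default:
		return false;
	}
}
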
>
> /*
>  * Read or write a bunch of msrs. All parameters are kernel addresses.
>  *
>  * @return number of msrs set successfully.
>  */
> static int __msr_io(struct kvm_vcpu *vcpu, struct kvm_msrs *msrs,
> 		    struct kvm_msr_entry *entries,
> 		    int (*do_msr)(struct kvm_vcpu *vcpu,
> 				  unsigned index, u64 *data))
> {
> 	bool fpu_loaded = false;
> 	int i;
>
> 	for (i = 0; i < msrs->nmsrs; ++i) {
> 		/*
> 		 * If userspace is accessing one or more XSTATE-managed MSRs,
> 		 * temporarily load the guest's FPU state so that the guest's
> 		 * MSR value(s) is resident in hardware, i.e. so that KVM can
> 		 * get/set the MSR via RDMSR/WRMSR.
> 		 */
> 		if (vcpu && !fpu_loaded && kvm_caps.supported_xss &&
> 		    is_xstate_managed_msr(entries[i].index)) {
> 			kvm_load_guest_fpu(vcpu);
> 			fpu_loaded = true;
> 		}
> 		if (do_msr(vcpu, entries[i].index, &entries[i].data))
> 			break;
> 	}
> 	if (fpu_loaded)
> 		kvm_put_guest_fpu(vcpu);
>
> 	return i;
> }
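
For my own sanity check of the flow: __msr_io() is reached from the KVM_GET_MSRS /
KVM_SET_MSRS vCPU ioctls, so with this change userspace can access the SSP MSRs without
caring whether the guest FPU happens to be loaded. A rough userspace sketch, just for
illustration (vcpu_fd and the helper name are assumptions on my side; the MSR index is
taken from the SDM since it is not in the uapi headers):

#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define MSR_IA32_PL0_SSP	0x6a4	/* value per the SDM */

/* Read IA32_PL0_SSP of an already-created vCPU; returns 0 on success. */
static int read_pl0_ssp(int vcpu_fd, __u64 *val)
{
	struct kvm_msrs *msrs;
	int ret = -1;

	msrs = calloc(1, sizeof(*msrs) + sizeof(struct kvm_msr_entry));
	if (!msrs)
		return -1;

	msrs->nmsrs = 1;
	msrs->entries[0].index = MSR_IA32_PL0_SSP;

	/* KVM_GET_MSRS returns the number of MSRs successfully processed. */
	if (ioctl(vcpu_fd, KVM_GET_MSRS, msrs) == 1) {
		*val = msrs->entries[0].data;
		ret = 0;
	}

	free(msrs);
	return ret;
}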