Message-ID: <85414ca6-e135-2371-cbce-0f595a7b7a26@intel.com>
Date: Tue, 9 Nov 2021 13:54:21 +0800
From: Chenyi Qiang <chenyi.qiang@...el.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Xiaoyao Li <xiaoyao.li@...el.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 3/7] KVM: X86: Expose IA32_PKRS MSR
On 11/9/2021 1:44 AM, Sean Christopherson wrote:
> On Wed, Aug 11, 2021, Chenyi Qiang wrote:
>> + u32 pkrs;
>
> ...
>
>> @@ -1115,6 +1117,7 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
>> #endif
>> unsigned long fs_base, gs_base;
>> u16 fs_sel, gs_sel;
>> + u32 host_pkrs;
>
> As mentioned in the previous patch, I think it makes sense to track this as a u64
> so that the only place in KVM that deals with the u64<=>u32 conversion is the below:
>
> host_pkrs = get_current_pkrs();
>
>> int i;
>>
>> vmx->req_immediate_exit = false;
>> @@ -1150,6 +1153,20 @@ void vmx_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
>> */
>> host_state->ldt_sel = kvm_read_ldt();
>>
>> + /*
>> + * Update the host pkrs vmcs field before vcpu runs.
>> + * The setting of VM_EXIT_LOAD_IA32_PKRS can ensure
>> + * kvm_cpu_cap_has(X86_FEATURE_PKS) &&
>> + * guest_cpuid_has(vcpu, X86_FEATURE_PKS)
>> + */
>> + if (vm_exit_controls_get(vmx) & VM_EXIT_LOAD_IA32_PKRS) {
>> + host_pkrs = get_current_pkrs();
>> + if (unlikely(host_pkrs != host_state->pkrs)) {
>> + vmcs_write64(HOST_IA32_PKRS, host_pkrs);
>> + host_state->pkrs = host_pkrs;
>> + }
>> + }
>> +
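
Regarding tracking this as a u64: makes sense, will do. A rough sketch of what
I have in mind (assuming struct vmcs_host_state.pkrs is changed to u64 as well,
so the only u32 -> u64 widening happens at the get_current_pkrs() call):

	u64 host_pkrs;
	...
	if (vm_exit_controls_get(vmx) & VM_EXIT_LOAD_IA32_PKRS) {
		/* get_current_pkrs() returns the per-CPU u32 value */
		host_pkrs = get_current_pkrs();
		if (unlikely(host_pkrs != host_state->pkrs)) {
			vmcs_write64(HOST_IA32_PKRS, host_pkrs);
			host_state->pkrs = host_pkrs;
		}
	}
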
>> #ifdef CONFIG_X86_64
>> savesegment(ds, host_state->ds_sel);
>> savesegment(es, host_state->es_sel);
>> @@ -1371,6 +1388,15 @@ void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
>> vmx->emulation_required = emulation_required(vcpu);
>> }
>>
>> +static void vmx_set_pkrs(struct kvm_vcpu *vcpu, u64 pkrs)
>> +{
>
> Hrm. Ideally this would be open coded in vmx_set_msr(). Long term, the RESET/INIT
> paths should really treat MSR updates as "normal" host_initiated writes instead of
> having to manually handle every MSR.
>
> That would be a bit gross to handle in vmx_vcpu_reset() since it would have to
> create a struct msr_data (because __kvm_set_msr() isn't exposed to vendor code),
> but since vcpu->arch.pkrs is relevant to the MMU I think it makes sense to
> initiate the write from common x86.
>
> E.g. this way there's not out-of-band special code, vmx_vcpu_reset() is kept clean,
> and if/when SVM gains support for PKRS this particular path Just Works. And it would
> be an easy conversion for my pipe dream plan of handling MSRs at RESET/INIT via a
> list of MSRs+values.
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index ac83d873d65b..55881d13620f 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -11147,6 +11147,9 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
> kvm_set_rflags(vcpu, X86_EFLAGS_FIXED);
> kvm_rip_write(vcpu, 0xfff0);
>
> + if (kvm_cpu_cap_has(X86_FEATURE_PKS))
> + __kvm_set_msr(vcpu, MSR_IA32_PKRS, 0, true);
> +
Got it. In addition, is it necessary to add an on-INIT check, like:

	if (kvm_cpu_cap_has(X86_FEATURE_PKS) && !init_event)
		__kvm_set_msr(vcpu, MSR_IA32_PKRS, 0, true);

PKRS should be preserved on INIT rather than cleared, though the SDM isn't
explicit about this.
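
For the vmx_set_msr() side, I plan to open code it roughly like this (just a
sketch; the inline reserved-bit check stands in for whatever validity helper
the final version ends up using):

	case MSR_IA32_PKRS:
		/* Bits 63:32 of IA32_PKRS are reserved */
		if (data & GENMASK_ULL(63, 32))
			return 1;
		if (!kvm_cpu_cap_has(X86_FEATURE_PKS) ||
		    (!msr_info->host_initiated &&
		     !guest_cpuid_has(vcpu, X86_FEATURE_PKS)))
			return 1;
		vcpu->arch.pkrs = data;
		kvm_register_mark_available(vcpu, VCPU_EXREG_PKRS);
		vmcs_write64(GUEST_IA32_PKRS, data);
		break;
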
> vcpu->arch.cr3 = 0;
> kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);
>
>> + if (kvm_cpu_cap_has(X86_FEATURE_PKS)) {
>> + vcpu->arch.pkrs = pkrs;
>> + kvm_register_mark_available(vcpu, VCPU_EXREG_PKRS);
>> + vmcs_write64(GUEST_IA32_PKRS, pkrs);
>> + }
>> +}