Message-ID: <80892ca2e3d7122b5b92f696ecf4c1943b0245b9.camel@redhat.com>
Date: Mon, 24 May 2021 15:11:22 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/7] KVM: nVMX: Introduce nested_evmcs_is_used()
On Mon, 2021-05-17 at 15:50 +0200, Vitaly Kuznetsov wrote:
> Unlike regular set_current_vmptr(), nested_vmx_handle_enlightened_vmptrld()
> cannot be called directly from vmx_set_nested_state() as KVM may not have
> all the information yet (e.g. HV_X64_MSR_VP_ASSIST_PAGE MSR may not be
> restored yet). Enlightened VMCS is mapped later while getting nested state
> pages. In the meantime, vmx->nested.hv_evmcs remains NULL and using it
> for various checks is incorrect. In particular, if KVM_GET_NESTED_STATE is
> called right after KVM_SET_NESTED_STATE, KVM_STATE_NESTED_EVMCS flag in the
> resulting state will be unset (and such state will later fail to load).
>
> Introduce nested_evmcs_is_used() and use 'is_guest_mode(vcpu) &&
> vmx->nested.current_vmptr == -1ull' check to detect not-yet-mapped eVMCS
> after restore.
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
> arch/x86/kvm/vmx/nested.c | 31 ++++++++++++++++++++++++++-----
> 1 file changed, 26 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 6058a65a6ede..3080e00c8f90 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -141,6 +141,27 @@ static void init_vmcs_shadow_fields(void)
> max_shadow_read_write_fields = j;
> }
>
> +static inline bool nested_evmcs_is_used(struct vcpu_vmx *vmx)
> +{
> + struct kvm_vcpu *vcpu = &vmx->vcpu;
> +
> + if (vmx->nested.hv_evmcs)
> + return true;
> +
> + /*
> + * After KVM_SET_NESTED_STATE, the enlightened VMCS is mapped during
> + * KVM_REQ_GET_NESTED_STATE_PAGES handling; until the request is
> + * processed, vmx->nested.hv_evmcs remains NULL. Such a state can,
> + * however, be detected: 'nested.current_vmptr == -1ull' while the
> + * vCPU is in guest mode is only possible with eVMCS.
> + */
> + if (unlikely(vmx->nested.enlightened_vmcs_enabled && is_guest_mode(vcpu) &&
> + (vmx->nested.current_vmptr == -1ull)))
> + return true;
> +
> + return false;
> +}

I think this is a valid way to solve the issue,
but it feels like there might be a better way.
I don't mind accepting this patch as-is, though.

So here are my 2 cents on this:

First of all, after studying how eVMCS works, I take back my words
about needing to migrate its contents.
It is indeed enough to migrate its physical address,
or maybe even just a flag that an eVMCS is loaded
(and to my surprise we already do this - KVM_STATE_NESTED_EVMCS).

So how about just having a boolean flag that indicates that an eVMCS
is in use, but doesn't imply that we know its address or that it is
mapped into the host address space, something like
'vmx->nested.enlightened_vmcs_loaded'?

On migration that flag would be saved and restored as
KVM_STATE_NESTED_EVMCS; otherwise it would be set when we load an
eVMCS and cleared when it is released.

Then, as far as I can see, we could use this flag in
nested_evmcs_is_used(), since none of its callers touch the eVMCS
itself and thus don't need it to be mapped.
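
Roughly, I'm imagining something like the sketch below (completely
untested, just to illustrate the idea; 'enlightened_vmcs_loaded' is
the hypothetical field name from above):

static inline bool nested_evmcs_is_used(struct vcpu_vmx *vmx)
{
	/* True from eVMCS load (or nested state restore) until release */
	return vmx->nested.enlightened_vmcs_loaded;
}

/* in nested_vmx_handle_enlightened_vmptrld(), after a successful map: */
	vmx->nested.enlightened_vmcs_loaded = true;

/* in vmx_set_nested_state(), when KVM_STATE_NESTED_EVMCS is in flags: */
	vmx->nested.enlightened_vmcs_loaded = true;

/* in nested_release_evmcs(): */
	vmx->nested.enlightened_vmcs_loaded = false;
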
What do you think?
Best regards,
Maxim Levitsky
> /*
> * The following 3 functions, nested_vmx_succeed()/failValid()/failInvalid(),
> * set the success or error code of an emulated VMX instruction (as specified
> @@ -187,7 +208,7 @@ static int nested_vmx_fail(struct kvm_vcpu *vcpu, u32 vm_instruction_error)
> * failValid writes the error number to the current VMCS, which
> * can't be done if there isn't a current VMCS.
> */
> - if (vmx->nested.current_vmptr == -1ull && !vmx->nested.hv_evmcs)
> + if (vmx->nested.current_vmptr == -1ull && !nested_evmcs_is_used(vmx))
> return nested_vmx_failInvalid(vcpu);
>
> return nested_vmx_failValid(vcpu, vm_instruction_error);
> @@ -2208,7 +2229,7 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
> u32 exec_control;
> u64 guest_efer = nested_vmx_calc_efer(vmx, vmcs12);
>
> - if (vmx->nested.dirty_vmcs12 || vmx->nested.hv_evmcs)
> + if (vmx->nested.dirty_vmcs12 || nested_evmcs_is_used(vmx))
> prepare_vmcs02_early_rare(vmx, vmcs12);
>
> /*
> @@ -3437,7 +3458,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
>
> load_vmcs12_host_state(vcpu, vmcs12);
> vmcs12->vm_exit_reason = exit_reason.full;
> - if (enable_shadow_vmcs || vmx->nested.hv_evmcs)
> + if (enable_shadow_vmcs || nested_evmcs_is_used(vmx))
> vmx->nested.need_vmcs12_to_shadow_sync = true;
> return NVMX_VMENTRY_VMEXIT;
> }
> @@ -4032,7 +4053,7 @@ static void sync_vmcs02_to_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
> {
> struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> - if (vmx->nested.hv_evmcs)
> + if (nested_evmcs_is_used(vmx))
> sync_vmcs02_to_vmcs12_rare(vcpu, vmcs12);
>
> vmx->nested.need_sync_vmcs02_to_vmcs12_rare = !vmx->nested.hv_evmcs;
> @@ -6056,7 +6077,7 @@ static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
> if (vmx_has_valid_vmcs12(vcpu)) {
> kvm_state.size += sizeof(user_vmx_nested_state->vmcs12);
>
> - if (vmx->nested.hv_evmcs)
> + if (nested_evmcs_is_used(vmx))
> kvm_state.flags |= KVM_STATE_NESTED_EVMCS;
>
> if (is_guest_mode(vcpu) &&