Message-ID: <6d8da1e886ebb84ac5168f485148473dc2e2a0b4.camel@redhat.com>
Date: Mon, 24 May 2021 20:54:45 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Ilias Stamatis <ilstam@...zon.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, pbonzini@...hat.com
Cc: seanjc@...gle.com, vkuznets@...hat.com, wanpengli@...cent.com,
jmattson@...gle.com, joro@...tes.org, zamsden@...il.com,
mtosatti@...hat.com, dwmw@...zon.co.uk
Subject: Re: [PATCH v3 10/12] KVM: VMX: Set the TSC offset and multiplier on
nested entry and exit
On Fri, 2021-05-21 at 11:24 +0100, Ilias Stamatis wrote:
> Calculate the nested TSC offset and multiplier on entering L2 using the
> corresponding functions. Restore the L1 values on L2's exit.
>
> Signed-off-by: Ilias Stamatis <ilstam@...zon.com>
> ---
> arch/x86/kvm/vmx/nested.c | 18 ++++++++++++++----
> 1 file changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 239154d3e4e7..f75c4174cbcf 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -2532,6 +2532,15 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
> vmcs_write64(GUEST_IA32_PAT, vmx->vcpu.arch.pat);
> }
>
> + vcpu->arch.tsc_offset = kvm_calc_nested_tsc_offset(
> + vcpu->arch.l1_tsc_offset,
> + vmx_get_l2_tsc_offset(vcpu),
> + vmx_get_l2_tsc_multiplier(vcpu));
> +
> + vcpu->arch.tsc_scaling_ratio = kvm_calc_nested_tsc_multiplier(
> + vcpu->arch.l1_tsc_scaling_ratio,
> + vmx_get_l2_tsc_multiplier(vcpu));
> +
In theory this code could be moved to the common x86 code,
since it only uses the vendor callbacks anyway, but that is
probably not worth it.
> vmcs_write64(TSC_OFFSET, vcpu->arch.tsc_offset);
> if (kvm_has_tsc_control)
> vmcs_write64(TSC_MULTIPLIER, vcpu->arch.tsc_scaling_ratio);
> @@ -3353,8 +3362,6 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
> }
>
> enter_guest_mode(vcpu);
> - if (vmcs12->cpu_based_vm_exec_control & CPU_BASED_USE_TSC_OFFSETTING)
> - vcpu->arch.tsc_offset += vmcs12->tsc_offset;
>
> if (prepare_vmcs02(vcpu, vmcs12, &entry_failure_code)) {
> exit_reason.basic = EXIT_REASON_INVALID_STATE;
> @@ -4462,8 +4469,11 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
> if (nested_cpu_has_preemption_timer(vmcs12))
> hrtimer_cancel(&to_vmx(vcpu)->nested.preemption_timer);
>
> - if (vmcs12->cpu_based_vm_exec_control & CPU_BASED_USE_TSC_OFFSETTING)
> - vcpu->arch.tsc_offset -= vmcs12->tsc_offset;
> + if (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETTING)) {
> + vcpu->arch.tsc_offset = vcpu->arch.l1_tsc_offset;
> + if (nested_cpu_has2(vmcs12, SECONDARY_EXEC_TSC_SCALING))
> + vcpu->arch.tsc_scaling_ratio = vcpu->arch.l1_tsc_scaling_ratio;
> + }
Same here.
>
> if (likely(!vmx->fail)) {
> sync_vmcs02_to_vmcs12(vcpu, vmcs12);
Reviewed-by: Maxim Levitsky <mlevitsk@...hat.com>
Best regards,
Maxim Levitsky