Message-ID: <1537369714.9937.24.camel@intel.com>
Date: Wed, 19 Sep 2018 08:08:34 -0700
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>, kvm@...r.kernel.org
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Liran Alon <liran.alon@...cle.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 RESEND 4/9] x86/kvm/mmu: introduce guest_mmu
On Tue, 2018-09-18 at 18:09 +0200, Vitaly Kuznetsov wrote:
> When EPT is used for a nested guest we need to re-init the MMU as a
> shadow EPT MMU (nested_ept_init_mmu_context() does that). When we return
> from L2 to L1, kvm_mmu_reset_context() in nested_vmx_load_cr3() resets
> the MMU back to normal TDP mode. Add a special 'guest_mmu' so we can use
> separate root caches; the improved hit rate is not very important for
> single vCPU performance, but it avoids contention on the mmu_lock for
> many vCPUs.
>
> On the nested CPUID benchmark, with 16 vCPUs, an L2->L1->L2 vmexit
> goes from 42k to 26k cycles.
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> ---
> arch/x86/include/asm/kvm_host.h | 3 +++
> arch/x86/kvm/mmu.c | 15 +++++++++++----
> arch/x86/kvm/vmx.c | 27 +++++++++++++++++++--------
> 3 files changed, 33 insertions(+), 12 deletions(-)
...
> @@ -10926,12 +10935,12 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
> */
> static void vmx_free_vcpu_nested(struct kvm_vcpu *vcpu)
> {
> - struct vcpu_vmx *vmx = to_vmx(vcpu);
> + struct vcpu_vmx *vmx = to_vmx(vcpu);
Might be worth dropping the local @vmx and calling to_vmx() inline
since it's now being used only for the call to vmx_switch_vmcs().
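Something like the following, i.e. exactly the new body from this hunk but
with to_vmx() folded into the vmx_switch_vmcs() call (untested, just to
illustrate the suggestion):

```c
static void vmx_free_vcpu_nested(struct kvm_vcpu *vcpu)
{
	vcpu_load(vcpu);
	vmx_switch_vmcs(vcpu, &to_vmx(vcpu)->vmcs01);
	free_nested(vcpu);
	vcpu_put(vcpu);
}
```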
>
> - vmx_switch_vmcs(vcpu, &vmx->vmcs01);
> - free_nested(vmx);
> - vcpu_put(vcpu);
> + vcpu_load(vcpu);
> + vmx_switch_vmcs(vcpu, &vmx->vmcs01);
> + free_nested(vcpu);
> + vcpu_put(vcpu);
> }
>
> static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
> @@ -11281,6 +11290,7 @@ static int nested_ept_init_mmu_context(struct kvm_vcpu *vcpu)
> if (!valid_ept_address(vcpu, nested_ept_get_cr3(vcpu)))
> return 1;
>
> + vcpu->arch.mmu = &vcpu->arch.guest_mmu;
> kvm_init_shadow_ept_mmu(vcpu,
> to_vmx(vcpu)->nested.msrs.ept_caps &
> VMX_EPT_EXECUTE_ONLY_BIT,
> @@ -11296,6 +11306,7 @@ static int nested_ept_init_mmu_context(struct kvm_vcpu *vcpu)
>
> static void nested_ept_uninit_mmu_context(struct kvm_vcpu *vcpu)
> {
> + vcpu->arch.mmu = &vcpu->arch.root_mmu;
> vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
> }
>
> @@ -13363,7 +13374,7 @@ static void vmx_leave_nested(struct kvm_vcpu *vcpu)
> to_vmx(vcpu)->nested.nested_run_pending = 0;
> nested_vmx_vmexit(vcpu, -1, 0, 0);
> }
> - free_nested(to_vmx(vcpu));
> + free_nested(vcpu);
> }
>
> /*