Message-Id: <96617e302a86ee0e4a263051b0cb7e51bf2ae4ed.1605232743.git.isaku.yamahata@intel.com>
Date: Mon, 16 Nov 2020 10:25:58 -0800
From: isaku.yamahata@...el.com
To: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H . Peter Anvin" <hpa@...or.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, x86@...nel.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: isaku.yamahata@...el.com, isaku.yamahata@...il.com,
Sean Christopherson <sean.j.christopherson@...el.com>
Subject: [RFC PATCH 13/67] KVM: VMX: Explicitly check for hv_remote_flush_tlb when loading pgd()
From: Sean Christopherson <sean.j.christopherson@...el.com>
Explicitly check that kvm_x86_ops.tlb_remote_flush() points at Hyper-V's
implementation for PV flushing instead of assuming that a non-NULL
implementation means running on Hyper-V. Wrap the related logic in
ifdeffery as hv_remote_flush_tlb() is defined iff CONFIG_HYPERV!=n.

Short term, the explicit check makes it more obvious why a non-NULL
tlb_remote_flush() triggers EPTP shenanigans. Long term, this will
allow TDX to define its own implementation of tlb_remote_flush() without
running afoul of Hyper-V.
Signed-off-by: Sean Christopherson <sean.j.christopherson@...el.com>
---
arch/x86/kvm/vmx/vmx.c | 7 +++++--
arch/x86/kvm/vmx/vmx.h | 2 ++
2 files changed, 7 insertions(+), 2 deletions(-)
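
Not part of the patch, but to illustrate the distinction the commit
message describes, here is a standalone userspace sketch (hypothetical
struct and function names that loosely mirror kvm_x86_ops; none of this
is kernel code). It shows why the old non-NULL test would also fire for
a future TDX callback, while the pointer comparison fires only for
Hyper-V's:

	#include <stdio.h>

	struct x86_ops {
		int (*tlb_remote_flush)(void);
	};

	/* Stand-ins for the Hyper-V and hypothetical TDX callbacks. */
	static int hv_remote_flush_tlb(void)  { return 0; }
	static int tdx_remote_flush_tlb(void) { return 0; }

	static void load_pgd(const struct x86_ops *ops)
	{
		if (ops->tlb_remote_flush)	/* old check */
			puts("non-NULL: fires for ANY implementation");

		if (ops->tlb_remote_flush == hv_remote_flush_tlb)
			puts("explicit: fires only for Hyper-V's callback");
	}

	int main(void)
	{
		const struct x86_ops hv  = { .tlb_remote_flush = hv_remote_flush_tlb };
		const struct x86_ops tdx = { .tlb_remote_flush = tdx_remote_flush_tlb };

		load_pgd(&hv);	/* prints both lines */
		load_pgd(&tdx);	/* prints only the non-NULL line */
		return 0;
	}

As for the ifdeffery, IS_ENABLED(CONFIG_HYPERV) evaluates to 1 when
CONFIG_HYPERV is y or m, matching the conditions under which
hv_remote_flush_tlb() exists.
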
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 1c9ad3103c87..0703d82e7bad 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3072,14 +3072,15 @@ static void vmx_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long pgd,
 		eptp = construct_eptp(vcpu, pgd, pgd_level);
 		vmcs_write64(EPT_POINTER, eptp);
 
-		if (kvm_x86_ops.tlb_remote_flush) {
+#if IS_ENABLED(CONFIG_HYPERV)
+		if (kvm_x86_ops.tlb_remote_flush == hv_remote_flush_tlb) {
 			spin_lock(&to_kvm_vmx(kvm)->ept_pointer_lock);
 			to_vmx(vcpu)->ept_pointer = eptp;
 			to_kvm_vmx(kvm)->ept_pointers_match
 				= EPT_POINTERS_CHECK;
 			spin_unlock(&to_kvm_vmx(kvm)->ept_pointer_lock);
 		}
-
+#endif
 		if (!enable_unrestricted_guest && !is_paging(vcpu))
 			guest_cr3 = to_kvm_vmx(kvm)->ept_identity_map_addr;
 		else if (test_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail))
@@ -6970,7 +6971,9 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
 
 static int vmx_vm_init(struct kvm *kvm)
 {
+#if IS_ENABLED(CONFIG_HYPERV)
 	spin_lock_init(&to_kvm_vmx(kvm)->ept_pointer_lock);
+#endif
 
 	if (!ple_gap)
 		kvm->arch.pause_in_guest = true;
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index f6f66e5c6510..e9cdb0fb7f56 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -301,8 +301,10 @@ struct kvm_vmx {
 	bool ept_identity_pagetable_done;
 	gpa_t ept_identity_map_addr;
 
+#if IS_ENABLED(CONFIG_HYPERV)
 	enum ept_pointers_status ept_pointers_match;
 	spinlock_t ept_pointer_lock;
+#endif
 };
 
 bool nested_vmx_allowed(struct kvm_vcpu *vcpu);
--
2.17.1