Date:   Mon, 4 Jun 2018 09:08:34 +0000
From:   Tianyu Lan <Tianyu.Lan@...rosoft.com>
To:     unlisted-recipients:; (no To-header on input)
CC:     Tianyu Lan <Tianyu.Lan@...rosoft.com>,
        "pbonzini@...hat.com" <pbonzini@...hat.com>,
        "rkrcmar@...hat.com" <rkrcmar@...hat.com>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "hpa@...or.com" <hpa@...or.com>, "x86@...nel.org" <x86@...nel.org>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        KY Srinivasan <kys@...rosoft.com>,
        "vkuznets@...hat.com" <vkuznets@...hat.com>
Subject: [RFC Patch 3/3] KVM/x86: Add tlb_remote_flush callback support for
 vmcs

Register the tlb_remote_flush callback for vmcs when the Hyper-V
capability for nested guest mapping flush is detected. The interface
helps reduce the overhead of flushing the EPT table across the vCPUs
of a nested VM. The traditional way is to send IPIs to all affected
vCPUs and execute INVEPT on each of them, which triggers several VM
exits for the IPI and INVEPT emulation. Hyper-V provides a hypercall
that performs the flush for all vCPUs in a single operation.
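
For context, a rough sketch of the two flush paths follows. This is
illustrative only: the two wrapper function names are made up, while
kvm_flush_remote_tlbs(), hyperv_flush_guest_mapping() and
construct_eptp() are the actual helpers involved.

/*
 * Conventional path: request a TLB flush on every vCPU. When KVM
 * itself runs on Hyper-V, the resulting IPIs and the INVEPT executed
 * by each vCPU all have to be emulated, costing multiple VM exits.
 */
static void flush_ept_via_ipis(struct kvm *kvm)
{
	kvm_flush_remote_tlbs(kvm);	/* kicks all vCPUs */
}

/*
 * Hyper-V path used by this patch: a single hypercall invalidates the
 * nested GPA->HPA mappings for the given EPTP on all vCPUs at once.
 */
static int flush_ept_via_hypercall(struct kvm_vcpu *vcpu)
{
	return hyperv_flush_guest_mapping(
			construct_eptp(vcpu, vcpu->arch.mmu.root_hpa));
}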

Signed-off-by: Lan Tianyu <Tianyu.Lan@...rosoft.com>
---
 arch/x86/kvm/vmx.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index e50beb76d846..6cb241c05690 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -4737,6 +4737,17 @@ static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid,
 	}
 }
 
+static int vmx_remote_flush_tlb(struct kvm *kvm)
+{
+	struct kvm_vcpu *vcpu = kvm_get_vcpu(kvm, 0);
+
+	if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
+		return -1;
+
+	return hyperv_flush_guest_mapping(construct_eptp(vcpu,
+		vcpu->arch.mmu.root_hpa));
+}
+
 static void vmx_flush_tlb(struct kvm_vcpu *vcpu, bool invalidate_gpa)
 {
 	__vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid, invalidate_gpa);
@@ -7495,6 +7506,10 @@ static __init int hardware_setup(void)
 	if (enable_ept && !cpu_has_vmx_ept_2m_page())
 		kvm_disable_largepages();
 
+	if (ms_hyperv.nested_features & HV_X64_NESTED_GUEST_MAPPING_FLUSH &&
+	    enable_ept)
+		kvm_x86_ops->tlb_remote_flush = vmx_remote_flush_tlb;
+
 	if (!cpu_has_vmx_ple()) {
 		ple_gap = 0;
 		ple_window = 0;
-- 
2.14.3
