Open Source and information security mailing list archives
Message-Id: <20190222150637.2337-11-Tianyu.Lan@microsoft.com>
Date:   Fri, 22 Feb 2019 23:06:37 +0800
From:   lantianyu1986@...il.com
To:     unlisted-recipients:; (no To-header on input)
Cc:     Lan Tianyu <Tianyu.Lan@...rosoft.com>, pbonzini@...hat.com,
        rkrcmar@...hat.com, tglx@...utronix.de, mingo@...hat.com,
        bp@...en8.de, hpa@...or.com, x86@...nel.org, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, michael.h.kelley@...rosoft.com,
        kys@...rosoft.com, vkuznets@...hat.com, linux@...linux.org.uk
Subject: [PATCH V3 10/10] KVM/MMU: Add last_level flag in the struct kvm_mmu_page

From: Lan Tianyu <Tianyu.Lan@...rosoft.com>

When building the TLB range flush list, add only leaf nodes to the list in
order to avoid overlapping address ranges. If both a parent node and its leaf
nodes were added, the parent node's address range would cover the leaf nodes'
ranges; moreover, not all leaf nodes under a parent are actually allocated at
flush time, so the parent's range is larger than necessary. The side effect of
such redundant address ranges is that the flush list may overflow and fall
back to a non-range TLB flush. This patch adds a last_level flag to struct
kvm_mmu_page, sets the flag to true in set_spte(), and clears it when a child
node is allocated.

Signed-off-by: Lan Tianyu <Tianyu.Lan@...rosoft.com>
---
Changes since v2:
	- Always set the last_level flag to true in set_spte().
	- Clear the last_level flag when a child node is assigned.
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/mmu.c              | 8 ++++++++
 2 files changed, 9 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3e8bd78940c4..1a0a381c442d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -327,6 +327,7 @@ struct kvm_mmu_page {
 	struct hlist_node hash_link;
 	bool unsync;
 	bool mmio_cached;
+	bool last_level;
 
 	/*
 	 * The following two entries are used to key the shadow page in the
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2e13aac28293..f5a33cf71d73 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2754,6 +2754,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 	 */
 	if (kvm_available_flush_tlb_with_range()) {
 		list_for_each_entry(sp, invalid_list, link)
+			if (sp->last_level)
 				hlist_add_head(&sp->flush_link, &flush_list);
 
 		kvm_flush_remote_tlbs_with_list(kvm, &flush_list);
@@ -2956,6 +2957,7 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 
 	if (level > PT_PAGE_TABLE_LEVEL)
 		spte |= PT_PAGE_SIZE_MASK;
+
 	if (tdp_enabled)
 		spte |= kvm_x86_ops->get_mt_mask(vcpu, gfn,
 			kvm_is_mmio_pfn(pfn));
@@ -3010,6 +3012,8 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	if (speculative)
 		spte = mark_spte_for_access_track(spte);
 
+	sp->last_level = true;
+
 set_pte:
 	if (mmu_spte_update(sptep, spte))
 		ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
@@ -3200,6 +3204,10 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, int map_writable,
 					      iterator.level - 1, 1, ACC_ALL);
 
 			link_shadow_page(vcpu, iterator.sptep, sp);
+
+			sp = page_header(__pa(iterator.sptep));
+			if (sp->last_level)
+				sp->last_level = false;
 		}
 	}
 	return emulate;
-- 
2.14.4
