Message-Id: <20181206132113.2691-6-Tianyu.Lan@microsoft.com>
Date: Thu, 6 Dec 2018 21:21:08 +0800
From: lantianyu1986@...il.com
To: unlisted-recipients:; (no To-header on input)
Cc: Lan Tianyu <Tianyu.Lan@...rosoft.com>, christoffer.dall@....com,
marc.zyngier@....com, linux@...linux.org.uk,
catalin.marinas@....com, will.deacon@....com, jhogan@...nel.org,
ralf@...ux-mips.org, paul.burton@...s.com, paulus@...abs.org,
benh@...nel.crashing.org, mpe@...erman.id.au, pbonzini@...hat.com,
rkrcmar@...hat.com, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, hpa@...or.com, x86@...nel.org,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
linux-kernel@...r.kernel.org, linux-mips@...ux-mips.org,
kvm-ppc@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
kvm@...r.kernel.org, michael.h.kelley@...rosoft.com,
kys@...rosoft.com, vkuznets@...hat.com
Subject: [Resend PATCH V5 5/10] KVM/MMU: Add tlb flush with range helper function
From: Lan Tianyu <Tianyu.Lan@...rosoft.com>
This patch adds wrapper functions for the tlb_remote_flush_with_range
callback and flushes the TLB directly in kvm_mmu_zap_collapsible_spte().
kvm_mmu_zap_collapsible_spte() currently returns a flush request to
slot_handle_leaf(), and the latter flushes on demand. When range flush
is available, make kvm_mmu_zap_collapsible_spte() flush the TLB with a
range directly instead of passing the request back to
slot_handle_leaf().
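
[Editorial note: a minimal caller-side sketch of the pattern described
above, assuming the helpers introduced by this patch. The function name
flush_or_defer_large_page() is hypothetical and only illustrates the
choice between flushing a GFN range immediately and deferring to one
full flush, as slot_handle_leaf() does.]

/*
 * Hypothetical sketch (illustration only, not part of the patch):
 * if the backend supports range flushes, flush just the GFN range
 * covered by the large page right away; otherwise ask the caller to
 * do a single full remote flush later.
 */
static bool flush_or_defer_large_page(struct kvm *kvm, gfn_t gfn, int level)
{
	if (kvm_available_flush_tlb_with_range()) {
		/* Range flush available: flush only the affected pages now. */
		kvm_flush_remote_tlbs_with_address(kvm, gfn,
				KVM_PAGES_PER_HPAGE(level));
		return false;	/* no deferred flush needed */
	}

	/* No range support: let the caller do one full flush at the end. */
	return true;
}
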
Signed-off-by: Lan Tianyu <Tianyu.Lan@...rosoft.com>
---
Changes since V4:
	Remove the flush list interface and flush the TLB directly in
	kvm_mmu_zap_collapsible_spte().
Changes since V3:
	Remove the code that updates "tlbs_dirty".
Changes since V2:
	Fix the comment in kvm_flush_remote_tlbs_with_range().
---
arch/x86/kvm/mmu.c | 37 ++++++++++++++++++++++++++++++++++++-
1 file changed, 36 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7c03c0f35444..74df957f2f81 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -264,6 +264,35 @@ static void mmu_spte_set(u64 *sptep, u64 spte);
 static union kvm_mmu_page_role
 kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu);
 
+
+static inline bool kvm_available_flush_tlb_with_range(void)
+{
+	return kvm_x86_ops->tlb_remote_flush_with_range;
+}
+
+static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
+		struct kvm_tlb_range *range)
+{
+	int ret = -ENOTSUPP;
+
+	if (range && kvm_x86_ops->tlb_remote_flush_with_range)
+		ret = kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);
+
+	if (ret)
+		kvm_flush_remote_tlbs(kvm);
+}
+
+static void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
+		u64 start_gfn, u64 pages)
+{
+	struct kvm_tlb_range range;
+
+	range.start_gfn = start_gfn;
+	range.pages = pages;
+
+	kvm_flush_remote_tlbs_with_range(kvm, &range);
+}
+
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
 {
 	BUG_ON((mmio_mask & mmio_value) != mmio_value);
@@ -5671,7 +5700,13 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		    !kvm_is_reserved_pfn(pfn) &&
 		    PageTransCompoundMap(pfn_to_page(pfn))) {
 			pte_list_remove(rmap_head, sptep);
-			need_tlb_flush = 1;
+
+			if (kvm_available_flush_tlb_with_range())
+				kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
+					KVM_PAGES_PER_HPAGE(sp->role.level));
+			else
+				need_tlb_flush = 1;
+
 			goto restart;
 		}
 	}
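
[Editorial note: for context, a hedged sketch of the shape a backend
hook could take. This is not part of the patch; a vendor module would
point kvm_x86_ops->tlb_remote_flush_with_range at a function like the
one below, and example_hv_flush_gfn_range() is a made-up placeholder
for a hypervisor-specific range flush. Any non-zero return makes
kvm_flush_remote_tlbs_with_range() above fall back to a full
kvm_flush_remote_tlbs().]

/*
 * Hypothetical backend sketch (illustration only): one possible
 * tlb_remote_flush_with_range implementation. Returning non-zero
 * tells the wrapper to fall back to a full remote TLB flush.
 */
static int example_tlb_remote_flush_with_range(struct kvm *kvm,
		struct kvm_tlb_range *range)
{
	if (!range || !range->pages)
		return -EINVAL;

	/* Flush GFNs [start_gfn, start_gfn + pages) via the hypervisor. */
	return example_hv_flush_gfn_range(kvm, range->start_gfn, range->pages);
}
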
--
2.14.4