Open Source and information security mailing list archives
Date: Thu, 27 Sep 2018 03:49:17 +0000
From: Tianyu Lan <Tianyu.Lan@...rosoft.com>
To: unlisted-recipients:; (no To-header on input)
CC: Tianyu Lan <Tianyu.Lan@...rosoft.com>, KY Srinivasan <kys@...rosoft.com>,
	Haiyang Zhang <haiyangz@...rosoft.com>,
	Stephen Hemminger <sthemmin@...rosoft.com>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	"mingo@...hat.com" <mingo@...hat.com>, "hpa@...or.com" <hpa@...or.com>,
	"x86@...nel.org" <x86@...nel.org>,
	"pbonzini@...hat.com" <pbonzini@...hat.com>,
	"rkrcmar@...hat.com" <rkrcmar@...hat.com>,
	"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
	"Michael Kelley (EOSG)" <Michael.H.Kelley@...rosoft.com>,
	vkuznets <vkuznets@...hat.com>, Jork Loeser <Jork.Loeser@...rosoft.com>
Subject: [PATCH V3 5/13] KVM/MMU: Flush tlb directly in the kvm_zap_gfn_range()

Originally, the TLB flush was done by slot_handle_level_range(). This patch
flushes the TLB directly in kvm_zap_gfn_range() when range-based flushing
is available.
Signed-off-by: Lan Tianyu <Tianyu.Lan@...rosoft.com>
---
 arch/x86/kvm/mmu.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 877edae0401f..f24101ef763e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5578,6 +5578,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 {
 	struct kvm_memslots *slots;
 	struct kvm_memory_slot *memslot;
+	bool flush = false;
 	int i;
 
 	spin_lock(&kvm->mmu_lock);
@@ -5585,18 +5586,27 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 		slots = __kvm_memslots(kvm, i);
 		kvm_for_each_memslot(memslot, slots) {
 			gfn_t start, end;
+			bool flush_tlb = true;
 
 			start = max(gfn_start, memslot->base_gfn);
 			end = min(gfn_end, memslot->base_gfn + memslot->npages);
 			if (start >= end)
 				continue;
 
-			slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
-					PT_PAGE_TABLE_LEVEL, PT_MAX_HUGEPAGE_LEVEL,
-					start, end - 1, true);
+			if (kvm_available_flush_tlb_with_range())
+				flush_tlb = false;
+
+			flush = slot_handle_level_range(kvm, memslot,
+					kvm_zap_rmapp, PT_PAGE_TABLE_LEVEL,
+					PT_MAX_HUGEPAGE_LEVEL, start,
+					end - 1, flush_tlb);
 		}
 	}
 
+	if (flush && kvm_available_flush_tlb_with_range())
+		kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
+				gfn_end - gfn_start + 1);
+
 	spin_unlock(&kvm->mmu_lock);
 }
-- 
2.14.4