Open Source and information security mailing list archives
Date: Fri, 22 Feb 2019 23:06:30 +0800
From: lantianyu1986@...il.com
To: unlisted-recipients:; (no To-header on input)
Cc: Lan Tianyu <Tianyu.Lan@...rosoft.com>, christoffer.dall@....com,
	marc.zyngier@....com, linux@...linux.org.uk, catalin.marinas@....com,
	will.deacon@....com, jhogan@...nel.org, ralf@...ux-mips.org,
	paul.burton@...s.com, paulus@...abs.org, benh@...nel.crashing.org,
	mpe@...erman.id.au, pbonzini@...hat.com, rkrcmar@...hat.com,
	tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, hpa@...or.com,
	x86@...nel.org, linux-arm-kernel@...ts.infradead.org,
	kvmarm@...ts.cs.columbia.edu, linux-kernel@...r.kernel.org,
	linux-mips@...r.kernel.org, kvm-ppc@...r.kernel.org,
	linuxppc-dev@...ts.ozlabs.org, kvm@...r.kernel.org,
	michael.h.kelley@...rosoft.com, kys@...rosoft.com, vkuznets@...hat.com
Subject: [PATCH V3 3/10] KVM/MMU: Introduce tlb flush with range list

From: Lan Tianyu <Tianyu.Lan@...rosoft.com>

This patch introduces a tlb flush with range list interface, using
struct kvm_mmu_page as the list entry, and uses the flush list
function in kvm_mmu_commit_zap_page().
Signed-off-by: Lan Tianyu <Tianyu.Lan@...rosoft.com>
---
 arch/x86/kvm/mmu.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 8d43b7c0f56f..7a862c56b954 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -291,6 +291,20 @@ static void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
 	range.start_gfn = start_gfn;
 	range.pages = pages;
+	range.flush_list = NULL;
+
+	kvm_flush_remote_tlbs_with_range(kvm, &range);
+}
+
+static void kvm_flush_remote_tlbs_with_list(struct kvm *kvm,
+		struct hlist_head *flush_list)
+{
+	struct kvm_tlb_range range;
+
+	if (hlist_empty(flush_list))
+		return;
+
+	range.flush_list = flush_list;
 
 	kvm_flush_remote_tlbs_with_range(kvm, &range);
 }
 
@@ -2719,6 +2733,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 				    struct list_head *invalid_list)
 {
 	struct kvm_mmu_page *sp, *nsp;
+	HLIST_HEAD(flush_list);
 
 	if (list_empty(invalid_list))
 		return;
@@ -2732,7 +2747,14 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 	 * In addition, kvm_flush_remote_tlbs waits for all vcpus to exit
 	 * guest mode and/or lockless shadow page table walks.
 	 */
-	kvm_flush_remote_tlbs(kvm);
+	if (kvm_available_flush_tlb_with_range()) {
+		list_for_each_entry(sp, invalid_list, link)
+			hlist_add_head(&sp->flush_link, &flush_list);
+
+		kvm_flush_remote_tlbs_with_list(kvm, &flush_list);
+	} else {
+		kvm_flush_remote_tlbs(kvm);
+	}
 
 	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
 		WARN_ON(!sp->role.invalid || sp->root_count);
-- 
2.14.4