Date: Thu, 4 Nov 2021 17:47:53 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Lai Jiangshan <jiangshanlai+lkml@...il.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, Vitaly Kuznetsov <vkuznets@...hat.com>,
	Wanpeng Li <wanpengli@...cent.com>, Jim Mattson <jmattson@...gle.com>,
	Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
	LKML <linux-kernel@...r.kernel.org>, Ben Gardon <bgardon@...gle.com>,
	Junaid Shahid <junaids@...gle.com>, Liran Alon <liran.alon@...cle.com>,
	Boris Ostrovsky <boris.ostrovsky@...cle.com>, John Haxby <john.haxby@...cle.com>,
	Miaohe Lin <linmiaohe@...wei.com>, Tom Lendacky <thomas.lendacky@....com>
Subject: Re: [PATCH v3 23/37] KVM: nVMX: Add helper to handle TLB flushes on nested VM-Enter/VM-Exit

On Sat, Oct 30, 2021, Lai Jiangshan wrote:
> A small comment on your proposal: I found that KVM_REQ_TLB_FLUSH_CURRENT
> and KVM_REQ_TLB_FLUSH_GUEST flush only the "current" vpid, so some special
> work needs to be added when switching the mmu from L1 to L2 and vice versa:
> handle the requests before switching.

Oh, yeah, that's this snippet of my pseudo patch, but I didn't provide the
kvm_service_pending_tlb_flush_on_nested_transition() implementation, so it's
not exactly obvious what I intended.  The current code handles CURRENT but not
GUEST; the idea is to shove both into a helper that can be shared between nVMX
and nSVM.

And I believe the "flush" also needs to service KVM_REQ_MMU_SYNC.  For L1=>L2
it should be irrelevant/impossible, since L1 can only be unsync if L1 and L2
share an MMU, but the L2=>L1 path could result in a lost sync if something,
e.g. an IRQ, prompted a nested VM-Exit before re-entering L2.

Let me know if I misunderstood your comment.  Thanks!
@@ -3361,8 +3358,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
 	};
 	u32 failed_index;
 
-	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
-		kvm_vcpu_flush_tlb_current(vcpu);
+	kvm_service_pending_tlb_flush_on_nested_transition(vcpu);
 
 	evaluate_pending_interrupts = exec_controls_get(vmx) &
 		(CPU_BASED_INTR_WINDOW_EXITING | CPU_BASED_NMI_WINDOW_EXITING);