Date:   Thu, 4 Nov 2021 17:47:53 +0000
From:   Sean Christopherson <>
To:     Lai Jiangshan <>
Cc:     Paolo Bonzini <>,
        Vitaly Kuznetsov <>,
        Wanpeng Li <>,
        Jim Mattson <>,
        Joerg Roedel <>,
        LKML <>,
        Ben Gardon <>,
        Junaid Shahid <>,
        Liran Alon <>,
        Boris Ostrovsky <>,
        John Haxby <>,
        Miaohe Lin <>,
        Tom Lendacky <>
Subject: Re: [PATCH v3 23/37] KVM: nVMX: Add helper to handle TLB flushes on
 nested VM-Enter/VM-Exit

On Sat, Oct 30, 2021, Lai Jiangshan wrote:
> A small comment on your proposal: I found that KVM_REQ_TLB_FLUSH_CURRENT
> and KVM_REQ_TLB_FLUSH_GUEST flush the "current" vpid only, so some special
> work needs to be added when switching the mmu from L1 to L2 and vice versa:
> handle the requests before switching.

Oh, yeah, that's this snippet of my pseudo patch, but I didn't provide the
kvm_service_pending_tlb_flush_on_nested_transition() implementation, so it's not
exactly obvious what I intended.  The current code handles CURRENT, but not GUEST;
the idea is to shove both into a helper that can be shared between nVMX and nSVM.

And I believe the "flush" also needs to service KVM_REQ_MMU_SYNC.  For L1=>L2 it
should be irrelevant/impossible, since L1 can only be unsync if L1 and L2 share
an MMU, but the L2=>L1 path could result in a lost sync if something, e.g. an IRQ,
prompted a nested VM-Exit before re-entering L2.
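
Roughly, something like this (a minimal sketch only; it assumes
kvm_vcpu_flush_tlb_current()/kvm_vcpu_flush_tlb_guest() are made visible to the
nested code, and the naming/placement is illustrative):

static void kvm_service_pending_tlb_flush_on_nested_transition(struct kvm_vcpu *vcpu)
{
	/*
	 * Service flushes that target the current context before the MMU
	 * is switched, otherwise the pending request would be handled
	 * after the switch and flush the wrong vpid/context.
	 */
	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
		kvm_vcpu_flush_tlb_current(vcpu);

	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
		kvm_vcpu_flush_tlb_guest(vcpu);

	/* Don't lose a pending sync of the outgoing roots, e.g. on L2=>L1. */
	if (kvm_check_request(KVM_REQ_MMU_SYNC, vcpu))
		kvm_mmu_sync_roots(vcpu);
}

The hunk below shows the VM-Enter call site; the nested VM-Exit path would
presumably invoke the same helper before switching back to the L1 MMU.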

Let me know if I misunderstood your comment.  Thanks!

@@ -3361,8 +3358,7 @@ enum nvmx_vmentry_status nested_vmx_enter_non_root_mode(struct kvm_vcpu *vcpu,
        u32 failed_index;

-       if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
-               kvm_vcpu_flush_tlb_current(vcpu);
+       kvm_service_pending_tlb_flush_on_nested_transition(vcpu);

        evaluate_pending_interrupts = exec_controls_get(vmx) &
