Message-ID: <Yk8ydRqaIqLh/UjJ@google.com>
Date: Thu, 7 Apr 2022 18:50:29 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>
Cc: kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Michael Kelley <mikelley@...rosoft.com>,
Siddharth Chandrasekaran <sidcha@...zon.de>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 18/31] KVM: nSVM: hyper-v: Direct TLB flush
On Thu, Apr 07, 2022, Vitaly Kuznetsov wrote:
> @@ -486,6 +487,17 @@ static void nested_save_pending_event_to_vmcb12(struct vcpu_svm *svm,
>
> static void nested_svm_transition_tlb_flush(struct kvm_vcpu *vcpu)
> {
> + /*
> + * KVM_REQ_HV_TLB_FLUSH flushes entries from either L1's VPID or
Can you use VP_ID or some variation to avoid "VPID"? This looks like a copy+paste
from nVMX gone bad and will confuse the heck out of people who are more familiar
with VMX's VPID.
> + * L2's VPID upon request from the guest. Make sure we check for
> + * pending entries for the case when the request got misplaced (e.g.
> + * a transition from L2->L1 happened while processing Direct TLB flush
> + * request or vice versa). kvm_hv_vcpu_flush_tlb() will not flush
> + * anything if there are no requests in the corresponding buffer.
> + */
> + if (to_hv_vcpu(vcpu))
> + kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
> +
> /*
> * TODO: optimize unconditional TLB flush/MMU sync. A partial list of
> * things to fix before this can be conditional:
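
FWIW, the pattern the new comment describes boils down to something like the
sketch below. This is illustrative only, not the actual code from this series:
the FIFO field name (tlb_flush_fifo) and the handler name are assumptions,
while to_hv_vcpu(), kvm_make_request() and kfifo_is_empty() are existing
kernel/KVM helpers.

	/*
	 * Sketch only: on a nested transition, re-raise the Hyper-V TLB
	 * flush request so that flush entries queued while the vCPU was in
	 * the other context (L1 vs. L2) aren't lost.  Skip vCPUs with no
	 * Hyper-V context, as the guest can't have queued anything.
	 */
	static void nested_transition_tlb_flush_sketch(struct kvm_vcpu *vcpu)
	{
		if (to_hv_vcpu(vcpu))
			kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
	}

	/*
	 * Request handler sketch: a no-op when nothing is queued for the
	 * current context, so unconditionally re-raising the request across
	 * transitions is harmless.
	 */
	static void hv_vcpu_flush_tlb_sketch(struct kvm_vcpu *vcpu)
	{
		struct kvm_vcpu_hv *hv_vcpu = to_hv_vcpu(vcpu);

		/* tlb_flush_fifo is an assumed per-context buffer name. */
		if (kfifo_is_empty(&hv_vcpu->tlb_flush_fifo.entries))
			return;

		/* ... drain and flush the GVAs queued for this context ... */
	}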