Message-ID: <Yys/pv0d+8ywcbUy@google.com>
Date: Wed, 21 Sep 2022 16:45:26 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>
Cc: kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Michael Kelley <mikelley@...rosoft.com>,
Siddharth Chandrasekaran <sidcha@...zon.de>,
Yuan Yao <yuan.yao@...ux.intel.com>,
Maxim Levitsky <mlevitsk@...hat.com>,
linux-hyperv@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v10 02/39] KVM: x86: hyper-v: Resurrect dedicated
KVM_REQ_HV_TLB_FLUSH flag
On Wed, Sep 21, 2022, Sean Christopherson wrote:
> On Wed, Sep 21, 2022, Vitaly Kuznetsov wrote:
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index f62d5799fcd7..86504a8bfd9a 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -3418,11 +3418,17 @@ static inline void kvm_vcpu_flush_tlb_current(struct kvm_vcpu *vcpu)
> >   */
> >  void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
> >  {
> > -	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
> > +	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu)) {
> >  		kvm_vcpu_flush_tlb_current(vcpu);
> > +		kvm_clear_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
>
> This isn't correct, flush_tlb_current() flushes "host" TLB entries, i.e. guest-physical
> mappings in Intel terminology, whereas flush_tlb_guest() and (IIUC) Hyper-V's paravirt
> TLB flush both flush "guest" TLB entries, i.e. linear and combined mappings.
>
> Amusing side topic, apparently I like arm's stage-2 terminology better than "TDP",
> because I actually typed out "stage-2" first.
>
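
To restate the distinction that matters here (rough summary in comment form,
using KVM's naming; not lifted from the patch itself):

	/*
	 * kvm_vcpu_flush_tlb_current() - flushes "host" TLB entries, i.e.
	 * guest-physical (stage-2/TDP) translations for the current context.
	 *
	 * kvm_vcpu_flush_tlb_guest() - flushes "guest" TLB entries, i.e.
	 * linear and combined translations, which is the scope Hyper-V's
	 * paravirt flush targets.
	 *
	 * => clearing KVM_REQ_HV_TLB_FLUSH is valid only when servicing a
	 *    "guest" flush, not a "current" flush.
	 */
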
> > +	}
> >  
> > -	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
> > +	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu)) {
> > +		kvm_vcpu_flush_tlb_guest(vcpu);
> > +		kvm_clear_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
Looking at future patches, where KVM needs to reset the FIFO when doing a "guest"
TLB flush, i.e. needs to do more than just clear the request, what about putting
this in kvm_vcpu_flush_tlb_guest() right away?
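
E.g. something like this (completely untested sketch, eliding the helper's
existing body; the FIFO reset from the later patches, whatever its final
form, would slot in next to the clear):

	static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
	{
		++vcpu->stat.tlb_flush;

		/* ... existing sync/flush logic ... */

		/*
		 * Flushing all "guest" TLB entries subsumes Hyper-V's
		 * paravirt flush, so drop the request (and eventually
		 * reset the GPA queue) here instead of in the
		 * request-servicing path.
		 */
		kvm_clear_request(KVM_REQ_HV_TLB_FLUSH, vcpu);
	}
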
Ah, and there's already a second caller to kvm_vcpu_flush_tlb_guest(). I doubt
KVM's paravirt TLB flush will ever collide with Hyper-V's paravirt TLB flush,
but logically a "guest" flush that is initiated through KVM's paravirt interface
should also clear Hyper-V's queue/request.
And for consistency, slot this in before this patch:
From: Sean Christopherson <seanjc@...gle.com>
Date: Wed, 21 Sep 2022 09:35:34 -0700
Subject: [PATCH] KVM: x86: Move clearing of TLB_FLUSH_CURRENT to
kvm_vcpu_flush_tlb_all()
Clear KVM_REQ_TLB_FLUSH_CURRENT in kvm_vcpu_flush_tlb_all() instead of in
its sole caller that processes KVM_REQ_TLB_FLUSH. Regardless of why/when
kvm_vcpu_flush_tlb_all() is called, flushing "all" TLB entries also
flushes "current" TLB entries.
Ideally, there will never be another caller of kvm_vcpu_flush_tlb_all(),
and moving the handling "requires" extra work to document the ordering
requirement, but future Hyper-V paravirt TLB flushing support will add
similar logic for the flush "guest" path (Hyper-V can flush a subset of
"guest" entries). And in the Hyper-V case, KVM needs to do more than just
clear the request: the queue of GPAs to flush also needs to be purged, and
doing all of that only in the request path is undesirable as
kvm_vcpu_flush_tlb_guest() does have multiple callers (though it's unlikely
KVM's paravirt TLB flush will coincide with Hyper-V's paravirt TLB flush).
Move the logic even though it adds extra "work" so that KVM will be
consistent with how flush requests are processed when the Hyper-V support
lands.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
arch/x86/kvm/x86.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f62d5799fcd7..3ea2e51a8cb5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3383,6 +3383,9 @@ static void kvm_vcpu_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	++vcpu->stat.tlb_flush;
 	static_call(kvm_x86_flush_tlb_all)(vcpu);
+
+	/* Flushing all ASIDs flushes the current ASID... */
+	kvm_clear_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 }
 
 static void kvm_vcpu_flush_tlb_guest(struct kvm_vcpu *vcpu)
@@ -10462,12 +10465,14 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 			kvm_mmu_sync_roots(vcpu);
 		if (kvm_check_request(KVM_REQ_LOAD_MMU_PGD, vcpu))
 			kvm_mmu_load_pgd(vcpu);
-		if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu)) {
+
+		/*
+		 * Note, the order matters here, as flushing "all" TLB entries
+		 * also flushes the "current" TLB entries, i.e. servicing the
+		 * flush "all" will clear any request to flush "current".
+		 */
+		if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
 			kvm_vcpu_flush_tlb_all(vcpu);
-
-			/* Flushing all ASIDs flushes the current ASID... */
-			kvm_clear_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
-		}
 		kvm_service_local_tlb_flush_requests(vcpu);
 
 		if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
base-commit: ed102fe0b59586397b362a849bd7fb32582b77d8
--