Message-ID: <87d01c544e.fsf@vitty.brq.redhat.com>
Date: Wed, 21 Oct 2020 11:18:25 +0200
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH v2 00/10] KVM: VMX: Clean up Hyper-V PV TLB flush
Sean Christopherson <sean.j.christopherson@...el.com> writes:
> Clean up KVM's PV TLB flushing when running with EPT on Hyper-V, i.e. as
> a nested VMM.
The terminology we use is a bit confusing, so I'd like to use the
opportunity to enlighten myself on how to refer to "PV TLB flushing"
properly :-)
Hyper-V supports two types of 'PV TLB flushing':

HvFlushVirtualAddressSpace/HvFlushVirtualAddressList[,Ex], which the
TLFS describes as "... hypercall invalidates ... virtual TLB entries
that belong to a specified address space."

HvFlushGuestPhysicalAddressSpace/HvFlushGuestPhysicalAddressList, which
the TLFS describes as "... hypercall invalidates cached L2 GPA to
GPA mappings within a second level address space... hypercall is like
the execution of an INVEPT instruction with type "single-context" on all
processors", where INVEPT is defined in the SDM as "Invalidates mappings
in the translation lookaside buffers (TLBs) and paging-structure caches
that were derived from extended page tables (EPT)." (This second flavor
is what the series is about.)
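To make the distinction concrete, here is a simplified sketch (not the
exact upstream code - I'm glossing over the EPTP-matching logic, and
'hv_tlb_eptp' is the field this series introduces) of how the second,
INVEPT-like flavor gets wired up on the KVM side:

/*
 * Sketch only: HvFlushGuestPhysicalAddressSpace is wrapped by
 * hyperv_flush_guest_mapping() in arch/x86/hyperv/nested.c, and VMX's
 * remote flush hook boils down to calling it with the shared EPTP.
 * Error handling and the per-vCPU EPTP checks are omitted.
 */
static int hv_remote_flush_tlb(struct kvm *kvm)
{
	/* Invalidate EPT-derived (L2 GPA -> GPA) mappings for this EPTP. */
	return hyperv_flush_guest_mapping(to_kvm_vmx(kvm)->hv_tlb_eptp);
}

HvFlushVirtualAddressSpace[,Ex], by contrast, is what a Linux guest uses
for remote shootdown of its own virtual mappings (see
hyperv_flush_tlb_others() in arch/x86/hyperv/mmu.c) - a completely
separate code path.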
Every time I see e.g. 'hv_remote_flush_tlb.*' it takes me some time
to recall which of the two kinds of flushing it refers to. Do you by any
chance have any suggestions on how this could be improved?
> No real goal in mind other than the sole patch in v1, which
> is a minor change to avoid a future mixup when TDX also wants to define
> .remote_flush_tlb. Everything else is opportunistic clean up.
>
Looks like a nice cleanup, thanks!
> Ran Hyper-V KVM unit tests (if those are even relevant?)
No, they aren't. KVM doesn't currently implement
HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST, so we can't test this feature
outside of a real Hyper-V environment. We also don't yet test KVM-on-KVM
with Enlightened VMCS ...
> but haven't actually tested on top of Hyper-V.
Just in case you are interested in doing so and there's no Hyper-V
server around, you can either find a Win10 desktop somewhere or just
spin up an Azure VM, where modern instance types (e.g. Dv3/v4, Ev3/v4
families, Intel only - so no Ea/Da/...) have VMX and PV Hyper-V features
exposed.
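If you want to quickly double-check that the instance really exposes
VMX before going further, something as trivial as this does the job (my
own throwaway snippet, nothing to do with the series):

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/cpuinfo", "r");
	char line[4096];

	if (!f) {
		perror("fopen");
		return 1;
	}

	/* The 'vmx' CPU flag shows up when nested virt is exposed. */
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "flags", 5) && strstr(line, " vmx")) {
			puts("VMX exposed to this guest");
			fclose(f);
			return 0;
		}
	}

	fclose(f);
	puts("no VMX - nested virtualization is not available here");
	return 1;
}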
I'm going to give this a try today and I will also try to review
individual patches, thanks again!
>
> v2: Rewrite everything.
>
> Sean Christopherson (10):
> KVM: VMX: Track common EPTP for Hyper-V's paravirt TLB flush
> KVM: VMX: Stash kvm_vmx in a local variable for Hyper-V paravirt TLB
> flush
> KVM: VMX: Fold Hyper-V EPTP checking into its only caller
> KVM: VMX: Do Hyper-V TLB flush iff vCPU's EPTP hasn't been flushed
> KVM: VMX: Invalidate hv_tlb_eptp to denote an EPTP mismatch
> KVM: VMX: Don't invalidate hv_tlb_eptp if the new EPTP matches
> KVM: VMX: Explicitly check for hv_remote_flush_tlb when loading pgd
> KVM: VMX: Define Hyper-V paravirt TLB flush fields iff Hyper-V is
> enabled
> KVM: VMX: Skip additional Hyper-V TLB EPTP flushes if one fails
> KVM: VMX: Track PGD instead of EPTP for paravirt Hyper-V TLB flush
>
> arch/x86/kvm/vmx/vmx.c | 102 ++++++++++++++++++++---------------------
> arch/x86/kvm/vmx/vmx.h | 16 +++----
> 2 files changed, 57 insertions(+), 61 deletions(-)
--
Vitaly