Message-ID: <20230724065714.myht27z6uno2zmva@linux.intel.com>
Date: Mon, 24 Jul 2023 14:57:14 +0800
From: Yu Zhang <yu.c.zhang@...ux.intel.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org, linux-kernel@...r.kernel.org, Jim Mattson <jmattson@...gle.com>
Subject: Re: [PATCH] KVM: VMX: Drop manual TLB flush when migrating vmcs.APIC_ACCESS_ADDR
On Fri, Jul 21, 2023 at 04:38:58PM -0700, Sean Christopherson wrote:
> Remove the superfluous flush of the current TLB in VMX's handling of
> migration of the APIC-access page, as a full TLB flush on all vCPUs will
> have already been performed in response to kvm_unmap_gfn_range() *if*
> there were SPTEs pointing at the APIC-access page. And if there were no
> valid SPTEs, then there can't possibly be TLB entries to flush.
>
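For anyone tracing that argument in the code, a rough sketch of the
notifier-side flow follows (paraphrased, not actual kernel code: the
wrapper function name is made up, and only kvm_unmap_gfn_range(),
kvm_make_all_cpus_request() and kvm_flush_remote_tlbs() are real helpers;
the ordering follows the description later in this changelog):

        /*
         * Approximate sketch of what happens when the APIC-access page is
         * unmapped/migrated by the MMU notifier.
         */
        static void apic_access_page_invalidated(struct kvm *kvm,
                                                 struct kvm_gfn_range *range)
        {
                /* Zap any SPTEs covering the old page; true iff any existed. */
                bool flush = kvm_unmap_gfn_range(kvm, range);

                /*
                 * Kick every vCPU out of the guest and have it reload
                 * vmcs.APIC_ACCESS_ADDR before its next VM-Enter.
                 */
                kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);

                /*
                 * Flush remote TLBs only if SPTEs were actually zapped; with
                 * no valid SPTEs there can be no TLB entries for the old page.
                 */
                if (flush)
                        kvm_flush_remote_tlbs(kvm);
        }

The "if (flush)" is the crux: either there were no SPTEs and nothing can be
cached, or a full flush has already been issued before any vCPU can
re-enter the guest.
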
> The extra flush was added by commit fb6c81984313 ("kvm: vmx: Flush TLB
> when the APIC-access address changes"), with the justification of "because
> the SDM says so". The SDM said, and still says:
>
> As detailed in Section xx.x.x, an access to the APIC-access page might
> not cause an APIC-access VM exit if software does not properly invalidate
> information that may be cached from the EPT paging structures. If EPT was
> in use on a logical processor at one time with EPTP X, it is recommended
> that software use the INVEPT instruction with the “single-context” INVEPT
> type and with EPTP X in the INVEPT descriptor before a VM entry on the
> same logical processor that enables EPT with EPTP X and either (a) the
> "virtualize APIC accesses" VM- execution control was changed from 0 to 1;
> or (b) the value of the APIC-access address was changed.
>
> But the "recommendation" for (b) is predicated on there actually being
> a valid EPT translation *and* possible TLB entries for the GPA (or guest
> VA when using shadow paging). It's possible that a different vCPU has
> established a mapping for the new page, but the current vCPU can't have
> entered the guest, i.e. can't have created a TLB entry, between flushing
> the old mappings and changing its vmcs.APIC_ACCESS_ADDR.
>
> kvm_unmap_gfn_range() waits for all vCPUs to ack KVM_REQ_APIC_PAGE_RELOAD,
> and then flushes remote TLBs (which may or may not also pend a request).
> Thus the vCPU is guaranteed to update vmcs.APIC_ACCESS_ADDR before
> re-entering the guest and before it can possibly create new TLB entries.
>
> In other words, KVM does flush in this case; it just does so earlier,
> while handling the page migration.
>
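And the vCPU side of that ordering, again only as a sketch rather than the
real code (which is spread across vcpu_enter_guest(),
kvm_vcpu_reload_apic_access_page() and the patched
vmx_set_apic_access_page_addr()); the function below is hypothetical, and
"new_pfn" stands in for the pfn lookup the real code does under mmu_lock:

        /*
         * Serviced before VM-Enter, i.e. before this vCPU can create any TLB
         * entry for the new address, so no INVEPT is needed here: the flush
         * for the *old* page, if one was required at all, already happened
         * in the notifier path.
         */
        static void handle_apic_page_reload(struct kvm_vcpu *vcpu,
                                            kvm_pfn_t new_pfn)
        {
                if (!kvm_check_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu))
                        return;

                vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(new_pfn));
        }
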
> Note, VMX also flushes if the vCPU is migrated to a new pCPU, i.e. if
> the vCPU is migrated to a pCPU that entered the guest for a different
> vCPU.
>
> Suggested-by: Yu Zhang <yu.c.zhang@...ux.intel.com>
> Cc: Jim Mattson <jmattson@...gle.com>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
> arch/x86/kvm/vmx/vmx.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 0ecf4be2c6af..3f868826db7d 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6767,8 +6767,10 @@ static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
>  	vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(pfn));
>  	read_unlock(&vcpu->kvm->mmu_lock);
>  
> -	vmx_flush_tlb_current(vcpu);
> -
> +	/*
> +	 * No need for a manual TLB flush at this point, KVM has already done a
> +	 * flush if there were SPTEs pointing at the previous page.
> +	 */
>  out:
>  	/*
>  	 * Do not pin apic access page in memory, the MMU notifier
>
Reviewed-by: Yu Zhang <yu.c.zhang@...ux.intel.com>